Events, Technology, Information Exchange Brandon Greenberg

WEBINAR: Practical Tools for Better Decision Making

There are many different decision-making tools available in the marketplace. These tools serve many purposes, including information sharing, multi-criteria decision making, and mapping.

On Wednesday, February 21, 2018, the National Information Sharing Consortium (NISC) is hosting a webinar that will provide an overview of several solutions G&H International (the company I work for) has developed to address specific client problems, which include:

  • Managing large-scale events;
  • Integrating data silos to enhance local decision-making; and
  • Developing a virtual exercise facilitation capability.

Here is the official blurb:

"The G&H International Services webinar is the sixth webinar in the NISC's Mission-Focused Job Aids Webinar Series that reviews tools, techniques, and standard operating procedures that NISC partners in the homeland security, emergency management, public safety, first responder, and healthcare preparedness communities use to facilitate and manage information sharing. For more information about the webinar series and the NISC, visit the NISC website at www.nisconsortium.org. To become a member of the NISC, click here to join, membership is free for all users!"

Technology, Information Exchange Brandon Greenberg

Information Requirements for Crisis Response – A Radio Perspective

I take the position that differing and contradictory viewpoints or perspectives help shed light on the many gaps and issues the industry faces. As such, I invited Terry Canning to provide a guest post in response to my recent post on redefining information requirements for disaster response. The views he expresses are his own. We welcome your thoughts in the comments below!

A couple of weeks ago Brandon wrote a thoughtful and thought-provoking blog describing how the information requirements for successful crisis response are being redefined. He opened with “Developing information requirements for crisis response is a tedious and flawed process filled with many uncertainties…” In a reply, I agreed with his postulation that it can be a tedious process (although I proposed fastidious rather than tedious) but disagreed that it is flawed. Brandon then challenged me to write a response to fully explain my position on this issue, and I have accepted.

To put my comments in perspective: I have been a volunteer fire fighter for over 35 years and a chief officer for 15 of those years, having retired in December of 2013. For the past 16 years I have been engaged as a radio communications consultant with the Province of Nova Scotia, Canada, where I was responsible for coordinating emergency communications. My role also included ensuring radio interoperability for twelve provincial government departments, two regional municipalities, four federal government departments, several NGOs with public safety roles, the provincial police service (RCMP), and 285 volunteer fire departments. The volunteer fire service encompasses over 9,000 volunteer fire fighters. All of these users share a common, single 700 MHz, province-wide trunked radio system operating at 86 sites. My focus on the radio ‘tool’ is intentional, as that is my background and strength; there are certainly other tools that contribute to success.

In order to achieve full situational awareness (the ultimate objective of gathering, storing and sharing information) for crisis response, all engaged response parties must be able to communicate directly with all others in real time, as required, and as authorized.  This is the foundation of the successes realized by the many agencies and orders of government utilizing the second generation trunked mobile radio system in Nova Scotia.  Rather than competing for limited precious radio spectrum and even more elusive capital funding, an attitude of cooperation and system resource sharing has created a model for information sharing and universal situational awareness.

This may seem only moderately related to the topic of redefining information requirements for crisis response; however, my point is that with real-time interagency communications using the one-to-many capability of two-way radio, there is much less need to gather and store information. Instead, my suggestion is that the parties with the information essential for an effective crisis response be brought directly into the picture using the radio system, so that every stakeholder is aware of all pieces of the puzzle.

The Nova Scotia approach has resulted in much less time defining requirements and dramatically more accurate and timely information during a response. There are basically three components employed in the Nova Scotia model:

1) A process of post-incident analysis

Engage all incident stakeholders to perform a thorough, frank and inclusive debriefing after every significant multi-agency incident, and ensure the learnings from these analyses are incorporated into go-forward response plans. Of course, each of the typical incident response agencies maintains its own standard procedures and protocols, but they are developed and refined in light of the information gathered from the analysis and debriefing process.

2) A stakeholder interoperability lessons learned forum

To emphasize the positive learnings, the province hosts an annual Interoperability Forum, attended by key agency representatives, where incidents of the previous year are reviewed and discussed from a communications perspective and the attendees are invited to interact and learn with and from their counterparts. 

3) A formal interoperability advisory group

The Radio Interoperability Nova Scotia Advisory Council (RINSAC) is made up of designated municipal, provincial and federal agency representatives who consider, vet and advise on government initiatives to optimize the provincial radio system. RINSAC members may also present proposals from constituent users to the provincial radio authority for consideration. Through these three channels, a suite of best practices and highly effective information sharing approaches is developed.

I fully endorse Brandon’s categorization of the three types of information surrounding crisis response and his assertion that they are types, not levels, of information. It is impossible to accurately predict which party will require what piece(s) of information at any particular point in time during a response. Thus, a fully interoperable radio communications system encompassing all stakeholders is key to ensuring those who hold required information can promptly and accurately communicate it to those who need it during a crisis response. As a result, the requirements for pre-incident information collection and storage are reduced, eliminating noncurrent information and minimizing inaccurate information.

A Radio Case Study

From my perspective, responses to significant crisis situations involving multiple agencies almost always have ineffective, underutilized, or non-existent interoperable voice communications paths or protocols amongst responders, resulting in a much less effective crisis response. The paramount objective of information management must be to overcome the information vacuum (or at least the gaps) that accompanies many crisis situations. The advent of the Nova Scotia shared trunked mobile radio system has resulted in fewer post-incident debriefings that point to ‘communications’ as the biggest failure in the response, a huge achievement.

Obviously there are other approaches to glean and share crisis response information, but I would argue that there is probably no better, more effective, or more timely method than the use of system-wide, shared talkgroups. Every one of the almost 10,000 radios on the Nova Scotia provincial system is required to have the standard suite of interoperability talkgroups: eight provincial ‘mutual aid’ talkgroups and two interprovincial ‘mutual aid’ talkgroups shared with users in the neighbouring provinces of New Brunswick and Prince Edward Island.
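This programming requirement can be expressed as a simple validation rule. The sketch below is illustrative only: the talkgroup names are invented, not the actual Nova Scotia fleet map, and it assumes each radio's code plug can be read as a set of talkgroup names.

```python
# Hypothetical sketch: check that a radio's programmed talkgroups
# include the standard interoperability suite described above. The
# talkgroup names are invented, not the real Nova Scotia fleet map.

PROVINCIAL_MUTUAL_AID = {f"NS MUTUAL AID {n}" for n in range(1, 9)}  # eight provincial
INTERPROVINCIAL = {"MARITIME INTEROP 1", "MARITIME INTEROP 2"}       # two shared with NB/PEI
REQUIRED_INTEROP = PROVINCIAL_MUTUAL_AID | INTERPROVINCIAL

def missing_talkgroups(programmed: set) -> set:
    """Return the required interoperability talkgroups a radio lacks."""
    return REQUIRED_INTEROP - programmed

# A compliant radio carries all ten required talkgroups plus its own.
compliant = REQUIRED_INTEROP | {"FIRE DISPATCH 3"}
noncompliant = {"NS MUTUAL AID 1", "FIRE DISPATCH 3"}
```

Auditing a fleet this way turns the policy "every radio carries the standard suite" into a check that can run whenever a code plug changes.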

The other key ingredient to effectively sharing timely and accurate information during a crisis response is regular and repeated user training. Radio user training in Nova Scotia is provided by a dedicated provincial trainer who delivers training directly to the users or disseminates knowledge through a ‘Train the Trainer’ approach. All too frequently when shared radio systems are implemented, user training is provided to familiarize the users with their new ‘tools’ and technology, but post-implementation training programs are eliminated or dramatically downsized. Experience suggests that, given the rate of turnover of emergency response personnel (particularly in the volunteer sector), an ongoing training and refresher program, including table-top exercises, is of critical importance.

A very valuable educational ‘tool’ has been the development of a communications module attached to the ICS 200 program. This module takes about 25-30 minutes to deliver and helps the command-level responder focus on the aspects of communications that are, or should be, of most concern to him or her. It emphasizes the shared nature of the trunking system, the range of agencies that use it, and the established methods of ensuring all potentially involved users are aware of the shared talkgroup assignment and its purpose.

To quote Brandon again, “We are doing ourselves a disservice if we focus on predictable information needs in an environment where the most valuable information is unpredictable!”  I fully agree with this premise, and suggest that rather than struggling to gather, store, then quickly share information in response to the unknown, unexpected or unprecedented crisis, we do ourselves a much greater service by making the effort to develop cooperative, collaborative, shared radio communications systems and policies that enable real-time sharing of any information relevant to any response party engaged in the crisis.  

Information Exchange, Learning Brandon Greenberg

Redefining Information Requirements for Crisis Response

Developing information requirements for crisis response is a tedious and flawed process filled with many uncertainties about the situation and the response. While we can take an honest stab at knowing what different responders need, when, and how, our unilateral focus on needed information stymies the best of intentions: historical learning is only as good as a similar future, which is rarely the case; and visioning workshops are only as good as the ability to identify the uncertainties that lie ahead, a very difficult task with severe consequences if something is missed. 

While decisions can be made without needed information based on expertise and experience, this is far from ideal in a complex adaptive system such as crisis response (another important topic, but no room in this post!). Every move one makes (small or large) can have significant positive and/or negative impacts on system performance, not to mention possible interaction effects of different decisions and actions. Information is therefore a lifeline for decision makers when evaluating the consequences of different decisions and actions. Information provides important cues that help decision makers develop accurate representations of the system and the situation in order to better leverage their expertise and experience.

In more certain work environments with repeatable tasks, decisions, and problems (e.g., manufacturing), information requirements can be refined through thorough investigation and iterative development. But crisis response involves far more uncertainty about the tasks, decisions, and problems that will be encountered. Planning activities can help, but they will never make responders 100% ready. Unanticipated situations will always be encountered that one must react to in the moment. Additionally, information is often not created or available until a crisis occurs, so it is hard to plan for its use.

We need a dedicated strategy and approach to information management (collection, processing, and sharing of data/information) that balances flexibility with standardization and that extends beyond technical interoperability (similar to our response management paradigms). People, policies, programs, processes, and products all need to align to inform and improve the handling of the known-knowns (e.g., will set up a point of distribution), the known-unknowns (e.g., how public will react), and the unknown-unknowns (e.g. unforeseen circumstances) encountered during a crisis response. 

This is not an easy endeavor and requires radically different thinking that embraces the uncertainty associated with crisis response. We are doing ourselves a disservice if we focus on predictable information needs in an environment where the most valuable information is unpredictable! 

Tackling this issue will likely take the better part of my career, but it is important to start somewhere. As you consider your information requirements, I suggest you consider the following information requirement types:

Type A - Clearly Needed Information

First, it is important to outline the information that is clearly known to be needed. Bite off the top layer of information needed by each role. These are the absolutes that you know the role(s) need to have. Be judicious, though, as your information management plan will deliver ALL of this information and you don't want to overload responders.

Type B - Likely Helpful Information

Second, consider what information should not be delivered but rather made immediately available to responders if they decide they need it. This is information one could presume might be needed, but for which it is hard to define when, where, and how it will be useful. This information should be made available and easily accessible to responders without distracting or overloading them.

Type C - Supporting Information Sources

Lastly, because it is unlikely that you will have envisioned all possible information needs, consider how your responders can access different sources of information that will allow them to find what they need on the spot. This is hard, as you need to build the relationships and technical integrations ahead of time to execute well.
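One minimal way to sketch this typology in code: items tagged Type A are pushed to the role, Type B items are indexed for one-click retrieval, and Type C entries are sources to query when an unforeseen need arises. All item names here are invented examples, not a recommended requirements list.

```python
# Hypothetical sketch of the three requirement types: Type A is pushed
# to the role, Type B is indexed for on-demand retrieval, and Type C is
# a catalog of sources to query for unforeseen needs. Item names are
# invented examples.

requirements = [
    ("road closures", "A"),       # clearly needed: deliver it
    ("incident status", "A"),
    ("shelter capacities", "B"),  # likely helpful: keep it one click away
    ("state GIS portal", "C"),    # a source to search when needs emerge
]

pushed    = [item for item, t in requirements if t == "A"]
on_demand = [item for item, t in requirements if t == "B"]
sources   = [item for item, t in requirements if t == "C"]
```

The value of the tags is that each type implies a different delivery behavior, so the same requirements list can drive push, search, and source-catalog functions.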

There are two things to notice about my suggested information requirement types. First, I call them types rather than levels. This is because the relationship between them is dimensional, not linear or hierarchical. Type C information can be just as important as Type A information. Second, they assume a "role-based" perspective on information requirements gathering. Collecting information requirements at an organizational level obfuscates the information needs of individual responders who are the true consumers of information. Plus, if you know the role-based information needs of individual responders, you can more easily discern the organization's overall information needs through aggregation and comparison of all the information required in each role. This then sets you up to develop an information system that meets organizational needs through knowledge of individual responders' information needs.   
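The role-based aggregation described above can be sketched as follows. The roles and information items are invented for illustration; the point is that organizational needs fall out of the union of role-level needs, and comparing roles shows which items are most widely demanded.

```python
# Hypothetical sketch of role-based requirements gathering: needs are
# recorded per role, and the organizational picture falls out of
# aggregation. Roles and information items are invented for illustration.

from collections import Counter

role_requirements = {
    "EOC Manager":     {"incident status", "resource availability", "weather"},
    "Logistics Chief": {"resource availability", "supplier contacts"},
    "PIO":             {"incident status", "media inquiries"},
}

# Organizational needs are the union of all role-level needs.
org_needs = set().union(*role_requirements.values())

# Comparing roles shows how widely each item is demanded.
demand = Counter(item for needs in role_requirements.values() for item in needs)
```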

I hope this helps you expand your understanding of requirements gathering and rethink what is "needed" in light of the many uncertainties that crises bring. The goal here is to intentionally and strategically approach information management such that you are giving your responders the best possible chance of obtaining available information they need, when and how they need it. I don't address the timing aspects and delivery methods of information needs here, but they are indeed also very important (perhaps another blog post!). 

I look forward to your comments!

Exercises, Technology, Information Exchange Brandon Greenberg

UPCOMING: National Information Sharing Exercise

Information sharing exercises are rare and hard to put on, but are important to learning about how to improve information sharing in disasters. 

I am passing on this information about an upcoming information sharing exercise. Participation is open to many different organizations in the EM community and I encourage you to sign up and participate as soon as possible. The exercise will take place on May 11, 2016.

Below are the details that were provided to me:

"In May 2016, the National Information Sharing Consortium (NISC) will conduct CHECKPOINT 16, a virtual tabletop exercise that will allow participants to test, evaluate, and download for daily use various model web applications, tools, and data models for situational awareness and decision support. Dozens of organizations have signed up to participate in CHECKPOINT 16, with participants coming from state, local, and Federal government, non-profits, private sector companies, and academia. Participants can choose their level of participation, from being an observer, to participating in a limited way using NISC-provided tools, to being a full-play participant integrating CHECKPOINT 16 tools into their own native operating environment throughout the exercise.

The exercise will take place from 11 am to 4 pm ET on May 11, 2016.  For information on the exercise and to register, you can visit www.checkpoint16.org.  So far the NISC has conducted two training events for the exercise, and these trainings can be viewed on the checkpoint 16 webpage; the next event will take place on April 21." 


Building Better Disaster Response and Resilience with Information and Technology

For nearly five years I have been in higher education exploring how information and technology can improve disaster response and resilience. I have explored complex issues in great detail and I have learned a lot about the challenges and opportunities being faced by communities, organizations and people trying to leverage information and technology to better respond to disasters and build resilience.

But as I begin my transition back to the working world in the near future, I am forced to reflect on how I can apply this new knowledge to help address current problems while also preparing for an innovative future beyond what we can imagine today. I find myself writing about my philosophy on leveraging information and technology to improve disaster response and resilience. This philosophy will guide me in my career and allow me to apply and transform my knowledge into pragmatic and sustainable change that pushes disaster response and resilience to achieve better outcomes with information and technology.

My Philosophy

I subscribe to the notion that a specific approach helps focus change and improvement. The approach of having good people, processes and products is essential to guide small businesses through significant growth and change toward profitability. For disaster response and resilience, focusing on the following five initiatives will help communities, organizations and people achieve better outcomes with information and technology: 

  1. Understanding the value that information and technology provides to different people in different situations.
  2. Improving policies that better enable data and information sharing while preserving privacy and security.
  3. Developing better programs that incentivize sustainable disaster information and technology innovation, research and education.
  4. Designing scalable and consistent ways to process (e.g., collect, manage, analyze and share) data and information across a variety of information and technology systems.
  5. Creating new products (technical and non-technical) that deliver significant value to communities, organizations and people responding to and affected by disasters.

Beginning to address these complex initiatives starts with a paradigm shift in thinking that focuses on the value of information and how information systems, separate from technology systems, can improve disaster response and resilience. In addition, it requires concurrently aligning policies, programs, processes and products to overcome the unique nuances and complexities of disaster response and resilience.

Origins of My Philosophy

My philosophy on improving disaster response and resilience with information and technology is based on five years of intense study and reflection that culminated in new paradigms and theories. It represents my foundational beliefs, which are influenced by two primary issues:

1) Information systems are different from technology systems

An information system is a conceptual understanding of who needs what information and when, and how it needs to be delivered to them. It helps describe the larger organizational systems being supported and helps one understand the unique nuances and complexities of disaster response and resilience. An information system is also technology agnostic, as it is about understanding why, how, when, and for whom information is needed. Unfortunately, disaster information systems have received little attention over the years in both research and practice.
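One way to make this definition concrete is to record each requirement as a small, technology-agnostic structure: who needs what, when, and how it must be delivered, with no mention of any particular tool. The fields and example values below are illustrative assumptions, not an established schema.

```python
# Hypothetical sketch: an information requirement is technology-agnostic,
# capturing who needs what, when, and how it must be delivered. Field
# values are invented examples, not an established schema.

from dataclasses import dataclass

@dataclass(frozen=True)
class InformationRequirement:
    who: str       # the role that consumes the information
    what: str      # the information item itself
    when: str      # the phase or trigger at which it is needed
    delivery: str  # how it must reach the consumer (not which tool)

req = InformationRequirement(
    who="Shelter Manager",
    what="current shelter occupancy",
    when="every operational period",
    delivery="summary pushed before shift briefing",
)
```

Note that nothing in the structure names a technology; any tool that satisfies the `delivery` description would serve the same information system.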

A technology system is a specific tool that helps manage information as it moves from its raw form (or original location) to its relevant and actionable form for the consumer. The value of technology systems is that they help with time- and effort-intensive processes such as collecting, managing, analyzing, and sharing data and information, and that they perform functions humans can’t (e.g., analyzing big data).

However, if an information system is not well defined or understood, the supporting technology systems will only provide marginal benefits. This is, in part, why we have seen limited adoption and diffusion of new and innovative technologies despite there being a plethora of ideas and innovations. New and innovative technology systems need to reflect the real-world complexities of disaster response and resilience information systems; otherwise their adoption and diffusion will be slow with marginal benefits. Someone needs to be looking out for how technology systems integrate with information systems.  

2) Disaster information and technology policies, programs, and processes are misaligned

Disaster response and resilience is a complex industry and profession that has not done a thorough job of looking strategically and comprehensively at the impediments to effective information and technology systems. This has resulted in misaligned policies, programs, processes and products that stall innovation and hamper sector-wide progress and achievement. For example, attempts to develop and track meaningful response and resilience metrics are hampered by the inability to get reliable data and information about those metrics quickly and easily. The impediments, though, are not due to a failure of ideas or technology. Rather, the impediments are due to a complex working environment/profession that:

  1. Lacks understanding about the discrete value of information for different situations as well as different communities, organizations, and people.
  2. Has policies that primarily focus on how to protect and secure rather than share data and information.
  3. Lacks grants and programs that specifically and adequately focus on information system projects, research, and curricula.
  4. Develops custom and ad hoc processes to collect, manage, analyze and share data and information that result in missed opportunities for leveraging economies of scale and in high sunk costs that disincentivize change.
  5. Seeks out technological solutions that conform more to existing policies, programs, processes and products rather than fundamental need.

The Importance of Sharing My Philosophy

It is important to share my philosophy because it helps inform employers, clients, partners, readers, etc. of my approach to leveraging information and technology. This approach, combined with my expertise and strengths, is why I am attracted to positions that help challenge the status quo and lead to innovation and systemic change. These include disaster information and technology positions related to:

  • Strategy and policy
  • Program/project management
  • Public-private partnerships
  • Product management
  • Education and training
  • Applied research and evaluation

Technology, Information Exchange, Learning Brandon Greenberg

ISCRAM is Conducting Survey for Masters Degree on EM Information Systems

ISCRAM, the international academic-practitioner group focused on Information Systems for Crisis Response and Management, is conducting a survey that may be of interest to many people. The survey is looking for input on a Master's level degree in EM with a concentration in information systems.

I like ISCRAM's approach because it is not just about a particular type of technology such as GIS. Information systems for EM is sorely underrepresented in higher education and something I believe should be included in every degree program. This topic is also near and dear to my heart, as I have not only written about information and technology in EM, but it is also the subject of my research and future work.

You need not be an expert in information systems, information, or technology to respond to this survey. In fact, a non-technical EM expert may provide some great feedback!

Here are the details of the survey provided by ISCRAM:

The Education Committee of ISCRAM (Information Systems for Crisis Response and Management), under the leadership of Dr. Murray Turoff, is seeking to establish guidelines for a Master's level degree in Emergency Management with a concentration in Information Systems (IS) for Emergency Management (EM). 

This survey is designed to solicit the opinions of EM professionals, practitioners and academics, as to what such a curriculum needs to have. Even if you are not an information systems practitioner or researcher, your opinion is valued. The results of this survey may be used for other scientific research by the ISCRAM Education Committee as well. 

The survey comprises four sections. The first section addresses general emergency management courses for the program; the second section addresses information-specific courses for the program; and the third section addresses which, if any, information systems for EM focused courses should be included in all Master's Degree programs in EM, regardless of the focus of the program. We recognize that the specific content of certain courses might be influenced by the country in which they are taught, such as which disaster types, risks and threats, and response organizations are emphasized. The final section consists of a few general non-identifying demographic questions.

Participation in this survey is voluntary and anonymous. Identifying information will not be collected and individual responses will be kept confidential. If you choose to provide your email so that we can send you the results of the survey and/or invite you to participate in future work on this project, it will be kept separate from the data used for analysis.

There are no known risks to participating in this survey. You must be at least 18 years of age. If you have any questions or concerns, you may contact Linda Plotnick at lplotnick@jsu.edu.

Technology, Information Exchange | Brandon Greenberg

Disaster Technology is Built All Wrong

Technology is a great asset for organizations. It facilitates communications and helps simplify complex tasks. This is great when you have complete or majority control of your operating environment, which is common in business and day-to-day operations.  

The problem in disaster response, though, is that unique and temporary organizational structures (e.g., ICS, JFO, ESF, etc.) form during a disaster that differ significantly from day-to-day operational structures. And roles within these temporary structures are filled by various people at different times, some professional and some volunteer.

For example, a Public Health Analyst at the Public Health Department may move to ESF-8 Lead in the County EOC for Shift A, which has a different operational structure from the Analyst's day-to-day job. And another Analyst from the Hospital Association will likely support ESF-8 during Shift B. Now the analyst is part of two different organizational structures (employment and response) with separate technologies for communicating and fulfilling functional responsibilities.  

But many technologies on the market today are developed for and sold directly to single organizations for their given missions and responsibilities. Little attention is paid to when the Public Health Department needs to collaborate with and share data with Law Enforcement or vice versa. Significant time and effort ends up being spent on reconciling information inconsistencies between systems as well as ensuring one has the most up-to-date information...by hand.   

Or, these technical systems end up being back-hacked for a fee to the vendors or consultants. However, "back-hack" connections are mere patches to larger information sharing problems. They may solve your immediate information sharing problem, but not the systemic problems. This is critical in disasters where many different organizations need to work together as one or in coordination with each other.

My main argument is that technology products in the disaster response industry are geared toward a single enterprise deployment. This is not representative of the way disaster responses are managed or coordinated. The next generation of technology needs to recognize that it needs to serve both organizational AND inter-organizational information needs with relative ease and reliability.

In addition, in looking to the immediate future, technology needs to incorporate citizen participation in disaster response in practical and process-reducing ways. The public are key assets to response that are underutilized in part because technologies don't address the additional process burdens that naturally occur with managing and coordinating volunteers and using information from the public. I see way too many analytic and visualization tools that give little thought to how the information can be collected and leveraged in a compressed time frame in a way that adds value to the response.  

Technology of the future needs to give more thought to how it captures organizational affiliations while still enabling inter-organizational and citizen collaboration in less process-intensive ways (e.g., not having to administrate five different systems with different sets of users).

What do you think? What are you gripes with buying and administrating new technology? 

Technology, Information Exchange | Brandon Greenberg

How Disaster "Mesh" Networks Provide Critical Value in Disasters [A Primer]

A couple weeks ago, I published a post extolling the virtues of a nonprofit and open source technology called LDLN. I wanted to highlight the importance of such an endeavor, which is more than most people realize.  

After publishing the post, a colleague and long-time emergency manager I greatly respect replied to me, "Whereas I love the fact that you bring new technology to the forefront of disaster management, I often find myself not really understanding what exactly is being discussed. The average non techy emergency manager like myself, who may want to further explore options like LDLN, needs to have an example of its use in the hospital or other environment that is concrete and that can put the technology in prospective."  

In reflecting on this, I could have done a better job explaining the problem and how mesh networks such as LDLN play a critical role. This is a complicated but important subject that I want to make sure people understand. So I decided to write another post explaining mesh networks and the value of LDLN.


Mesh networks have been around since the Department of Defense started playing around with the idea of exchanging data and information in remote and infrastructure-compromised locations. In recent years, mesh networks have been applied to disaster operations to enable the exchange of data and information regardless of Internet access.  

However, mesh networks are quite technical to set up and use. A nonprofit and open source technology called LDLN makes this a lot less technical so nearly anyone with some basic tech skills can set up and use a mesh network. Before I dive into how LDLN does this, I want to provide a primer on mesh networks, how they work, and the problems they solve.  

The Relationships Between Networks, Servers, Routers and the Internet

Let's talk networks, servers, routers and the Internet in a very over-simplified way. Servers are basically souped-up computers that manage the storage of and access to data and information. In some ways, your personal computer acts as a server, but when I say server I am talking about machines whose sole purpose is to store and manage access to their data and information. You know that share drive you have access to at work? It is hosted on a server. You know that application that you have access to on the web or only when you are at work? It is hosted on a server. Servers host and store applications with their associated data and information.

In order to access the applications as well as data and information, servers are connected to networks, both wired and wireless. Think of your home network where you can connect your computer, mobile phone, tablet, etc. (also called a "client" in tech terms) via an Ethernet cable or via WiFi. Corporate networks are principally the same, but a bit more complicated in practice. What you need to know is that networks connect you to servers. You rely on this access nearly all of the time, though you may not realize it. Connecting to your employer's wireless network creates an unspoken relationship between your personal computing device and the servers. Outlook is a classic example where the application and the data and information can live on your computer, but all that information is backed up and synced to servers operated by your organization or a third-party vendor.   

Now what happens when multiple networks exist or you have to keep a network up across a wide geographic area? It is not so simple for the application, with its data and information on your computer, to find the relevant server that it needs to sync with. Routers help direct this digital traffic. The professionals who typically manage this traffic for organizations are called "network engineers." You are an amateur network engineer when you set up your home wireless router, which helps you print to your printer wirelessly and connect to the Internet. Routers operate in the background to help manage the digital relationship between your computer and servers, printers, the Internet, etc. Routers are especially important when you have many computers and devices on a network that need to exchange data and information. Otherwise, the network would be overloaded and no one would be able to access the servers.  

The Internet is like a meta-network that gives you access to the outside world. Many web-based applications live on servers hosted by vendors (or third-party data centers), but are accessible via the Internet because they allow such access. When it comes to Internet access, though, you may have access to your servers via your network, but unless the network is connected to the Internet, you will not be able to access anything external such as web-based applications. For example, you can input patient records into your computer, but you won't be able to access TMZ.com to get the latest dish on the Kardashians. So you need to remember that network access and Internet access are related, but separate. You can have network access without Internet access, but not the other way around.  

If you don't understand what networks, servers, and routers are and how they work together, the following may be a little harder, but not impossible to follow. 

Options for Accessing and Syncing Data and Information

In disasters (and in most of the world), there are generally two ways to exchange data and information no matter what applications you use: 1) a private network, or 2) the Internet.  

Private Network. Before the Internet was a thing, this is where organizations focused their efforts. Organizations set up their own servers, networks and routers at their work locations to ensure employees had access to and could exchange data and information. All applications, along with their data and information, remained under the complete control of the company and separate from the Internet.  

In modern times, a private network plays an important role in data security and control by creating a digital wall around data and information (does "firewall" ring a bell?). As you can imagine, when such an ecosystem is set up with the goal of keeping information in, trying to share data and information across networks becomes extremely challenging. Virtualization and VPNs help mitigate these challenges, but are not perfect and can create some critical and complicated interdependencies. Disaster recovery managers (the IT-focused kind) help plan for and manage these interdependencies so they do not impact operations.    

The Internet. The Internet acts much like the networks mentioned above, but in a more public way. Servers are still there and routers help manage the digital traffic in the meta-network called the Internet. The exchange of data and information across the meta-network becomes significantly easier as there are fewer geographic restrictions. However, using the Internet to exchange data and information creates an extremely critical interdependency. For example, many applications that we have come to love and enjoy on our phones or through our web browser are dependent on Internet access and consume a lot of bandwidth. No Internet means no exchange. Period. 

The Problem. These are basically two terrible options for exchanging data and information in a disaster! You can either build applications that work on your private network or build them to work via the Internet. The former limits how far away, or across which networks, you can exchange information (such as hospital-to-hospital or hospital-to-EOC) and requires the application to live on a physical server in your network (e.g., bringing a server to the disaster location). The latter creates a critical interdependency on Internet access, which can be a luxury in a disaster. 

Mesh Networks in Disaster

Mesh networks allow for the sharing of data and information across wireless networks when no Internet is present. The "mesh" part comes from the way these networks are typically deployed. A typical deployment model is to "daisy-chain" networks together in such a way that each network shares data and information with the network next to it, which then shares it with the next one, and the next one, etc. (think overlapping WiFi signals that link up to each other). And sometimes, if another network has Internet access, you may be able to get Internet access in your network. But the quality of the wireless signal drops dramatically the further away you go. Setting up this type of environment is also very technical and difficult in practice!  
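To make the daisy-chain idea concrete, here is a minimal sketch (the node names and four-node topology are invented for illustration) of how a record hops from network to network until every node has a copy:

```python
# Illustrative sketch of daisy-chained mesh propagation.
# Node names and the one-hop topology are invented for this example.

from collections import deque

# Each node only "sees" its immediate neighbors (overlapping WiFi signals).
topology = {
    "field-tent": ["shelter"],
    "shelter": ["field-tent", "hospital"],
    "hospital": ["shelter", "eoc"],
    "eoc": ["hospital"],
}

def propagate(origin, topology):
    """Flood a record hop by hop; return each node's hop distance from origin."""
    hops = {origin: 0}
    queue = deque([origin])
    while queue:
        node = queue.popleft()
        for neighbor in topology[node]:
            if neighbor not in hops:  # don't re-send to nodes that already have it
                hops[neighbor] = hops[node] + 1
                queue.append(neighbor)
    return hops

# Every node eventually receives the record, even though "field-tent"
# and "eoc" were never directly connected.
reach = propagate("field-tent", topology)
```

In a real mesh, each hop rides over overlapping wireless links rather than a Python dictionary, but the relay logic is the same: two networks that never touch can still exchange data through the networks between them.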

LDLN and Mesh Networking

LDLN's software and hardware acts as a combined network, router and server. Instead of having to have data and information sync from your computer or mobile device to "the cloud" (servers accessed via the Internet) or internal servers (servers accessed only through your private network), LDLN becomes the best of both worlds. LDLN provides the technology that lets you physically move your phone or computer from one private network to another private network and seamlessly exchange data and information regardless of Internet access.  

For example, you have information on your computer that was created while you were on your hospital's network. Now you have moved to the municipal EOC, which is on a different network, but neither network has Internet access to sync up data and information. With LDLN, as you move to the other network, your computer or mobile device automatically uploads the data and information on your device to the hardware in that network.

That part is not exactly innovative.  What happens next is more innovative:

  • If other devices are on the new network, your data and information will automatically be synced to their devices and their data and information will automatically be synced to your devices (regardless of the Internet situation)
  • If the new network is connected to the Internet, your data and information will also automatically be synced to the "cloud" for people in other networks to access and their data and information will automatically be synced to your devices.
  • If your network is connected to other networks (called daisy-chaining), your data and information will also automatically be synced to those networks and all the devices in those connected networks.

In essence, LDLN has mastered the issues that arise when syncing occurs asynchronously and is distributed across different networks and servers. It will not produce errors when all devices become synchronized with their own as well as each other's data. This is huge. Many software solutions don't know what to do with conflicting or asynchronous data and information, which causes lots of problems. The software can't reconcile which is the latest information, or that it is the same information from different locations, such as two receiving hospitals tracking patients.  
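LDLN's actual reconciliation algorithm isn't described here, but one common approach to this kind of asynchronous merging can be sketched in a few lines. In this illustration (the record IDs, fields, and "last write wins" rule are my own invention, not LDLN's documented design), every record carries a stable ID and a timestamp so two devices can merge their stores without duplicates:

```python
# Hypothetical sketch of conflict-free record merging (NOT LDLN's actual
# algorithm): records are keyed by a stable ID and carry a timestamp,
# so the same patient seen at two hospitals collapses to one record.

def merge(local, remote):
    """Merge two device stores keyed by record ID; newest timestamp wins."""
    merged = dict(local)
    for uid, record in remote.items():
        if uid not in merged or record["ts"] > merged[uid]["ts"]:
            merged[uid] = record
    return merged

# Two hospitals tracked the same patient at different times.
hospital_a = {"patient-42": {"ts": 100, "status": "admitted"}}
hospital_b = {"patient-42": {"ts": 180, "status": "transferred"},
              "patient-77": {"ts": 120, "status": "admitted"}}

# After syncing, both sides hold one up-to-date copy of each record.
synced = merge(hospital_a, hospital_b)
```

Because the merge is deterministic, it doesn't matter which device syncs first or how many networks the data crossed to get there, which is exactly the property asynchronous disaster syncing needs.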

Gmail handles data conflicts well, but still relies on the Internet for syncing. For example, I might run through my email on my mobile phone while on a plane with no WiFi. I archive some emails, star others, etc. Then I forget and compose an email in my tablet and archive some of the same emails. When I get to the ground, Gmail reconciles what I did on my phone as well as my tablet and doesn't produce any errors. But what if I was in the air with no Internet access and wanted my Gmail on my phone to sync with my Gmail on my tablet? LDLN solves this problem in a disaster environment.   

LDLN's Value Proposition

The biggest value for LDLN is to be embedded in various applications, servers and routers. For example, during Hurricane Sandy, this technology could have been integrated with NYU's electronic health records system to share critical patient information stored on their servers with the other receiving hospitals. A person could have physically moved to one of the receiving hospital locations with his or her laptop, which had the latest data and information automatically downloaded and synced. That person could then have electronically shared health records with the receiving hospital. Simultaneously, that person could have kept track of who is at which hospital and had that information automatically shared back to NYU emergency management personnel.  

This is, of course, a hypothetical and oversimplified example. It merely demonstrates the power of LDLN. Issues such as technical integration, HIPAA and data security would still have to be navigated when setting up this technology. However, I think that can be worked out in the future.

Questions, comments, concerns?  I would love any feedback you have on this topic and article.  

Information Exchange, Data Science | Brandon Greenberg

Developing Accurate, Complete and Current Information is a BAD Idea

Every single decision EOC responders make depends on accurate, complete and current situational awareness.

This quote comes from a seasoned emergency manager in a recent Emergency Management Magazine article. Simply said, I don't agree with this key point. This kind of unilateral thinking leads us down a very dangerous path, as it builds up false expectations and breeds unrealistic thinking. To quote a colleague, "the mythical quest for perfect authoritative data can be paralyzing."  

"Accurate, complete, and current" information is a nice goal, but is entirely impractical and unrealistic in reality.  In a recent email listserv conversation regarding the Nepal earthquake, a number of very experienced information managers discussed the difficulty in simply keeping up with the flow of information during a disaster.  Perhaps this can be better achieved in the future, but in current practice it is near impossible to manage and achieve, even before a disaster. 

The more important aspect of this is to understand HOW accurate, complete and current the information is. For example, if you know information is two hours old, you can assess its value for your own decision making. When you receive information, you should know who it is from, how old it is, and what it addresses. This "meta-information" is critical for good SA/COP and is actually quite informative for a decision maker. Granted, the information is not ideal, but one can make a more educated decision based on his or her own assessment of the data or information.  
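A rough illustration of what carrying this meta-information could look like in software (the field names are my own invention, not any particular system's schema):

```python
# Illustrative only: attaching "meta-information" (source, topic, age)
# to a report so a decision maker can judge its value themselves.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Report:
    content: str            # the information itself
    source: str             # who it is from
    topic: str              # what it addresses
    reported_at: datetime   # when it was reported

    def age(self, now: datetime) -> timedelta:
        """How old the information is at the moment of decision."""
        return now - self.reported_at

now = datetime(2015, 5, 1, 14, 0)
report = Report(content="Shelter at capacity",
                source="Red Cross liaison",
                topic="sheltering",
                reported_at=datetime(2015, 5, 1, 12, 0))

# The report isn't "current," but knowing it is two hours old lets the
# decision maker weigh it accordingly instead of discarding it.
hours_old = report.age(now)
```

The point is not the data structure itself; it is that the age, source, and topic travel with the content, so imperfect information can still be used intelligently.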

Plus, you can spend an inordinate amount of time trying to gather accurate, complete and current information. This can produce an effort-to-outcome imbalance where you spend more time gathering and organizing less valuable information. Time can be better spent working with the information you have, regardless of its condition, which can be more valuable for decision making and action taking during a disaster. 

Also, while I agree with the author's points about thinking through the process for gathering and disseminating SA, the existing processes for developing information requirements are equivalent to throwing a dart at a dartboard blindfolded. While you were pointed in the right direction to begin with, you still have no idea where you are aiming. The result is a set of information put into the SA/COP, but no understanding of its relative value across responders and different disasters.

In fact, the decision approach to identifying information requirements gives a false sense of security, as you only define the information requirements you can identify ahead of time. There are usually a host of unanticipated critical decisions that need to be made during a disaster. Developing information requirements is arguably more important for the unanticipated decisions.

Technology, Information Exchange | Brandon Greenberg

Data and Info Sharing with No Power or Internet? - Meet LDLN

There is an organization that I have wanted to introduce people to for a while.  It is a game changer, provided it can be applied more and baked into operations and various technologies. 

In disaster operations, the Internet is the predominant way to share data and information across people, organizations, and geographies--when it is available. It is a critical failure point for inter-organizational and region-wide operations that need to share across wireless networks. When access to the Internet is compromised, cascading effects occur, such as having to reconcile what the latest data and information is. In fact, data and information sharing is often reduced to files on USB sticks that are physically traded. Version control becomes essential, but hard to maintain.

LDLN

LDLN (pronounced "landline") is a robust open source mesh network that goes beyond network reliability issues for connecting to the Internet.  LDLN combines the best of sync technology to enable data and information sharing with or without Internet across networks, organizations and geographic areas.  Most importantly, this is all done without duplicating content or complicating reconciliation efforts.  Simply said, it is designed to just work without the need for constant management and oversight.  Pretty cool, huh?  

Normally, I would limit conversation of this type of technology to the techies among us who work to maintain network operations. But I believe everyone should be aware of its existence and capabilities to inform strategic and operational thinking before and during disasters. There are two key components to LDLN's potential success: its software and its hardware. These two aspects are discussed below in the Q&A with LDLN's founders.  

This technology is not a far cry from Ushahidi's BRCK, which is a small all-in-one portable router (wired and wireless), server, and multi-modal Internet connection device with a long-lasting battery back-up.  However, BRCK's focus is on single-point data and information exchange while LDLN allows the carrying of data and information on any device to another access point, which then automatically shares the most current data and information with all devices connected to the new access point.    

BRCK and LDLN are complementary technologies that enhance each other's value proposition. LDLN could potentially be implemented on BRCK as well as on other hardware such as existing servers and routers. The hardware, though, does not need to be limited to the Raspberry Pi devices mentioned below. While these possibilities are not yet in LDLN's current product, they could be a next step for them.  

I am glad LDLN was able to respond to my Q&A request.  Check out the entire Q&A below, which includes pictures of LDLN's software.  

LDLN Question & Answer

What is your name and role in the product/solution/company? 

  • Matt Grasser - CEO
  • Emily Duong - CCO
  • Sam Krueckeberg - Engineering Lead
  • Arthur Chen - Chief Legal Counsel
  • Joyce De Vera - Head of Marketing
  • Nick Ihm - CTO
  • Christopher Guess - COO
  • Kristine Austria Sanchez - Designer

What does LDLN do?

LDLN is a communications system for disaster relief organizations when there is no power, internet or wireless connectivity. LDLN allows organizations currently reliant on notebooks and pencils to have a fully synced, fully backed up, reliable communications system over any size of operation theater.

What/who inspired you to create LDLN?

We were inspired by our team members’ experiences across the world. Chris first conceived of the idea while working with Occupy Sandy during the Hurricane Sandy response, and suggested it as an idea at a Philippines resilience hackathon. From there the team pooled ideas and skills to create a system responders would want to, and easily be able to, use in the field with minimal training.

What challenge does LDLN help users overcome?

When a disaster relief organization deploys to the field in 2015, they are for the most part still equipped with the same tools that were available in 1991. Every major tool and recovery method used today relies on two things that are, to put it mildly, in short supply: electricity and internet connectivity. In the absence of connectivity, relief workers fall back on the stalwarts of pads of paper and pencils.

In short, we provide the software and hardware necessary to form a modern backbone for communications and document synchronization. This resilient, decentralized, "designed-for-disasters" network does not depend on cellular technology, Internet connectivity, satellite up-links, or even an external power grid where others would.

How does it do this?

The LDLN Base Station, a tiny computer about the size of a deck of cards, combines the networking features of a traditional decentralized mesh network node with the reliability and storage capabilities of a traditional web server.  Standing in stark contrast to expensive, bulky, custom-built satellite trucks and enterprise solutions, the Base Station is inexpensive, extremely portable, and consists of open hardware.

Building on this network of Base Stations, mobile apps powered by LDLN's SDK afford the familiar interfaces of a modern mobile experience, using peer-to-peer protocols to skirt the need for a centralized network. Apps and Base Stations work in harmony to push and pull pieces of encrypted data across the network, ultimately displaying information in a natural and easy-to-read format.

What is next for LDLN?

LDLN is very proud to have been accepted into The GovLab Academy Coaching Program at New York University for the spring of 2015. We are looking forward to engaging with thought leaders and experts in the fields of disaster relief and government action.

In parallel, LDLN is currently developing our second generation of base station and mobile technology. This will allow in-the-field customization of the data collected and reported along with massive speed improvements on the base station side.

How can people get in touch, learn more or test LDLN?

Anyone interested can get in touch with our team through our website or social media, or via email.

Since our software is open source you can also take a look at the code at our Github site: https://github.com/LDLN/

Technology, Information Exchange, Data Science | Brandon Greenberg

Curate Dashboards NOT Documents in Disasters

The goal of any information or intelligence unit in a disaster is to produce information useful for decision makers. Information managers, though, curate and analyze information into static and overly standardized reports that are hard to interact with and update with new and different data and information.  

Instead, information managers should focus on publishing information into dynamic dashboards that can be further manipulated by disaster decision makers at their convenience. This is because decision makers may want to quickly probe information directly if they find something potentially alarming. If it requires more analysis, sure, it can be sent back to the situation or intelligence unit. But a one-minute probe may have just satisfied all of the disaster decision maker's concerns, especially when time is a luxury.  

On the plus side, your customer base likely won't change as much as you think. In fact, most of what should change is your mindset on how to convey data and information. For example, instead of creating five reports for five groups of people, you are now working to curate five dashboards for the same five groups. The tools may differ, but the process of creating useful information outputs will be similar. Information managers may still need to collect, organize and analyze data and information, but now there are new and better ways to present it.   

[Image: Tableau dashboard]

There are plenty of software solutions that support dynamic dashboards, both online and offline.  Tableau, Splunk, and Palantir are some of the leading providers.  The danger, though, comes when you develop a dashboard before a disaster and have no plans to optimize and update it during a disaster.  This optimizing and updating must be incorporated into your response operations in order to provide more useful dashboards based on real-time feedback.

This real-time curation and updating mindset is a shift from the report publication cycles that are often aligned with operational periods.   It enables information managers to provide the most up-to-date information to disaster decision makers.  This is especially needed when operational periods differ across the many organizations involved in a response.  

In many cases as well, you are able to develop automated processes that streamline the collection, organization and analysis of data and information. This allows information managers to focus on presenting available information that is most useful to disaster decision makers rather than spending significant amounts of time processing data and information. 
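As an illustration of what such automation might look like (the report data and field names below are invented for the example), a few lines of code can turn raw field reports into a summary a dashboard tile can render directly:

```python
# Hypothetical sketch: automating the collect-and-organize step so raw
# field reports become a dashboard-ready summary. Data is invented.

from collections import Counter

raw_reports = [
    {"facility": "Hospital A", "need": "generators"},
    {"facility": "Hospital A", "need": "water"},
    {"facility": "Shelter 3", "need": "water"},
    {"facility": "Shelter 3", "need": "water"},
]

def summarize(reports):
    """Aggregate raw reports into counts a dashboard widget can render."""
    return {
        "needs": Counter(r["need"] for r in reports),
        "facilities_reporting": len({r["facility"] for r in reports}),
    }

# A dashboard tile can now show the top unmet needs without anyone
# hand-tallying reports at the end of an operational period.
feed = summarize(raw_reports)
```

The aggregation itself is trivial; the payoff is that it reruns automatically as new reports arrive, so the dashboard stays current between publication cycles.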

Anyone who has dealt with data understands that data and information processing (e.g., obtaining, scrubbing, exploring, modeling and interpreting) is very time-consuming, but necessary.  Any chance to automate processing allows you to focus more on presenting available information in more useful ways to the people who need it.    
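To make this concrete, here is a minimal sketch of what automating the obtaining and scrubbing steps might look like, so the information manager can spend their time on presentation instead.  The shelter-status feed, field names, and scrubbing rule below are all hypothetical, chosen only for illustration:

```python
import csv
import io

# Hypothetical feed: shelter status records published by partner agencies.
RAW_FEED = """shelter,county,capacity,occupancy
Lincoln HS,Adams,300,120
Riverside Gym,Adams,,45
Community Ctr,Baker,150,150
"""

def scrub(rows):
    """Drop records missing a capacity value -- they cannot support an
    occupancy-rate calculation and would mislead a dashboard."""
    return [r for r in rows if r["capacity"].strip()]

def summarize(rows):
    """Aggregate capacity and occupancy by county for dashboard display."""
    totals = {}
    for r in rows:
        county = totals.setdefault(r["county"], {"capacity": 0, "occupancy": 0})
        county["capacity"] += int(r["capacity"])
        county["occupancy"] += int(r["occupancy"])
    return totals

rows = list(csv.DictReader(io.StringIO(RAW_FEED)))
clean = scrub(rows)
summary = summarize(clean)
```

Once a pipeline like this runs on a schedule against live feeds, the dashboard always reflects the latest scrubbed data, and the information manager's job shifts from processing records to deciding what the decision maker should see.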

If dashboards are not yet an option or on your radar (for whatever reason), consider getting into this mindset in your next exercise or response.  How would you become more "dynamic"?  How would you present information in more useful ways?  What tools would you use?

Read More
Information Exchange Brandon Greenberg

Information Management & Sharing...the Right Way

If you have ever responded to a disaster, you have likely made an infinite number of decisions and taken an infinite number of actions.  Information has informed these decisions and actions in some way.  However, had the information been delivered in the right way at the right time, you probably would have been more efficient and effective with your time.  Having the information in the right way allows you to spend more time mastering your objectives rather than mastering the art of data and information management.  

Situation Reports (SitReps) are a great example of information delivered in a more usable way.  However, SitReps were created in an era when paper documents reigned supreme and when that was the best way to convey information to a large group of people.  As technology becomes better and more data is available, though, the mass approach to information sharing is no longer sufficient to support the infinite and diverse number of decisions being made and actions being taken.  

There are so many stakeholders involved in disaster response that it is natural to think that their information needs vary greatly.  While a SitRep may convey useful information to a decent-sized audience, stakeholders' information needs are much greater than a summary report of activities and intentions.  They want to use your information to strategize, coordinate, and identify gaps so they can help too.  This requires detailed information that is not always easy to come by unless there is a pre-established process already in place.  (Sometimes this can get unwieldy and expensive to manage.) 

This lack of real-time access to detailed information also severely limits emergent groups who have the capacity and capabilities to support disaster response efforts.  What if Occupy Sandy had had more information from NYC OEM?  Could they have focused their efforts better?  Could Team Rubicon's skills be better utilized if they knew the local emergency management agency had designated a particular neighborhood as a priority?

Some people might say this information is available.  But I would contend that it is either buried in a person's head, an email, or a PDF report.  This is NOT effective information sharing because it places additional burden on others to find, sort and track all this incoming information.  Imagine the last time you received hundreds, if not thousands, of emails during a disaster response.  Was it overwhelming to just keep up with your inbox?  

This is where technology can help.  First, technology can help you publish data and information in more usable formats for others.  If everyone does this, there is a net benefit to everyone involved in a disaster response.  Second, technology can help you find and manage relevant data and information so you can spend more time on your objectives rather than mastering the art of data and information management.  

Imagine you log into your disaster management application in real-time and select a few pre-populated check boxes of internal and external information that may be relevant to you given the situation you currently face.  Then you shift over to your dashboard to find this information is now neatly displayed in an easy-to-use interactive format.  You ultimately decide to deploy resources to that area, and with the click of one more button, others who may be affected by this decision are immediately notified of your actions within their own dashboards.  
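The notification step in that scenario is essentially a publish/subscribe pattern.  Here is a minimal in-process sketch; in practice the subscribers would be other agencies' systems connected over an open API, and the event fields shown are hypothetical, not drawn from any existing standard:

```python
import json

# Registry mapping a topic name to the callbacks subscribed to it.
subscribers = {}

def subscribe(topic, callback):
    """Register interest in a topic -- the 'pre-populated check boxes'."""
    subscribers.setdefault(topic, []).append(callback)

def publish(topic, event):
    """Serialize the event to JSON (so any system can consume it)
    and deliver it to every subscriber of the topic."""
    message = json.dumps(event)
    for callback in subscribers.get(topic, []):
        callback(json.loads(message))

# A partner agency's dashboard subscribes to deployment events.
received = []
subscribe("resource-deployments", received.append)

# The decision maker's "one more button" publishes the action.
publish("resource-deployments", {
    "action": "deploy",
    "resource": "water-truck-7",
    "area": "Zone 4",
})
```

The JSON serialization step is the important design choice: it is what lets systems from different vendors consume each other's events, which is exactly where shared data schemas and open APIs come in.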

From a technical standpoint, this is entirely possible.  The real challenge: who will take the lead to update information policies to allow more practical information sharing?  Who will demand that their software vendors have good data management schemes (based on existing standards) with open APIs?  Who will build the marketplace for the easy integration of systems? 

Read More
Information Exchange Brandon Greenberg

Disaster Information is Like Duct Tape

You may be wondering what these two things have in common.  Believe it or not, they have a lot more in common than you think.

There is a lot of discussion these days regarding how information can help in disasters.  But it is hard to pinpoint exactly why or how it can help.  This is a lot like duct tape.

You carry duct tape around, maybe in your car or in your basement.  It is there because one day you might need it.  It is such a versatile product that you must have it available just in case something happens.  

Information is similar in that you want to keep as much of it around as possible, just in case you need it.  You may not know why or how you will use it, but you know you will one day.  You want to be prepared when that day comes.

But what if you could have a little better idea of why or how that information (or duct tape) is needed?  This would help so much with optimizing what you collect in the first place so you are not spinning your wheels collecting and managing useless information.  You could also have more relevant information available to you when the time comes rather than having to dig through a digital information haystack to find a needle.  

To draw an analogy, what if you knew that one of the reasons you would need duct tape is to cover electrical cords for an impromptu emergency operations center?  Could a light-duty grey duct tape do the job?  Sure, but having a heavy-duty duct tape that is red or yellow would be more helpful and practical.  The added color and reliability of heavy-duty tape helps improve your safety precautions.  Now you know you should have at least a few rolls of heavy-duty colored tape.  

What is the lesson here?  Try to figure out in as much detail as possible the most useful information you might need for a disaster, and focus on developing processes and systems that help collect, manage, and share this particular information.  Start small and grow from there.  Don't try to capture every possible piece of information; it is a daunting and unrealistic task.  It is better to have 20% of the right information than 100% of the wrong information.  

Read More