Is Interoperability Really Needed?

Most people will respond to this question with an emphatic "YES!"  And I would largely concur.  However, I would argue that a simple "yes" is quite an incomplete response.

There are so many facets to interoperability.  Two of the main facets include: 1) how our crisis management structures work together, and 2) how our technical systems for data and information sharing communicate.

But before we get to making these two facets compatible, we must balance the need for information with the availability of that information.  We need to define the types of information we truly need to respond in order to better prioritize progress on interoperability.  We can't just say we are going to make everything "interoperable."  In reality, interoperability is a large subject with many micro-issues.

Often, we first ask the question: "What data/information is available?"  However, to obtain better data and information, we need to start with the question: "What data/information do we need?"  This helps guide design and development toward the most impactful milestones.

As we approach big data, this will become even more important as we have to process and make sense of the massive amounts of information that will become available to us.  Ultimately, we may receive the great majority of the data we asked for; but if half of it is not useful, we have wasted a lot of critical time and effort sifting through it all.

WiFi as Disaster Aid...via a Balloon!

Yup, you heard that correctly...WiFi via a balloon. This is a very exciting endeavor and one that is definitely within the realm of possibility. And not only is this possible, but it is also quite complementary to my colleague's idea of smartphones as disaster aid.

Over the past several years, the advances we have made in software and hardware have been amazing and quite useful, including helping us to better share and analyze information in real time, across domains and organizations. However, these advances have also significantly increased our dependence on internet connectivity.

My interest in WiFi balloons was first piqued when Google publicized Project Loon. Project Loon aims to bring internet access to the world, especially to developing countries. The project is very much in its pilot phase, but it has a couple of successful test flights under its belt.

http://www.youtube.com/watch?v=m96tYpEk1Ao

As a quick aside, this project also falls in line with the belief that internet access should be a human right, not a privilege. This debate will likely continue for a while, but for now, there is a huge need in disaster operations for this kind of technology.

In fact, The University of Michigan is working on such a project.  Aaron Ridley and a team of researchers are looking at how high-altitude balloons can be launched within an hour of a disaster and carry WiFi routers to impact zones. According to the University of Michigan:

The balloons would become platforms from which Internet-to-ground signals could be sustained and controlled throughout emergencies. This kind of rapid response and reliable, real-time communications with first responders could mean the difference between life and death for otherwise helpless victims.

http://www.youtube.com/watch?v=RS_-gesFjT4

This is quite promising as it has the potential to build capacity and redundancy for one of our critical dependencies...Internet access.

How do you see this being deployed in a disaster?

UPDATED: Looking for Disaster & Emergency Management Journals? Look no further!

January 24, 2014: Since posting, I have received comments on how valuable the list is, as well as questions about how one can contribute other journals. I have uploaded the list to Google Docs and created a form for contributing additional journals.

Back in May 2013, Professor Ali Asgary from York University in Canada wrote a great article in the IAEM Bulletin. He discussed the state of academic and research journals in disaster and emergency management. His findings were based on research conducted with his master's student that produced a list of over 125 core journals. They categorized the journals by:

  1. Business Continuity
  2. Disaster and Emergency Management
  3. Hazard
  4. Risk

Here are some of the key findings:

From Hazard Science to Disaster and Emergency Management. Core DEM journals can be classified into these categories: risk and risk management (37.6%); disaster and emergency management (28.8%); hazard science and mitigation (28%); and business continuity (5.6%).

Status and Format. Of the 125 core EM journals, 106 journals are currently active. About 21 journals are published online only, and about 104 journals appear in print editions only or are published in both print and online formats. The first online only DEM journals appeared in 1997, with an increasing number emerging in recent years.

Publishers and Country of Publication. About 80 publishers from 189 countries are involved in the publication of the core DEM journals. However, as with any other discipline, major publishers, such as Routledge, Inderscience Publishers, IGI Global, Emerald Group Publishing Ltd., Elsevier Ltd., and Wiley-Blackwell Publishing Ltd., publish the majority of EM journals. Most such journals are published in the United Kingdom (45) and the United States.

Growth and Change. The first DEM-related journals started in 1957, with the publication of a risk-related journal called the Journal of Risk and Insurance. This trend continued with the publication of a hazard-related journal in 1964.

Overall, he found significant growth in journals starting in the 1990s. In that time, journal focus also shifted from being mainly hazard-specific to more disaster and emergency management related.

The Crisis Leader [NEW BOOK]

I am very pleased to announce a great book by a great colleague.  As of today, The Crisis Leader: The Art of Leadership in Times of Crisis is available for purchase from Amazon.

It is a book that explores the very tough, but very real, problem of leadership in crisis.  This leadership skill is distinct from other types of leadership, and Gisli tackles the subject with a direct approach and simple writing.  He interweaves stories from his vast personal experience to highlight the complexities of the subject.

The Crisis Leader: The Art of Leadership in Times of Crisis

Gisli draws on his vast experience as a crisis leader having worked in numerous crisis situations.  Here is his full bio:

Gisli Olafsson has been the Emergency Response Director of NetHope since November 2010. In his current role he is responsible for emergency preparedness and emergency response activities related to information and communication technology (ICT) within the forty-one NetHope member organizations.

Prior to that role he worked as a Disaster Management Technical Advisor for Microsoft Corporation from September 2007 to October 2010. In that capacity, Gisli was responsible for providing guidance to international organizations, such as the UN, IFRC, World Bank, the Commonwealth, USAID, and NATO, on the effective use of ICT to enhance response to natural disasters.

Gisli has over 15 years of experience in the field of disaster management and is an active member of the United Nations Disaster Assessment and Coordination (UNDAC) team, a team of experienced disaster managers who are on standby to deploy anywhere in the world on six hours' notice to coordinate the first response of the international community to disasters on behalf of the UN Office for the Coordination of Humanitarian Affairs (OCHA).

Gisli was also a team leader for Iceland's international Urban Search and Rescue team (ICE-SAR), which is classified as a medium USAR team by the UN. Gisli was the team leader for ICE-SAR in the Haiti Earthquake in 2010. He has years of experience as an incident commander and served for years as part of Iceland's National Search and Rescue Command. Gisli was a lead member of the King County Emergency Operations Center's support team while living in Seattle and took part in coordinating over 500 disaster management and SAR incidents.

In recent years Gisli has participated in disaster field missions in connection with floods in Ghana (2007), Cyclone Nargis in Myanmar (2008), Hurricane Ike in Texas (2008), the Sichuan Earthquake (2008), the Pandemic Outbreak (2009), the West Sumatra Earthquake (2009), the Haiti Earthquake (2010), the Japan Earthquake/Tsunami (2011), the Horn of Africa famine (2011), and Typhoons Bopha (2012) and Haiyan (2013) in the Philippines.

2013-2014 Business Continuity Management Benchmarking Study

I just received word that the 2013-2014 Continuity Insights and KPMG LLP Global Business Continuity Management (BCM) Benchmarking Study has been released for participation.  This is the premier benchmarking study in the industry.

You can participate now by following this link.  The study will close February 21, 2014.

According to Continuity Insights:

All study participants will receive upon request a complimentary copy of the study results: valuable information to enhance your program and benchmark your organization against various industry metrics. To view a copy of the 2011-2012 BCM Study, please visit: http://bit.ly/1klXrVu

The study digs deep into today's most critical business continuity challenges, such as BCM performance measurements; adoption and implementation of global regulations and standards; budget status/development/allocation; supply chain issues; and a great deal more!

NEW FEMA Social Media Jobs

FEMA has just taken yet another giant leap forward in progressing its social media presence.  In the coming month, FEMA will be hiring 9 new public affairs specialists to focus solely on social media.  (Thank you Kim Stephens for the lead!) These are brand new positions and will have a huge role in shaping the future of social media at FEMA.  In fact, this is a trait that is desired.  Jason Lindesmith, Social Media & Mobile Lead at FEMA, states:

We’re looking for people willing to push the envelope, be creative, and set the gold standard for digital engagement before/during/after disasters.

They will work on disaster-related projects and priorities, so the roles will be fast-paced and focused on highly visible initiatives.

The positions have two-year terms, with the possibility of renewal after the two years, depending on available funding and need.

The positions below will expire on USAJobs on Tuesday, November 14.

  1. Public Affairs Specialist Social Content (CORE) GS-1035-9/11 (Link)
  2. Public Affairs Specialist Digital Engagement Mobile Platform (CORE) GS-1035-9/11 (Link)
  3. Digital Engagement Training Specialist (CORE) GS-1089-11/12 (Link)
  4. Public Affairs Specialist Digital Engagement-Multilingual (CORE) GS-7-9 (Link)
  5. Writer (CORE) GS-1089-9/11 (Link)

Other positions coming soon:

IT Specialist (CORE) Digital Engagement Programmer GS-2210-11

The incumbent is charged with enhancing functionality of the agency’s existing and new digital engagement channels to better reach those impacted by a disaster or emergency.

Public Affairs Specialist Digital Engagement Web Designer (CORE) GS-1035-9/11

The incumbent is charged with creating visually appealing digital products and websites as an important part of telling FEMA’s story, communicating critical safety and recovery information, and quickly impacting people who may be in the midst of an emergency.

Public Affairs Specialist Digital Engagement Web Content (CORE) GS-1035-9/11

The incumbent is charged with developing, implementing, and evaluating digital communication plans and tools that contribute to improving FEMA communications operations and objectives through the effective use of web tools and platforms.

Public Affairs Specialist Digital Engagement Social Listening (CORE) GS-1035-11/12

The incumbent is charged with effectively listening through social media channels to provide improved situational awareness during disasters, enable better messaging from ESF 15 during crises, and increase information sharing among FEMA and its disaster response partners.

How You Can Help 'Crowdsource' Typhoon Yolanda Response (UPDATED)

Update. This blog post has been updated since its original posting to provide additional background on MicroMappers' two primary applications (TweetClicker and ImageClicker) and additional explanation.

Update 2. As of 9am Eastern on 11/13, no more Tweets and images are being added to the applications. However, you can still view results on the crisis map.

Typhoon Yolanda hit the Philippines this past Friday as one of the largest and most powerful storms ever recorded on Earth. Many initiatives are underway to support response efforts. However, if you would like to support response efforts with your time and energy rather than donating, MicroMappers, at the request of the United Nations Office for the Coordination of Humanitarian Affairs (UN OCHA), has stood up two applications to help quickly identify ("tag") information from tweets and images relevant to disaster responders.

TweetClicker and ImageClicker are both simple-to-use "microtasking" applications for verifying Tweets and images gathered from social media. The goal is to leverage the "crowd" to help sift through the massive amounts of data collected. Neither application requires technical expertise, and each can be used on your computer or mobile device. The application runs you through a simple tutorial before you begin. Each message takes about three seconds to review and will also be reviewed by two other people, so your selections are validated by others as well.
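The redundancy described above, where each item is reviewed by several independent volunteers, is a standard crowdsourcing pattern. Here is a minimal sketch of majority-vote validation using hypothetical data; this is an illustration of the general technique, not MicroMappers' actual implementation:

```python
from collections import Counter

def consensus_tag(votes, min_votes=3):
    """Return the majority tag for an item once enough volunteers
    have reviewed it, or None if no clear majority exists yet."""
    if len(votes) < min_votes:
        return None  # not enough independent reviews yet
    tag, count = Counter(votes).most_common(1)[0]
    # Require a strict majority so one mistaken click cannot decide the outcome
    return tag if count > len(votes) / 2 else None

# Hypothetical tweet tagged by three independent volunteers
print(consensus_tag(["relevant", "relevant", "not_relevant"]))  # relevant
print(consensus_tag(["relevant", "not_relevant"]))              # None (needs a 3rd review)
```

Requiring agreement among reviewers is what lets non-experts produce reliable results: individual errors wash out in the aggregate.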

NOTE: If you encounter a "100% complete" notice when navigating to the pages, keep checking back every hour. The applications are adding new messages and images to verify continuously. 

The results of this effort are being displayed on a live crisis map supported by the Standby Task Force and GISCorps, both members of the Digital Humanitarian Network. Each of these groups is a network of people and organizations with a mission to support the formal and informal response.

In the response to Typhoon Yolanda/Haiyan, they are digitally skilled volunteers acting as force multipliers. Conceptually, they are similar to the Red Cross's Digital Operations Center, which leverages digital volunteers to support response efforts. However, describing these organizations and how they operate is a topic for a separate post.

Leading this effort, though, is MicroMappers.  The initiative (loosely defined) is a partnership between QCRI, CrowdCrafting, and UN OCHA and is led by a number of industry technologists including Patrick Meier, Ji Lucas, Luis, Daniel, Ariba Jahan, Christine Jackson, and Daniel Lombrana Gonzalez.

For more background and continuous updates on Typhoon Yolanda/Haiyan response efforts using TweetClicker and ImageClicker, check out this blog post.

Why is Crisis Mapping So Popular?

I was recently asked this question by a colleague.  I didn't have a full answer at the ready, so I thought about it some more.

Crisis mapping is usually conducted with the aim of producing "maps" that have key geographic data relevant to a response.  According to Wikipedia,

Crisis mapping is the real-time gathering, display and analysis of data during a crisis, usually a natural disaster or social/political conflict (violence, elections, etc.).

So why is crisis mapping so popular? To understand the popularity, we have to look back to when "mapping" was first popularized during the Haiti Earthquake on January 12, 2010.  Crisis mapping did exist prior to Haiti, but primarily with the resources and motivations of organizations like National Geographic.

To support the response effort, a group of "mappers" nowhere near the earthquake used an open source tool called Ushahidi to begin mapping tweets and other information collected from the Internet to provide better situational awareness.  At one point, Craig Fugate, the Administrator of FEMA, praised the Haiti crisis map as "the most comprehensive and up-to-date map available."

https://twitter.com/CraigatFEMA/status/8082286205

The "crisismappers," as they became known, were just a group of unaffiliated, spontaneous volunteers.  Most had no prior mapping or GIS experience.  They worked independently of any one authority to produce maps that would be useful to on-the-ground responders and coordinators.

Ushahidi was designed around the needs of a consumer and a problem, not a list of technical requirements given to them by an organization.  As a result, the software was developed for non-technical people to use.  This enabled people not formally trained in mapping and GIS to support mapping efforts and launched a slew of publicity for Ushahidi as the go-to crisis mapping tool.

Of course, as with every platform, each has its limitations.  Still, Ushahidi has worked hard in recent years to improve the software and even released a hosted version called CrowdMap.  Similarly, other tools such as MapBox have devoted considerable effort to developing easy-to-use mapping tools.

However, easy-to-use tools, while important, are not the only reason for the popularity of crisis mapping.

Consumerization

This "consumerization" of technology is now enabling mapping to shift from an EOC support function to a skill of the modern emergency manager.  Without the support of a technical specialist, emergency managers can begin to answer their own questions faster and more easily throughout a response.  They can go into further detail in their analysis and research to better understand the situation before them.

This was a critical factor in allowing the crisis mappers to utilize Ushahidi during the Haiti response.  They were able to easily adjust their work based on the expanding needs of on-the-ground responders without much technical knowledge and support.

Consumer-based technologies help reduce interdependencies, add efficiencies, and enable emergency managers and responders at all levels of the response to take more ownership of their functional areas.  Emergency managers get to focus on their domain and answer their own questions as the response progresses, while the GIS specialist is freed up to work on more complex geospatial needs applicable to a broader audience.  Pretty soon, there will be no need for a GIS specialist because everyone will be a GIS specialist!  The skill is becoming commoditized and ubiquitous.

Availability of Data

Getting data from multiple sources is becoming easier and easier as governments and organizations devote more resources to "freeing" data from their closed, antiquated and locked databases.  The shift in thinking has moved from protecting all data from outsiders to recognizing the value of certain shared data across different organizations.  In the case of Haiti, the crisis mappers were able to pull public data via social media and a special texting shortcode that had implied consent.  However, a lot of great data still exists in the silos of organizations.

In early 2011, NYC hired a Chief Digital Officer to help navigate the complex policies that had prevented such access to data before.  To help disseminate data, NYC launched an Open Data Portal where you can easily access flood zone, shelter, and fire station data in a variety of formats.  Better yet, you can actually bring this data into your own systems and mash it up against other data to produce more value-oriented analysis and solutions.  Prior data and real-time data need not be mutually exclusive anymore.

The more data that is available, the more you can do.  In creating your risk profile, you can easily see and map which of your buildings or offices are in designated flood zones.  Have to discharge patients before, during or after a disaster?  Check to see if they may be in a designated flood zone prior to discharge so alternative arrangements can be made.
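The flood-zone check described above boils down to a point-in-polygon test against open geographic data. Here is a minimal sketch using the classic ray-casting algorithm; the flood-zone polygon and building coordinates are hypothetical placeholders, not actual NYC open data:

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is the point (lon, lat) inside the polygon,
    given as an ordered list of (lon, lat) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count how many polygon edges a horizontal ray from the point crosses
        crosses = (y1 > lat) != (y2 > lat)
        if crosses and lon < (x2 - x1) * (lat - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# Hypothetical flood-zone polygon and building locations (illustrative only)
flood_zone = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
buildings = {"HQ": (2.0, 2.0), "Warehouse": (6.0, 1.0)}
at_risk = [name for name, (lon, lat) in buildings.items()
           if point_in_polygon(lon, lat, flood_zone)]
print(at_risk)  # ['HQ']
```

In practice you would load the real flood-zone geometry from an open data portal (e.g., as GeoJSON) and use a GIS library for the geometry, but the underlying question, "which of my facilities fall inside this zone?", is exactly this test.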

Adoption Costs

I have always said that technology should be intuitive for the person who knows his or her job well.  This helps reduce costs in two ways: training and efficiency.  If a tool is intuitive, less time and money needs to be spent on learning how to use it.  Additionally, the more intuitive the tool is and the better it matches the needs of the functional area, the easier it is for the designated person to get his or her job done faster and with fewer errors.

Ushahidi was designed with quick adoption in mind, which enabled the crisis mappers to quickly adopt it as their tool of choice.  Little training was needed on the tool itself, and the mappers were able to focus more on how to get the data into the system for added value and insights.  The simplicity of the system enabled them to work as quickly as humanly possible without fretting over the large, expansive feature sets and options that bog down so many tools.  In a way, Ushahidi was an "expert" system that focused on best practices in crowdsourcing rather than giving the user every option in the world.

Conclusion

Crisis mapping, while the popular concept of the day, is well on its way to becoming a de facto skill in the industry.  The lessons from crisis mapping are still being extracted, but its rise in popularity has started giving us a blueprint for what other technologies should embrace.

We are beginning to better understand how technology is helping us do our jobs better.  The easier tools are to use, and the better they perform their designated functions, the better off we will be as more data becomes available.

What are Decision Makers' Needs in Sudden Onset Disasters?

One of the greatest problems we face in disaster management is understanding the type and breadth of decisions that we make during a disaster.

So much goes into decision making that we need to devote significant research and effort to putting this skill into better perspective so that better tools and approaches can be developed. Long gone should be the days of making decisions "off the cuff." Decisions, despite their urgency and seriousness, should be as purposeful, collaborative, and science-based as possible.

Andrej Verity, a disaster responder and Information Management Officer for UN-OCHA, just released a report from a workshop on Field-Based Decision Makers' Information Needs. Here is a link to the full report.  The main authors included leading researchers Erica Gralla (GWU), Jarrod Goentzel (MIT), and Bartel Van de Walle (Tilburg). Check out Andrej's great introductory post on demystifying decision makers' needs in sudden onset disasters.

The report focuses heavily on the decision-makers' perspective.  It asked what decisions are typically made and, separately, what the information needs are in sudden onset disasters. Ultimately, the decisions and information needs will be linked in future research.

One goal of this workshop was to help Volunteer and Technical Communities (VTCs) to understand the information field decision-makers require to make the best possible decisions. These results lay a foundation for this understanding, by providing (1) a framework and set of information required by field-based decision-makers, (2) categories and types of decisions made by decision-makers, and (3) a large set of brainstormed decisions from workshop participants. VTCs and others seeking to support humanitarian action by providing and organizing information can utilize these results to (a) prioritize their efforts toward important information, and (b) organize their information in a manner intuitive and useful to humanitarian decision-makers.

Check out pages 7-8 for great pictorials of the following findings regarding decisions and information requirements:

Decision dimensions and categories are broken down by timeframe, scope, locus/authority of decision-making, criticality, frequency/duration of decision, information gap (confidence), and function.

Information requirements are broken down by context and scope, humanitarian needs, responder requirements, meta information, capacity and response planning, operational situation, coordination and institutional structures, and looking forward.

Does this resonate with your work?  Why or why not?

Getting Started on an Emergency Management/Business Continuity Program

The disaster domain is huge. The level of detail and specificity you can reach is almost infinite. As such, it can be an overwhelming experience for businesses and nonprofits to get started with preparing their organizations for disasters.

In response to an email I received from a former MPA classmate, I wanted to share some helpful thoughts on how to get started.

The Actions to Take

When discussing this topic, there are four main actions that organizations can take:

  1. Prepare for a Disaster (through planning, training, exercise and equipment)
  2. Plan for Response/Continuity of Operations (responding in the moment/maintaining operations, if possible)
  3. Plan for Recovery (getting back to normal)
  4. Mitigate Impact (stop things from happening in the first place)

Implementing into the Organization

There are many approaches and models to implement these actions (think program management vs. project management). However, the process typically starts with leadership forming a disaster committee of some sort to begin addressing the organization's disaster needs and corrective actions.  The committee then establishes a path forward.

Typical agendas are a variation of the following:

  1. Identify Risks and Gaps
  2. Develop Plan(s) to Address Risks and Gaps (keeping in mind the four actions mentioned above)
  3. Train and Exercise on Those Plans and Purchase Required Tools/Equipment
  4. Redo Steps 1-3 annually (or at designated intervals).

Given the typically resource-constrained environment of organizations, there is a lot of potential to address "low-hanging fruit" once risks and gaps are identified.  This is not perfect, as the approach should be as comprehensive as possible, but it is helpful nonetheless.

The important thing is not to fall into a false sense of security because you have addressed only some of the risks and gaps.  Coordinating the effort and understanding your strengths and weaknesses are vital to a successful disaster management program.

High Value Resources

Here are a few high value resources on what nonprofits can begin to do. Grant making institutions should consider baking some of these principles into their grant requirements.

Domain Headings

If you are looking to do more research in this area, especially as your disaster management program matures, you should look for resources in the following domains:

Getting Started

As a starting point, I highly recommend the following priorities:

  1. Develop a disaster committee led by someone willing and able to champion the effort
  2. Decide if it is best to shut down, continue operations at full or reduced scale, and/or respond to the disaster (i.e., support the community).  This will help clarify how detailed the planning should be for each scenario.
  3. Identify 3 targets for the next year (i.e., establish committee, develop a program plan, develop a plan)

It is easy to get overwhelmed.  Focus on establishing realistic goals and moving forward.  Any forward movement is better than no movement at all.

Can Evernote be a Planning Tool? Training? Evaluation?

I am usually very excited when new disaster tools come out on the market. But I am equally excited when everyday tools can be applied to the disaster context to better meet our needs and, more often than not, achieve significant cost savings.

In the past year, I have used Evernote religiously to capture my thoughts, research, and any other type of information I can think of. I can then use Evernote's powerful search features to inform my blog posts, support my PhD research and consulting clients, manage class assignments, and take notes...for everything.

Evernote has an easy capture tool for clipping things from the web (including PDFs) and an easy to use architecture that can easily link and/or publish notes within the program. Additionally, I can use it on ANY of my devices with online and offline capabilities and integrate it with MANY other applications. Needless to say, I am a big fan of the tool.

But I really wonder if Evernote can be used as an emergency response or continuity planning tool. According to Wikipedia:

Evernote is a suite of software and services designed for notetaking and archiving. A "note" can be a piece of formatted text, a full webpage or webpage excerpt, a photograph, a voice memo, or a handwritten "ink" note. Notes can also have file attachments. Notes can be sorted into folders, then tagged, annotated, edited, given comments, searched and exported as part of a notebook.

To put this a bit into perspective, Evernote's motto is:

Remember everything. Capture anything. Access anywhere. Find things fast.

Hmmm....sounds a lot like our fundamental planning needs for disasters, doesn't it? We need to collaborate well and then access our information easily and quickly. Evernote Business provides many of the collaboration features missing in the consumer product.

The incorrect approach, though, would be to ask Evernote to do everything our word processor does. Conceptually, it is an entirely different tool  that must be approached in a new way.

For example, what if each note could represent a chapter, with all chapters linked back to a Table of Contents note?  What if we could create a notebook solely for our base plans and then dedicate other notebooks to our functional annexes?  Or add supplementary or supporting PDF, Word, PowerPoint, and Excel documents with ease?
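To make the chapter-and-TOC idea concrete, here is a minimal sketch of how such a linked-note structure might be modeled. The types and note titles are hypothetical; Evernote does not expose a Python object model like this, so treat it as a thought experiment about the structure, not the product:

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    title: str
    body: str = ""
    links: list = field(default_factory=list)  # titles of linked notes

@dataclass
class Notebook:
    name: str
    notes: dict = field(default_factory=dict)

    def add(self, note):
        self.notes[note.title] = note

# Base plan notebook: each chapter is its own note,
# and a Table of Contents note links out to every chapter
base_plan = Notebook("Base Plan")
toc = Note("Table of Contents")
for chapter in ["Purpose", "Concept of Operations", "Roles & Responsibilities"]:
    base_plan.add(Note(chapter))
    toc.links.append(chapter)
base_plan.add(toc)

print(base_plan.notes["Table of Contents"].links)
# ['Purpose', 'Concept of Operations', 'Roles & Responsibilities']
```

The point of the structure is that a responder can jump from the TOC to exactly the chapter needed, rather than scrolling through one monolithic plan document.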

In another case, what if your incident commander could easily look up and reference relevant procedures and protocols directly on his or her phone or tablet?  Better yet, can it provide a checklist for action within seconds?

Or what if you could get real-time information back from the field by having responders take pictures, record audio, or mark up a screenshot of a map directly from their phones and tablets?

Evernote is such a powerful repository of information that it can do all the things mentioned above.  I am just wondering what the workflow would look like for organizations with emergency response and business continuity planning needs.  Does it end up being more expensive than other tools, or are there workarounds?

What are your thoughts?  Would you consider Evernote for your organization?  Why or why not?

Can We Help Find, Map and Visualize Data for Syria?


The OHI Code Sprint is gathering technologists and subject matter experts the week of Sept 9-13 to work on data, mapping and visualization problems in the humanitarian and disaster response space.


This week, we are using Syria as a use case.  There is still time to get involved.  Check out the Eventbrite page for more information!  We have a hackpack available as well.

Here is a good overview of what is going on:

While data is generated during post-disaster humanitarian efforts, it is rarely shared between organizations.  The Open Humanitarian Initiative is a technology incubator and accelerator that will enable the sharing of data across various platforms by engaging NGOs, tech companies, and academics.  Learn more at: http://ohi.nethope.org/

Purpose of Code Sprint

Experts with mapping and data tools will be joining forces Sept 9-13 in Birmingham, England; Arlington, VA; and remotely via the Humanitarian Toolbox. We'll be working with a scenario to see how data moves from one group to another and using the dedicated work time to create novel ways for this data to flow.

Should I Attend?

We are creating an event with tangible outputs.  Real work will be done with a sense of urgency. If you want to participate in such a thing, bring a laptop and your brains. OHI and our event sponsors ESRI, Splunk, and Aston University will provide food, and the OHI team will facilitate the event.  If you work on mapping, data structures, humanitarian assistance and disaster relief, design, etc., this is the event for you.

We have several sequential goals over the week, which include:

  1. Identify/Gather Baseline Data
  2. Define the Who-What-Where of Existing Efforts
  3. Establish a Situational Overview
  4. Define Operational Gaps
  5. Define Operational Overlaps
  6. Identify Funding Paths
  7. Complete a Real-Time Operational Planning Exercise
  8. Complete a Real-Time Needs Analysis
  9. Secure Ad-Hoc Data Collection

As we push forward, if you can offer skills or expertise, let us know!  We hope to accomplish as many of these goals over the week as possible.

White House Poised for Further Innovation with "Design Jam"

I had the distinct pleasure of attending a White House design jam (think "design-a-thon") on Disaster Response and Recovery with over 90 colleagues from all over the tech and innovation space last Tuesday. Honorable mentions include Microsoft, Google, NYC Digital, Twitter, Airbnb, Twilio, Topix, LiquidSpace, Reddit, Rackspace, Palantir, Direct Relief, Recovers.org, APCO International, and Singularity University, to name a few.  And yes, FEMA was there along with a couple of White House Presidential Innovation Fellows!

Here is a quick description of the event:

The event, to be led by Todd Park, US Chief Technology Officer, and Richard Serino, Deputy Administrator of FEMA, will convene leaders in technology, design, academia, entrepreneurship, and philanthropy, as well as local and state officials to develop ideas for innovative solutions to emergency management challenges.
Participants will brainstorm creative new solutions and ways to support the development of prototypes for some of the best emerging ideas. Solutions will focus on: empowering disaster survivors; enhancing the ability of first responders as well as Federal, state and local officials to conduct critical recovery and restoration activities; and supporting integrated, whole-community efforts to better prevent, protect, mitigate, respond to, and recover from disasters.

We spent most of the day "jamming" to not just discuss, but actually create designs.  We worked through a cycle that included problem definition, design & build, test & evaluate, and iterate.  At the end of the day, we chose team captains to spearhead ongoing development efforts.

There were a number of fabulous projects that, if continued, could really help us leapfrog forward.  Here are a few:

  • DisasterRSS - Creation of a "disaster.txt" publishing standard & ontology for websites (like RSS for blogs).  This simple idea is for any organization that has data or information useful in disasters.  The organization would create a .txt file on its website that would have all relevant information for data geeks and others to access its data.  Here is a very basic example.
  • SMS Survivor Survey - Designed to get specific information from specific population groups, the simple prototype simulated sending a short text-message survey to a list of durable equipment owners, with a tree of questions asking for their current location and the battery needs of their life-saving medical devices.  That information is then saved for disaster responders to deliver aid to the folks who need it.  This model can be adapted to a variety of use cases.  Check it out by texting (415) 236-3575.
  • Disaster Response Data Interchange - Geographically aware data interchange that will intelligently aggregate disaster recovery information from social media and other sites. The system will include Customer Relationship Management (CRM) functionality to autonomously engage “customers” to connect the “haves” with the “wants” across multiple sites. Additionally, it will have an Application Programming Interface (API) that will allow third parties to push/pull information automatically into and out of the data interchange.
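The SMS Survivor Survey's "tree of questions" idea above can be sketched in a few lines of Python.  This is a minimal illustration only; the questions, the tree structure, and the function name are all hypothetical and not the prototype's actual implementation:

```python
# Hypothetical sketch of an SMS survey question tree for medical-device owners.
# Each node holds a question and maps expected replies to the next node;
# "*" is a wildcard meaning "any reply advances".
SURVEY_TREE = {
    "start": {
        "question": "Are you at your home address? Reply YES or NO.",
        "next": {"YES": "battery", "NO": "location"},
    },
    "location": {
        "question": "Reply with your current street address.",
        "next": {"*": "battery"},
    },
    "battery": {
        "question": "How many hours of battery remain on your device?",
        "next": {"*": "done"},
    },
    "done": {
        "question": "Thank you. Responders have your information.",
        "next": {},
    },
}

def next_node(current, reply):
    """Advance through the tree based on a texted reply."""
    transitions = SURVEY_TREE[current]["next"]
    return transitions.get(reply.strip().upper(), transitions.get("*"))
```

A real deployment would wire a function like this to an SMS gateway and persist each answer for responders, but the core of the design is just this walk down a tree of questions.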
The big question on many people's minds, though, is "so what's next?"  Innovative ideas alone are not enough to leapfrog us forward.  We need action-oriented and sustainable projects supported by a correctly aligned policy and operational environment.  Additionally, resources, including funding and expertise, are needed.  While these sentiments were echoed throughout the day, this may take time to realize.  I am hopeful as we push forward, and the "design jam" format certainly seemed to be pushing us in this direction.

Check out the full Storify here.

So what is your opinion on what we need to go from innovative ideas to action and sustainability?

What Should Researchers Know About First Responders?

I have been invited to speak next Thursday on a panel at the National Geospatial-Intelligence Agency Academic Research Symposium.  The title of the panel is "Social Media Research for First Responders and Analysts" and its goal is "...to help researchers understand what operational capability gaps need to be filled."

In hopes of informing my panel talk, I want to ask you: what should researchers know about the operational needs of first responders?  Especially as it relates to social media!

I am excited about this workshop because it starts to bring practitioners together with academics in hopes of aligning the priorities of both worlds.  In fact, a new term is emerging: the "pracademic."  The pracademic has experience as both a practitioner and an academic and chooses to work to align the two worlds so that academic research can be as applicable as possible.  Patrick Meier captures this well as "scholar-practitioner" in Advice to Future PhDs from 2 Unusual Graduating PhDs.

Some prior practitioner-based gap analysis work has already been done on this by DHS's Virtual Social Media Working Group (of which I am a member).  In June of this year, the VSMWG released Lessons Learned: Social Media and Hurricane Sandy.  The report highlighted many of the successes and learning points regarding social media.  On page 29, it highlights a number of technology, process, and policy gaps requiring further attention.  The major themes included:

  • Big Data
  • Compliance and Requirements
  • Funding
  • Standards, Training, and Guidance
  • Policy and Process
  • Partnerships
  • Technology, Tools, and Features

I will undoubtedly speak to these gaps, but other feedback and thoughts would be helpful and greatly appreciated!

5 Considerations for Technology Today

Purchasing technology for disaster management is sometimes a costly and drawn-out process for a variety of reasons.  From procurement to training to incorporation into plans and procedures, implementing new solutions is not always easy or cheap.  And in recent years, there has been an explosion of disaster-specific and non-specific applications that can be used by disaster management organizations.  Open source technologies have gotten better, and new market-based solutions have been developed.  The market is really starting to grow and mature.

As it does, though, we must prepare ourselves to see technology as dynamic and evolving rather than static and stale.  The socially adept world in which we now live is causing a cultural shift in our approach to technology adoption.  And this is even more true now that social media and other social technologies are changing the way we operate, from solely a formal response to also enabling an informal response.  Consequently, this change in mindset affects many parts of our organizations, including IT and response policies, planning, training, purchasing, and operations.

Here are five considerations as you look at new solutions to help you do your job better.  You should consider applying them in this order as well.

1)  Does it really meet my core needs?

First, can a change in process or people be more effective?  Sometimes solutions are as simple as a few personnel or process changes, and these can be more effective in the long run.  If not, you should conduct an environmental scan to identify the tools that address this problem.  Touch base with your network and put the word out on social media that you are looking for a new solution.  You will be amazed at how many leads you will get from your friends and your network.  After that, compare and contrast the tools on usability, scalability, and support.  Ask: what will meet my organization's and community's needs best now and in the future?  Ultimately, you want a solution that will evolve as your needs evolve.

I am seeing amazing solutions being developed everyday.  Many of them have whiz-bang features that are awesome.  But at the end of the day, the solution still needs to help you solve a core problem whether it be efficiency, information management, communications, etc.  The ability of the solution to address your core problem should be of paramount importance with the rest of the features as bonuses.  Careful thought should be applied to defining your current challenges.

2)  Is it usable?

I like to use the rule of thumb that the solution should be intuitive for someone who knows his or her job well.  If it is not intuitive, red flags should be raised immediately.  If the person who has primary responsibility for using the solution can't figure it out easily, how do you think someone thrown into the position at the last minute will react?  We often pride ourselves on our dynamic capabilities during a disaster.  Usability is a central component of remaining dynamic and scaling effectively.

But usability also has some added benefits.  Better usability leads to lower training costs and easier adoption.  No longer are full classes required just to learn how to use a system. The ubiquity of online tutorials and quick tips can help someone learn a system faster and become highly proficient without much training.  If the vendor doesn't provide easy access to support material, creating your own is easy as well.  Some simple software can do the trick.

As you adopt a more flexible technology strategy, your organizational and community culture will adapt to the point where the digital divide becomes less and less of an issue and just-in-time training can be more effective.  In fact, there will be less skills degradation over time as people take it upon themselves to learn new technologies rather than waiting to be taught.

3)  Can it scale?

During non-disaster times, only a core group of people uses a platform.  But disasters are never a solitary response.  We call upon many people to help coordinate, manage, and execute a response.  They come from various organizations and backgrounds to provide support.  Solutions should support this scaling of people by making it easy to add members and to work across organizational boundaries.

In addition to easily adding and managing members, the solution should effectively manage massive amounts of data and information to provide meaningful decision-supporting information.  After all, a giant feed of unstructured information is rather unhelpful when deciding whether to deploy resources.  You have specific questions about road conditions, prior decisions, and decisions from neighboring communities.  You also want to know the latest information and updates and see them in both their aggregated and non-aggregated forms.  Solutions should understand this and be developed in such a way that functionally specific questions can be answered easily.  Robust search and dynamic charts are key features to look for.

But people are not the only consideration.  As more people use the system and data is input from a variety of sources, you need to know that the solution can scale from a technical perspective.  Is the server going to get overloaded?  Is there failover in place?  Is there enough bandwidth?  Many hosted solutions address the first two concerns very well since they run in well-protected environments outside the impact area.  They have the resources to build scaling architecture into their systems.  As a result, you don't have to worry about the software crashing on you; your main priorities become maintaining the power and Internet infrastructure.  Some solutions even have offline applications that cache data until the Internet connection is back up.

4)  Can it integrate?

Integration is the future of any disaster or non-disaster related application.  It is highly unlikely that we will ever see one solution that meets all of our needs.  In fact, a single all-in-one solution is bad for innovation, as the economic incentives for the vendor to constantly improve the system are misaligned.  If you are locked into their proprietary solution, why should they devote resources to improving the system?  Needless to say, I am not a fan of vendor lock-in.  The more competition that is out there, the more innovation and improvement we will see as vendors and open source projects compete to be the best solution for you.

Because of a recent focus on big data and integration, a technology ecosystem for disasters is developing that will enable tools to talk to other tools.  The more data that can be mashed together, the better the process efficiency, the insights, and ultimately the operating environment we will achieve.  But integration should not be a custom add-on; it should be part of the solution's core offering.  In some cases, this takes the form of integrated applications (app stores), data standards (HXL, EDXL, etc.), or application programming interfaces (APIs).  Whatever form it takes, integrating the solution with other modern technology should not be a cost-intensive process.  Of course, if you are dealing with antiquated internal systems, that is a different story.  But your choice of technology today should account for the ability to integrate easily.  This will pay dividends in the future.
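To make the "mashing data together" point concrete, here is a minimal Python sketch of normalizing records from two differently structured feeds into one common schema, which is the essence of what an interchange or API layer does.  The feed names, field names, and records are invented for illustration; a real integration would map to a published standard such as EDXL or HXL:

```python
# Hypothetical: normalize shelter records from two differently structured
# feeds into one common schema so downstream tools can consume them uniformly.
def normalize(record, source):
    """Map a raw record from a known feed into the common schema."""
    if source == "feed_a":   # e.g. {"shelter": "...", "lat": ..., "lon": ...}
        return {"name": record["shelter"],
                "location": (record["lat"], record["lon"])}
    if source == "feed_b":   # e.g. {"site_name": "...", "coords": [lat, lon]}
        return {"name": record["site_name"],
                "location": tuple(record["coords"])}
    raise ValueError(f"unknown source: {source}")

# Two feeds, one merged view a decision-maker can actually query.
merged = [
    normalize({"shelter": "Central High", "lat": 38.88, "lon": -77.10}, "feed_a"),
    normalize({"site_name": "Armory", "coords": [38.90, -77.05]}, "feed_b"),
]
```

The design point is that when a solution exposes its data through an open format or API, this translation layer is cheap to write; when it doesn't, every new feed becomes a custom, cost-intensive project.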

5)  Does it have support?

Support is a relative term and does not always mean vendor support.  Having a technologist on staff or an application guru who can troubleshoot may be all that is required.  Whether you are considering a market-based or open source solution, consider what resources are available to troubleshoot and assist.  They can range from online self-help forums to online tutorials posted by users or the developers themselves.  Some solutions are so mature that bugs are nearly non-existent.  In those cases, having a person on staff who is the application guru to offer advice and troubleshoot users' problems may be a good workaround.

Inventory your staff for their skills and interests.  You may find a number of technology evangelists who can support you better.  The added benefit is that they know your policies and procedures well.  Calling a vendor on the phone or chatting with outside support online inherently increases time, energy, and frustration, as those people are not well acquainted with your specific operating environment.

Conclusion

There are many solutions emerging in the market.  From market-based to open source solutions, you can probably find what you are looking for.  In addition, there are many solutions built for other industries that are very applicable to disaster management.  For example, social media aggregators like Hootsuite are built for a wide range of industries and can be applied to the disaster use case fairly easily.  In some cases, a white-label social network may serve as your primary EOC software.  Incorporate non-disaster solutions into your environmental scan.  And lastly, don't be afraid to forgo a solution in favor of a change in staffing or process.  In fact, a number of organizations that already use Google Apps have had much success using its existing applications to collect and manage data.