Evaluating Preparedness at Different Levels of Analysis

I recently completed an academic/practice-based research project that provided a better understanding of preparedness evaluation. One notable outcome was a capabilities-based exercise framework for a Federal regulatory agency that links capabilities to design concepts to evaluation criteria. I will post an overview of this research once it is officially published.

Another aspect of the research confirmed that preparedness evaluation is still a complicated and difficult process that doesn't always yield the best results. We still have many more questions than answers when trying to link evaluation to our learning and preparedness objectives. For example, how do you gauge how much "more" prepared you are between exercises and disasters? Current assessment processes tend to be ad hoc and highly subjective.

HSEEP, noble in intent and still beneficial, lays the groundwork for evaluating preparedness. However, it stops short of a well-aligned, pragmatic process that helps us learn from each exercise or disaster response while also tracking cumulative learning over time. Many of the issues and gaps mentioned above still surface in the after-action and corrective-action processes that follow exercises and disasters.

It is often mystifying how we develop and track our findings. For example, are these findings really the most important? Are they the right set of findings? How do we capture and articulate very real complexity issues such as network or interaction effects? Are we investing in certain performance capabilities when we should be rethinking how the system is designed in the first place?

There are no simple answers to these questions, but below are a few dimensions of capabilities to consider when evaluating your exercises or disaster responses. These dimensions can also be thought of as "levels of analysis" in research parlance. Don't be misled by "levels," though; there is no hierarchical relationship between them.


The Individual Level

Individual ability is the backbone of a good disaster response. While I agree that people should not be singled out for poor performance, understanding responders' overall knowledge, skills and abilities is an important level of analysis that needs to be captured. You should be asking: In order to have performed better during the exercise or response, what knowledge, skills or abilities should the responders have had prior to the event? You may also separate individuals into groups by role (senior leadership, management/coordination, tactical) or by function (medical, fire, mass care, etc.). The goal at this level of analysis is to improve individual abilities.


The Team/Organizational Level

To put it bluntly, well-organized groups of people make things happen. Organized by specialty or by organization, they bring much-needed resources and capabilities to a response. Understanding team or organizational capabilities helps to identify critical gaps in the capabilities needed for future disasters. You should be asking: What organizational or team capabilities contributed to the success or failure of the exercise or disaster response? Why, and how? Were there any that went unused? Were any redundant, or poorly planned or defined? This level of analysis is important for ensuring your "group" is ready to respond with the capabilities needed in the future. However, you must also think critically about other capabilities that may be needed in the future and for which there is no precedent.


The System Level

The system is the least understood but, in my opinion, most important level of analysis. The difference between this and the team/organizational level is best captured in the following statement: Just because you got the job done doesn't mean getting there was easy, efficient or effective. This level of analysis addresses the complexity of a multi-agency response and requires a different set of questions to investigate.

For example, when you look back at all the different entities that supported the response, how did it go? Where did breakdowns occur? How well did coordination and information sharing work across people and organizations? If you are tackling these questions, you are on the right track. I would add that you should also consider the impact of information asymmetry, how the network as a whole performed, and the cascading effects of different decisions or actions. Understanding this level helps you question the efficacy of your response system so that you can improve preparedness at a more fundamental, system-wide level.

I expect many of you will intuitively understand the levels of analysis I have just articulated. You may even have experienced these issues firsthand, as I have over many years of developing and evaluating exercises. This post is meant to help frame your thinking, but unfortunately it won't provide a definitive answer on anything.

However, I look forward to your thoughts and opinions on this!  What have been your experiences with preparedness evaluation?  What do you find most problematic? Have you identified any best practices?