The Role of Competing Objectives in Exercise Design and Evaluation
As an exercise practitioner with experience in different types and levels of exercises, I question the efficacy of our existing exercise evaluation paradigms (e.g., HSEEP, REPP, CSEPP). In my experience, they are messy and misaligned with the overarching objectives we are trying to achieve.
The Problem with Exercise Objectives
This messiness is partly because we create dueling objectives, such as training versus evaluation. For example, if you slow down or modify an exercise to ensure responders understand and can perform their duties (a training objective), can you objectively state that the system capability they performed was successfully tested (a performance objective)? I have a hard time saying so unless the capability's performance is duplicated in more challenging and realistic environments.
Additionally, time and budget constraints create pressure to maximize the value of each exercise conducted. Running an exercise, especially a large-scale, multi-organizational exercise that more closely resembles a full-blown response, is not an easy task. It requires many personnel from different organizations and disciplines to agree on the objectives of the exercise. Reconciling those inputs into a refined set of clear objectives is a tough task in itself.
The resulting objectives, though, must also be considered collectively to understand if and how they may compete with each other. For example, the most valuable learning and evaluation of system performance come from understanding and examining the relationships between different plans and activities, not the plans and activities themselves. As a result, additional objectives surrounding plan testing should be carefully considered to ensure a coherent exercise design that produces the desired behavior so it can be effectively evaluated.
The Four Types of Objectives
There are four types of objectives to understand in order to align the exercise design with your evaluation framework:
- Training Objectives seek to improve the knowledge, skills, and abilities of a single person or group. This objective type is often encountered in tabletop exercises and drills, where the goals are to familiarize people with plans, procedures, and equipment.
- Task/Activity Objectives seek to demonstrate and verify the knowledge, skills, and abilities of a single person or group. The task or activity being performed can be the execution of a plan; however, the objective is for a person or group to demonstrate competence in that plan, not to validate the plan itself.
- Plan/Procedure Objectives seek to validate a plan and its assumptions. These objectives focus on learning about the plan itself. Successful execution of the plan, though, does not automatically mean the plan will actually make the organization better prepared; that question is better addressed by system performance objectives.
- System Performance Objectives seek to determine whether the response system met the needs of the constituents it serves. These objectives reflect the interaction effects between the different plans, processes, actions, and tasks that supported the response. They can address both internal constituents (e.g., the EOC fulfilling fire department resource needs) and external constituents (e.g., setting up two field hospitals).
All of these objectives are needed, and no type is more noble than another. As a systems engineer, though, I find system performance the most intriguing objective type and the one with the greatest potential to help us understand what it means to be prepared. There is much more work to be done in this area. For now, a good exercise is one that enables both the performance of the behavior under observation AND the evaluation of established objectives.
Exercise Design and Evaluation
Exercise design and evaluation can be significantly improved with an understanding of how objectives differ and how they may compete with each other. For example, if you want to validate training success, you may design the exercise and your evaluation materials to evaluate workforce competence rather than capability performance. If you want to validate a plan, then your evaluation will reflect the nuances of that plan and its assumptions.
While the post-exercise evaluation process, with root cause analysis and other techniques, can help identify and deconflict important issues on the back end, it should not be relied upon. What happens if you find one prevailing incident that affects your assessment of the other objectives you are trying to evaluate? Sure, you may pull out some marginal benefit (e.g., small lessons learned and areas for improvement), but can you meaningfully understand and improve your overall operations with such limited data? In my experience, no.
Parting Thoughts...
Exercises, with their complexity and high costs, need to do more than provide marginal benefits. This starts with ensuring exercise objectives are well aligned and do not conflict with each other.
Additionally, existing exercise design and evaluation frameworks need to acknowledge that competing objectives occur and incorporate a deconfliction process, before the exercise is designed and conducted, to ensure the value of the exercise is maximized. The supporting exercise evaluation material then needs to reflect the objectives being sought. Evaluation material should not simply be based on high-level core capabilities and may require multiple "types" of material that address the different types of objectives.
Have you experienced competing exercise objectives? What happened and what did you do?