The evaluation of training programs has long been considered a critical component of the ADDIE (analysis, design, development, implementation, evaluation) process. In practice, however, training evaluations at best measure learning, with no measure of the learners' ability to apply what they learned on the job and no measure of the impact on organizational performance. Measuring these aspects of a training program can be quite challenging. Moreover, recent trends in training evaluation have advocated measuring return on investment (ROI) or return on expectations, both of which are still more difficult and complex to assess. When undertaking a training program evaluation, it is critical to establish exactly what needs to be measured based on the needs of the program's stakeholders. ROI may be important to some stakeholders, but it may not be the critical measure for others.
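For reference, ROI in the training evaluation literature is commonly computed along the lines of the Phillips formulation, in which monetized program benefits are weighed against fully loaded program costs; the figures below are illustrative and are not drawn from the evaluations discussed in this article:

$$\text{ROI (\%)} = \frac{\text{net program benefits}}{\text{program costs}} \times 100 = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100$$

Under this formulation, a program costing $100,000 that yields $140,000 in monetized benefits would report an ROI of 40 percent. The difficulty lies less in the arithmetic than in credibly isolating and monetizing the benefits attributable to training.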
Evaluation can be used to determine the value of a training program or simply to identify how the program can be improved. A widely accepted model, originally developed by Donald Kirkpatrick, evaluates training at four levels: reaction, learning, behavioral change, and organizational impact. Some authors and evaluators have advocated a fifth level, described by some as return on investment and by others as return on expectations. The needs of the stakeholders must be considered when deciding how detailed an evaluation to undertake and at which level, because the time and other resources required to measure organizational impact can be significant. A rule of thumb offered by at least one author holds that 100 percent of training programs should be evaluated at the participant-reaction level, with each succeeding level applied to fewer programs, in decrements of 20 percent.
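Read literally (one plausible interpretation of the rule, not a schedule stated by its author), this assigns each level $n$ a coverage target of

$$\text{coverage}(n) = 100\% - 20\% \times (n - 1),$$

that is, 100 percent of programs evaluated at level 1, 80 percent at level 2, 60 percent at level 3, 40 percent at level 4, and 20 percent at a fifth (ROI) level. The specific percentages matter less than the underlying principle: evaluation effort should taper as its cost rises.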
While this rule of thumb may be useful, evaluators ultimately need to understand the needs of the stakeholders so that time and other resources are not wasted. In a recently completed evaluation of an Army training program, the sponsor was interested in changing one aspect of the program and requested an evaluation to determine the effectiveness of the change. Although the sponsor did not specify it, the evaluation team recognized that evaluating the training effect only in the “schoolhouse” environment (i.e., a level 2 evaluation of learning) was not sufficient. The sponsor, the commander of the training organization, really needed to know whether the change in training produced soldiers who performed better in the units to which they were assigned after leaving the school, which required evaluating at level 3 (behavior).
In another military training evaluation, the sponsor's original requirement was to determine the ROI of a training course. As evidence of improved organizational effectiveness accumulated, however, the need to calculate ROI became moot: the improvement translated into fewer casualties during combat operations, an outcome that required no further financial justification.
The two examples shared several similarities that offered important lessons for the evaluators. First, both illustrated the importance of understanding the needs of the sponsor and taking into account the needs of the other stakeholders involved with the program. Second, conducting the level 1 (participant reaction) evaluation allowed the evaluators to provide valuable information to the instructors, who used it to improve certain aspects of the instruction. Third, the evaluation of learning was important to the overall evaluation goal. In the second case, the learning evaluation led to a change in the course prerequisites so that only soldiers with the appropriate duty position and rank could attend; the results clearly indicated that other participants had a low probability of learning the course material.
Understanding the needs of the evaluation sponsor and other stakeholders has led to more valuable evaluations by yielding pertinent information the stakeholders could actually use. In the two examples, the sponsors were the primary users of the evaluation results, but other stakeholders used the results from the different levels as well. In both cases, the sponsors focused on level 4 results, while other stakeholders, including instructors, program managers, and other leaders, wanted level 1 and 2 results to improve the course. With a clearer understanding of the importance of knowing the needs of sponsors and other stakeholders, the evaluation team has worked with them in subsequent projects to ensure their concerns were addressed and evaluation requirements were met.