
Methodology in Measurement and Evaluation
Fifty years ago, Donald Kirkpatrick, one of the pioneers in the learning and performance improvement fields, developed his taxonomy, the four levels of training evaluation. His seminal work has played a vital role in structuring how our profession thinks about evaluation and in giving us a common language for how to talk about this important topic. Human resource development (HRD) professionals around the world have benefited from his valuable contribution, which identified the following four levels of evaluation:
· Level 1: Did the participants like the training or intervention?
· Level 2: Did the participants learn the new skills or knowledge?
· Level 3: Did the participants apply the skill or knowledge back on the job?
· Level 4: Did this intervention have a positive impact on the results of the organization?
Yet, when we recently typed “training evaluation process” into an Internet search engine, more than six million entries surfaced on the subject. They included recommended processes, reports, tips, books, articles, and websites. This multitude of resources was provided by universities, vendors, hospitals, state agencies, various military branches, and the federal government.
We believe this extraordinarily large number of entries on this topic strongly suggests two things:
1 The concept of training evaluation is a hot topic that many HRD organizations are interested in, and
2 Our profession is still searching for the approach or formula that will make evaluation practical and the results meaningful.
So why does this search for the evaluation “Holy Grail” continue fifty years after Kirkpatrick first developed his taxonomy and approach? And why do we struggle as a profession to crack the code?
We suspect that many of you reading this chapter are hoping to find this magic formula for evaluation—one that is easy to use, yields compelling Level 3 and 4 results, and will solve the evaluation mystery. It is our belief that our profession does not need a slicker formula for evaluation or a new technique for performing ROI evaluation. Nor do we need more technology to make our current efforts faster and easier. Our profession is awash in formulas, equations, and techniques for evaluation. Therefore, the solution does not lie in inventing yet another formula or technique. The key to unlocking the mystery is developing a fresh perspective around the evaluation of training and performance improvement interventions—developing a whole new strategy that looks at why we do evaluation and how we approach it.
THE REALITIES OF TRAINING
After conducting numerous evaluation studies during our careers, reviewing the evaluation studies conducted by prestigious organizations around the world, and talking with HRD professionals about the challenges associated with their evaluation efforts, we have seen two factors consistently emerge:
1 All training interventions will yield predictable results, and
2 Training interventions alone never produce business impact.
These factors are the realities operating whenever training is done. In order to perform a meaningful evaluation, we need to use a methodology that acknowledges these two realities and leverages them.
Throughout this chapter we will frequently refer to “training,” “learning,” or “training evaluation.” To clarify our terminology, we will use these terms in the broad sense to refer to any performance improvement intervention in which there is a training component. Our intent is not to ignore or marginalize the importance of other HPT components. In reality, solutions are almost never all training or all non-training. Virtually every intervention aimed at driving performance or business results will have a training component to build employees’ skills and knowledge, just as every training solution will need to be augmented with performance support tools, such as revised incentives, job aids, more explicit supervisory direction, and so forth. Our intent behind shining the bright light on the training component is to make sure that this large and visible expenditure is truly paying off and that the organization is getting full value from its investment, because frequently, organizations do not.
All Training Interventions Will Yield Predictable Results
The first reality is that all training will yield predictable results. No matter whether the training is an executive development program, customer service skills training, technical skills training, or a coaching program, there will be a predictable outcome:
1 Some participants will learn valuable information from the training and utilize it back on the job in ways that will produce concrete results for their organizations.
2 Some participants will not learn anything new or will not apply it back on the job at all.
3 And most participants will learn some new things and try to use the newly acquired knowledge or skills, but for some reason (for example, lack of opportunity, lack of reinforcement and coaching, time pressures, lack of initial success) will largely give up and go back to their old ways.
The exact percentage of people in each category will vary depending on the nature of the training, the level being trained, and the organization. For example, participants in technical training typically use their new knowledge or skill back on the job at a higher rate than participants in soft-skills training. But regardless of the specific numbers in any intervention, this pattern will emerge.
Traditional Method of Evaluating Usage and Results. Because of this predictable pattern, relying on traditional statistical methods such as the mean (or average) can be misleading when it comes to capturing or evaluating the impact of training. Let us explain. (We promise this will not digress into an esoteric discussion of mind-numbing statistics.)
The problem with the average is that it tries to describe an entire distribution with a single number. By definition, that number is always going to be “average.” There will be many cases that were much better, and many cases that were much worse, than the average, and they all get “smooshed” together into one number. So why is a single number a problem? As we described earlier, there are actually three categories of participants who leave training programs, not one. Using a single number to characterize these three groups, which are very different and produced different levels of results, is misleading and not particularly useful. Consider this simple example. If Microsoft founder Bill Gates, with a personal net worth of roughly $40 billion, were in a room with one thousand homeless and destitute people, the average net worth of the individuals in that room would be about $40 million. In reality, that average does not begin to describe the real situation, and to report that, on average, the people in the room are doing well economically would be an egregious misrepresentation, if not a dishonest deception.
In the same way, it can be misleading or dishonest to report an average impact of training, because the few participants who use their training to accomplish extraordinary results may mask the fact that a larger proportion of participants received no value at all. Or vice versa: the large proportion of participants who failed to employ the concepts from the training can overshadow the important value that a few people were able to produce for the organization when they actually used the training. The average will always overstate the value of the training for the people who did nothing with it, and it will always understate the good the training did for those who actually used it. In short, it obscures what really happened and what we as HPT professionals need to do about it. This leads to the second reality of training, which concerns why training does or does not work to produce business impact.
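To make this concrete, here is a minimal sketch, using entirely hypothetical impact figures of our own invention, of how a single mean can hide the three-category pattern while a simple per-group breakdown preserves it:

```python
# Entirely hypothetical per-participant impact figures, used only to
# illustrate how a mean can hide the three-category pattern.
from statistics import mean, median

high_users = [90_000, 120_000, 150_000]        # used the training, produced results
gave_up    = [2_000, 1_000, 0, 500, 0, 1_500]  # tried it, then reverted to old ways
non_users  = [0, 0, 0, 0, 0, 0]                # never applied it

everyone = high_users + gave_up + non_users

print(f"mean impact:   ${mean(everyone):,.0f}")    # ~$24,333 -- looks respectable
print(f"median impact: ${median(everyone):,.0f}")  # $0 -- a very different story

# Reporting each group separately preserves the pattern the mean smooshes away.
for label, group in [("high users", high_users),
                     ("gave up", gave_up),
                     ("non-users", non_users)]:
    print(f"{label}: n={len(group)}, group mean ${mean(group):,.0f}")
```

Reported this way, the organization can see who produced results and who did not, which is exactly the information a single average throws away.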
Training Alone Never Produces Business Impact
Our profession largely operates with a mythical view that states: “If we are doing the training well, the business results should be good.” This is depicted in Figure 5.1.
Figure 5.1 Mythical View of Training.
Unfortunately, this is not what happens in the real world. Anyone who has been in the HRD business for very long has probably experienced a situation similar to this: two people attend the same training program, taught by the same instructor, using the same materials, demonstrating comparable skills on an end-of-course assessment, even eating the same doughnuts on breaks. Yet, one of them takes what she learned and consistently applies it on the job in a way that helps improve her performance and produces a great benefit for the organization. At the same time, the second person hardly uses the new skills/knowledge at all and has nothing to show for his efforts. How can the same training program produce such radically different levels of results? How would you judge the effectiveness of this training program?
This example dramatizes the fact that there is almost always something operating outside of the training experience that can have a significant impact on whether the trainees will even use the new skills/knowledge and what results they will achieve by doing so. Therefore, the second reality, simply stated, is that the training program alone never accounts for the success or failure of the training to produce results. There is always something else happening before or after the training that has just as much impact (or more) on whether the training is used to produce results for the individual and organization. This is depicted in Figure 5.2.
Figure 5.2 The Reality of Training.
The size of the “learning event” square is relatively small compared to the “before” and “after” rectangles to signify that the training itself is typically a smaller player in the results outcome. Other performance factors usually have a greater influence in determining the level of results. Sometimes those factors are deliberate and desirable, such as job aids, new work processes, or manager coaching; frequently, they are accidental and undesirable, such as peer pressure to stick with the old approach, lack of confidence in using the new skills, or no support or time to try out the new techniques.
Restating the Two Realities
To restate the two realities:
· Reality Number 1: Training yields predictable results. Typical statistical measures used in evaluation studies can be very misleading.
· Reality Number 2: Training alone never accounts for the success or failure of the training to produce results. Therefore, attempting to parcel out the portion of the results produced by the training alone is impossible and terribly counter-productive.
To be useful, an evaluation strategy must acknowledge that these two realities are operating and then leverage them. By leverage, we mean capture and report the kind of data that helps the organization maximize the impact of training and any other performance improvement interventions in the future. An evaluation that is simply “a look in the rear-view mirror,” reporting statistics on what happened in the past, has limited value to the organization. Moreover, it can be perceived as self-serving or defensive: “Look at the wonderful results the L&D organization produced” or “We are producing meaningful results; please approve our budgets.” The message that runs through this chapter is quite simple: the goal of evaluation is not to prove the value of training; the goal of evaluation is to improve the value of training. Its primary purpose should be to help the organization produce more business impact from its training and performance improvement investments. This goal cannot be accomplished by creating a new and slicker formula for calculating ROI. It requires a strategy and method that will help L&D departments collect the kind of data, and communicate the kind of information, that will begin to change the paradigm for how their organizations view and implement training and performance improvement interventions. In other words, greater results will not be achieved by using better counting tactics, but only by taking a more strategic approach toward evaluation.
SUCCESS CASE EVALUATION METHOD
The Success Case Evaluation Method, developed by Dr. Robert Brinkerhoff, has provided this strategic perspective and has enabled HRD professionals to begin this change effort in their organizations. This strategic approach answers four basic questions:
1 To what extent did the training or performance intervention help produce valuable and concrete results for the organization?
2 When the intervention worked and produced these valuable results, why did it work?
3 When the intervention did not work, why not?
4 What should be done differently to maximize the impact of this training (or any future performance intervention) so the organization is getting the best return from its investment?
Success Case Method Case Study
Below is an actual case of one of the member companies in our user group that used the success case method (SCM) to proactively improve training, rather than just document the success of a training intervention. This organization was implementing a large and strategically critical business initiative to embed new marketing concepts and tools in its business plans and pricing decisions. Training was an important part of this initiative, building managers’ capabilities with the new pricing and marketing approaches. The training director discovered from the evaluation study that just one of the several dozen trainees had used the training to directly increase operating income by an impressive $1.87 million. In this case, it would have been very easy (although this training leader did not succumb to the temptation) to calculate an average impact estimate that would have made it look as if the typical participant had produced close to $100,000 of value from the training, well above and beyond what it had cost. And indeed, had this training function employed one of the typical ROI methodologies, this is exactly what it would have discovered and reported.
Instead, this training leader happily reported and shared in the recognition for the wonderful success that the training had helped one participant produce. But he also dutifully reported the darker side of the picture: a large proportion of the trainees came nowhere near this sort of outcome, and many, in fact, made no use of the training at all. Telling the whole story took courage, but it also drew attention to the factors that needed to be better managed in order to help more trainees use their training in similarly positive ways.
By bringing critical attention to the low usage of the training and the marketing tools, and to the projected business consequences if the strategic shift could not be made, our user group member was able to stimulate key executive decisions in several of the business’s divisions. These decisions would drive more accountability for employing the new marketing skills and more effective manager involvement. The bold actions of this training leader focused new attention on the many performance factors that drive impact and helped push strategic execution more deeply through the entire organization.
The SCM enables HPT professionals to dig beneath the results headline and investigate the real truth. Why were these great outcomes achieved? Who did what to cause them to happen? What would it take to get more such outcomes in future interventions? What prevented other people from obtaining similar great results? Only when the L&D organization begins reporting the complete story about training and business outcomes in ways that senior managers and line managers can understand and act on will they be able to effectively change the way that training and other performance interventions are perceived and ensure that they lead to business results.
The Success Case Evaluation Method: Five Simple Steps
The primary intent of the SCM is to discover and shine a light on instances in which training and other performance tools have been leveraged by employees in the workplace in remarkable and impactful ways. Conversely, the SCM also allows us to investigate instances of non-success and to better understand why these employees were unable to use what they had learned to make a significant difference in the organization.
The SCM is an elegantly simple approach that can be used to evaluate a multitude of organizational improvement initiatives, with training being just one. SCM employs an initial survey process to identify instances of success, as well as instances of non-success. The successful instances are then investigated through in-depth interviews to determine the magnitude of the impact these employees achieved when they applied their new skills and capabilities on the job. In addition, employees who were unable to successfully leverage the training are also interviewed to determine the likely causes for their lack of success. It is through this collection of “stories,” both positive and negative, that we can gain keen insight into how the organization can get maximum impact from learning interventions.
Specifically, the SCM consists of five essential steps:
Step 1: Focus and plan the evaluation study.
Step 2: Craft an “impact model.”
Step 3: Design and implement a survey.
Step 4: Interview and document both success and non-success cases.
Step 5: Communicate findings, conclusions, and recommendations.
The remainder of this chapter will look more closely at the process for conducting a success case evaluation study.
Step 1: Focus and Plan the Evaluation Study. It does not take a college degree in accounting to conclude that any dollar invested in the evaluation of learning is a dollar that will not be leveraged in the design, development, or deployment of learning. In other words, any effort to evaluate training diverts valuable resources from the training function’s most essential products and services to the organization. In times of dwindling training budgets and downsized training staffs, evaluation efforts must be thoughtfully and strategically expended.
The focal point of this first step is to clearly articulate the business question that needs to be answered. What information would help business leaders accelerate the key results or better execute a business strategy? What information would help the organization understand how the training can support these business goals? Success case evaluation studies that place these questions “front and center” are the studies that yield the greatest value for the organization. In our experience of conducting SCM evaluation studies, we have often found the following types of training initiatives to be good candidates for this type of evaluation:
1 A performance intervention that is an integral part of a critical business initiative, such as a new product launch or a culture change effort. The organization cannot afford to have these initiatives falter. An SCM study can help the organization assess lead indicators of success, or identify barriers that need to be removed, before it is too late and the business initiative fails to deliver the expected results.
2 A training initiative that is a new offering. Typically, business leaders want to be reassured that a large investment in the launch of a new training solution is going to return the favor. A new training initiative will benefit from an SCM study, especially if it was launched under tight timeframes. An SCM study can readily identify areas of an implementation that are not working as well as they should and can provide specific recommendations for modification or even redesign. In addition, an SCM study conducted following the pilot of a new initiative will provide invaluable data regarding its initial impact and help to determine whether a broader roll-out is advisable.
3 An existing training solution that is under scrutiny by senior-level management. Often, in good economic times, organizations perennially offer development opportunities for employees, without much thought to the relative value they add. But in times of severe “belt tightening,” these training solutions are usually the first to be considered for an SCM study, in order to truly understand their worth, especially if the initiative is expansive, expensive, and visible.
4 Any “soft-skills” learning resource. When looking for opportunities to increase impact and business results, business leaders frequently question the value of learning solutions that teach skills that can be generally applied in many settings, such as communication skills, customer service skills, and leadership, management, and supervisory skills. An SCM evaluation study can clearly pinpoint the ways in which employees are able to leverage these broad skill sets in ways that have a positive impact on business goals.
Regardless of the initiative selected for the SCM study, it is imperative that a relevant and important business question lies at the heart of any evaluation study.
Step 2: Craft an “Impact Model.” In this world of technology-enabled gadgets, where would we be without our onboard navigation systems, our GPS devices, or even mapquest.com? Well, frankly, we’d be lost, which is exactly where we would be without an impact model during an SCM study.
The impact model is the GPS device for the evaluation study. It is a simple illustration of the successful outcome of the training initiative we are evaluating. In other words, the impact model creates the “line of sight” that connects the following:
1 The skills, knowledge, and capabilities learned through our training solution;
2 The performance improvement we expect from employees back on the job in specific and important situations as a result of acquiring the skill, knowledge, and capabilities;
3 The results we expect given this new and improved performance; and
4 The business goals that will be directly impacted.
This visual depiction provides us with a snapshot of success that will drive the entire evaluation study. The model will help us to craft our survey in Step 3, to formulate the interview questions in Step 4, and to derive our findings, conclusions, and recommendations during Step 5. Table 5.1 illustrates an example of an impact model for a call center supervisor; a simple sketch of the same structure in code follows the table. Note that the statements in one column do not necessarily correspond row-by-row with those in the next; an impact model works more like a funnel. For example, in this case the second column lists the critical applications that produce the key job results in the third column, which in turn support the business goals in the fourth column.
Table 5.1 Impact Model for Coaching Skills Training for Call Center Supervisors
Key Skills and Knowledge | Critical Applications | Key Job Results | Business Goals
Learn a questioning technique for effective diagnosing of development level. | Help integrate new representatives into CSR teams. | |
Understand how to assess team strengths and performance gaps. | Use behavior observation and targeted questions to determine skill level of representatives. | 75 percent of CSR representatives score 90 percent or better on the universal QA form. | Increase customer renewal rates by 10 percent.
Understand how to adapt leadership style to effectively coach a CSR representative. | Coach representatives by explaining the call metrics and their relationship to the model and impact on corporate goals. | Attrition reduced to 30 percent. | Maintain or improve J.D. Power rating of 92.
Develop ability to help teams set goals and achieve goals. | | |
Learn techniques to reduce defensiveness in coaching situations. | Coach by mapping day-to-day tasks and corporate goals. Ask questions like: “Why is this task important?” | |
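For readers who like to see the structure made explicit, here is a minimal sketch, entirely our own illustration rather than any official SCM artifact, of how the four columns of Table 5.1 might be captured as a simple data object. The field names are our assumptions; note that the columns are independent lists of unequal length, mirroring the funnel described above:

```python
# Illustrative only: field names and structure are our own choices,
# not a prescribed part of the Success Case Method.
from dataclasses import dataclass

@dataclass
class ImpactModel:
    # Each column of the impact model is its own list; the lists need not
    # align row-by-row, because many skills funnel into fewer applications,
    # results, and goals.
    key_skills_and_knowledge: list[str]
    critical_applications: list[str]
    key_job_results: list[str]
    business_goals: list[str]

coaching_skills_model = ImpactModel(
    key_skills_and_knowledge=[
        "Questioning technique for diagnosing development level",
        "Assess team strengths and performance gaps",
        "Adapt leadership style to coach a CSR representative",
        "Help teams set and achieve goals",
        "Reduce defensiveness in coaching situations",
    ],
    critical_applications=[
        "Integrate new representatives into CSR teams",
        "Use observation and targeted questions to determine skill level",
        "Coach representatives on call metrics and corporate goals",
    ],
    key_job_results=[
        "75 percent of CSRs score 90 percent or better on the QA form",
        "Attrition reduced to 30 percent",
    ],
    business_goals=[
        "Increase customer renewal rates by 10 percent",
        "Maintain or improve J.D. Power rating of 92",
    ],
)

# The model then drives the rest of the study, for example by seeding the
# wording of the Step 3 survey question with each business goal.
for goal in coaching_skills_model.business_goals:
    print(f"To what extent has the training helped you to: {goal.lower()}?")
```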
Step 3: Design and Implement a Survey. It is during this step of the process that our efforts with the SCM study move from being strategic to being tactical. At this point, we have selected the performance intervention we will evaluate and have an impact model documenting what success should look like in terms of behavior and results. We now need to craft a survey that will be administered to the target audience of that initiative, so that we can identify employees who successfully applied their learning in significant and meaningful ways. In addition, we also want to uncover employees who were unable to get positive results, as their stories yield valuable insights as well.
Many questions typically arise with regard to the design and implementation of this survey. The questions we are asked most frequently include:
1 What questions should be asked on the survey? If your only goal for the survey is to identify the most and least successful training participants, the survey may consist of a single question: “To what extent have you been able to leverage [the name of the performance intervention] to have a significant positive impact on [some organizational goal]?” If, however, you want to solicit additional input from the survey, such as demographic information or the degree of managerial support, you will include additional questions to collect this data. In general, it is recommended that the survey be brief, not exceeding five to eight multiple-choice questions in total, and follow accepted best practices of survey construction.
2 To whom should the survey be sent? While there is an extensive amount of research available on sampling theory, such as Babbie’s (1990) book, Survey Research Methods, here are a few helpful guidelines. First, survey your entire target audience if it numbers fewer than one hundred participants. We anticipate, and usually experience, a 50 to 70 percent response rate, which will yield about fifty to seventy completed surveys in this case. If your target audience exceeds one hundred, then use a sample size that will result in at least fifty completed surveys, assuming a 50 percent response rate (this arithmetic is worked through in the sketch following this list).
3 Is the survey anonymous? No, this initial survey cannot be anonymous, because we need to be able to follow up with those survey respondents who, we believe, have a success story, or a non-success story, to tell. Even though we do not provide anonymity, we steadfastly guarantee every respondent’s confidentiality throughout the process.
4 How much time should elapse between participants’ attendance at the training and the receipt of the survey? This question is best answered with another question: “How long after exposure to the training is it reasonable to expect that participants would have had the opportunity to apply their new skills and knowledge on the job?” The answer will vary with the intervention, but the survey should not be sent until that opportunity has had time to occur.
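As a tactical illustration of the sampling guideline and the single success question above, here is a minimal sketch; the helper name, the 5-point scale, and the cut-off scores are our own assumptions, not anything the SCM prescribes:

```python
# The helper name, 5-point scale, and cut-off scores below are our own
# illustrative assumptions; the SCM itself prescribes none of them.
def surveys_to_send(target_audience: int,
                    needed_completes: int = 50,
                    expected_response_rate: float = 0.5) -> int:
    """Survey everyone if the audience is under one hundred; otherwise
    sample just enough that the expected response rate should still
    yield the completed surveys we need."""
    if target_audience < 100:
        return target_audience
    return min(target_audience, round(needed_completes / expected_response_rate))

print(surveys_to_send(80))   # 80  -> survey the whole audience
print(surveys_to_send(600))  # 100 -> expect roughly fifty completed surveys

# Hypothetical 1-5 responses to the single success question from this step.
responses = {"ann": 5, "raj": 1, "mei": 3, "leo": 5, "zoe": 2}

success_candidates     = [n for n, score in responses.items() if score >= 4]
non_success_candidates = [n for n, score in responses.items() if score <= 2]
print(success_candidates, non_success_candidates)  # interview pools for Step 4
```

Respondents at the top and bottom of the scale become the candidate pools for the Step 4 success and non-success interviews, which is why the survey cannot be anonymous.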