When it was first introduced, Donald Kirkpatrick’s four-level approach to the evaluation of learning was revolutionary work that gave training professionals a model by which to finally measure the effectiveness of training programs. Even today, almost 30 years after the collection of Kirkpatrick’s work, “Evaluating Training Programs: The Four Levels,” was first published, virtually all training programs are still assessed using this model.
In recent years, there has been growing criticism of Kirkpatrick’s approach to evaluating learning programs. Kevin Kruse (facilitator of E-LearningGuru.com) reminds us that the “critics of the Kirkpatrick model say that it doesn’t take the business impact far enough and that the final step in any training program should be a ‘fifth level’ of evaluation—financial return. This ultimate evaluation determines the financial return on investment (ROI) of the training program.”
Many of Kirkpatrick’s detractors fail to escape what we will call the Kirkpatrick paradigm, where the learning executive makes virtually all decisions about what is important, what should be measured, what should be reported and what constitutes ROI for the learning. Frequently, these determinations occur as soon as a request for training is made, and prior to any serious discussion with the stakeholder who requested the learning initiative.
Even when there is a cursory dialogue between the CLO and the primary stakeholder, the real business requirements tend to be subordinated so that the reporting will correspond with Kirkpatrick’s four levels of evaluation.
There is a supposition on the part of learning executives that:
- Learning is exempt from the rules that apply to other business processes.
- There are some universal metrics that quantify the effectiveness of every learning program.
This philosophy fails to take into account many of the factors that contribute to the identification of ROI for other business processes. These factors, which include corporate goals, corporate culture, different audience types and the position of the process in the organization, do not fit the Kirkpatrick model so neatly.
Other components of the Kirkpatrick paradigm encompass its methodology and language. Evaluation models based on Kirkpatrick use a legacy training approach to measure the “business” impact of the activity. These “business” results are then reported in a proprietary “training” language. If one is trying to show business results or impact, why not use a business model and business tools to measure, and business language to report?
From the perspective of many businesspeople, the foreign language, unfamiliar methodology and seemingly “meaningless” reports have long been sources of frustration. The advent of electronically delivered training programs and the increased capital investment required to develop learning programs have intensified this frustration, and possibly have made the Kirkpatrick model obsolete.
Why Apply Business Methodology?
Before getting into the specifics of how business methodology applies to training programs, it is important to understand why learning professionals should adopt this philosophy. To demonstrate this, let’s look at two common scenarios that illustrate the disconnect that takes place when business metrics are not used to calculate learning impact:
- Scenario I: You are headed to a “lessons learned” meeting to give a final report about a learning program that your group developed. This program taught employees how to use a new feature of an internally developed processing system. You consider this some of your best work. Better yet, for the first time, you had the opportunity to track all four levels of evaluation. Level One appraisals showed that 99 percent of the learners felt that every aspect of the program was excellent. Every student passed the Level Two assessment. Post-training interviews with students and their supervisors revealed that virtually everyone who attended the program was now using the new feature. As you head to the meeting, your secretary hands you a report from Operations. The decreased processing time saved the company $10,000 during the last quarter. As you give your report, you can’t understand why you are the only one smiling. You go back to your seat a little confused. The CFO acknowledges the $10,000 savings and then points out that the cost to develop and deploy the program was $160,000, meaning that it would take four years just to recoup the cost of the best work that you have ever done.
- Scenario II: You are the manager of e-learning for a small financial services industry firm. One of the company’s business units is about to launch a new product. The manager of the product line approaches you about developing an e-learning course to “teach” users how to use this new service. You engage the internal client in a discussion about the content, the time frame, system access, subject-matter experts and system specifications. You explain the value of evaluations and post-training follow-up. You also convince the product manager that once the e-learning is completed, tracking the product usage will demonstrate its business impact. Finally, you describe how the training can be hosted on the company’s learning management system (LMS), which will track who has taken the e-learning, generate reports that demonstrate how much students learned, how long students were online and how many times they visited. Everyone is excited about the project. The product manager is impressed with the methods that you are going to employ to ensure that the e-learning really “worked.” You are ecstatic because you finally have someone from the business unit who understands the importance of measuring impact. The two of you agree that you should put together a project plan right away. You go back to your office to start the project plan when you get a call. The product manager, still excited, says, “Just one more thing, how much is this going to cost?” You send the product manager a detailed project plan indicating the cost of each task in the process. Sensing that there may be a problem, you attach a note saying that as soon as you get the “OK” from him, your group will begin the work. A couple of days go by with no word. At the end of the third day, you receive an invitation to meet with the product manager and his boss. The subject line in the request is “training costs.” At the meeting you give a detailed description of the costs associated with each of the project’s phases. You even identify ways that you can reduce the overall cost by about 10 percent. The meeting ends with an agreement that the product manager will contact you within a couple of days about how they would like to proceed. Another week goes by with no word. Your follow-up e-mail goes unanswered. You leave a voicemail message before heading home for the weekend. When you arrive at work on Monday morning, there is a message from the product manager informing you that their division has decided not to use e-learning with this release of the product, but that they would contact you if there was a need later. Later that day you run into the product manager at a company function. You pull him aside and ask what happened. His reply is, “Listen, all we wanted was a brief overview highlighting that our new service was available. All of this Level One stuff, follow-up and tracking sounds good, but there’s no way we’re spending that type of money just to let people know that the product is available. You training folks just don’t understand real business.”
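The CFO’s objection in Scenario I is a simple payback-period calculation. A minimal sketch, using only the figures given in the scenario:

```python
# Payback period for the Scenario I program (figures from the scenario).
development_cost = 160_000   # cost to develop and deploy the program, in dollars
quarterly_savings = 10_000   # measured processing-time savings per quarter

quarters_to_recoup = development_cost / quarterly_savings  # 16 quarters
years_to_recoup = quarters_to_recoup / 4                   # 4 years

print(f"Payback period: {years_to_recoup:.0f} years")
```

Four levels of glowing evaluation data did not change this arithmetic, which is why the room was not smiling.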
In both scenarios, the result was a dissatisfied customer who had the impression that the training group either misunderstood or could not meet the business requirements. In both cases, there was a genuine attempt on the part of the learning executive to show the business impact of training. The problem was that the methodology used to determine business needs was a learning methodology, not a business methodology.
Learning methodologies do a good job of identifying some issues that are important to training professionals, but are not equipped with the tools necessary to sufficiently capture business requirements. In order to accurately capture the issues that are critical to all process stakeholders, a business methodology (with business tools) must be used. In order to effectively communicate with business professionals, a business language must be spoken.
Which Business Methodology Should Be Used?
There are many business models that employ tools to identify business requirements. Total Quality Management (TQM) and the Software Development Life Cycle are both familiar, and Six Sigma is currently the preferred methodology of many businesses. It is credited as a major factor in the resurgence of General Electric. Former GE CEO Jack Welch even described it as the “most ambitious undertaking the company had ever taken on.”
Six Sigma uses a five-step methodology (DMAIC) in order to ensure that customer/business requirements are met. DMAIC is an acronym representing the five phases of the Six Sigma process: define, measure, analyze, improve and control. In the define phase, specialized business tools are used to identify customer requirements. The measure phase compares the outputs of the current process to the newly identified requirements. This comparison identifies areas of opportunity for the process. The analyze phase employs statistical tools that validate why the process is not meeting customer requirements in the areas of opportunity identified. The improve phase is where solutions that ensure the process meets customer requirements are generated. Finally, the control phase ensures that the process will not revert to its old ways.
As the previous learning scenarios illustrated, the biggest disconnect between the learning professional and the business professional occurs in the identification of customer (business) requirements. Therefore, the focus here will be on the tools and the processes used in the define phase of Six Sigma, where business requirements are identified, measures are established and the reporting language is agreed upon.
Identifying the Critical Business Requirements
To successfully identify business/customer requirements, learning executives must first understand who those customers are. Traditionally, learning professionals have almost exclusively focused on the learner to determine the critical training or learning. Thus, Kirkpatrick’s Level One captures the feelings of the student, Level Two measures what the student learned in the session and Level Three attempts to measure the student’s use of the skills in the workplace. The only attempt to gather the perspective of the business occurs in Kirkpatrick’s final level of measurement, where learning executives attempt to show the first three measures’ business impact. It must be said, however, that according to the American Society for Training and Development (ASTD) State of the Industry report, in 2002, only 11 percent of learning organizations even measured to Level Four. This approach virtually ignores the perspective of the entity that, in many cases, finances the training to begin with: the business stakeholder.
Six Sigma ensures that the perspectives of all process stakeholders are addressed by capturing what it calls output indicators of the process, that is, the measurable and prioritized list of the critical requirements of both the business stakeholders and the customer (end user or student). Identifying these indicators is accomplished by capturing what Six Sigma calls the voice of the business (VOB) and the voice of the customer (VOC). (See Figure 1.)
The Voice of the Customer
The VOC may come from a variety of sources, including surveys, phone calls or written complaints. The VOC is then categorized into key customer issues, which are converted to critical customer requirements (CCRs): specific, measurable targets. Suppose, for example, that you receive telephone calls or written comments saying that your e-learning programs are too long. These types of comments and any other feedback mentioning course length, number of assessment questions or download time are then categorized under “time.” Knowing that time or course length is an issue for the end users of your programs, you would then survey the customers to identify (from their perspective) how long a lesson or a course should be. If your customers tell you that lessons should be no more than 10 minutes long, then that time becomes one of the output indicators that must be met to ensure student (end user) satisfaction with your course.
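The categorization step described above can be sketched as a simple grouping exercise. The feedback comments below are hypothetical, chosen to echo the “time” example in the text:

```python
from collections import defaultdict

# Hypothetical raw VOC feedback, each comment tagged with the key issue
# it was categorized under during review.
feedback = [
    ("The lessons take too long to complete", "time"),
    ("Download time is excessive", "time"),
    ("There are too many assessment questions", "time"),
    ("The narration is hard to hear", "audio"),
]

# Group raw comments into key customer issues.
issues = defaultdict(list)
for comment, category in feedback:
    issues[category].append(comment)

# Surveying customers converts the dominant issue into a critical customer
# requirement (CCR): a specific, measurable target.
ccr = {"issue": "time", "target_minutes_per_lesson": 10}

print(f"'time' comments: {len(issues['time'])}, CCR target: {ccr['target_minutes_per_lesson']} minutes")
```

The output of this exercise is not the grouping itself but the measurable target: “lessons no longer than 10 minutes” becomes an output indicator for the program.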
The Voice of the Business
The same process is used to capture the VOB. Business partners, general managers, business unit managers and any other business stakeholders are interviewed to determine their key issues. Corporate goals and initiatives are also examined. After these issues are categorized, they are converted into measurable targets.
An example of this at work might be as follows: The business has a goal to reduce all product development costs. Business unit managers want the same level of learning support at a lower cost. These types of issues are categorized as a topic called “cost reduction.” In surveying your business stakeholders, you may find that they are willing to pay no more than $26,000 for the development of one hour of e-learning. This measurable target then becomes one of the measurable outputs that must be met in order to ensure business satisfaction.
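Once a VOB target like this exists, any proposed program can be checked against it directly. A hypothetical check, where the proposed cost and course length are invented for illustration:

```python
# Hypothetical check of a proposed e-learning project against the VOB
# target of $26,000 per hour of finished e-learning.
COST_TARGET_PER_HOUR = 26_000

proposed_cost = 78_000   # hypothetical total development cost, in dollars
course_hours = 2.5       # hypothetical hours of finished e-learning

cost_per_hour = proposed_cost / course_hours   # $31,200 per hour
meets_vob_target = cost_per_hour <= COST_TARGET_PER_HOUR

print(f"${cost_per_hour:,.0f}/hour -> meets VOB target: {meets_vob_target}")
```

In this hypothetical case the proposal misses the target, which tells the learning group to reduce scope or cost before the business conversation in Scenario II ever happens.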
One method that might be used to help identify critical business requirements is simply to ask business partners, “Why do you want training?” The answer to this simple question will greatly assist in determining what needs to be measured. If the answer is, “We have a compliance requirement,” then simply reporting on the number of students enrolled might be sufficient to measure business impact. If the answer is, “We are trying to generate revenue from our learning programs,” then it is necessary to report on the revenue derived. And if the answer is, “We want smarter employees,” then it makes sense to build and measure the results of Level Two assessments. In each of these examples, the business partner, not the learning executive, decides what should be measured.
A prioritized listing of the measurable customer and business requirements then becomes what Six Sigma calls output indicators. While compiling and prioritizing the output indicators, one must remember that not all customers are created equal, and neither are all business units. The requirements of a business unit that pays for 70 percent of your learning initiatives, for example, have more weight than those of one that only pays for 5 percent. The feedback of students who comprise 80 percent of your user population holds greater weight than feedback from a student who represents 2 percent of your user population.
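This weighting can be sketched as a sort of the output indicators by stakeholder share. The requirement names are hypothetical; the shares echo the percentages in the text:

```python
# Hypothetical output indicators, each weighted by the share of learning
# spend (or of the user population) that its stakeholder represents.
indicators = [
    ("Development cost <= $26,000 per e-learning hour", 0.70),  # unit paying 70% of spend
    ("Lessons no longer than 10 minutes", 0.80),                # 80% of user population
    ("Quarterly usage report delivered", 0.05),                 # unit paying 5% of spend
    ("Feedback from a single student", 0.02),                   # 2% of user population
]

# Requirements of the largest stakeholders carry the most weight.
prioritized = sorted(indicators, key=lambda item: item[1], reverse=True)

for requirement, share in prioritized:
    print(f"{share:>4.0%}  {requirement}")
```

A real prioritization would combine weight with other factors (risk, cost to measure), but the principle is the same: the stakeholder’s share of spend or usage, not the learning executive’s preference, drives the ordering.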
The output indicators identify everything that must be measured about the learning program, as well as the measurable targets that must be met in order for the learning to make a business impact. The items on this list are compiled from the perspective of the business partner and the end user, and are therefore written in a language that is familiar to and accepted by those constituents.
Donald Kirkpatrick is a visionary and pioneer who provided learning professionals with a methodology to measure the effectiveness of their initiatives from a performance improvement perspective by measuring the feedback of the student or end user of the program. Six Sigma, however, provides a proven and respected methodology that captures the perspectives and requirements of all training stakeholders, and finally gives learning professionals an alternative to the Kirkpatrick paradigm for identifying and measuring the business impact of learning.
Kaliym Islam is the director of instructional technologies for The Depository Trust & Clearing Corp., where he oversees all technology-based training and e-learning strategies. E-mail Kaliym at editor@CLOmedia.com.