Most learning executives find it frustratingly hard to determine how much a learning event benefits their organization. After all, the direct results sit inside employees’ heads, where you can’t see them. The business benefits then unfold over time, in the midst of many other forces that also influence end results. Because of this indirectness, the business impact of training is rarely evaluated.
Unless companies are provided with better data on the business benefits of learning, they will systematically make poor decisions about how much to invest in learning and how to invest it. While data on learning costs are readily available, data on benefits remain elusive. Business leaders often respond to this imbalance by trying to optimize what they can measure. So, project by project, development costs are driven down while business impact receives less attention. It’s little wonder that companies often invest less than they suspect they should in learning.
What is particularly problematic is that companies typically can improve their bottom line far more by improving training effectiveness than by reducing learning costs. However, without better data on the business benefits of training, business leaders will systematically invest less in learning than they should and focus more on cutting costs than they should. They leave their people less well prepared and, in the process, cost themselves money.
The method for evaluating the business benefits of learning described here, called business impact analysis (BIA), offers several benefits:
- Provides a quantified output: In the training world, ROI has come to have many meanings. BIA provides hard estimates of the benefits of learning, expressed in dollar values.
- Improves how training is scoped: BIA provides management with data to make clear scope decisions and provides training teams with concrete charters for training initiatives.
- Isolates the effects of training: BIA works even in situations where many factors impact end business results, and where it might be otherwise difficult to untangle the contribution of training.
- Supports continual improvement: BIA provides detailed insight into what a learning solution accomplished and what it did not. This insight enables training groups to move from a one-shot approach to training (ship it and move on) to a data-based approach to refining training over time to better meet business needs.
These benefits come at the cost of some extra effort. For example, one recent BIA, discussed here in detail, required approximately two weeks to implement, plus $5,000 in expenses to collect "secret shopper" data. Compared with the cost of implementing the initiative blindly or cutting the wrong corners, however, those costs were minor.
What Makes the Benefits So Hard to Measure?
Why is evaluation so difficult? Consider an example learning problem. Imagine that you work for an electronics retailer. The holiday season is fast approaching. You are charged with training sales associates to sell DVD players. Your project sponsor has insisted that you measure your results. What do you do?
From his perspective, your measurement challenge is simple. He faces a current state in which sales associates are performing poorly. You are going to intervene. Your training will produce an improved new state. It is easy to determine the costs of the training. You simply need to quantify the benefits. Simple, right?
However, when you think about how to do this, you realize you face some challenges:
- The training is only one of many factors that drive results: You would like to simply look at the difference in sales. However, while you are implementing your training, the world is moving around you. Products roll over, the compensation plan shifts, and the holidays loom into sight. In "Return on Investment in Training and Performance Improvement Programs," Jack Phillips explains that the impact of your training becomes intermingled with many other factors, and they quite likely have larger impacts. So, the end result of sales revenue is volatile and difficult to tie to training.
- It is not possible to run controlled studies: You might want to form a control group to measure the impact of your training. However, holiday sales are critical in your business. Your sponsor is not willing to pay the cost of not training some associates.
- Existing metrics do not adequately measure performance: Since it seems you cannot measure end sales directly, you search for other proxies to use to measure performance. For example, you could examine what percentage of prospects sales associates closed or the average dollar value of sales. However, your hopes are dashed. The company does not track this data. Whatever data you want, you have to go collect yourself.
The key features of this example play out daily in training departments. Business managers and trainers try to treat learning interventions as "black boxes." They insert these black boxes into very complex business environments. The output data trainers would like to analyze is usually not already available, nor is it easily generated. Under these constraints, if one wants to measure business impact and provide hard results, one might have to construct a customized study. However, such studies are impractically expensive and time-consuming. Hence, business impact is sometimes measured using potentially distant proxies (e.g., could participants identify three months later some change in behavior that they could attribute to the training). Or more commonly, it is not measured at all, and business sponsors become ever more cynical about the value of training.
An Overview of Business Impact Analysis
How can we do better? The approach underlying BIA is to open up the black box by creating a simple causal analysis of how a given training course is expected to create value. With this analysis, we can then create an efficient method of quantifying the impact of a piece of training and measure its economic benefit.
We begin with the notion, borrowed from T.F. Gilbert's "Human Competence: Engineering Worthy Performance," that the outside boundary on the value a training course could possibly provide is to narrow the gap between peak performers and average performers. Hence, we begin by estimating the value of this gap. But having determined the outside boundary, how do we measure what portion of it a training intervention actually realizes?
Here, we proceed by using a form of Pareto analysis. There are many factors that cause peak performers to achieve better results than average performers. It is wasteful to try to target all of them with training. Some do not much matter to the business. Others are not amenable to training. So, instead, we focus on a set of the "critical mistakes" that separate peak from typical performance and that matter the most to the business.
To maximize and measure the benefit produced by training:
- Identify a set of potential critical mistakes.
- Estimate their cost.
- Provide a solution that eliminates the most costly mistakes.
- Measure the resulting reduction in the frequency of those mistakes.
The underlying idea here is not new. It's similar to how one improves one's home. There are always many gaps between your actual home and the ideal home you want. You cannot afford to fix everything. So instead, you identify a few gaps to work on, those that will have the biggest impact at the least cost. This year, it may be the upstairs bathroom; next year, it may be putting in a window overlooking the back yard.
Given this approach, can the benefit of a piece of training be measured? Once the cost of each kind of critical mistake has been estimated, this is straightforward. Simply determine how much the frequency of each critical mistake has been reduced and multiply that by the cost. Then, add up the results across mistakes to determine the total benefit.
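As a sketch, the roll-up just described can be expressed in a few lines of Python. The mistake names, yearly costs, and reduction figures below are hypothetical, invented for illustration; only the arithmetic follows the method.

```python
# Illustrative sketch of the BIA benefit roll-up: for each critical mistake,
# multiply its yearly cost by how much its frequency was reduced, then sum.
def total_benefit(mistakes):
    """Sum of (cost per year) x (fractional reduction in frequency)."""
    return sum(m["cost_per_year"] * m["frequency_reduction"] for m in mistakes)

# Hypothetical figures, not from the case study.
mistakes = [
    {"name": "greeting with a closed question",
     "cost_per_year": 900_000, "frequency_reduction": 0.30},
    {"name": "showing only one product",
     "cost_per_year": 600_000, "frequency_reduction": 0.15},
]
print(f"${total_benefit(mistakes):,.0f} per year")  # prints $360,000 per year
```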
A key benefit of this approach is that most of the work involved makes the training better. As the following case study illustrates, most of the work goes into identifying specifically what pulls down average performance and how much improving each cause is worth. This work makes it easy to communicate with business sponsors, scope the training and develop more effective learning programs while paving the way for quantified measurement of results.
A Case Study
To illustrate how BIA works, let's walk through the DVD case. (Note: The company name and financial data have been changed to protect the company's proprietary information.)
Step 1: Estimate the "Opportunity Boundary"
To get started, establish an outside boundary on how large the business opportunity might be. This opportunity boundary is the value of raising typical performance to the level of peak performance. Although no learning solution could completely achieve such a shift, the opportunity boundary provides a hard outside limit. In later steps, the company chips away at this outside limit, determining how much of it can actually be achieved.
To compute the opportunity boundary, perform the kind of business analysis that management consultants typically conduct when valuing their recommendations. The key is to identify the core business driver(s) that one hopes to impact, then invent a method for valuing changes in that driver. It pays to keep this analysis simple, both to save time and also to be able to communicate results. For example, in the DVD case, the business identified improving the close rate as the business driver. This is the percentage of sales opportunities that employees convert into actual sales. To calculate the opportunity boundary for the close rate, the organization performed this analysis:
No. of Chances x Close Rate Gap x Avg. Gross Margin per Sale = Opportunity Boundary
In this case, the equation looks like this:
9.5 million inquiries per year x 36 percent gap (with experts at 71 percent, and the average at 35 percent) x $10.80 margin (sale = $90, margin = 12 percent) = $37 million per year
Most of the required data was provided by the organizationï¿½s business analyst. She provided transaction volumes, sales data and margin levels. The business then gathered the close-rate-gap data by polling a sampling of department managers.
The result provided confidence that this was a problem worth solving. The gap represented a potential doubling of gross margin.
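The Step 1 arithmetic can be reproduced directly from the case figures given above; a minimal sketch in Python:

```python
# Opportunity boundary from the case figures in Step 1.
chances = 9_500_000            # sales inquiries per year
close_rate_gap = 0.71 - 0.35   # expert close rate minus average close rate
margin_per_sale = 90 * 0.12    # $90 average sale at 12 percent gross margin

opportunity_boundary = chances * close_rate_gap * margin_per_sale
print(f"${opportunity_boundary:,.0f} per year")  # prints $36,936,000 per year
```

The exact product is $36,936,000, which the case reports rounded to $37 million per year.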
Step 2: Identify Critical Mistakes
The organization then set out to identify what caused the gap between typical and peak performance. In this stage, a series of concrete critical mistakes that the business might choose to train were identified. Typical learning objectives focus on what employees should be able to know or do. In contrast, critical mistakes focus on where employees actually fall short, either by doing something wrong or neglecting to do something right. In a sense, critical mistakes ï¿½unpackï¿½ learning objectives, laying out what specific behaviors need to change.
In this analysis, the following critical mistakes were identified:
- Leaving the department for more than 30 seconds.
- Ignoring a customer while doing other tasks.
- Bothering a customer who wants to browse.
- Ignoring a browsing customer.
- Greeting with a closed question.
- Criticizing the merchandise.
- Not reserving an out-of-stock item.
- Interrupting one customer to handle another.
- Ignoring one customer while handling another.
- Showing a product before diagnosing need.
- Failing to get an answer to a question.
- Handing off a customer with a question, instead of listening to the response.
- Showing only one product.
- Letting a customer handle non-functional display products.
- Not educating customers on services.
- Not educating customers on key features.
Three steps were taken to gather this data:
- A two-hour workshop with subject matter experts (SMEs) was held. This workshop created a simple task model of the sales process and identified what the SMEs found to be common gaps.
- Next, a second two-hour workshop with a group of experienced employees was held. They were asked where new employees tend to struggle.
- Finally, a small number of field interviews were conducted with new employees to drill down on a few remaining open questions.
The entire process of identifying critical mistakes took approximately one week to complete.
Step 3: Estimate the "Identified Opportunity"
At this point, the business had valued the gap between typical and peak performance, and had identified a set of specific critical mistakes that contributed to that gap. The solution ended up targeting some of those critical mistakes. However, those mistakes are not the only causes of poor sales. Sales associates make other mistakes. They have different personalities. They bring different levels of motivation. In this step, the opportunity boundary is narrowed down. How much of it could be realized if the critical mistakes were eliminated?
This amount is the identified opportunity. Calculating it is straightforward:
Opportunity Boundary x Percentage Identified = Identified Opportunity
In this case, the equation looked like this:
$37 million per year x 69 percent = $26 million per year
The trick here is to estimate what percentage of the gap has been identified. The company did this by relying on estimates from SMEs. It surveyed a sample, asking, "If we were able to train your associates so that, on average, they commit this list of specific mistakes no more often than your best performer does, how much of the difference in their performance would we eliminate?"
This approach is a rough method for estimating the identified opportunity. However, it provides an efficient method to create reasonable, agreed-upon estimates. Other methods could have been used, such as intensively training associates in a sample of stores and observing the results. However, such methods would be significantly more expensive, both in the time they would take and the resources they would require.
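The narrowing step is a single multiplication; a sketch using the case figures:

```python
# Identified opportunity: the share of the opportunity boundary explained
# by the listed critical mistakes, per the SME survey estimate in the case.
opportunity_boundary = 37_000_000  # from Step 1, rounded
pct_identified = 0.69              # SME survey estimate

identified_opportunity = opportunity_boundary * pct_identified
print(f"${identified_opportunity:,.0f} per year")  # prints $25,530,000 per year
```

The result, $25,530,000, is reported in the case rounded to $26 million per year.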
Step 4: Estimate the Value of Providing Training for Each Critical Mistake
Next, the organization estimated the potential value of providing training for each critical mistake. These estimates were first used to set scope. Then, they were used to project the total value the training produced, given the reductions in critical mistakes it achieved. To calculate potential value per mistake, the company first determined the cost to the business of the mistake, and then estimated how much training might reduce the mistake. Analyzing a single critical mistake looks like this:
Pre-Training Frequency (how often it happens) x Cost per Occurrence (how much it costs) x Reduction in Frequency (how much you reduced it) = Value of Training the Mistake
Determining the Cost of Each Critical Mistake: The organization already knew how much all the mistakes cost, taken as a set. This is the identified opportunity. To calculate the cost of each mistake, it needed to allocate the identified opportunity across individual mistakes. Cost is allocated by the impact of each mistake, that is, by how often it happens and how much it costs when it happens. To get the frequency and impact data required, a sample of managers was surveyed.
Determining Potential Reduction: Not every mistake is equally tractable to training. For example, it may be a mistake for my postman to leave my neighbor's mail on his front curb every afternoon. But regardless of how the postman is trained, he will continue to do so until my neighbor chains his dog.
To determine potential reduction, estimate how much each mistake can be reduced via a learning solution. These estimates are generated from prior experience, making sure to use conservative numbers. In this case, the organization assumed that it could reduce the frequency of mistakes by 30 percent if they were simple procedural mistakes and by 15 percent if the mistakes were more complex (e.g., if they required associates to take on a new task or put themselves in a situation in which they might embarrass themselves).
Figure 1 shows both how the total identified opportunity was allocated, as well as how estimates of reduction were used to compute a value for providing training for each mistake.
The data showed that managers were hard critics. Taken literally, the frequency data would mean that an average customer had to endure more than nine mistakes. (Note: That's how to interpret the total frequency of 931 percent.) Furthermore, since managers estimated that the customer would likely walk out when subjected to most of these mistakes, this would imply that the firm made few sales indeed! Clearly, this was not the case in actual practice. In fact, managers rated the frequency of mistakes about twice as high as later analysis showed they happened.
However, such inaccuracies did not harm the analysis. While managers may not be accurate in estimating the absolute frequency and impact of mistakes, they are more reliable at rating their relative levels. And for this analysis, only the relative levels matter. Knowing those enabled the organization to allocate the identified opportunity across mistakes. The size of the identified opportunity itself had been set previously via top-down analysis.
This step of the analysis yields an estimate of the business value of the training. If all of the identified mistakes were included in the scope of the training and could be reduced as predicted, the training program would produce improved margin of about $6 million per year. This number is both much more realistic than the $37 million opportunity boundary and still high enough to provide a very large ROI.
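A sketch of the allocation logic: the identified opportunity is spread across mistakes in proportion to each mistake's frequency times impact, then multiplied by the assumed reduction (30 percent for simple procedural mistakes, 15 percent for complex ones, per the case). The mistake names and weights here are hypothetical.

```python
# Spread the identified opportunity across mistakes in proportion to
# (relative frequency) x (relative impact), then apply the assumed
# reduction rates from the case. Names and weights are hypothetical.
identified_opportunity = 26_000_000

mistakes = [
    # (name, relative frequency, relative impact, kind)
    ("greeting with a closed question", 0.8, 0.5, "simple"),
    ("showing only one product",        0.6, 0.7, "simple"),
    ("ignoring a browsing customer",    0.4, 0.9, "complex"),
]
reduction = {"simple": 0.30, "complex": 0.15}

weights = [freq * impact for _, freq, impact, _ in mistakes]
total_weight = sum(weights)

for (name, _, _, kind), w in zip(mistakes, weights):
    cost = identified_opportunity * w / total_weight  # allocated yearly cost
    value = cost * reduction[kind]                    # projected training value
    print(f"{name}: cost ${cost:,.0f}, training value ${value:,.0f}")
```

Because only relative levels of frequency and impact enter the weights, the managers' absolute overestimates (noted below) do not distort the allocation.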
Step 5: Set Scope
The company could choose to provide training on all identified critical mistakes. However, such an approach could result in an unduly large training program, one forced to cover content too lightly and therefore ineffectively. Given that some critical mistakes cost the business much more than others, the organization narrowed the training down to cover the most important mistakes.
It could have used the data produced in the prior step to select the most costly mistakes. That information provides concrete data that managers can readily employ to set scope. However, in the actual case, a different approach was employed. Due to a very tight timeline on this project, scope had to be set and content development started before the analysis was complete. So, a panel of SMEs was asked to set scope, given a list of the critical mistakes, but without the benefit of the above data. Figure 2 summarizes their selections.
The results show that the organization could expect approximately a $4 million per year increase in margin due to the training, based on the scope set.
Later, when the analysis was complete, it revealed that setting scope without the benefit of the above data had been somewhat costly. The panel selected 12 critical mistakes to keep in scope. With the benefit of hindsight, they ended up including a couple of low-impact mistakes and excluding several high-impact mistakes. If the company had waited for the analysis and chosen the 12 most costly mistakes, it could have increased the impact of the training by an estimated 12 percent, or $480,000 per year.
Step 6: Provide Training
The organization then proceeded to develop the training program itself. The program employed sales simulations delivered via the Web. The content focused tightly on the critical mistakes identified above, with each simulation decision providing opportunities to make one or more mistakes and get coaching.
Step 7: Measure the Results
To measure the actual impact of the training, the business now had only a limited task. It had already identified the cost of each kind of mistake. Given that, could it validate how much the training actually reduced the mistakes? To do this, it used secret shoppers.
The results were encouraging. The program beat conservative estimates for reducing mistakes by a significant margin. Instead of producing the approximately $4 million per year benefit projected, it instead produced a $7 million per year benefit. In general, the benefits of good, well-focused training are so high that it is best to be conservative when making estimates. Far better to beat your promises than to fall short!
The secret shopper analysis took two weeks to complete for each segment (pre- and post-training). It cost $5,000 for the 100 visits required. To conduct it, a script was developed for the secret shoppers to follow, which gave associates the opportunity to make mistakes. The secret-shopper service then sent shoppers out to 50 stores before the training was shipped and then again to 50 stores after the training was delivered. While the script did not test every mistake included in the scope of the training, it did cover most of them. The secret shopper results were generalized to estimate reductions for the mistakes not covered.
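Given per-mistake costs from the earlier allocation, turning the secret shopper observations into a dollar figure is mechanical: compare pre- and post-training frequencies, take the fractional reduction, and multiply by each mistake's yearly cost. All figures below are hypothetical.

```python
# Convert secret shopper observations into a measured dollar benefit:
# fractional reduction in each mistake's frequency times its yearly cost.
# All figures below are hypothetical.
observations = [
    # (name, cost per year, pre-training frequency, post-training frequency)
    ("greeting with a closed question", 1_200_000, 0.50, 0.30),
    ("criticizing the merchandise",       400_000, 0.20, 0.15),
]

benefit = 0.0
for name, cost, pre, post in observations:
    observed_reduction = (pre - post) / pre  # fraction of occurrences eliminated
    benefit += cost * observed_reduction

print(f"${benefit:,.0f} per year")  # prints $580,000 per year
```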
Step 8: Analyze Results to Determine Additional Opportunities for Improvement
Next, the analysis was taken to the business sponsor. The business sponsor had already heard good word-of-mouth feedback, so he was prepared to see a good result. He had conducted an annual internal conference between when the training was released and when we spoke to him. The year before, good product training had been the major complaint within the department. This year, it did not show up on the radar screen.
The analysis did more than simply validate that the training had the desired impact. It also helped uncover further opportunities for improvement. By analyzing the value of training on specific critical mistakes and the field results achieved, two specific opportunities were identified:
- Add critical mistakes: Some mistakes were left out that were worth including. Given the demonstrated impact of the training, it is worth expanding its scope.
- Do a better job at eliminating certain classes of mistakes: The data showed that the organization had successfully reduced simple process errors (e.g., letting a customer handle a non-functional display model). Similarly, product knowledge errors were successfully reduced. However, the training did not succeed in getting associates to take up new sales tasks. Too often, associates still ignored customers. Too often, they did not try to sell add-ons. In fact, the secret shopper analysis showed that performance on these mistakes had actually gotten a little worse post-training. (The business attributed this to the growing rush at the start of the holiday season.) Here, it was hypothesized that this was not a training problem, at least for the associates. Associates knew what they should do. They made these errors because of incentives and management focus. Based on this hypothesis, additional analysis was recommended to identify the root causes of these errors and to develop appropriate performance solutions.
Most training is produced in a "ship it and move on" mode. Developers have many projects to complete. They work the project, hand it off and move to the next. A major benefit of business impact analysis is that it supports a different mode of using training. Instead of trying to produce just the right solution the first time, it shows how one can use data to continually refine and improve training, based on concrete feedback of what works and what does not. This is most likely to be useful when dealing with large audiences and relatively constant content. Using BIA, trainers and their sponsors can choose to solve the largest parts of the problem initially with a constrained solution. If that works, they can choose to evolve and extend solutions over time.
Stepping Back: Helping Business Sponsors and Trainers Collaborate
Sometimes it seems like business sponsors are from Mars and trainers are from Venus. It is distressing how often they fail to communicate effectively. A major cause is that they lack clear methods for making joint agreements. They talk past each other, each group using its own language. Trainers hear about "costs per unit" and "share of wallet" and begin to get impatient: where are the learning objectives? Similarly, business sponsors hear about "learning objectives" and "levels of interactivity," and their eyes begin to glaze over: what's the bottom line? BIA provides business sponsors and trainers with a common ground and a common language that they can use to collaborate.
Today, business sponsors are often asked to sign off on learning objectives as the tool for establishing scope. This can be difficult. On what basis can they decide whether this or that objective should get more stress? Then, they are rarely told what impact their investments in learning have had on their business. Using BIA, business sponsors are asked to make straightforward, role-relevant decisions based on concrete data. How much scope would you like to buy? To decide, you can review a list of critical mistakes with estimates available for the value of training each. You can see how these estimates were derived. And this is how much it will cost to train a set of them.
Similarly, trainers are asked to make commitments that are relevant to their role. Can you run a process that helps me set scope? Given the critical mistakes identified, how much can you reduce them through a learning solution? What will be the solution cost?
Detailed measurement can be costly. The difficulty is often blamed on inadequate measurement instruments or insufficient statistical techniques. However, the problem runs deeper than that: its roots lie in how we tend to scope training.
It's worth noting how little of the effort in BIA is actually spent on evaluation per se. Rather, most is dedicated to understanding the business problem, and most of the data is gathered during scoping, not after delivery. What are the mistakes? What does each cost? How well can you eliminate each? By addressing these questions, you develop a quantified, causal understanding of what separates peak performance from typical performance. This enables you to decide how best to narrow the gap. The data goes beyond evaluation: it helps build a better solution.
For any audience and task, there is always a long list of potential learning objectives one might try to work on. Business impact analysis provides a data-based way to narrow down the list. It does this in a way that enables participants to take responsibilities that fit their roles. Business sponsors set investment levels based on projected paybacks. SMEs identify problems and estimate their frequency and impact. Trainers organize the process and project levels of behavioral changes and training costs.
As a further benefit, BIA provides the opportunity for continual improvement. Today, training developers are typically asked to ship training and move on. Instead, using BIA, trainers and their sponsors have the option to gradually evolve effective solutions based on field results.
Chip Cleary is vice president of design at CognitiveArts and leads the advisory services practice for NIIT, its parent company. Chip can be reached at firstname.lastname@example.org.