Organizations struggle with the costs, benefits and return on investment of learning. But evolving tools and technology now enable organizations to apply business analytics to understand the effectiveness and impact of training. The goal of any analytics solution is to help an organization understand what’s going on in its training operation. The solution should answer basic business questions: How much did something cost? What were the components of the cost? Who completed a learning offering? What can we do to improve it?
An analytics solution also should provide different users the information they need to make decisions. Bersin & Associates has identified three categories of analytics users. Depending on job roles, they use information for different purposes. (See Figure 1.)
These different groups need different views of information. Executives want dashboards and charts. Line managers typically need tabular reports and charts designed around their audience and programs. Training managers and executives need the ability to slice, dice and filter information.
The Need for Learning ROI
ROI can have many connotations depending on the users’ perceptions and motivations. ROI is really a measure of perceived value. And value can be different for different stakeholders.
The most important question you should ask when contemplating a learning measurement calculation is: How will the users of this information define value? By definition, this requires a balanced approach to learning measurement. A balanced approach necessitates a broad understanding of all stakeholders’ perceptions of ROI.
Learning Measurement Models
Learner-Based Model: The learner-based measurement model captures data from training participants at two points in the learning process. The first occurs immediately after the learning intervention (post-event) to gauge satisfaction and learning effectiveness. Because there is a high response rate to these data instruments, you can also capture indicators for advanced levels of learning, such as job impact, business results and ROI. These indicators help forecast or predict training’s future impact.
A second data collection point occurs in a follow-up survey conducted after the participant has been back on the job for a period of time. This survey is meant to “true up” the forecast and predictive indicators of Levels 3, 4 and 5 by gathering more realistic estimates now that the participant is presumably applying the training.
The approach is low-cost if an organization leverages standard data collection instruments across training activities and uses technology and automation to capture, process and report the data. Thus, learner-based measurements can be easily implemented for all learning activities to yield continuous measurements.
Manager-Based Model: This method has the same data collection points but adds a manager-based dimension. The manager of the training participant is sent an evaluation instrument timed to match when the participant receives a follow-up. The manager survey focuses on providing estimates related to job impact, business results and ROI from the manager’s perspective. The manager survey also asks “support-type” questions to understand the on-the-job environment where the participant applied the training.
Due to the increased effort it takes to conduct and analyze manager surveys, the cost and time to measure at this level are higher than with the purely learner-based approach. But with automation and technology to facilitate the dissemination, collection, processing and reporting of the data, the cost and time can be minimal. In essence, it could be used on a continual basis for every learning event a participant attends. But more realistically, it will be used on a periodic basis for more strategic programs when manager data is most relevant.
Analyst-Based Model: This approach uses significantly more comprehensive post-event follow-up and manager surveys. It also uses other analytical tactics that go beyond surveying. For example, to analytically measure Level 2 (learning effectiveness), a detailed test is designed and administered to participants. Due to the time commitment of conducting a significantly detailed data collection and analytical exercise, the analyst-based approach might only be used for about 5 percent to 10 percent of all training programs. Typically these programs are the most strategic or visible and have the budget for a more costly and time-consuming measurement exercise.
Estimation, Isolation and Adjustment
Jack Phillips’ guiding principles include the elements of estimation, isolation and adjustment. These are the cornerstones of monetizing a benefit (the numerator in the ROI equation) and linking it to training.
Estimation is a common business process. Salespeople estimate their future sales, and accounting people estimate the cost of a warranty or claim expected in the future. Similarly, training personnel ask participants (and managers) to estimate the performance impact a training program will have on their jobs. Participant estimation does not capture performance change attributable solely to learning; rather, it asks participants to estimate job performance change in general, with training as one of several contributing factors.
For example, one might estimate an increase in job performance following attendance at a sales training session. But that increase could be related to other factors—such as a competitor going out of business—that increase sales performance more so than training. Therefore, estimates of performance change need to take many factors into account, including process changes, people changes, marketplace changes, technology changes and, of course, training.
The next step is to isolate the estimated increase in performance to just training. In this part of the process, the participant should estimate how much the training has influenced or will influence job performance, relative to the other factors, and assign a value to it. If the salesperson felt that training was the strongest factor that caused change or will be the driving force behind future change, it would receive a higher value.
Finally, because participant estimation and isolation is participant-driven, one must adjust any resulting ROI calculation for the estimate. Again, this is a common business process. Using shades of analysis (such as “most likely,” “optimistic” and “pessimistic”) the estimator adjusts estimates for bias and flaws in assumptions. You’ll often see sales forecasts reported in this manner.
Adjustment is made for two reasons. The first is conservatism, one of Phillips’ guiding principles; conservative assumptions build integrity into your ROI model. The second reason is bias: self-reported estimates from participants are typically inflated. In fact, studies by organizations such as the Tennessee Valley Authority suggest that respondents tend to overestimate by 35 percent. To that end, when computing ROI, one might reduce the inputs by 35 percent, or a similar confidence factor, to adjust for conservatism and bias.
The principles of estimation, isolation and adjustment form a systematic, replicable and comparable model for human capital ROI. The result of the process is a monetized benefit factor that, when multiplied by the participant’s salary (the human capital), yields a monetized benefit from training.
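To make the arithmetic concrete, here is a minimal sketch of the estimation-isolation-adjustment sequence in Python. The function name, the sample figures and the 35 percent adjustment factor are illustrative assumptions, not values prescribed by Phillips’ methodology:

```python
def monetized_benefit(salary: float,
                      estimated_performance_change: float,
                      training_isolation_factor: float,
                      adjustment_factor: float = 0.35) -> float:
    """Illustrative calculation of a monetized training benefit.

    salary: annual salary of the participant (the human capital).
    estimated_performance_change: participant's estimate of overall job
        performance improvement from all factors (e.g. 0.20 for 20 percent).
    training_isolation_factor: share of that improvement the participant
        attributes to training rather than other factors (e.g. 0.50).
    adjustment_factor: conservative reduction for self-report bias
        (0.35 reflects the roughly 35 percent overestimation noted above).
    """
    # Estimation: overall performance change, from all factors.
    raw_benefit_factor = estimated_performance_change
    # Isolation: keep only the portion attributed to training.
    isolated = raw_benefit_factor * training_isolation_factor
    # Adjustment: discount the self-reported estimate for bias.
    adjusted = isolated * (1 - adjustment_factor)
    # Monetize by applying the adjusted factor to the salary.
    return salary * adjusted


# Example: a salesperson earning $60,000 estimates a 20 percent
# performance gain and attributes half of it to training.
benefit = monetized_benefit(60_000, 0.20, 0.50)
print(f"Monetized benefit: ${benefit:,.2f}")  # $3,900.00
```

In practice, the estimate, the isolation share and the adjustment factor would come from the post-event and follow-up surveys rather than being hard-coded.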
Through automation and technology, this model can be used to drill deep into a specific business result, such as the ROI on sales, quality, productivity, cycle-time, customer satisfaction or employee retention.
Having established a method for obtaining a monetized benefit from training, we can use standard financial terms to establish ROI indicators for a human capital ROI scorecard: the benefit-to-cost ratio, ROI percentage and payback period.
The benefit-to-cost ratio is the most relevant of the three. It is simply the monetized benefit divided by the costs of the training. The costs should also be fully loaded for conservatism. Typical costs include items such as needs assessment, design, delivery, materials, overhead, evaluation, lost work time of participants and travel expenses. The benefit-to-cost ratio will then be a conservative view on the financial ramifications of your learning program.
Another financial ratio is the ROI percentage. This is the benefit minus the cost, divided by the cost, expressed as a percentage. Although the ROI percentage is the more familiar financial measure, the benefit-to-cost ratio is often better suited to training because it is easier to interpret and is less likely to be compared against ROI figures from projects that are not human-capital-based.
The final ROI indicator is payback period. This is a time-based financial metric. It tells you how many months are required before you break even on the investment, after which there is a positive return. It is good to provide time-based metrics to balance out your scorecard.
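As a quick sketch, all three indicators can be computed directly once you have a monetized annual benefit and a fully loaded cost; the figures below are hypothetical and chosen only for illustration:

```python
def roi_indicators(annual_benefit: float, fully_loaded_cost: float) -> dict:
    """Compute the three scorecard indicators from a monetized benefit.

    annual_benefit: monetized benefit attributed to training, per year.
    fully_loaded_cost: all program costs (needs assessment, design,
        delivery, materials, overhead, evaluation, lost work time, travel).
    """
    bcr = annual_benefit / fully_loaded_cost                    # benefit-to-cost ratio
    roi_pct = (annual_benefit - fully_loaded_cost) / fully_loaded_cost * 100
    payback_months = fully_loaded_cost / (annual_benefit / 12)  # months to break even
    return {"benefit_to_cost_ratio": bcr,
            "roi_percentage": roi_pct,
            "payback_period_months": payback_months}


# Hypothetical program: $39,000 in monetized annual benefits
# against $15,000 in fully loaded costs.
print(roi_indicators(39_000, 15_000))
# {'benefit_to_cost_ratio': 2.6, 'roi_percentage': 160.0,
#  'payback_period_months': 4.615...}
```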
The Balanced Scorecard
ROI expresses value to your stakeholders, and it can mean different things to different stakeholders. Merely presenting training managers with a financial metric won’t meet their measurement needs. They want and need feedback on instructor performance, courseware quality and more. Hence the need for an ROI scorecard with a balanced set of metrics that provide indicators on all five levels of learning, not just a financial ROI.
What should these measures be? One suggestion is to use a small set of measurements composed of data gathered in a consistent manner on a continual basis. A scorecard built on these metrics can be generated in real time for any learning event or combination of events you choose.
The key performance components that comprise a balanced scorecard include:
- Level 1: Satisfaction
- Level 2: Learning effectiveness
- Level 3: Job impact
  - Time to job impact
  - Barriers to use
  - Post-training support
- Level 4: Business results
  - Job performance change
  - Business drivers impacted by training
- Level 5: ROI
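As a minimal sketch of how such a scorecard might be assembled from consistently gathered survey data, the snippet below averages a few hypothetical response records; the field names and values are illustrative assumptions, not a prescribed schema:

```python
from statistics import mean

# Hypothetical post-event survey records, one per participant,
# gathered the same way for every learning event.
responses = [
    {"satisfaction": 4.5, "learning_effectiveness": 4.0, "job_impact": 3.8,
     "business_results": 3.5, "roi_estimate": 2.4},
    {"satisfaction": 4.0, "learning_effectiveness": 4.2, "job_impact": 4.1,
     "business_results": 3.9, "roi_estimate": 3.1},
]

def scorecard(records):
    """Average each indicator across responses to build a real-time scorecard."""
    metrics = records[0].keys()
    return {m: round(mean(r[m] for r in records), 2) for m in metrics}

print(scorecard(responses))
# {'satisfaction': 4.25, 'learning_effectiveness': 4.1, 'job_impact': 3.95, ...}
```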
As you move forward developing and implementing your own learning measurement model that incorporates different views of value and attempts to deliver a balanced ROI scorecard, consider these best practices:
- Plan your metrics before writing survey questions. Never ask a question on a data-collection instrument unless it ties to a metric you will use. As simple as this sounds, organizations often create questions with no purpose in mind.
- Ensure the measurement process is replicable and scalable. Organizations will spend thousands of dollars on one-off projects to measure a training program in detail. The information is collected over months with exhaustive use of consultants and internal resources. Although the data is powerful and compelling, management will often respond, “Great work. Now do the same for all training.” Unfortunately, one-off measurement projects are rarely replicable on a large-scale basis. Don’t box yourself into that corner.
- Ensure measurements are internally and externally comparable. A one-off exercise is significantly less powerful when you have no baseline of comparison. If you spend several months calculating a 300 percent ROI on your latest program, how do you know if that is good or bad? Surely a 300 percent ROI is a positive return, but what if the industry average on similar training programs is 1,000 percent ROI?
- Use industry-accepted measurement approaches. Management is looking to the learning group to lead the way in learning measurement. Just as a finance department must convince management of the way it values assets, it is your job to convince management that your approach to learning measurement is reasonable. In both cases, the group must ensure the approach is based on industry-accepted principles, with proof of concept externally and merit internally.
- Define value in the eyes of your stakeholders. If you ask people what they mean by “return on investment,” you are likely to get more than one answer. ROI is in the eye of the stakeholder. To some it could mean a quantitative number, and to others it could be a warm and fuzzy feeling.
- Ensure metrics are well balanced. Although you want to understand the needs of your stakeholders and have them define how they perceive value, you also need to be proactive in ensuring that your final measurement scorecard is well balanced.
- Leverage automation and technology. Although this goes hand-in-hand with a measurement process that is replicable and scalable, it is worth mentioning on its own. Your measurement process should leverage technology to do the heavy lifting in areas such as data collection, storage, processing and reporting.
- Crawl, walk, run. Designing a learning measurement strategy requires a long-term vision, but don’t attempt to put your entire vision in place right out of the blocks. The best approach is to start with low-hanging fruit that can be done in a reasonable time frame to prove the concept, demonstrate a “win” and build a jumping-off point to advance to the next level.
- Ensure your metrics are flexible. The last thing you want to do is roll out a measurement process that is inflexible. You will likely have people who want to view the same data in many different ways, so architect your database to accommodate that flexibility.
Will Hipwell is the vice president of marketing at GeoLearning Inc. He can be reached at email@example.com.