ROI Controversy Rooted in Expectations

Learning leaders can’t reach consensus on something almost all of them value intensely: measuring learning’s impact on the business. ROI measurements and metrics continually spark conversation and debate in the enterprise education space. Parties on both sides of the fence, those who say learning can be measured and those who subscribe to “we know intuitively that learning works,” might want to go back to the beginning of the debate and evaluate their learning and development expectations before they initiate programs. Otherwise they risk being disappointed and concluding, perhaps wrongly, that a program’s content or delivery was to blame when results are less than stellar.

Ideally, metrics should prove whether learning and development activity has an impact on workforce performance and, ultimately, on the bottom line. But proving is only half the job: learning also should improve performance, and learning executives may be overlooking this aspect when metrics are at issue.

“Any training investment gets really predictable results,” said Dr. Robert O. Brinkerhoff, professor emeritus at Western Michigan University and senior consultant for Advantage Performance Group. “It never works 100 percent of the time. Typically a training program, whether it’s leadership development, technical or soft skill training, is going to work for about 20 percent of the people and not work at all for about 20 percent of the people. For the remaining 60 percent, it will be only partially or marginally effective. The second piece, the ‘improve,’ is so important because if the training for the top 20 percent gets good results and has a positive return on investment, it’s really important to figure out how you can get more of that middle 60 percent into the top 20 percent. That way you realize much more return on the investment, because companies pay for training that works, and they pay for training that doesn’t work. Getting it to work more of the time is really the most important outcome from metrics.”

Brinkerhoff said what constitutes good, effective or worthy metrics is hotly contested, and has been for decades, because of confusion over whether training is an employee benefit or an investment in performance improvement and capability.

“A lot of senior executives look at training as a necessary overhead. Just the same way that you have to have a building, utilities, a parking lot for employees, you’ve also got to give them some training. It’s necessary to recruit and retain people,” he said. “Then there’s another view of training that says it should be a principal driver of performance improvement, and it should help a company be more competitive. I think there’s confusion right at the beginning over what training should be doing. When expectations for the value it should bring aren’t clear to begin with, then you’re going to get a lot of argument on the back end, when people start bringing measurement metrics forward, because there was never agreement in the first place over what it was supposed to do.”

Another often-discussed piece of the metrics controversy arises because senior learning executives aren’t getting what they want. Most don’t want simple numbers such as course completions or training hours taken. Yet complex, more elaborate reports might be too much to handle and still not relay the most vital pieces of information: Was learning effective, and how so?

“Too many of the methods have been too complex and too difficult to use and understand, and they produce results that are highly qualified,” Brinkerhoff said. “There are all sorts of caveats to any conclusions. People get frustrated with the reports and the results they get back from a training department, when really what they want, I think, is clear, sensible and credible information. They want to know, if they’ve invested in a training initiative: Is it working, yes or no? What good is it doing? This is where ROI procedures have really let them down. They have a second set of questions underneath those first questions: If training is working and doing some good, what would more investment get us? Metrics fall short at that point.

“Metrics don’t tell people why they got what they got. The two deeper questions that are important to ask are, ‘What would more investment get us?’ and the corollary, ‘What’s the risk of reducing the investment or of not investing at all? What money are we leaving on the table with training, and should we be investing more to get more back?’ The current set of metrics is very much retrospective and after the fact; it’s sort of like measuring how open the barn door was after the horse is already gone. There’s nothing you can do with the data to make decisions going forward as to what’s needed, what would work better, what more investment would get us, and what’s at risk by not investing as much as we did.”