In 2010, the ROI Institute conducted a major study sponsored by the Association for Talent Development to understand the executive view of learning and development investments. With responses from 96 Fortune 500 CEOs, the results were comprehensive, representing the largest input from this important group specifically on this topic. The initial results, presented in a keynote at the 2011 Chief Learning Officer Symposium, showed that the No. 1 measure of L&D investments preferred by executives is business impact, followed by ROI.
We also asked how many executives see a learning scorecard — only 22 percent said they did. However, we know from our work with ATD, CLO and others that most major learning functions have some kind of scorecard. The problem is that L&D scorecards usually are not presented to the top executives in organizations.
Why does this matter?
One of the questions in the ROI Institute’s study was, “What is your role in learning and development?” The No. 1 answer selected (by 78 percent) was: “I approve the budget with input from others.” The top executive is a major stakeholder in L&D with ultimate approval, and a meaningful scorecard is important … possibly even critical. The scorecard should offer insight into how L&D programs contribute to improvements in bottom-line measures.
So how should the scorecard be populated?
First, consider the inputs — or indicators — such as the number of people involved in programs, their involvement time and the investment in learning (per person). Executives want to see learning’s reach and costs.
Following identification of inputs, focus should turn to the five outcome levels, level 1 being participant reaction. Rather than capturing only overall satisfaction with programs, reaction data should include feedback such as, “is important to my success,” “is relevant to my work,” “I intend to use what I’ve learned” or “I would recommend this to others.” These are powerful reactions for executives to see.
Additionally, consider showing the connection of each program to key business measures. For example, identify the top five measures for the organization (e.g., revenue growth, profitability, customer satisfaction [maybe in terms of net-promoter score], operating costs and talent retention). Imagine placing this list in front of every participant in a program and requesting: “On a scale of 1 to 5, tell us the extent to which you think this program, when fully implemented, will influence these measures.”
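Summarizing those 1-to-5 ratings into a per-measure average is straightforward. A minimal sketch, assuming responses are collected as one rating per business measure per participant (the measure names and sample figures below are illustrative, not data from the study):

```python
from statistics import mean

# Each participant rates how strongly the program, fully implemented,
# will influence each of the organization's top business measures (1-5).
# These two sample responses are made up for illustration.
responses = [
    {"revenue growth": 4, "profitability": 3, "customer satisfaction": 5,
     "operating costs": 2, "talent retention": 4},
    {"revenue growth": 5, "profitability": 4, "customer satisfaction": 4,
     "operating costs": 3, "talent retention": 5},
]

# Average rating per measure gives a simple "line of sight" score
# that can be reported on the scorecard.
alignment = {
    measure: round(mean(r[measure] for r in responses), 2)
    for measure in responses[0]
}

print(alignment)
```

A low average on a given measure is exactly the early warning the next paragraph describes: participants do not see the connection between the program and that business outcome.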
This line of sight to key business measures is critical; if your CEO can’t see it, you have a problem. Taking this measurement may also provide evidence of a lack of alignment, which underscores that there is work to be done on this important issue.
Next, you can populate the scorecard at level 2, learning, with just one measure (which can be captured simultaneously when you capture reaction): the extent to which participants have learned the skills and knowledge provided by the program.
Still, reaction and learning measures don’t garner much executive attention, so you need to move to level 3 — application. This is where you should ask participants the extent to which they have used what they’ve learned. This can be collected with every follow-up evaluation conducted. It’s best practice to follow up on application for about 30 percent of programs you hold each year.
We know that most programs aren’t measured at level 4, or impact. Best practice currently is to measure about 10 percent of programs at level 4. When they are measured at this level, it is important to include an impact study summary on the scorecard.
In lieu of measuring a program at level 4, consider asking for elaboration when collecting level 3 data. Once again, include the organization’s top five business measures, and ask attendees the extent to which the program has influenced each measure. This demonstrates how participants are connecting learning to the business and will attract executive attention because it follows through on the reaction question and shows whether employees see this alignment.
Obviously, the first data collection with this question in a follow-up might be a bit disappointing, but that provides you with a challenge to improve.
Next, you can push evaluation to level 5, an ROI study. As best practice, we suggest evaluating about 5 percent of programs at this level. When you do conduct an ROI study, place a brief summary of it on the executive scorecard.
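At its core, an ROI study reduces to one calculation: net program benefits divided by fully loaded program costs, times 100. A minimal sketch of that arithmetic (the benefit and cost figures are made up for illustration):

```python
def roi_percent(monetary_benefits: float, program_costs: float) -> float:
    """ROI (%) = net program benefits / fully loaded program costs x 100."""
    return (monetary_benefits - program_costs) / program_costs * 100

# Illustrative example: a program converted to $240,000 in monetary
# benefits against $150,000 in fully loaded costs.
print(roi_percent(240_000, 150_000))  # 60.0
```

An ROI of 60 percent means every dollar invested is recovered plus 60 cents in net benefit — the kind of one-line summary that belongs on an executive scorecard.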
This scorecard approach should be more digestible for an executive and may increase respect, support and perhaps even funding for L&D in the future. If you have questions or would like to request a detailed document showing how to build an executive-friendly scorecard, please let us know.
Jack J. Phillips is the chairman and Patti P. Phillips is president and CEO of the ROI Institute. To comment, email editor@CLOmedia.com.