Why do we measure? There are several excellent answers to this question, and they will provide much-needed direction to your measurement strategy. The most common answers are: 1) to answer questions, 2) to show results, 3) to demonstrate value, 4) to justify our budget (or existence), 5) to identify opportunities for improvement and 6) to manage results. I’ll examine each and comment on the implications of the answer for your strategy.
The most basic reason for measuring is to answer questions about programs and initiatives. For example, someone wants to know how many courses were offered in the year, how many participated in a particular course or what the participants thought about the course. Assuming this information is already being collected and stored in a database or captured in an Excel spreadsheet, simply provide the answer to the person who asked for it. If it is a one-time-only request, there is no need to create a scorecard or dashboard. However, if someone wants to see that same information every month, then it does make sense to create a scorecard to show the data by month.
The second most common reason to measure is to show results. L&D departments produce a lot of learning each year and usually want to share their accomplishments with senior management. In these cases, departments typically share their results in dashboards or on scorecards that show measures by month (or quarter) and year-to-date. The scorecard might also show results for the previous year to let senior management know they are producing more learning or improving.
The third reason to measure is to demonstrate value. This really is an extension of measuring to show results. Some believe that simply showing results demonstrates value while others believe that demonstrating value requires a comparison of activity or benefit with cost. For example, a department might demonstrate value by calculating cost per participant or cost per hour of development and showing that their ratios are lower than industry benchmarks or perhaps last year’s ratios. Some adopt a higher standard for value and show the net benefit or ROI of a program. Net benefit is simply the dollar value of the impact less the total cost of the program. Any value above zero indicates that the program has more than paid for itself. ROI is simply net benefit divided by total cost expressed as a percentage, and any positive percentage indicates the program more than paid for itself. Measures to demonstrate value are usually shared at the end of a program or annually rather than monthly.
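The net benefit and ROI formulas above are simple arithmetic, and can be sketched in a few lines of code. The dollar figures below are invented purely for illustration; they are not drawn from any real program.

```python
# Illustrates the net-benefit and ROI formulas described above.
# All dollar values are hypothetical.

def net_benefit(impact_dollars: float, total_cost: float) -> float:
    """Dollar value of the program's impact less its total cost."""
    return impact_dollars - total_cost

def roi_percent(impact_dollars: float, total_cost: float) -> float:
    """Net benefit divided by total cost, expressed as a percentage."""
    return net_benefit(impact_dollars, total_cost) / total_cost * 100

impact = 150_000  # estimated dollar value of the program's impact (hypothetical)
cost = 100_000    # fully loaded program cost (hypothetical)

print(net_benefit(impact, cost))  # 50000 -> above zero: program more than paid for itself
print(roi_percent(impact, cost))  # 50.0  -> positive ROI, same conclusion
```

Any net benefit above zero, or any positive ROI percentage, leads to the same verdict: the program more than paid for itself.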
The fourth reason is to justify the L&D budget or the department’s existence. This is yet another extension of measuring to demonstrate value, where justification is the reason behind the demonstration. In my experience this is almost always a poor reason for measuring. Typically, a department measuring to justify its budget is a department that is not well aligned to the goals of the business, lacks strong partnerships with the business and has poor or nonexistent governing bodies. Not only is it a poor reason for measuring, but the effort in most cases is doomed to fail even with high ROIs. In this situation, energy would be better spent addressing the underlying problems.
The fifth reason to measure is to identify opportunities for improvement. This is a great reason and indicates a focus on continuous improvement. In this case, scorecards may be generated to show measures across units, courses and instructors with the goal of discovering the best performers so that the lessons learned from them can be broadly shared. There may also be a comparison with best-in-class benchmarks, again with an eye toward identifying areas for internal improvement. Another approach would be to create a scorecard, graph or bar chart with monthly data to determine whether a measure is improving or deteriorating over time.
The last reason to measure is to manage. This is perhaps the most powerful — and least appreciated — reason to measure. A well-run L&D department will have specific, measurable plans or targets for its key measures. These plans will be set at the start of the fiscal year and the L&D leaders will be committed to delivering the planned results. By definition, this approach requires the use of measures, and time will be spent selecting the appropriate measures to manage and then setting realistic, achievable plans for each. Once the plans are done, reports need to be generated each month comparing year-to-date results to plan in order to answer two fundamental questions: 1) Are we on plan, and 2) Are we likely to end the year on plan? If the answer is “no” to either question, the leaders need to take appropriate action to end the year as close to plan as possible. In this case, reports must be generated each month showing plan, year-to-date results and, ideally, a forecast of how each measure is likely to end the year.
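The two fundamental questions above — are we on plan, and are we likely to end the year on plan — can be sketched as a simple monthly check. This is a minimal illustration, not a prescribed method: the 5 percent tolerance, the straight run-rate forecast and all the numbers are assumptions invented for the example.

```python
# A minimal sketch of the monthly plan-versus-actual check described above.
# Tolerance, forecast method and all figures are hypothetical.

def on_plan(ytd_actual: float, ytd_plan: float, tolerance: float = 0.05) -> bool:
    """Question 1: are we on plan? True if YTD actual is within
    the tolerance band below YTD plan (5% assumed here)."""
    return ytd_actual >= ytd_plan * (1 - tolerance)

def year_end_forecast(ytd_actual: float, months_elapsed: int) -> float:
    """Question 2 input: likely year-end result, using a simple
    run-rate projection of results to date."""
    return ytd_actual / months_elapsed * 12

# Example: 8 months in, 6,400 participants YTD against a YTD plan of 7,000,
# with an annual plan of 10,500 (all figures invented).
ytd_actual, ytd_plan, months = 6_400, 7_000, 8
annual_plan = 10_500

print(on_plan(ytd_actual, ytd_plan))              # False -> behind plan now
forecast = year_end_forecast(ytd_actual, months)  # 9600.0
print(forecast >= annual_plan)                    # False -> unlikely to end year on plan
```

With "no" to both questions, this is exactly the point at which the leaders would take corrective action to end the year as close to plan as possible.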
A good measurement and reporting strategy will address all the aforementioned reasons except number four (justify existence). Reports, dashboards or scorecards are required in some cases but not others, just as monthly reporting is. If there are limited resources, it is best to generate regular reports only for those measures to be actively managed (reason six) or to show results (reason two). Reports can be generated on an as-needed basis in other cases, and most measures can simply be left in the database until they are needed to answer a specific question.
David Vance is the executive director for the Center for Talent Reporting, founding and former president of Caterpillar University and author of “The Business of Learning.” He can be reached at editor@CLOmedia.com.