For decades learning and talent leaders have talked about earning a seat at the table. The fact that they’re still talking about this shows how difficult it is to get there.
To enjoy strategic influence, leaders must understand their company’s goals, align their activities to those goals and execute their plans to deliver results for the company and its customers. Then, to clearly articulate those contributions, they must be fluent in the language of business: business goals, impact on those goals and, especially, measurement.
This approach requires creating a strategic plan, consistently working from it and regularly reporting against it. Leaders must frame dialogue in terms of human capital investments and results that support business objectives, and help senior leaders understand the tangible, specific value of those results using agreed-upon, standardized metrics.
Part of the difficulty in obtaining that seat is related to language. If the language of business is measurement, the substance of the conversation is about quantitative analysis. Both take many forms, and learning leaders have traditionally been less fluent in the complex argot of metrics than their peers in operations or finance.
Plus, it is difficult to connect learning programs to the return on investment they produce, statistically or otherwise. Forecasting and measuring the effect of investment may be equally challenging in research or marketing, but those disciplines are more accustomed to making the attempt. And because those in support roles rarely own accountability for business outcomes, refocusing on core business objectives requires a shift in mindset and the partnership of relevant business units.
The View From the Top
Learning organizations that seek to apply quantitative analysis to human capital achieve widely varying levels of success. Gene Pease, an author and CEO of Capital Analytics, describes the process as an evolutionary continuum and says success becomes more difficult to achieve as an organization progresses. His continuum follows seven steps:
- Anecdotes. Personalizing the story behind the numbers helps engage the audience and demonstrate complexity.
- Scorecards and dashboards. These provide the first level of quantitative insight into data in an easy-to-use format.
- Benchmarks. This familiar tool builds context for the work by comparison with peer companies.
- Correlations. The first step in analysis connects the dots on a descriptive level.
- Causation. When cause and effect can be linked, the analysis becomes actionable and more compelling.
- Predictive analysis. This advanced stage allows the organization to drive real-world results and gauge program success.
- Optimization. The holy grail of measurement is achieved when stakeholders understand the effect of human capital decisions and those decisions can be consistently improved.
For context, consider the evolution of marketing. Once considered a secondary activity to production and distribution, marketing and advertising grew in prominence with postwar consumer demand, yet remained subjective and focused on selling. Economist Philip Kotler and others transformed the field by shifting emphasis from price to consumer needs, and by applying quantitative measurement and analysis broadly and rigorously. In time, marketing drew on economics to become a sophisticated practical science with a powerful voice in the modern corporation.
Work from Jeffrey Pfeffer, professor of organizational behavior at the Stanford University Graduate School of Business, and his peers suggests that management can be more science than craft. Stressing that enlightened management is as much a mindset as a skill, Pfeffer offers recommendations that revolve around factual, quantitative evidence.
Commit to gathering data and to using analytical tools. But for a learning organization to turn numbers into better processes and learn to run itself like a business, the first step is to understand customers’ needs.
Corporate executives focus on top-line results, bottom-line performance and market share. If learning exists to help the organization achieve its goals, all major initiatives, programs and systems should serve those goals and execute with discipline to achieve planned results. A critical part of developing a plan is to secure agreement from corporate sponsors for expected learning impact on each major goal.
For example, if the company intends to increase sales by 10 percent, and a learning program could help achieve that, the senior vice president of sales and the chief learning officer need to agree upfront on the expected impact from the learning initiative.
According to David Vance, director of the Center for Talent Reporting, a comprehensive, quantitative reporting process is inseparable from the plan itself. At a minimum, the highest level of reporting should show key measures for prominent initiatives and goals, year-to-date results, and a comparison of those results to plan. He envisions a time when talent management reporting filters up to the level of the annual report.
The best efforts to measure, analyze and report learning face an additional hurdle: compared with other kinds of corporate metrics, there is a lack of standardization. A wide range of scorecards, dashboards and models is available, and any might work well for a given organization. But without accepted standards, none can tell a compelling story to senior leaders.
The accounting profession, by contrast, has used Generally Accepted Accounting Principles for more than 80 years. Accountants have standard definitions for terms, agreed-upon methods to calculate measures and commonly accepted statements such as the income statement, balance sheet and cash flow statement. Why doesn’t learning? Further, managers are taught how to use these financial measures to analyze and improve their company’s performance. What are learning leaders taught?
One promising effort to meet that need began in 2010 when industry thought leaders and practitioners convened to ask some critical questions about measuring learning and talent development:
- What data should we collect and how should it be organized?
- What measures should we use to gauge effectiveness, efficiency and outcome, and how should those measures be defined?
- How should these measures be reported?
- How should these measures and reports be used to manage HR?
- How can this process align HR more closely with relevant functions or business units to achieve shared goals?
This work grew into the Talent Development Reporting Principles initiative, led by Vance. Leveraging the work of the American Society for Training & Development and the growing literature on HR standards, the effort first defined standard terms and measures for learning and development, then extended the approach to other HR processes such as talent acquisition and performance management. For any given process, TDRp’s recommendations establish three broad categories of standard measures:
- Effectiveness measures look at quality, such as a participant’s satisfaction with a learning program.
- Efficiency measures are about counts and ratios, such as the number of new employees hired, the utilization rate for classrooms or cycle time for new program development.
- Outcome measures track learning effect on organizational goals. For example, if the company aims to increase sales by 10 percent, a learning program might be found to contribute a 2 percent increase, accounting for 20 percent of the goal.
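The outcome arithmetic in the example above can be sketched in a few lines. The function name and its inputs are illustrative only; they are not part of the TDRp standard.

```python
def outcome_contribution(goal_pct, learning_effect_pct):
    """Share of a business goal attributable to a learning program.

    goal_pct: planned improvement for the business goal (e.g. 10 for 10%).
    learning_effect_pct: improvement attributed to the program (e.g. 2 for 2%).
    Returns the program's share of the goal as a percentage.
    """
    if goal_pct == 0:
        raise ValueError("goal_pct must be nonzero")
    return 100.0 * learning_effect_pct / goal_pct

# The example from the text: a 2 percent sales lift against a
# 10 percent sales goal accounts for 20 percent of the goal.
share = outcome_contribution(goal_pct=10, learning_effect_pct=2)
print(f"Learning accounts for {share:.0f}% of the goal")
```

The key point is that the contribution is agreed on and computed the same way for every program, which is what makes the measure comparable across initiatives.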
Making Sense of the Data
While standard measures are essential, the focus should be on reporting data and analysis. Clear, consistent reporting helps learning executives manage the function better and deliver greater organizational value. It also gives them the tools to take the learning conversation upstairs.
To track and compare various measures, the TDRp work recommends three standard monthly statements based on effectiveness, efficiency and outcome. While these are intended primarily for the individual functions of HR, the outcome statement has particular value because it shows the alignment of learning’s goals and achievements with those of the organization.
Building on these statements, TDRp recommends three standard reports that summarize findings and add year-end forecasts. The first is a high-level summary report, prepared at least quarterly. Intended as a standalone piece for senior leaders, this concise briefing distills past, current and projected data for each critical measure, including effect on the most important strategic goals, with a focus on conclusions and actionable recommendations.
The second and third types of report are generated monthly for the department head to use in managing the learning function. The operations report focuses on effectiveness and efficiency measures across all company learning programs, while individual program reports track outcome, effectiveness and efficiency measures in support of each initiative or company goal.
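A minimal sketch of the program-level report described above, comparing year-to-date results and a year-end forecast against plan. The field names, program name and figures are assumptions for illustration, not standardized TDRp terms.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    name: str          # e.g. "Sales increase (%)"
    plan: float        # full-year plan
    ytd_actual: float  # year-to-date result
    forecast: float    # current year-end forecast

    @property
    def pct_of_plan(self):
        # Percent of the full-year plan achieved so far.
        return 100.0 * self.ytd_actual / self.plan if self.plan else 0.0

def program_report(program, measures):
    """Render a simple plan-vs-actual table for one learning program."""
    lines = [f"Program report: {program}",
             f"{'Measure':<28}{'Plan':>8}{'YTD':>8}{'% Plan':>8}{'Fcst':>8}"]
    for m in measures:
        lines.append(f"{m.name:<28}{m.plan:>8.1f}{m.ytd_actual:>8.1f}"
                     f"{m.pct_of_plan:>8.0f}{m.forecast:>8.1f}")
    return "\n".join(lines)

# Hypothetical sales-training program with one outcome, one
# effectiveness and one efficiency measure.
report = program_report("Consultative Selling", [
    Measure("Sales increase (%)", 2.0, 1.2, 1.9),
    Measure("Participant satisfaction", 80.0, 76.0, 78.0),
    Measure("Completion rate (%)", 90.0, 85.0, 88.0),
])
print(report)
```

Even this toy layout shows the shape Vance describes: each measure, its plan, year-to-date progress and a forecast, so a department head can see at a glance which programs are on track.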
Armed with this type of reporting, learning executives can clearly articulate the effect of development decisions on business objectives, and begin to predict those effects more accurately. Leaders can pinpoint and discuss opportunities for improvement, approaching Pease’s final, optimization stage of the continuum. And by delivering the rigorous quantitative analysis expected of a business, the learning function can claim a strong voice in any discussion of strategy.
Unfortunately, efforts to provide comprehensive, executive-ready reporting on talent development remain in their early stages. The Society for Human Resource Management recently launched an initiative to define key HR measures and practices, and hundreds of organizations have begun to implement TDRp standards. But their efforts merely outline how far learning has yet to travel.
Yet at each step of the journey, better standards, tools, data, analysis, reporting and management bring real benefits. Those improvements will be noticed. Inevitably, learning will one day be run like a business, with a consistent focus on metrics that support broader business objectives. This will fundamentally change how investments in human capital are managed and perceived.