How would you respond if the chief financial officer of your organization posed these specific benchmarking questions to your team?
How many days or hours does it take you to onboard a new employee? Is this more or less than at your nearest competitor?
How rapidly can your line managers determine which element of a procedure is not being fully learned by their employees? What is your cycle time for providing remedial action? How does this compare to your industry?
How many hours a year do your subject-matter experts provide assistance on learning projects? How does this rank against organizations your size?
When teaching a new system upgrade, what percentage of knowledge transfer is delivered via classroom, e-learning, performance support, documentation, or the help desk? Are your allocations similar to those of other users of this enterprise system?
How does your organization use the features of your learning management system, and what impact does that have on business results, such as increased sales? Is your organization ahead or behind the curve?
How many minutes of a classroom hour are spent on new content, context and stories, practice and discussion, administrative logistics, or learner-driven conversations? How does this mix change by division of the company or type of trainer?
If there were a bird flu pandemic on May 1, what would be the speed at which you could transfer all travel-based learning to a digital format? How does your readiness compare with other organizations?
How does your organization compare with your competitors on the annualized learning time (formal and informal) per employee, by division and by job category?
What is your organizational spend on compliance training versus skills-based training? How is that trending in comparable organizations?
These are similar to the questions asked of most functions in our organizations. Process improvement programs such as Six Sigma and Lean manufacturing are motivating executives to get specific in benchmarking key activities. Yet, many of these questions would challenge or completely stump a good percentage of learning leaders.
Although we are good at measuring the outcome of a specific program, we rarely compare our results at a granular level with our competitors. Most of our industry reports are quite vague, describing broad trends such as the percentage of spending on e-learning versus classroom training. Yet it is almost impossible to find comparative data on the mix of learning strategies for a specific role in a specific industry.
Cardiologists continually compare the details of their practices with one another in terms of procedures, time, expense, survival rates, training patterns and patient profiles. Manufacturing benchmarking is getting incredibly detailed, right down to the number of steps in a key process, compared across similar factory settings.
My goal is to get specific in learning benchmarking. I can’t wait to have a panel of CLOs who will openly compare their detailed answers to the questions listed at the start of this article. Imagine if we were at an industry conference and could watch a video of two trainers teaching the same topic, comparing and contrasting the time-to-competency and techniques deployed. What if we could look at a similar program taught across 200 companies and see how our own approach lined up?
Learning leaders have reported significant pressure from their top management to move toward a much more specific level of benchmarking. They want more than nice-looking pie charts of industry trends. They need to get to a granular enough level to make authentic comparisons with the very organizations that keep their stakeholders up at night.
We have the systems, tools, smarts and appetite to start a new era of learning benchmarking. Let’s have the courage and focus to make it happen soon.
Elliott Masie is the CEO of The MASIE Center’s Learning Consortium. He can be reached at firstname.lastname@example.org.