This article discusses some of the most significant characteristics and best practices that have helped organizations create an effective learning measurement process.
Plan Your Metrics Before Writing Survey Questions
First and foremost, never ask a question on a data-collection instrument unless it ties to a metric you will use. As simple as this sounds, organizations often create questions with no purpose in mind.
A great example is a consumer services company that invested enormous resources in custom surveys that were specific to each course, with little or no comparability across courses. When asked whether collecting this data was beneficial to management, the company stated that while the data was great for course designers, it had stimulated little interest among senior management or other stakeholders.
Never write survey questions or collect data unless it results in a metric your stakeholders find valuable. Once you have finalized your set of metrics, the survey questions are an easy byproduct.
Ensure the Measurement Process Is Replicable and Scalable
Organizations tend to spend thousands of dollars on one-off projects to measure a training program in detail. This information is collected over many months with exhaustive use of consultants and internal resources. Although the data is powerful and compelling, management often comes back with a response such as, “Great work, now do the same thing for all the training.” Unfortunately such one-off measurement projects are rarely replicable on a large-scale basis. So don’t box yourself into that corner.
A classic example is a telecommunications company that hired an expert third party to evaluate a single training program. The third party provided a convincing argument to management that the single program was a good use of company resources. Management quickly mandated the same process be done for all programs. This was not feasible given resource constraints. The training group had boxed itself into a corner and was forced to quickly come up with a scaled-down process that could be replicated and scaled up, yet they had to backtrack and resell the revised approach to management.
It is essential to make sure that you create a measurement process that can be replicated and scaled across all learning events without spending more on measurement than you do on training. To do this you must acknowledge and accept that not everything needs in-depth, precise measures. The key is to use reasonable assumptions to predict and estimate learning effectiveness. Doing so will provide a baseline for you to manage by measurement and to extract relevant data points to present to management that clearly demonstrate the value of the learning investments. This is not to rule out maintaining enough flexibility in your measurement process to drill deep into a program 5 percent to 10 percent of the time where such an exercise is warranted. You simply want to make sure that what you do 90 percent to 95 percent of the time is replicable and scalable.
Ensure Measurements Are Internally and Externally Comparable
Related to the best practice of ensuring that measurement is replicable and scalable is the concept of comparability. A one-off exercise is significantly less powerful when you have no baseline of comparability. If you spend several months calculating a 300 percent ROI for your latest program, how do you know if that is good or bad? Surely a 300 percent ROI is a positive return, but what if the average ROI on training programs is 1,000 percent?
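The point above can be sketched in a few lines of code. This is an illustrative example with hypothetical dollar figures; the formula is the classic net-benefits-over-costs ROI, and the "portfolio average" stands in for an internal or external baseline.

```python
# Illustrative sketch: a program's ROI only becomes meaningful
# when compared against a baseline of peer programs.

def roi_percent(benefits, costs):
    """Classic ROI formula: net benefits divided by costs, as a percent."""
    return (benefits - costs) / costs * 100

# Hypothetical figures: $40,000 in benefits on a $10,000 program.
program_roi = roi_percent(40_000, 10_000)  # 300.0

# Without a baseline, 300 percent looks impressive; against a portfolio
# averaging 1,000 percent, it is below par.
portfolio_rois = [1200.0, 800.0, 1000.0]
baseline = sum(portfolio_rois) / len(portfolio_rois)  # 1000.0

print(program_roi >= baseline)  # False: below the portfolio baseline
```

The arithmetic is trivial; the practice is not. The value comes from collecting comparable figures across programs so a baseline exists at all.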
A great example is a manufacturer that believed it had an excellent training organization. Day in and day out, this manufacturer measured the performance of its instructors, courses and facilities. The scores were always consistent. Finally, it compared these scores with an external group of organizations and found that it was scoring consistently lower than the other training organizations.
Ensuring your measurement process is comparable both internally and externally is critical. Comparing learning effectiveness for each course, comparing investment value by each client grouping and comparing job impact by key program are just a few examples of how internal and external comparisons can give you a more accurate portrayal of how your training is really measuring up.
Use Industry-Accepted Measurement Approaches
Management is looking to the training group to lead the way in training measurement. It is the job of the training group to convince management that its approach to measurement is reasonable. This is not unlike a finance department that must convince management of the way it values assets. In both cases, the group must ensure the approach is based on industry-accepted principles that have proof of concept externally and merit internally.
For example, a software company was tasked with the challenge of determining a return on investment for its thousands of learning offerings. Senior management wanted to know the value of the training. The training group researched several methods and approaches, looking at books, articles, consultants and associations such as the American Society for Training and Development (ASTD). At the end of the day, the company adapted an approach that emphasized the models used by Donald Kirkpatrick in his Four Levels of Learning and Jack J. Phillips in his ROI Process. Senior management was able to more quickly buy into the approach based on the overwhelming external support for these methodologies.
Regardless of the approach you use, keep in mind that you need to adapt it to your organization, not adopt it in your organization. There is a big difference. Adapting means you take an approach and tweak it to fit your needs. It is not a cookie-cutter approach. Finally, make sure you are comfortable with the approach and can defend it. Looking at how others have applied it and its acceptance in the industry will get you more comfortable with the approach and make it more defensible with management.
Define Value in the Eyes of Your Stakeholders
If you ask people what they mean by “return on investment,” you are likely to get more than one answer. ROI is in the eyes of the beholder. To some it could mean a quantitative number, and to others it could be a warm and fuzzy feeling.
For example, to showcase the diversity in value, let’s look at two very different perspectives. A large utility defines value as measuring the financial return on every class it runs. The value is in comparing the returns to each other and showcasing to management that the benefits exceed the costs of training. Contrast this with a large oil company. This organization is far less concerned with a financial ROI on training. The company places enormous value on ensuring that the employees receive a quality training experience. The “ROI” for this company is the “warm and fuzzy” feeling they get when they review evaluations and know that the employees were satisfied with the training.
As you can see, had you calculated a financial ROI for the oil company, it may have been a waste of your time. However, if you only showed the utility the “warm and fuzzies,” they would have felt that your measurement was not adequate. The point is to ensure that your measurement process and your resulting metrics yield business intelligence that is of value to each stakeholder.
Manage the Change Associated With Measurement
Some best practices might be doomed for failure if you fail to manage the change with your stakeholders. Successful organizations spend time and energy planning for the change.
A large, international manufacturer’s corporate university rolled out a measurement process that would significantly change how the organization evaluates training. To manage this change, the organization built support gradually during the implementation process. First, the leaders of the corporate university got buy-in from stakeholders by getting them involved in establishing metrics and finalizing data-collection instruments. Second, the corporate university sent out communications in its newsletters describing the timetables and benefits of the change. As a result, the organization fully embraced the new measurement process. In fact, the corporate university now receives measurement requests from conference planners and decentralized learning groups within the organization.
Failure to get buy-in from stakeholders can create hostility and resistance to change. Assess the culture and the readiness for change. Plan for change, or plan to fail.
Ensure Metrics Are Well Balanced
Although you want to understand the needs of your stakeholders and have them define how they perceive value, you also need to be proactive in ensuring that your final “measurement scorecard” is well balanced.
One IT organization was told that satisfaction was the sole measure of learning performance. This organization focused its efforts on measuring learners’ satisfaction with such items as the instructor, the courseware and the facility. They received excellent metrics on satisfaction. However, when it came time to answer the question regarding the impact training made on the job, or its return on investment, the organization did not have the data to support it. Too much emphasis was focused on one element of learning measurement.
A lopsided approach to measurement is risky. Using the popular Kirkpatrick model, you can see a balanced scorecard emerging from each of the four levels of learning he writes about. The first attribute is reaction. It is important to measure reaction as that can help improve future events and is a proxy for satisfaction. Second is learning. Just because people were happy does not mean they learned anything. It is critical to measure this either through testing or through alternative predictors. Third is behavior. If the employee learned something, can you imply that he changed his behavior on the job? Probably not; that is why you need to measure it. The last attribute is results. Did the training positively impact the business results that management felt it should have? As you can see, each level is like a quadrant in a scorecard. Some stakeholders may care more about certain quadrants than others. The key is to keep it balanced so that you can comprehensively measure your investment.
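The four-quadrant scorecard described above can be sketched as a simple data structure. This is a hedged illustration, not a prescribed implementation; the metric names and scores are hypothetical, and the point is only that an unmeasured Kirkpatrick level can be flagged automatically.

```python
# Sketch: the four Kirkpatrick levels as quadrants of a measurement
# scorecard, with a check that flags any level left unmeasured.

KIRKPATRICK_LEVELS = ("reaction", "learning", "behavior", "results")

def scorecard_gaps(metrics):
    """Return the levels for which no metric has been collected."""
    covered = {m["level"] for m in metrics}
    return [level for level in KIRKPATRICK_LEVELS if level not in covered]

# Hypothetical metrics, satisfaction-heavy like the IT organization above.
metrics = [
    {"level": "reaction", "name": "instructor satisfaction", "score": 4.6},
    {"level": "reaction", "name": "facility satisfaction", "score": 4.2},
]

print(scorecard_gaps(metrics))  # ['learning', 'behavior', 'results']
```

A scorecard with entries in all four quadrants would return an empty gap list, which is the balance the section argues for.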
Leverage Automation and Technology
Although this goes hand in hand with a measurement process that is replicable and scalable, it is worthy of separate mention. Your measurement process must leverage technology and automation to do the heavy lifting in areas such as data collection, data storage, data processing and data reporting.
A classic example is a mid-sized accounting firm. The company collected training surveys via paper and then manually keyed them into an Excel spreadsheet. Requests for information often took hours to fulfill and could not be done in a timely manner.
In today’s world of automation and technology, any company, large or small, can cost-effectively leverage technologies such as the Internet to collect data. Even when no computer is in the classroom, surveys can be sent by e-mail to participants after the training. With proper reinforcement, a decent response rate can be achieved. In addition, scanning technologies can digitize paper surveys to avoid manual data entry. Finally, software technologies exist to create standardized reports using the collected data. The end result is that you spend fewer resources collecting, processing and reporting results and more time analyzing the data for improvement purposes or for showcasing the value of the training to management.
Crawl, Walk, Run
When designing a learning measurement strategy, it is nice to have a long-term vision, but don’t attempt to put your entire vision in place right out of the blocks. The best approach is to start with the low-hanging fruit that can be accomplished in a reasonable time frame to prove the concept, demonstrate a “win” and build a jumping-off point to advance it to the next level.
A great example is an insurance company that envisioned a measurement strategy that measured across a balanced scorecard of learning, leveraged automation and provided relevant, timely metrics to key stakeholders. Recognizing the challenge of limited resources and lack of consistent technologies within the organization, this company started out by piloting the process on a few key programs. The pilots afforded the company the opportunity to test-drive the process, refine it and build momentum for expanded usage. Now the company is rolling out the process organization-wide with the goal of integrating various systems involved in the process soon after.
Although hard at times, your first step should not be a step at all, but a crawl. You can learn a lot from a pilot or test-run of your process. Also, you build the quick wins you need to move the process forward.
Ensure Your Metrics Have Flexibility
The last thing you want is to roll out a measurement process that is inflexible. You will likely have people who want to view the same data but in many different ways. You need to architect your database to accommodate this important issue, thereby creating measurement flexibility.
For example, a large technology training company has multiple stakeholders who want to see the same data from different perspectives because they manage different aspects of the business. The courseware designers need to see the training data sliced by course so they can understand which courses generated the greatest job impact. The team that manages instructors needs to slice the data by instructor so it can monitor the quality levels of its large group of professional instructors. Senior management wants to view the data by location. Each location is like a separate entity, and the performance and quality levels of each are of keen importance to them. Finally, the sales and marketing folks want to mine the data for intelligence surrounding historic performance and productivity gains experienced by learners as a result of the training. This helps them create better business cases when positioning the training to prospective buyers.
As this example shows, you need to think carefully about the data before you collect it. Once collected, you are limited by the data you have. The saying “garbage in, garbage out” is very true. Most commonly, you should “tag” every data element with the following: instructor name, learning delivery mode, location where the training was held, learning provider, date of training, course name, curricula and program. To make matters easier, technologies such as OLAP cubes can be used to slice this data along nearly any dimension, satisfying the needs of all of your data requests.
Finally, flexibility is also inherent in your ability to “roll up” the data. This too must be thought about prior to data collection. Often companies ask different questions for each course. That is good tactical detail, but it is not good strategic intelligence. Disparate data is difficult to aggregate into higher-level views, and senior management is far more likely to want to view aggregate data than class- or course-specific data. So, ensure you have a common set of “standard” questions you ask across all courses that can be aggregated and benchmarked. You can still have course-specific questions for the more tactical analysis. In this way, you have more flexibility in your data.
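The tagging and roll-up ideas above can be illustrated in a few lines. This is a minimal sketch, assuming hypothetical course, instructor and location names and a single standard question; real systems would use a database or OLAP tool, but the principle of slicing one tagged data set many ways is the same.

```python
# Sketch: tagging every survey response with standard dimensions lets
# different stakeholders slice the same data by course, instructor or
# location, and lets standard questions roll up across all courses.

from collections import defaultdict

# Hypothetical responses to one standard question, tagged with dimensions.
responses = [
    {"course": "Excel 101", "instructor": "Lee", "location": "Chicago",
     "question": "job_impact", "score": 4.0},
    {"course": "Excel 101", "instructor": "Lee", "location": "Chicago",
     "question": "job_impact", "score": 5.0},
    {"course": "Sales 201", "instructor": "Kim", "location": "Dallas",
     "question": "job_impact", "score": 3.0},
]

def roll_up(rows, dimension):
    """Average the standard-question scores along one tagged dimension."""
    totals = defaultdict(list)
    for row in rows:
        totals[row[dimension]].append(row["score"])
    return {key: sum(vals) / len(vals) for key, vals in totals.items()}

print(roll_up(responses, "course"))    # {'Excel 101': 4.5, 'Sales 201': 3.0}
print(roll_up(responses, "location"))  # {'Chicago': 4.5, 'Dallas': 3.0}
```

Because every response carries the same tags and answers the same standard question, one data set serves the course designers, the instructor managers and senior management alike.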
Edward Hubbard, Ph.D., consultant and author, once said, “Training is either at the table working with senior management and adding value, or they are on the table perceived as a cost center that is going to get cut.” The best practices mentioned in this article are not an all-inclusive list of what it takes to avoid being on the table. However, they should be used as a sobering reminder to all that not even the biggest and best training organizations are protected against the reality or perception that one’s value can come into question at any given time. Leverage these best practices and be a step ahead by creating the right measurement approaches to continuously improve and showcase value to stakeholders.
Jeffrey Berk is the director of Products and Services for KnowledgeAdvisors, a business intelligence software company that helps organizations measure and manage their learning investments. Jeffrey can be reached at firstname.lastname@example.org.