Learning executives must be able to measure the skills of their workforce for multiple purposes. Being aware of employees’ skills not only helps CLOs prescribe new learning initiatives, but it also enables them to measure the success of those initiatives once complete. Measuring employees’ skills also smooths succession planning, because it shows who already has which skills. But what are the best methods of measurement?
Multiple-choice testing has many drawbacks, according to Wallace Judd, executive director of the Performance Testing Council (http://www.performancetest.org). “A lot of us in the testing industry have recognized that multiple-choice testing leaves much to be desired,” he said. “It’s susceptible to cheating, and in addition it doesn’t seem to measure the skills as they’re applied in all the complexity that’s required.”
This awareness of the faults of multiple-choice testing led many in the education and testing industry to begin experimenting with performance testing in the early ’90s, and eventually to form the Performance Testing Council, a nonprofit consortium of companies, including test developers, test delivery vendors and test users, dedicated to improving the availability and efficacy of performance testing.
“Performance testing in our definition is testing that requires that the response modality be the same as the modality required at work,” Judd said. “It doesn’t matter whether it’s simulation or whether it’s real hands-on. Either way, it’s performance testing.”
Judd added that many times people are able to do things that they cannot explain using words. “It’s not crystallized knowledge or declarative knowledge, but you can do it,” he said. “To the Performance Testing Council, if you can do it, you know it.”
Learning and testing are intertwined, according to Judd. When the Wright Brothers wanted to fly, they began playing with kites and gliders, and it took them years to determine the proper shape for the wing, he said. In 1901, the Wrights built the first wind tunnel for use with airplanes. The following year they went to Kitty Hawk and flew their glider more than 600 times, determining that they had come up with the right wing surface and that all that was needed was a motor. Flight was the next step. “The problem with most learning and learning theories is that we don’t have a wind tunnel to test them,” Judd said. “The measures that we get out of aptitude tests and multiple-choice testing are not precise enough for us to know what works and what doesn’t. You can train all you want, but you don’t know how effective you are or what aspects of your treatment are effective unless you can efficiently measure the skills attained. And I would maintain that you can’t measure anything but verbal skills from a multiple-choice test.”
Performance testing has been around for centuries, Judd explained. Medieval guilds used performance testing for their apprentices, requiring them to create a masterpiece. “If you were a glasscutter at Waterford, you had to create a fruit bowl. If you were a cooper, you had to create a barrel that would hold wine,” Judd said. “Those were hands-on tests; they were performance tests. They were used to ensure the quality of the product and also to constrain the prices to make sure everybody paid a fair price for it.”
In the Information Age, we have computerized testing and virtual reality, and organizations still use hands-on performance testing. For example, airlines test pilots using flight simulators created by aircraft manufacturers, Judd said. Heavy equipment operators in California must operate a bulldozer to earn their certification. Many companies in the computer industry use performance testing as well, Judd said. “For example, Red Hat has hands-on tests for Linux administrators, and Cisco has hands-on tests for people who install routers and networks. Oracle has hands-on testing for Oracle administrators, and Microsoft and Certiport have hands-on tests for the Microsoft Office suite,” said Judd.
Kelly Services, a firm that provides workers on a temporary basis, has been testing its temporary workers using hands-on performance testing for more than a decade, Judd added. “So if you go in and you say I can do Microsoft Word, they say, ‘Fine, why don’t you show us?’ They sit you down to an automated test, and that test gives you a chance to use Word to respond to a bunch of different problems that are posed on the screen, like set a tab in this paragraph to 4 inches, or reset the margins in this document to 1 inch around,” Judd said. “So instead of doing multiple-choice testing, you’re actually using, in their case, a simulator to respond to the questions.”
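The key idea behind such a simulator is that it scores the final state of the candidate's work, not the candidate's answers to questions about the work. The sketch below illustrates that scoring principle in miniature; all of the names and the document model are hypothetical illustrations, not a description of Kelly Services' actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Minimal stand-in for a word processor document's state."""
    margins_inches: dict = field(default_factory=lambda: {
        "top": 1.25, "bottom": 1.25, "left": 1.25, "right": 1.25})
    tab_stops_inches: list = field(default_factory=list)

def score_margin_task(doc: Document, target: float = 1.0) -> bool:
    """Pass if every margin ended up at the target value.

    Note: the scorer inspects the result the candidate produced,
    rather than asking a question about how margins work.
    """
    return all(abs(v - target) < 1e-6 for v in doc.margins_inches.values())

def score_tab_task(doc: Document, target: float = 4.0) -> bool:
    """Pass if a tab stop was set at the target position."""
    return target in doc.tab_stops_inches

# The candidate "performs" the posed tasks by manipulating the document,
# just as they would in the real application.
doc = Document()
for side in doc.margins_inches:
    doc.margins_inches[side] = 1.0   # task: reset all margins to 1 inch
doc.tab_stops_inches.append(4.0)     # task: set a tab at 4 inches

print(score_margin_task(doc), score_tab_task(doc))  # True True
```

A real simulator would of course model far more state and partial credit, but the design choice is the same one Judd describes: the response modality matches the work, and the computer's ability to observe that work is what makes objective scoring possible.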
Performance testing has many benefits. Many learners are test-averse and do not perform well on multiple-choice tests, but they may relax in a situation that lets them show what they can do.
“Your readership’s probably pretty familiar with the book ‘Intellectual Capital,’ by Thomas Stewart,” said Judd. “In it, he elaborates the hypothesis that the most important capital in a company is intellectual capital—the skills and talents of the people who work there. But what he fails to discuss is the fact that when you can’t measure that intellectual capital effectively, you can’t manage it, and that the weak link in measuring intellectual capital is the multiple-choice testing system.”
There are many challenges to implementing performance testing, which is why most companies still measure their learners’ skills via multiple-choice testing or other methods. According to Judd, performance testing is not easy to implement for test-deployment companies, requiring the creation of a custom driver—an expensive proposition. “Another problem is the conventional statistics that were developed for multiple-choice testing are not very descriptive of performance tests,” Judd said. “And, performance testing is so new there are few standards and best practices available to people who want to learn how to do this. What we have is a new paradigm for testing—one that needs to be elaborated, filled out and promulgated.”
This is where the Performance Testing Council comes in. Its members work on establishing best practices in the industry and address some of the complex issues surrounding performance testing. In addition, the consortium tries to encourage more people to rely on performance testing to determine the skills of their employees and partners. “What we’d like to do is to encourage people to use the resources available to them to automate performance testing so they get an objective measure and so that they can in fact reliably evaluate skills that are important to the specific job,” Judd said. “Computers are wonderful devices because they have built-in monitoring capabilities that allow you to evaluate how people are performing on them, but you can use automated evaluation for all kinds of other performance as well.”
He added, “I think the thing that is important is any time a learning executive is contemplating evaluating a training class, that they try to see if there isn’t some live situation that can be used to evaluate the bulk of the learning that takes place in the class. If they can do that, they have a much better descriptor of the results of the class than any multiple-choice test.”
To learn more about performance testing and the Performance Testing Council, visit http://www.performancetest.org.