Validating Knowledge Through Testing and Assessment

Ensuring that learners leave training with the skills they need is essential to the CLO’s mission. Taking a performance approach to testing and assessment leads to greater productivity, fewer errors and more confident employees.

In today’s knowledge economy, the transfer of information and knowledge is critically important. An organization’s financial value is measured more by its intellectual capital than by the buildings and equipment it owns. Yet, knowledge for the sake of knowledge alone is of little business value. In the business world, knowledge is only useful to the extent that it is applied in meaningful performance that supports the execution of critical strategic business initiatives. The fact is that people don’t just go around the workplace spouting their knowledge—they need to use that knowledge so they can explain a product to a customer, complete an activity or do something meaningful for the business. Possessing knowledge is not enough—that knowledge must be applied appropriately. Simply put, learning organizations should be focused on what people need to be able to do, rather than on what they know.

The Limits of the Academic Model

Because the roots of learning in the business world are found in the academic model, it’s not surprising that testing and assessment also typically follow that model. Traditionally, trainers determine all the content that learners need to know, just as an educator develops a course curriculum. Training time is spent covering the content, moving from one topic to the next. A subject-matter expert discusses each topic, expecting his or her knowledge to be transferred to the learners. Experiential activities, including group exercises, role-plays or case studies, are added to build interest and provide “practice.” At the end of the training, learners are tested on their ability to recall the content. True-false or multiple-choice questions are used to test the learners’ knowledge. If learners are able to answer questions more or less correctly, they are deemed “trained” and job-ready.

But when those same learners go back to their jobs, managers quickly discover that they are not fully ready to perform, despite the knowledge they might have attained in training. There is a learning curve—the time needed for learners to experience all the real-world situations that are part and parcel of the job. In many cases, this is where the real learning takes place, and it often takes place through trial and error. That can be a high-risk and expensive way to learn, because learners are dealing with real customers and live data. Too often, the results are lost customers and sales, reduced productivity, compliance violations and other negative consequences.

Eliminating the Learning Curve

Since the learning curve carries with it both risks and costs, there is good reason to see if there are ways to eliminate it by taking a different approach to training, testing and assessment. By following a performance model, it’s possible to guarantee that learners will have all the skills needed to meet management expectations before leaving training.

In the performance model, knowledge is presented in the context of the job to be done and is assessed in terms of performance, not simply memory recall. There are four steps to making certain that knowledge can be effectively applied on the job:

  • Define the precise performance expectations for the job.
  • Determine what knowledge is required for competent performance.
  • Present the knowledge in the context of the tasks to be performed on the job, including real-world practice and feedback on the quality of the practice.
  • Test the learners’ ability to apply the knowledge and skills in the situations that they will face on the job.

Defining the Performance Expectations

Unlike the academic model’s initial focus on subject-matter content, the performance model starts with a clear definition of job performance expectations. This enables the training to be built around performance objectives that describe:

  • What the learner must be able to do.
  • The conditions under which the performance will occur (simulating the actual job performance as closely as possible).
  • The criteria (or performance standards) for competent performance.

In other words, what should the performer be able to do, and how well should she be able to do it? Note that the focus is not on knowing, but on doing.

Learners should be told what these expectations are, and the expectations should serve as the basis for both the training objectives and the testing on the accomplishment of those objectives. These performance expectations serve as the measure for determining whether the learning has accomplished what it intended to accomplish. The aim of training or instruction should be to ensure that every learner is able to meet these performance expectations before leaving—whether training is delivered in the classroom, online or on the job.

Determining Knowledge Requirements

Analysis is used to determine exactly what knowledge learners need to meet job performance expectations. Analysis helps answer the question, “What do exemplary performers do when completing this job task?” rather than “What does someone need to know?”

Through observation, interviews, surveys and other analytical techniques, relevant knowledge items are identified and described as actions or performances. For instance, knowledge items might be described as “interpret a diagram” or “recall the types” instead of as content topics (such as safety precautions, billing, welding, etc.).

Information and knowledge are limited to only what’s required to practice the skill being taught. This helps reduce training time and enables learners to spend the majority of training in actual job-related practice—at least 50 percent of training should be spent practicing the same skills that will be needed on the job. In fact, one of the major reasons so much academic-style learning is less effective is that it frequently includes knowledge that does not directly relate to job performance. Including the history of a field, or theory unattached to job requirements, virtually guarantees the inclusion of irrelevant knowledge. It is far better to sacrifice some knowledge and information for more practice time on critical tasks.

Presenting Knowledge in the Context of Job Tasks

One of the most common mistakes in learning design is to allocate hours, days or even weeks to providing knowledge, then bunch all of the practice at the end. This presents multiple obstacles for learners. First, they cannot practice in small chunks and build on their own success. When practice is included in bits and pieces along the way, learners receive continuous feedback to correct mistakes and reinforce appropriate actions and behaviors. Second, the emphasis on content apart from application is not only inefficient, but also increases the likelihood that much of the newly learned knowledge will be forgotten before the opportunity for practice arrives.

A better approach is to integrate knowledge content with task performance by presenting information (knowledge, procedures, errors to watch for, etc.) needed to perform a single task or sub-task, immediately followed by practice applying that information. The practice should mirror the real-world environment and conditions the learner is likely to face as closely as possible. For example, in a call center training program, learners should practice performing tasks with all the different kinds of customers they are likely to face—whether they are angry, abrupt or pleasant. If there is a time constraint for performing a particular task, the practice should require meeting that constraint as well.

Testing Knowledge Application

If something is worth teaching, it also makes sense to see whether or not the instruction was successful. However, traditional, academic-style written tests measure the acquisition of knowledge, but not the real-world application of that knowledge. The problem with true-false or multiple-choice tests is that they don’t evaluate what people will really be doing on the job. Whether or not someone can answer a multiple-choice question correctly is not a good indicator of real-world performance. How many times in the real world is someone given five choices and asked which is the most correct answer?

It is true that these tests are easy to score and are well-suited for assessing large numbers of learners, but the bottom line is that they don’t test for competence. You may find out whether someone understands how to read a meter or how to conduct a performance evaluation, but whether the person can actually do what you need done is still an unknown. For example, passing the written test on driving a car is very different from passing the actual road test. If you are trying to teach someone how to do something, the test should determine whether the person is able to do it—or not.

Even case studies or group simulation exercises fall short as assessments of capability and predictors of on-the-job performance. These assessment methods fail to match what will be required on the job and don’t measure individual performance.

It is clear that what we need to test for is competence—not knowledge or understanding. A skill check, or performance check, is designed to accomplish exactly that. The term “skill check” is preferable to “test,” partly because so many people feel trepidation at the mention of the word “test” (remember school?) and partly because skill check is a far more descriptive term for determining competence.

The skill check simply asks the learner to demonstrate the same performance that was described in the performance objective (and the performance objective is derived directly from the real-world job requirements). The skill check should match the objective exactly, both in the performance that is called for and the conditions under which the performance is completed. Learners know up front exactly what they will be required to do, so there are no surprises—no teaching one thing, but testing for another. This eliminates the fear and frustration learners experience in academic-style testing.

Learners are required to complete the skill check for each instructional module. Learners who meet the requirements of the skill check move on to the next module. Those who can’t successfully complete the skill check receive additional coaching or practice until they can meet the requirements. As learners continue to complete the skill checks throughout the course, the instructor knows that the instruction has taught what it intended to teach. Equally important, learners gain confidence in their ability to apply new skills. By using skill checks to test for competence throughout the course and building skill checks directly from job performance requirements, it is possible to guarantee that every learner will acquire every skill needed to meet management performance expectations before training ends. This resolves any issue around learning transfer and even eliminates the learning curve once learners return to the job.

To further prepare learners for the real world, integrated skill checks can be developed that combine the assessment of performance of individual tasks in a way that mirrors the real world. For example, for a customer service position, an integrated skill check might combine adding a new customer, taking an order and handling customer questions in one integrated skill check. Integrated skill checks also can serve as a certification of performance ability that is required before moving on to more advanced modules.

Online Testing

One of the many challenges of online learning is that there are limited options for testing and assessment. The best option is to take a blended approach and provide knowledge and information pieces online, but enable the practice and performance checks to be handled live. If that option isn’t possible, the next-best choice is to develop scenario-based testing, where learners are given a situation and must apply what they have learned. Depending on the available technology, learners either type in a response or compare their response to multiple-choice options. For example, in a course teaching problem-solving skills, learners might be given a customer situation and asked to apply the newly learned problem-solving process to that situation. While less than ideal, this may be the only practical option available.

There are multiple benefits to taking a performance approach to testing and assessment. By testing application rather than retention of knowledge, organizations can be certain that every learner leaves training with every skill needed to meet management expectations. The end result is shorter time to full productivity, fewer errors, increased sales and more confident employees.

Paula Alsher is vice president of client solutions at the Center for Effective Performance. She can be reached at palsher@clomedia.com.