His comments reflect a growing drumbeat in the industry, which I agree with (see the January 2003 column on the limits of ROI), that e-learning is not just a way to save travel dollars and chain employees to their desks. However, my response to Bob was, “Yes, we need to emphasize e-learning as a critical business productivity tool, but as of now, senior executives are skeptical. We have no credibility. Let’s put our money where our mouths are and start proving it.”
I’m not talking about the obvious and training-centric four-level evaluation model we all grew up with. I’m talking about rigorous control-group studies that look at endpoints and search for statistically significant improvements.
Let’s look at how pharmaceutical companies develop new products. If they want to launch a new cholesterol drug, they sign up hundreds of patients and split them into three groups. One group gets a placebo (a sugar pill with no health benefits), a second group gets a competing drug, and the third group gets the new drug. If the new drug is no better than the placebo or the current treatment, they lose. If they can statistically prove that it is more efficacious than current treatments, they may have a billion-dollar drug on their hands.
So how can we adopt this comparison study strategy?
- Have a new e-orientation program? Run half of your new hires through the “old way” and the other half through the new program, then measure employee engagement.
- Want to roll out a new selling skills program? Why give it to everyone if it doesn’t work? Put a large group of your sales reps into the new program, leave everyone else alone, and look for sales improvement over a full sales cycle.
- Designing a course? What are the measurable benefits of audio and video versus text? Of simulations versus tutorials? Of e-learning versus blended learning?
Critics will say we can’t prove e-learning’s effectiveness because too many external variables affect performance. Yet drug discovery researchers will be the first to admit that there is no such thing as perfect science. They too struggle with factors like inadequate sample sizes, demographic variation and unrelated symptoms. Choose your control groups carefully to minimize external factors, and then look for differences in outcomes that are statistically significant (i.e., beyond mere random chance).
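For readers who want to see what “beyond mere random chance” looks like in practice, here is a minimal sketch of a two-group comparison using Welch’s t statistic. The sales figures, group sizes, and the `welch_t` helper are all invented for illustration; a real study would use proper statistical software and compute an actual p-value.

```python
# Hypothetical comparison study: quarterly sales (in $K) for reps who took
# the new selling skills program vs. a control group left on the old approach.
# All numbers below are invented for illustration only.
from statistics import mean, variance
from math import sqrt

new_program = [112, 98, 105, 120, 101, 115, 108, 99, 117, 110]
control     = [96, 101, 94, 99, 88, 103, 97, 92, 100, 95]

def welch_t(a, b):
    """Welch's t statistic for two independent samples of possibly
    unequal variance: (mean difference) / (standard error)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances
    se = sqrt(va / na + vb / nb)        # standard error of the difference
    return (mean(a) - mean(b)) / se

t = welch_t(new_program, control)
print(f"mean uplift: {mean(new_program) - mean(control):.1f}K, t = {t:.2f}")
# Rough rule of thumb: |t| well above ~2 suggests the uplift is unlikely
# to be random chance. A proper analysis would compute a p-value and
# check the test's assumptions before claiming the program works.
```

The point is not the arithmetic but the design: a treated group, an untouched control group, and a test of whether the gap between them exceeds what chance alone would produce.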
A few days after my conversation with Bob, I found someone who is actually investing in the kind of research I’m talking about. Not coincidentally, I think, he happens to be at pharmaceutical giant Pfizer. Steven Rauschkolb, senior director, University of Pfizer, and president of the Society of Pharmaceutical and Biotech Trainers, told me about his decision to hire a full-time, Ph.D.-level staff member to research and measure training effectiveness. Rauschkolb was tight-lipped about the role, saying only, “We truly believe training is a critical business function, and hiring a full-time resource to evaluate our programs will absolutely give us a competitive advantage. While I can’t tell you the details of his projects, I can tell you his dance card is full! And he more than paid back his salary in the first year by helping us eliminate programs that don’t benefit the business.”
Let’s follow Pfizer’s lead and use comparison studies to prove effectiveness. Rather than leaving this as an academic task, CLOs need to study real programs in their own very real businesses. Our industry won’t get the respect it deserves, and we won’t get the support we need, until we prove the higher-order benefits that we think are there.
Kevin Kruse is a principal with Kenexa, a human capital management consulting firm, and is facilitator of www.e-LearningGuru.com.