Where Do You Find Your Data?

Our methods for gathering data could use some work.

Everyone in the learning field wants data and metrics to measure, assess and provide accountability for learning activities. Whether considering big or small data, learning leaders are looking for evidence of learning’s effect on one learner or thousands of learners.
So, where do we look for this data?
The default behavior of learning departments is to leverage data they can collect themselves: surveys during a class, following an assessment, after an event or back in the workplace. As responsible learning professionals, we want to build data collection into our systems and activities and to own most of the data acquisition process.
The challenge is that the data we really need, want and require is rarely within easy reach of the learning department. We want to measure the impact of learning investments and engagements, but the impact does not happen until learners leave the learning department’s domain. By the time the impact is clearly measurable, multiple things have happened:
Return to workplace: The greatest shift is the return from the classroom, e-learning or mentoring situation to the workplace. Our desired data will come after the shift back to work.
Time lapses: Over time, learners will add to, or lose parts of, their newly acquired knowledge. We want to know how they will function after a few weeks or months. Does the content stick?
Manager and peer engagement: The learner may return from a leadership program ripe to practice new models of collaboration — until they interact with their manager and peers. Great learning can evaporate without cultural support from colleagues. In contrast, high levels of manager and peer support can turn learning into continual practice.
Remediation is inevitable: Even after a great learning program, learners get confused or require additional context or remediation. Measuring the impact of a learning program must therefore include follow-up and the use of resources like performance support.
Confidence in competence: In collecting data on learner competence, we also want to look at learner confidence. How confident are learners in their ability to adequately apply newly acquired skills or processes on the job?
The “easy” data that we can collect will be time-limited by our access and control. We can require a 100 percent response rate on an end-of-class survey, or an assessment prior to granting a completion certificate. But this easy data misses the deep impact that time, location and engagement have on transfer to the workplace.
Ideally, we would go after the “big and great” data that can be collected only after our learners return to their workplaces.
If we are interested in measuring the impact of learning, we must weave in a high-confidence way to collect data downstream.
Imagine if the curriculum design for every learning program included a data collection plan that secured metrics weeks or months after the program, including 360-degree feedback. This would take an enormous amount of time and money, and most learners and managers would not cooperate. But what if we could collect three levels of post-learning data?
Sample deep data sets: Take a 5 percent random sample of learners and collect extensive performance and assessment data months later (see the sketch after this list). In this model, we might spend significant money on the sample, but it would provide a full profile of impact.
Business assessment of readiness: For the rest of the learners, provide a shorter assessment of learning impact, completed by both learners and their managers. This might ask them to show an example of how they applied the knowledge, or even to share the difficulty they had applying it.
What’s missing: One of the best data sets we can collect comes from asking the learners — several months after a program — what was missing from their learning experience.
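To make the sampling step concrete, here is a minimal Python sketch of how a learning team might split a roster of program completers into the deep-data sample and the shorter readiness assessment. The roster format, function name and field labels are illustrative assumptions, not a prescribed tool.

```python
# A minimal sketch of the tiered follow-up assignment described above.
# The roster, tier labels and 5 percent default are hypothetical.
import random

def assign_followup(learner_ids, deep_fraction=0.05, seed=42):
    """Split program completers into a deep-data sample and the
    shorter learner/manager readiness assessment group."""
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    ids = list(learner_ids)
    sample_size = max(1, round(len(ids) * deep_fraction))
    deep_sample = set(rng.sample(ids, sample_size))
    return {
        lid: "deep_data" if lid in deep_sample else "readiness_assessment"
        for lid in ids
    }

if __name__ == "__main__":
    roster = [f"learner-{n:03d}" for n in range(1, 201)]  # 200 completers
    tiers = assign_followup(roster)
    print(sum(1 for t in tiers.values() if t == "deep_data"), "in deep sample")
```

The “what’s missing” question, by contrast, is cheap enough to send to every learner months later, so it needs no sampling at all.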
It’s easy to use easy data to do easy learning assessment. But the true measures of impact come with the passage of time, and that data is in the learner’s workplace. Let’s make a deal to shift our data collection to where and when it matters.