While learning tools are increasingly smart and valuable, they are moving in new directions. These directions demand additional conceptual rigor but can deliver significant new capabilities. The most important of these tools are semantic systems, analytics and richer tracking mechanisms. Each provides value on its own, but together they represent a qualitative leap.
Semantics add an extra layer of detail to content being developed and accessed, supplying machine-readable descriptions and sharper focus on the granularity of the content. Tighter definitions of components and rich descriptions produce higher-quality elements and the ability to deliver different combinations to meet different needs. This is the basis of personalized learning: pulling content by description instead of relying on handcrafted learning sequences.
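The idea of pulling content by description can be sketched in a few lines. This is a minimal illustration, not a real system; the metadata schema (topic, audience, format) and the content items are hypothetical.

```python
# Hypothetical content library: each item carries machine-readable
# descriptions rather than living inside a handcrafted sequence.
content_library = [
    {"id": "c1", "topic": "troubleshooting", "audience": "field_engineer", "format": "job_aid"},
    {"id": "c2", "topic": "product_overview", "audience": "account_manager", "format": "summary"},
    {"id": "c3", "topic": "troubleshooting", "audience": "account_manager", "format": "video"},
]

def pull_by_description(library, **criteria):
    """Return content items whose metadata matches all given criteria."""
    return [item for item in library
            if all(item.get(k) == v for k, v in criteria.items())]

# Same library, different audiences, different deliveries:
for item in pull_by_description(content_library, topic="troubleshooting",
                                audience="field_engineer"):
    print(item["id"], item["format"])  # prints: c1 job_aid
```

The delivery logic never names a specific document; it asks for content matching a description, which is what makes different combinations for different needs possible.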
This facilitates adaptation and helps learning leaders pull together content on the fly. They can push a troubleshooting tool out to a field service engineer at a client site while sending a list of currently purchased products and recommendations to an account manager at the same site. This may not sound like learning, but it is about performance. The approach is not new, yet not enough organizations add this level of detail. Marketing and technical communications teams are already seeing positive results, and the same benefits are available to learning organizations.
While working within digital systems, individuals leave fingerprints on everything they touch. By examining that data, learning leaders can correlate it with other measures of interest, look at who did what and begin matching that activity with who succeeded. This exploration becomes far more powerful when the semantic descriptions of digital activities are tied to outcomes. This is the source of much of the excitement about analytics.
For example, when customers visit Netflix or Amazon, no one is watching their actions in order to make a recommendation — it wouldn’t scale. Instead, rules pull up content by description, based on what others have done. Learning systems can do the same. For example, they could distinguish which customer service agents accessed a product description versus a sales process support tool, and their relative success. At a finer grain, they could see the order in which those two resources were used and its impact on success.
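The correlation described above reduces to grouping access records by resource and computing success rates. The following is a hedged sketch with an illustrative, made-up access log; a real system would draw on transaction-level tracking data.

```python
from collections import defaultdict

# Illustrative access log: which resource each agent used and whether
# the interaction succeeded. The records are invented for the example.
access_log = [
    {"agent": "a1", "resource": "product_description", "success": True},
    {"agent": "a2", "resource": "sales_process_tool", "success": True},
    {"agent": "a3", "resource": "product_description", "success": False},
    {"agent": "a4", "resource": "sales_process_tool", "success": True},
]

def success_rate_by_resource(log):
    """Group accesses by resource and compute the share that led to success."""
    totals, wins = defaultdict(int), defaultdict(int)
    for entry in log:
        totals[entry["resource"]] += 1
        wins[entry["resource"]] += entry["success"]  # True counts as 1
    return {r: wins[r] / totals[r] for r in totals}

rates = success_rate_by_resource(access_log)
# {'product_description': 0.5, 'sales_process_tool': 1.0}
```

The same grouping applied to ordered pairs of resources would reveal whether the sequence of use matters, which is the finer-grained question raised above.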
Learning leaders also can identify resources or initiatives that are not as successful as needed, along with other emergent patterns. It takes a significant amount of data for such patterns to emerge, but at the detailed transaction level at which systems can track, the quantity is there.
Another direction that’s emerging is the successor to SCORM. Currently called Tin Can, this standard provides the ability to track a wide variety of activities as part of a learning path. It’s relatively simple. With a syntax that says who did what, it allows one to track “Pat completed safety training” as well as “Lee attended the CLO Symposium.” It supports a richer description of learning, and tools are already working with this system.
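The who-did-what syntax takes the form of an actor-verb-object statement. Below is a minimal sketch of one such statement; the verb URI follows the standard’s published vocabulary, while the names, email address and activity ID are illustrative.

```python
import json

# A Tin Can-style statement: actor (who), verb (did what), object (to what).
statement = {
    "actor": {"name": "Pat", "mbox": "mailto:pat@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/safety-training",
        "definition": {"name": {"en-US": "Safety Training"}},
    },
}

# Serialized as JSON, a statement like this can be sent to a
# learning record store for tracking.
print(json.dumps(statement, indent=2))
```

Because the verb is open-ended, the same shape expresses “Lee attended the CLO Symposium” by swapping the actor, verb and object — which is what lets the standard cover activities well beyond course completion.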
These activities can be registered in a learning record store. Competencies can be richer, as the ways to acknowledge learning can be more comprehensive. Ultimately, communities of practice will be able to define and update their own paths to expertise. This is a new development, but one that has potential to support a wide variety of meaningful outcomes.
Each of these capabilities brings its own unique value, but together there are even richer possibilities. Systems could look at learning trajectories in the Tin Can format and use previous results via analytics to pull up certain recommendations by description. This is the beginning of the architecture necessary to start delivering personalized learning. The limits are no longer the technology.