In recent years, adaptive e-learning has attracted a great deal of attention, fueled by advances in machine learning and artificial intelligence. As the one-size-fits-all approach to e-learning loses its appeal and online course attrition rates continue to rise, there is a move toward more personalized and adaptive learning to engage learners and achieve better learning outcomes. Personalized and adaptive learning can change learning content or the mode of delivery on the fly and provide real-time feedback to learners. Adaptive learning originated in research on intelligent tutoring systems, recommender systems and adaptive hypermedia. The advent of machine learning and artificial intelligence techniques has helped the plethora of platforms and tools that support adaptive learning flourish.
Applications of Machine Learning
Broadly speaking, machine learning can be utilized in the following e-learning applications:
Personalized learning paths: A learning path is a sequence of courses or learning materials that allows learners to build their knowledge progressively. Personalized learning paths are either pregenerated based on job roles, org charts, competencies required, etc., or can be changed dynamically based on learners’ progress, interest or some other criteria. Usually, a learner model is built at the back-end to identify, collect and update variables (such as learning preferences, demographic information, competencies or knowledge levels) to push different content to each individual learner.
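To make the back-end learner model concrete, here is a minimal sketch in Python. The fields, threshold and catalog entries are invented for illustration; a production learner model would track many more variables and update them from activity data.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerModel:
    """Hypothetical learner model: the variables a back end might track."""
    role: str
    competencies: dict = field(default_factory=dict)  # skill -> level, 0.0 to 1.0
    preferences: list = field(default_factory=list)

def next_module(learner, catalog, threshold=0.6):
    """Return the first catalog module whose required skill the learner has not
    yet mastered, so the path adapts as competency levels are updated."""
    for module in catalog:
        level = learner.competencies.get(module["skill"], 0.0)
        if level < threshold:
            return module["name"]
    return None  # no gaps left: the path is complete

learner = LearnerModel(role="analyst",
                       competencies={"sql": 0.8, "statistics": 0.3})
catalog = [{"name": "SQL Basics", "skill": "sql"},
           {"name": "Intro Statistics", "skill": "statistics"}]
print(next_module(learner, catalog))  # skips SQL Basics -> "Intro Statistics"
```

Re-running `next_module` after each completed activity is what makes the path dynamic rather than pregenerated.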
Chatbots: In an educational context, chatbots provide conversational answers and serve as a quick reference guide. Increasingly, there are applications for coaching and performance support, similar to an intelligent tutoring system, that present a learning concept through a series of conversations. Chatbots can also potentially tap into various sources of information distributed across the organization, serving as a knowledge management tool.
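At its simplest, the quick-reference behavior described above can be approximated with keyword matching against a knowledge base. The entries below are invented; real educational chatbots typically use trained intent classifiers rather than substring lookup.

```python
# Hypothetical FAQ-style knowledge base: keyword tuples map to canned answers.
KNOWLEDGE_BASE = {
    ("enroll", "register", "sign up"): "You can enroll via the learning portal.",
    ("deadline", "due"): "Course deadlines appear on your dashboard.",
}

def answer(question):
    """Return the first stored answer whose keywords appear in the question."""
    q = question.lower()
    for keywords, response in KNOWLEDGE_BASE.items():
        if any(keyword in q for keyword in keywords):
            return response
    return "Sorry, I don't know. Please contact the help desk."

print(answer("How do I enroll in the compliance course?"))
```

The fallback response matters: a reference-guide chatbot should hand off to a human rather than guess.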
Performance indicators: Performance indicators are used to pinpoint a certain learning pattern, such as a significant spike in course failures, so instructors can intervene before it is too late. Additionally, machine learning provides a more effective way to analyze learner engagement data and identify patterns that suggest where content could be rewritten or redesigned, or where learners who are failing to complete a course or learning activity need additional support.
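Spotting a spike in course failures can be as simple as an outlier test on weekly rates. The sketch below uses a z-score with an arbitrary threshold and invented data; real early-warning systems would use richer models and more signals.

```python
import statistics

def failure_spikes(weekly_fail_rates, z_threshold=2.0):
    """Flag weeks whose course-failure rate is a statistical outlier,
    using a simple z-score test (illustrative threshold only)."""
    mean = statistics.mean(weekly_fail_rates)
    stdev = statistics.stdev(weekly_fail_rates)
    return [week for week, rate in enumerate(weekly_fail_rates, start=1)
            if stdev and (rate - mean) / stdev > z_threshold]

# Invented data: week 5 has a failure rate roughly four times the baseline.
rates = [0.05, 0.06, 0.04, 0.05, 0.21, 0.05, 0.06]
print(failure_spikes(rates))  # -> [5]
```

An alert like this only tells the instructor where to look; interpreting why week 5 spiked still requires human judgment.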
What Could Go Wrong?
However, as machine learning becomes more common, we risk making the wrong assumptions and predictions. Workplace learning is sometimes linked to performance, so organizations need to take a holistic view in supporting adaptive learning and learning analytics; otherwise, they risk demotivating learners and alienating employees. The following list examines what could go wrong in applying machine learning to online learning:
Prediction could be too prescriptive: Adaptive learning algorithms make assumptions about learning needs and provide learning recommendations and assignments accordingly. For many years, there were attempts to cater to different learning styles; however, the idea of learning styles is a myth. Just because a learner prefers a certain type of learning (e.g., video-based learning) doesn't mean they always prefer it; preferences are largely context dependent. For example, someone sitting in a noisy cafe might want to access information textually rather than through audio, while someone learning a foreign language might want to listen, watch a scenario and practice. Furthermore, despite the popular notion of a "Netflix for learning," learning is hard to recommend, because there is usually not enough data to make useful recommendations. Most learning management systems rely on job role, department and a few competencies to push courses, without considering prior experience and knowledge levels. Machine learning and big data analysis cannot sufficiently replace instructor observations and feedback from learners, peers and managers.
Adaptive learning is costly and time consuming to build: In designing adaptive learning, granularity is an issue. Learning designers and developers need to figure out how much adaptation to provide. Will you adapt at the curriculum level, course level or module level? Per activity or scenario? A content system requires constant updating and monitoring, especially with multiple pathways, and its rules and assumptions also need to be checked regularly. Often, we assume adaptivity is always the way to go, but is it worth the opportunity cost? A focus on good instructional design and interaction principles is often sufficient, and sometimes it makes more sense to offer one-on-one human tutoring and content grounded in sound pedagogical principles. Know your target learners; design for their needs and for how they acquire knowledge and process information; and deploy the learning best suited to them. In other words, apply user-centered design principles.
Algorithmic black box: Increasingly, researchers in AI and machine learning warn that the algorithms organizations use can be opaque and discriminatory. In dynamically generated learning paths, the machine should be explicit about the decisions it makes to recommend or not recommend certain paths. There is also a lack of diversity behind the algorithms that drive adaptive learning decisions: One adaptive e-learning company I spoke to recently couldn't tell me how many of its data scientists have a solid grasp of pedagogical principles, yet these are the people who make decisions on what and how we learn. One way to counter the algorithmic black box is to advocate the use of explainable AI (XAI). Essentially, XAI programs enable users to understand, and provide input on, the decision-making process to improve algorithmic accountability. Of course, this concept is by no means perfect.
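One lightweight way to achieve the explicitness called for above is to have the recommender record a human-readable reason for every decision it makes. This sketch is a toy rule-based example with invented fields and rules, not any real XAI library, but it shows the principle of explanation by design:

```python
def recommend(learner, courses):
    """Return (course, recommended?, reason) triples so that every decision
    the machine makes is visible to the learner and the instructor."""
    decisions = []
    for course in courses:
        if course["skill"] in learner["gaps"]:
            decisions.append((course["name"], True,
                              f"recommended: addresses skill gap '{course['skill']}'"))
        else:
            decisions.append((course["name"], False,
                              f"skipped: '{course['skill']}' already covered"))
    return decisions

learner = {"name": "A. Learner", "gaps": {"negotiation"}}
courses = [{"name": "Negotiation 101", "skill": "negotiation"},
           {"name": "Excel Basics", "skill": "spreadsheets"}]
for name, recommended, reason in recommend(learner, courses):
    print(name, "->", reason)
```

Because the reasons are surfaced alongside the recommendations, learners can challenge a skipped course instead of guessing why it never appeared.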
Trust: In measuring learning and understanding how to adapt to individual learners, companies are collecting a huge amount of personal data. At the same time, ownership and governance of that data is often ill-defined or not defined at all. This breeds mistrust among employees, as data is collected without learners' clear and conscious consent. What rights do learners have in acting upon the recommendations and predictions, or in choosing to ignore them? What about their managers? Do learners identified as "at risk" in their learning trajectory face repercussions? In today's Big Brother surveillance culture, people don't trust machine learning algorithms, especially when the decision-making process behind the scenes lacks transparency and accountability. What happens when a prediction goes wrong? Should you make recommendations to learners who don't want them? Who owns that data? These questions need clarification, especially when workplace learning records are potentially linked to performance reviews.
As developers, designers, business owners and educators, we have a responsibility to actively engage in the decision-making process, safeguard the practice and seek diverse input for guidelines. As learners, we should ask where the data comes from and how it is collected and applied, and challenge the recommendations and predictions that machine learning algorithms produce. Only then will we be able to realize the potential of personalized and adaptive learning to improve learning and performance outcomes.