Everyone is thinking about AI—here’s what you can do about it as a CLO

Instead of treating AI as a passive assistant, companies should integrate it as an active team member—a collaborator embedded into workflows, decision-making, and strategic execution.

There is immense hype surrounding artificial intelligence. While it is reasonable to posit that AI is here to stay and will influence work and work-based learning, there is little empirical evidence as to how this will happen or what it will look like.     

In this article, we present two research-based frames that may help you in your work. Then, given those frames, we posit a question for your organization to ponder. If you want the punchline, it is that how you think about AI will dictate how AI impacts your organization and your work as a chief learning officer. 

Think of AI not as a tool but as a colleague     

Everyone talks about AI as a tool, often as either a panacea or a plague. We argue that this frame will lead both to unintended consequences when you implement AI and to uses that are trivial rather than transformative.

Peter Senge presents us with the frame of organizational learning, where the goal is organizational performance rather than individual performance, and one designs the performance (and consequently the learning) as a system of interconnected parts. Paul R. Daugherty and H. James Wilson argue that AI is most powerful when it complements human strengths rather than replaces them. They introduce the concept of the “Missing Middle,” a framework demonstrating how AI can enhance human capabilities instead of simply automating jobs. Given the rise of AI agents (systems that act independently, learn from experience and make decisions within a defined scope), one can combine Senge’s frame with Daugherty and Wilson’s. Together, they suggest that rather than treating AI as a passive assistant, companies should integrate it as an active team member: a collaborator embedded into workflows, decision-making and strategic execution.

Matt Sigelman’s work at the Burning Glass Institute offers some intriguing first signals that this human/AI collaboration is far more likely than the job displacement so many others write about fearfully. It is, consequently, a frame grounded in evidence. Senge, in turn, suggests that if we change the unit of analysis from the individual to the team or organization, we will better design both the work and the learning. Design everything for the team, with AI as a member of that team. Just as on any team, members have different strengths; so again, from Daugherty and Wilson:

  • Humans provide judgment, creativity and oversight, while AI handles routine and analytical tasks.
  • AI agents are embedded into teams.
  • Organizations develop new skills to enable employees to work effectively alongside AI.

This framework supports the idea that AI should be assigned to specific teams, given structured oversight and managed within functional business units.

Now, there is already some precedent for this; Salesforce’s Einstein AI acts as a sales assistant, engaging prospects, recommending next-best actions and managing customer interactions autonomously. Similarly, HR AI agents like HireVue and Eightfold AI conduct first-round interviews, screen resumes and predict candidate success rates. In operations, AI agents manage logistics, optimize supply chains and detect inefficiencies before human employees notice them. 

Think of AI not as a teacher but as a learner

This frame of AI as a member of the team leads us to our second frame, again supported by research. Most people think of AI as an expert, as the answer to the problem. We see this most in the plethora of AI-as-tutor edtech solutions for our nation’s schools. Frankly, there is little evidence that this approach is the most productive one for our schools (beyond what it says about how engineers think about teachers), and it is even less logical when applied to the world of work. Instead, think of AI as the new team member, the new hire.

So, as a CLO, you need to teach AI in the same way you teach all your other new employees – and not just on the technical components of their role but also on the “soft skills” such as collaboration. You are the teacher, and AI is the student, the neophyte. CLOs must take ownership of teaching AI in order for it to be productive in their organizations.

As CLO, you will also need to determine ways to help the AI learn effectively, as systems are only as good as their training data. Bias in hiring, lending or performance evaluations can have real-world consequences.

Companies must conduct bias audits and maintain human-in-the-loop processes to prevent AI-driven discrimination. It is particularly interesting to consider this in the context of the broader conversations about the myriad biases to which we all fall prey.

To whom should AI agents report? Not IT!

If you at least entertain our two frames, a question follows: If these AI agents are parts of teams, should they not be managed with the same level of oversight and accountability as human team members? Does it really make sense for a sales AI agent to be managed by IT as a tool rather than be part of the sales department, with oversight from the head of sales? Should HR AI agents not be managed by the chief people officer? More importantly, how do organizations govern AI teammates to ensure they enhance productivity while maintaining ethical and responsible business practices?

With AI agents taking on increasingly sophisticated roles, rethinking AI’s place in organizational structures may be key to its successful adoption.

Organizations must determine where AI agents fit within existing structures and who is responsible for their oversight. This raises ancillary questions: AI agents need performance metrics that track accuracy, bias and efficiency, and AI dashboards and monitoring systems should ensure compliance with company policies (think of a balanced scorecard for AI).

When AI agents are treated as team members rather than passive tools, organizations gain significant benefits:

  • Enhanced decision-making
  • Increased efficiency and productivity
  • Cross-departmental collaboration

However, for AI agents to fully integrate, organizations must train, manage and evaluate them, just like human employees. AI is not replacing teams; indeed, despite the hype, the data on actual job displacement is scant and what we have suggests it is barely a trickle. Nevertheless, AI will become part of the team. And like any new team member, AI requires structure, training, oversight and accountability. The companies that will thrive in the AI era will be those that treat AI as a workforce partner, not just another technology upgrade.

So, should AI report to a team leader? The answer is a resounding yes, not in the sense of human employment, but in the need for clear leadership and structured governance. AI is most effective when it complements human work rather than replacing it. Organizations that structure AI agents as accountable, well-managed teammates will harness the true power of AI—enhancing productivity, fostering innovation, and ensuring ethical adoption.