From AI access to workforce readiness

Is your workforce using the right tool with an outdated mindset and playbook? Why old playbooks fall short — and what learning leaders must do next.

Most large organizations have already taken the first necessary steps toward AI adoption. Enterprise AI tools have been configured and licensed. Governance frameworks and guardrails are in place. Legal and compliance questions have been addressed. An announcement has been made, often paired with optional resources, office hours or light training.

If you are a chief learning officer, there is a good chance you recognize this phase. Many organizations are right there.

And yet, a familiar pattern is emerging.

A small group of early adopters is moving quickly — experimenting, exploring and integrating AI into their work. Meanwhile, a much larger portion of the workforce remains cautious or uncertain, unsure how AI fits into their role, when it is appropriate to use or how to apply it responsibly in real situations. Use is uneven. Confidence varies widely. The middle hesitates.

This is where the industry’s central tension becomes visible.

The promise of AI is widely discussed — often framed as a 10x or even 100x improvement in productivity, creativity or speed. But the reality inside organizations looks very different. While tools are present, the promised transformation has not yet materialized at scale.

At this stage, the challenge is no longer access to AI. It is workforce readiness.

This is not a technology problem. It is a human one.

The readiness gap is now well documented

What learning leaders are experiencing firsthand is now reflected in industry research.

Recent research suggests that the gap between AI adoption and realized impact is widening. McKinsey’s 2025 State of AI research reports that 88 percent of organizations now use AI in at least one business function, yet far fewer have translated that adoption into meaningful enterprise performance gains. In fact, the Forbes Technology Council recently noted that most organizations attribute less than 5 percent of their earnings to AI, underscoring how difficult it remains to move from experimentation to measurable business impact.

Workforce data tells a similar story. A 2026 Gallup workforce survey of more than 22,000 employees found that only about 12 percent of workers report using AI daily in their jobs, despite widespread enterprise deployment of AI tools. The data suggests that while organizations are rapidly providing access to AI, the majority of employees are still in the early stages of learning how to integrate it into their workflows. The challenge is no longer access to the technology — it is building the confidence, capability and judgment required to use it effectively in real work.

In other words, organizations have the tools. What they lack is a reliable way to help people perform well with those tools — consistently, responsibly and at scale.

What workforce readiness actually means

Workforce readiness shows up in a very specific way: demonstrated competence and confidence in real work.

Not competence inferred from course completion. Not confidence assumed from survey responses alone. Demonstrated competence and confidence — built through preparation, action, feedback, reflection and improvement over time.

Historically, learning organizations have relied on indirect signals to estimate readiness. Completion rates, certifications, tenure or test scores served as proxies. What changes with AI — when applied intentionally — is that readiness becomes observable, longitudinal and scalable.

This shift matters deeply for both employees and organizations.

For employees, readiness translates into work that feels more rewarding — less guesswork, more confidence and greater fluency in handling real challenges. For organizations, readiness translates into performance improvement, better judgment amid uncertainty and reduced risk as new capabilities are introduced.

This dual value is the hallmark of workforce readiness in an AI-enabled world.

The first — and most overlooked — shift: from one step to many

One reason readiness lags is that most early AI use follows a one-step mental model: Ask a question, get an answer, move on. This mirrors search behavior. It is transactional, efficient and appealing — but fundamentally limiting.

Collaboration implies something very different. It assumes a multi-step approach in which clarity emerges through iteration: planning, drafting, testing, refining and revisiting decisions. Judgment becomes central. Learning continues after action, not just before it.

This distinction matters because reflection and pivoting only happen in multi-step work.

When AI is framed as “find me the answer,” people rarely stop to reflect on outcomes or adjust their approach. But when AI is treated as a collaborator, a simple and powerful loop naturally emerges:

  • Plan: Set intent and define what “good” looks like.
  • Do: Draft, practice, test and apply.
  • Reflect: Examine what happened and why.
  • Pivot: Refine the next iteration based on insight.

This Plan-Do-Reflect loop — and the pivot it enables — is the human mechanism that turns access into performance. Without it, AI remains an impressive tool used in shallow ways. With it, AI becomes a catalyst for learning and improvement in real work.

The Practice-Perform-Learn framework as the core spine

At the center of this approach is the Practice-Perform-Learn framework, which I co-developed: a learning architecture that has been applied successfully in enterprise environments for years, well before generative AI became mainstream.

  • Practice creates safe, realistic environments to apply learning through scenarios, feedback and repetition. This is where confidence is built and where people refine not only what to say, but how and why.
  • Perform extends learning into the flow of work, supporting both preparation for real conversations and actions, and analysis of those that have already occurred.
  • Learn focuses on filling specific, identified gaps, supporting people in exploring concepts within their own roles, workflows, language, and perspectives, rather than consuming generic content.

AI does not replace this framework. It supercharges it — enabling repeatable practice, personalized feedback and guided reflection without requiring constant instructor or manager intervention.

The Practice-Perform-Learn framework has earned Gold and Silver Brandon Hall Awards, including recognition for HCM innovation, simulations for learning, and advances in business strategy and technology — awards that require demonstrated performance improvement, not just compelling design.

Case study snapshot: applying readiness in practice

Context: A global, highly regulated enterprise with thousands of employees and established access to enterprise AI tools.

Challenge: While AI tools were available, confidence and competence were uneven. Early adopters advanced quickly, but a large portion of the workforce hesitated, limiting enterprise-wide impact and slowing progress toward meaningful adoption.

Approach: Rather than launching another tool-focused initiative, the organization introduced a dedicated, AI-powered environment where employees could use AI to learn, practice and perform — specifically by exploring how to apply the AI tools they already had within real workflows.

This environment operationalized the Practice-Perform-Learn framework. Employees engaged in structured learning, practiced realistic scenarios, and prepared for or reviewed real work moments. Throughout the experience, they received personalized feedback and guided reflection — an approach referred to here as reflective intelligence.

Measures: Changes in confidence distribution over time, depth of practice engagement and reflective insights emerging from real work.

This is where our article moves from the promise of AI to tangible proof.

Outcomes that matter — and how fast they happened

Once multi-step collaboration and reflective practice were established, outcomes emerged quickly — and were sustained.

Within 60 days, the organization observed a 4x increase in the number of employees who rated themselves in the high-confidence group. Just as important, this increase was not a short-term spike; confidence remained elevated beyond the initial pilot period.

At the same time, the number of low-confidence participants fell by half, indicating movement not only at the top of the curve but across the middle — the population that determines whether readiness scales or stalls.

Employees also demonstrated improved judgment. They showed greater clarity about when AI added value, how to use it responsibly and when not to rely on it at all. In regulated and high-stakes environments, that restraint is itself a strong indicator of readiness.

Reflective intelligence: dual value for people and the organization

Reflection was not an add-on. It was the engine of improvement.

For employees, guided reflection enabled deeper insight — improving accuracy, fluency and movement toward mastery. People understood why a particular approach worked, not just that it did, allowing them to adapt more effectively over time.

For the organization, reflective input generated actionable intelligence. Leaders gained visibility into where work was flowing, where friction persisted and where new opportunities emerged to do things differently. In some cases, insights revealed that what appeared to be a skills gap was actually a workflow or cultural challenge.

This dual value — personal growth and organizational insight — is what differentiates reflective intelligence from traditional feedback loops. It transforms learning activity into a mechanism for continuous adaptation.

Why old playbooks fall short

Traditional technology playbooks emphasize access, utilization and scale. AI requires something different.

Its value is unlocked through judgment, not just use. That judgment cannot be mandated or inferred from metrics. It must be built through experience — learning, practice, reflection and pivoting over time.

Maximizing utilization does not guarantee readiness. Broad exposure does not produce confidence. Scaling AI without redesigning how people learn and adapt risks amplifying noise rather than capability.

The leaders seeing progress are not abandoning their playbooks. They are evolving them.

Pilots as discovery for best fit

In this context, pilots serve a different purpose.

Rather than proving that a solution “works,” effective pilots are designed to discover best fit — how learning and practice integrate with existing tools, culture, workflows and workforce capabilities. Leaders approach these pilots with courageous curiosity, learning alongside their teams.

Many organizations start with tools they already have, using text-based scenario practice to build momentum before expanding into richer, multimodal experiences as confidence grows.

The pilot is not the point. The insights from it are.

What’s here, what’s near — and what’s coming fast

The urgency is not only that AI is present. It is that AI capability is accelerating.

Many organizations are still building readiness for text-based AI, while multimodal AI — including video, avatars, voice and richer simulations — is already arriving at enterprise scale, often without a traditional rollout moment. Capabilities simply turn on.

If mindsets and workflows have not shifted, people continue using old approaches with new tools, and the readiness gap reappears every quarter.

What 10x really means

AI is often framed as a 10x or 100x promise. For learning leaders, clarity matters.

In readiness terms, 10x improvement does not mean 10 times more use. It means a 10-fold increase in the number of people who can demonstrate competence and confidence in AI-enabled workflows.

That is how the middle moves.

That is how readiness scales.

That is how promise becomes proof.

The leadership opportunity

Organizations do not need to predict every future AI capability. They need systems that allow people to explore with curiosity, practice safely, reflect deeply and adapt continuously — starting with what they already have and extending as capabilities evolve.

For CLOs, this is a moment to lead from the center of change — designing workforce readiness that keeps pace with accelerating technology while making work more rewarding for employees and more valuable for the organization.

That is how AI moves from the promise of transformation to demonstrated readiness and, ultimately, from promise to performance.