Into the deep end: How I stopped waiting for perfect AI and started building smarter learning

Pushing a cautious culture into building smarter learning with AI.

Working in tech my whole career, I’ve never believed in waiting for technology to stabilize—because it never truly does. It churns, it evolves, it reinvents itself, demanding courage and curiosity in equal measure.

But the culture around me was another story: cautious, perfection-seeking, hesitant to leap before every box was checked. This article is about how I pushed that culture—in a highly regulated, risk-averse environment—to act, to experiment and to build smarter learning with artificial intelligence before perfection ever arrived.

In these settings, where accuracy, safety and trust are non-negotiable, introducing AI role-play tools into a sales training ecosystem wasn’t just an experiment. It was a bold bet.

But the potential was clear: AI could simulate real customer conversations, scale coaching, reduce bias and offer learners a low-stakes space to practice and reflect.

It also delivered unprecedented gains: enabling the rapid creation of complex role-play scenarios in minutes instead of weeks, providing detailed assessment feedback in real time and freeing trainers to focus their precious hours on higher-impact coaching rather than repetitive drills—a potent return on investment that spoke for itself.

We moved forward with guardrails, designing for iteration, building structured feedback loops and aligning stakeholders on a continual improvement philosophy.

Within the first six months, we had cast a wide net: AI coaches for procedural conversations, product certifications, executive messaging and just-in-time performance support, spanning multiple languages and regions. It was never about perfect solutions on day one; it was about running proof-of-concept experiments to gather rich data on what worked, what surprised people and where to improve next.

The response was encouraging, too. Learners engaged more deeply, managers grew curious and confidence in applying skills to real conversations rose. These early signals suggested we were on the right track—that designing for imperfect, evolving AI was worth it.

We continued adapting as more feedback arrived, but one thing was clear: the culture was shifting, and momentum was building.

There was an essential shift in how we approached AI—not as a replacement for human trainers, but as a proxy, a bridge. These tools scaled practice opportunities while keeping the human connection intact. 

The goal wasn’t for AI to coach better than humans, but to help people prepare better—for their manager, their customer and themselves.

Maybe that’s the biggest shift of all: seeing AI not as a magic wand, but as a whiteboard—writable, erasable, collaborative. A place to think out loud, to try again.

In the end, AI didn’t just help us train better. It helped us learn better. Not because it was perfect. But because we finally stopped waiting for it to be.