For decades, competitive strategy rewarded ownership. The firms that won controlled their technology stacks, guarded their intellectual property and differentiated through proprietary capability. In the era of agentic AI, that logic is breaking down.
Today, some of the fiercest competitors in technology are choosing to collaborate not at the margins, but at the core of their intelligence architecture. What looks like a paradox is, in fact, a structural shift. Competitive advantage is no longer defined solely by what a firm owns. It is increasingly shaped by how effectively it participates in ecosystems.
In short: W.T.F. is happening to competition?
Case #1: Apple and Google: When rivals choose capability over control
Few rivalries in technology are as deeply entrenched as that between Apple and Google.
They compete across operating systems, devices, platforms, data, advertising and user attention. Apple has long positioned itself as the privacy-first, vertically integrated alternative to Google’s data-driven, services-based ecosystem. Their incentives, business models and cultural identities have historically been in direct tension.
For years, Apple’s advantage rested on end-to-end control. Hardware, software and user experience were tightly orchestrated under one roof. Siri, introduced more than a decade ago, reflected that philosophy. But as large language models and agentic systems evolved, the limits of vertical integration became increasingly visible. Model innovation accelerated faster than any single company’s internal development cycle.
Apple evaluated multiple paths to power its next generation of intelligence. Internal development proved slower than market velocity. External partnerships were explored, including a high-profile relationship with OpenAI. Ultimately, Apple made a striking decision.
Apple confirmed that the next generation of Apple Foundation Models would be based on Google’s Gemini models, powering future Apple Intelligence features, including a more personalized Siri. Apple stated that after careful evaluation, Google’s AI technology provided the most capable foundation for its needs.
What makes this move remarkable is not collaboration alone. It is the separation of capability from control.
Apple retains what matters most to its differentiation: on-device execution, Private Cloud Compute and industry-leading privacy standards. Google provides what Apple chose not to replicate at this moment: frontier model capability at market speed.
This is not weakness. It is strategic clarity. Apple did not try to win the model race. It chose to win the experience race.
Where this partnership could fail
This collaboration carries risk. Dependency on a competitor introduces vulnerability if incentives diverge or trust erodes around roadmap control. As Patrick Lencioni’s work on team dysfunctions suggests, even rational partnerships fail when accountability and commitment are implicit rather than explicitly governed.
High drama, high tech: The breakup and make up logic of AI power plays
If Apple and Google can collaborate at the foundation-model layer, it signals something larger than a single partnership. In the AI era, rivalry is no longer a stable boundary line. It is a shifting relationship shaped by capability gaps, speed-to-market pressure, governance requirements and the economics of compute.
Alliances form, fracture and reform as conditions change — not because competition has disappeared, but because advantage increasingly depends on selective interdependence.
This pattern is not confined to consumer platforms. It is accelerating across the enterprise stack, as well.
Case #2: Salesforce and AWS: Competing platforms, shared AI infrastructure
The collaboration between Salesforce and Amazon Web Services reflects the same structural logic at the enterprise layer.
Salesforce differentiates through customer-facing applications and workflows. AWS dominates infrastructure, cloud services and foundational AI capabilities. As agentic AI moved from experimentation to enterprise deployment, customers increasingly needed secure, scalable and governed systems that neither firm could deliver efficiently alone.
The result was a deepened partnership that enables Salesforce’s agentic AI capabilities to run on AWS infrastructure, including availability through AWS Marketplace. This reduced procurement friction, embedded governance and allowed both firms to focus on their respective strengths.
They continue to compete. But they collaborate where the economics and complexity of AI make isolation inefficient.
Where this partnership could fail
The risk lies in trust erosion around data access, customer ownership or incentive misalignment. Lencioni’s insight applies here as well: Collaboration breaks down when hard tradeoffs are avoided rather than designed into the operating model.
Case #3: IBM: Ecosystem orchestration through proof, not prediction
IBM offers a different but equally instructive frenemy strategy.
IBM competes with hyperscalers, software firms and consultancies across AI, automation and transformation services. At the same time, it collaborates extensively through open-source models, shared governance standards and partner ecosystems.
Internally, IBM operates as Client Zero. Through Project Bob, a multi-model IDE used by more than 10,000 developers, IBM reports productivity gains of approximately 45 percent in production environments. These results provide rare, quantified evidence of agentic AI operating at enterprise scale.
Externally, IBM’s Granite models are released under open-source licenses, aligned with responsible AI standards, and distributed through partner platforms such as Hugging Face and Docker Hub. IBM competes not by hoarding models, but by differentiating on governance, integration and execution.
Where this strategy could fail
Openness without accountability risks diffusion rather than differentiation. As Lencioni’s framework suggests, ecosystems fail when shared outcomes are assumed rather than explicitly measured.
Case #4: Microsoft and Anthropic (Claude): When platform owners prioritize capability over internal loyalty
Microsoft is one of the most deeply integrated AI platform builders in the world. It owns GitHub Copilot; has embedded Copilot across Microsoft 365, Azure and its developer stack; and is a major investor in OpenAI. On paper, Microsoft has every incentive to drive internal adoption of its own AI tools exclusively.
And yet, Microsoft has instructed some of its own software engineers to use Anthropic’s Claude Code alongside GitHub Copilot, rather than relying solely on Microsoft’s internal tooling.
At first glance, this looks like a contradiction. Why would a company with one of the most expansive AI platforms in the world encourage employees to use a rival model?
The answer lies in execution realism.
Reports indicate that Microsoft engineers found that Claude's strengths in reasoning, code explanation and long-context handling made it better suited for certain development tasks. Rather than forcing internal loyalty at the expense of productivity, Microsoft made a pragmatic choice: allow teams to use the best tool for the job, even when that tool belongs to a competitor.
This is not a rejection of Copilot. It is a recognition that agentic AI performance varies by use case and that no single model currently dominates across all dimensions of software development.
Microsoft continues to compete fiercely at the platform level while selectively collaborating at the capability level.
This is a frenemy strategy inside the firm.
Where this strategy could fail
The risk is not technical. It is human.
If tool choice becomes ambiguous rather than intentional, teams may fragment, standards may erode and accountability may blur. As Lencioni’s dysfunction model predicts, lack of clarity around commitment and accountability can quietly undermine even rational strategies.
Success depends on governance: clear guidance on when and why different tools are appropriate, how learning is shared across teams and how insights feed back into platform strategy rather than competing with it.
Why frenemies are becoming inevitable
Across these cases, a common truth emerges. AI systems are advancing faster than any single organization’s ability to build, govern and scale them. Compute costs, safety expectations, talent mobility and regulatory scrutiny have shifted advantage from ownership to orchestration.
The competitive unit is no longer the firm. It is the ecosystem.
SHINE at the ecosystem level: The human operating system behind frenemy success
Across all four cases described above, success depended not on technology alone, but on human systems.
- Sponsorship & sensemaking: Leaders reframed where to compete and where to collaborate.
- Habits & upskilling: Teams shifted from building everything to orchestrating intelligently.
- Integration & incentives: Collaboration created mutual value rather than zero-sum tradeoffs.
- Norms & governance: Clear boundaries preserved trust across organizational lines.
- Evidence & expansion: Scaling followed proof, not hype.
Without these elements, frenemy strategies collapse under their own tension.
What this means for learning, talent and change leaders
Capability can no longer be developed in isolation. Learning agendas must prepare employees to operate across organizational boundaries, collaborate with external platforms and work effectively alongside AI systems that are not owned or fully controlled by their employer.
Leadership development must emphasize sensemaking, boundary-setting and ecosystem literacy rather than functional mastery alone. Upskilling strategies must focus on orchestration skills: how to integrate tools, partners and agents into coherent workflows. Change management must extend beyond internal adoption to include trust-building, governance design and shared accountability across firms.
People leaders become stewards of trust. As partnerships proliferate, employees will experience ambiguity around ownership, incentives and identity. Clear narratives, aligned rewards and transparent governance are operational necessities, not soft considerations.
The takeaway
AI has collapsed old competitive boundaries.
Innovation now happens in ecosystems.
Execution happens through alliances.
Advantage emerges from teaming and collaboration.
Competitors are not disappearing. They are transforming.
In the era of agentic AI, frenemies are not a curiosity. They are a strategic capability. And the organizations that master the human systems behind collaboration will be the ones that lead.
To read more about the SHINE framework referenced in this article, check out author Christyl Lucille Murray’s recent Chief Learning Officer piece, “SHINE in the age of agentic AI: The human operating system behind enterprise transformation.”