Dive Brief:
- Anthropic signed new agreements with Google and Broadcom to add multiple gigawatts of TPU capacity starting in 2027 as customer demand accelerates, the company said Monday. The model provider has doubled its business customer count over the last two months, from 500 in February to more than 1,000 now.
- Anthropic trains and runs its Claude frontier AI model on Google TPUs, Nvidia GPUs and AWS Trainium chips, a diverse set of hardware that “translates to better performance and greater resilience for customers who depend on Claude for critical work,” according to the announcement. The range of hardware helps the company make Claude available to customers across key cloud platforms, including Google Cloud, AWS and Microsoft Azure, it said.
- The deal places a spotlight on interconnectivity and signals that “AI infrastructure is becoming an industrial supply chain, not just a cloud purchase,” Alastair Woolcock, VP analyst at Gartner, told CIO Dive in an email. “This gives CIOs more confidence that leading model providers are securing the capacity needed for enterprise-scale AI. However, frontier AI access will increasingly favor providers that can lock in chips, capital and infrastructure years in advance.”
Dive Insight:
Global AI infrastructure spend is expected to increase in 2026, reflecting heightened enterprise demand for AI services. As vendors race to supply compute power for enterprise deployments, CIOs will need to pay attention to constraints around available compute and infrastructure.
An AI model provider like Anthropic locking in large amounts of compute signals that “access to AI at scale will favor companies aligned with hyperscaler infrastructure,” Naveen Chhabra, principal analyst at Forrester, said in an email to CIO Dive.
“The risk isn’t model quality, it’s whether enterprises can secure capacity, control costs and avoid long term lock-in as AI demand outpaces supply,” Chhabra said.
CIOs will need to plan for compute as a constrained resource, anchor vendor strategies to the reality of available infrastructure, focus FinOps on AI spend and push vendors on transparency even as models are offered across multiple cloud providers, Chhabra said.
As compute power becomes a strategic resource, vendors are striking new deals and expanding existing partnerships to better compete for enterprise dollars.
AWS launched a multiyear partnership with OpenAI in February to distribute OpenAI Frontier, the Anthropic competitor’s enterprise platform for AI agents. The same month, Intel partnered with SambaNova to support enterprise AI inference capabilities. Meanwhile, IBM recently announced a collaboration with Arm to build dual-architecture hardware supporting IBM Z mainframes to help enterprises run AI workloads.
Anthropic on Tuesday announced Project Glasswing, another initiative uniting multiple companies around the LLM provider’s unreleased frontier model, Claude Mythos Preview, as part of their cybersecurity strategies. Partnering companies include AWS, Apple, Broadcom, Google, Microsoft, Nvidia, Cisco, CrowdStrike, JPMorgan Chase, the Linux Foundation and Palo Alto Networks.
Anthropic’s move to expand its partnerships with Google and Broadcom demonstrates how compute access at scale serves as a “key differentiator for generalist models,” Arnal Dayaratna, research VP for software development at IDC, said in an email to CIO Dive.
The vendor's role in the partnerships “reflects how close alignment with large-scale compute providers can support the development and deployment of generalist systems,” he added.
“Large, dedicated infrastructure commitments influence how quickly models can be trained, how frequently they can be iterated, and how broadly they can be deployed across enterprise workloads,” Dayaratna said. “The ability to secure this level of compute and to manage it effectively across training and inference is increasingly shaping which organizations can advance model capability and operate reliably at scale.”
The partnership signals that “frontier AI is moving from a chip race to a systems-and-power race,” with the winners being providers that can secure silicon, interconnectivity and electricity together, Woolcock said.
The move also demonstrates “capacity diversification and resiliency at the frontier,” Woolcock said. Anthropic noted in the announcement that while it’s expanding its Google Cloud relationship, Amazon remains its primary cloud provider and training partner.
“Anthropic’s own strategy suggests enterprises should preserve optionality,” Woolcock said.