Dive Brief:
- Oracle partnered with chipmakers AMD and Nvidia to boost its Oracle Cloud Infrastructure, according to a Tuesday announcement. Oracle and AMD will deploy 50,000 AMD Instinct MI450 Series GPUs on OCI starting in Q3 2026.
- The cloud infrastructure provider also unveiled OCI Zettascale10, which connects Nvidia GPUs across multiple data centers, creating “the largest supercomputer in the cloud,” according to a press release.
- As part of its Nvidia partnership, OCI Zettascale10 will leverage Oracle Acceleron RoCE networking architecture powered by Nvidia AI infrastructure, the release said. OCI Zettascale10 will serve as the “fabric underpinning the flagship supercluster built in collaboration with OpenAI” as part of the Stargate infrastructure initiative announced by Oracle and OpenAI earlier this year.
Dive Insight:
Partnering with AMD and Nvidia to support its cloud infrastructure will let Oracle expand end-user choice at a time of skyrocketing demand for AI services.
OCI is “playing both sides of the coin” through deep partnerships with Nvidia and Nvidia’s direct competitor, AMD, according to Forrester Principal Analyst Tracy Woo.
The partnerships will position OCI to compete with some of the bigger cloud players, Woo said in an email. Oracle’s cloud infrastructure and software services revenue increased 24% year over year at the end of 2024, and cloud-based AI usage continues to grow.
“Using the AI tide as its chance to catch the hyperscalers and create its own competitive niche, OCI has established a partnership that none of the major cloud providers have – access to a much larger amount of AMD and Nvidia chips since it is not creating a competing chip to either chip supplier,” Woo said.
Under the AMD partnership, the Instinct GPUs will support OCI end users running advanced language models, generative AI and high-performance computing workloads, according to the release. Oracle described the offering as the first publicly available AI supercluster, a purpose-built infrastructure designed specifically to handle AI’s compute demands.
The partnership is also a boon for AMD, which Woo said has lagged in AI chip momentum. AMD is “positioning itself as a viable competitor to Nvidia’s GPUs,” she said.
Companies like Oracle and OpenAI will continue to rely on AMD and Nvidia to expand compute capacity, but the ability to use both chipsets presents a choice previously lacking for data center compute, Gartner VP Analyst Gaurav Gupta said in an email to CIO Dive.
OpenAI reached a six-gigawatt deal with AMD earlier this month to power OpenAI’s infrastructure across multiple generations of AMD Instinct GPUs starting in Q3 2026.
“AMD offers cloud infrastructure companies like Oracle, hyperscalers and frontier model providers like OpenAI an option and diversity from Nvidia,” Gupta said.
Increased competition and greater end user choice come as CIOs grapple with soaring cloud costs. Enterprise AI infrastructure spending is expected to increase drastically from around $60 billion in 2024 to $230 billion in 2026.
Meanwhile, cloud costs alone are becoming unmanageable for IT leaders as AI adoption rises. Within the next three years, 42% of firms will spend more than 30% of their cloud budgets on generative AI, according to TD Cowen data.