As organizations integrate generative AI across business functions, vendors are pouring billions of dollars into the infrastructure needed to support high-capacity workloads. The spending blitz is reshaping the cloud from the ground up.
Investments in data center semiconductors, servers and storage components grew 44% year over year to nearly $80 billion during the second quarter of 2025, according to research published Thursday by Dell’Oro Group. The firm expects elevated growth to persist through the second half of 2025, driven by the hyperscaler infrastructure buildouts that pushed data center spend to over $450 billion last year.
Earlier this week, Oracle added $10 billion to its annual capital expenditure plans, bumping up the total to more than $35 billion. Google Cloud made a similar adjustment to its CapEx numbers in July, tacking $10 billion onto the $75 billion it initially earmarked for cloud and AI buildouts. Microsoft and AWS signaled their intentions to spend $100 billion and $80 billion, respectively, on capital investments earlier in the year.
A shift to AI compute catapulted Nvidia to the top of the surging semiconductor market for the first time last year, according to Gartner research. The GPU giant’s revenues soared to $46.7 billion in its most recent quarter, a 56% boost over the same period in 2024 and a nearly eightfold increase compared to three years ago.
Vendor moves reflect and refract market signals, illuminating enterprise IT priorities. Here are insights from five executives at companies that play key roles in the technology supply chain, garnered from presentations at the Goldman Sachs Communacopia + Technology Conference this week.
Hock Tan, CEO at Broadcom
While the acquisition of VMware made Broadcom an enterprise software heavyweight, the company’s center of gravity is still infrastructure. Semiconductors accounted for more than half of the chipmaker’s $16 billion in Q3 2025 revenue, CFO and chief accounting officer Kirsten Spears said during an earnings call last week.
The company’s hardware division remains focused primarily on supplying cloud providers with the infrastructure needed to train LLMs, according to CEO Hock Tan.
“We're driving resources of this company to address the specific needs from, to be honest, a very narrow group of customers,” he said Tuesday, at the Goldman Sachs conference, according to a Seeking Alpha transcript. “What we're seeing for the next three years is accelerating demand for that compute capacity.”
Broadcom is happy to leave the more dispersed enterprise market to others.
“I don't think enterprises, at least in the foreseeable future, would ever want to consider developing the core technology for them to enable AI computing,” Tan said. “If you're not an LLM player, you're a little enterprise and you just run one rack or maybe no more than 36 GPUs … direct-attach copper, you're done.”
Colette Kress, EVP and CFO at Nvidia
As Nvidia’s revenue bonanza among hyperscalers and model builders slowed from triple-digit growth to the mid-double digits, the company broadened its focus to enterprise customers earlier this year.
The company is currently helping Yum Brands deploy AI processing power in KFC, Pizza Hut and Taco Bell restaurants, Kress said during the company’s Q1 2026 earnings call in May.
Nvidia’s cloud service provider business remains robust, Kress said at the Goldman Sachs conference Monday, according to a Seeking Alpha transcript.
“CSPs today have literally doubled the amount of capital that they are spending from what they had just two years ago,” said Kress. “The four cloud providers … are doing a tremendous job of being helpful in the early stages of a way to use AI in the cloud and get started in terms of AI, but there is so much more that needs to happen.”
The next step in Nvidia’s strategy is to provide individual enterprises with hardware to run generative and agentic AI workloads in hybrid environments.
“Compute that you'll put together at any type of enterprise is probably a full AI factory where all of your data and all of your pieces are together,” Kress said. “These AI factories will continue to grow and be a significant piece of how enterprises are thinking about their data.”
Nvidia leveraged Meta’s Llama LLM family to develop an enterprise agent-building toolkit called Llama Nemotron. Microsoft, SAP, ServiceNow, Accenture and Deloitte are some of the technology partners Nvidia is leaning on to create industry-specific solutions, the company announced in March.
Enterprises want practical AI capabilities, said Kress. “You want it to actually get work done,” she said. “Your lovely model telling you and talking to you all evening to answer all your questions is a great thing. But it would be even more impressive, as we show up for the next day, the amount of work that we’ll be able to accomplish.”
Thomas Kurian, CEO of Google Cloud
Google Cloud and its hyperscaler peers AWS and Microsoft have seen revenues climb as enterprises leverage infrastructure, data and software services to move generative AI pilots into production. Alphabet’s cloud division benefited from in-house AI chip development and broad deployment of its Gemini LLMs.
As enterprises prepared their data estates for AI ingestion, Google’s BigQuery data cloud saw a 27-fold increase in usage by volume, Kurian said during the Goldman Sachs event Tuesday.
The hyperscaler is seeing demand from traditional enterprises and specialized markets, according to Kurian.
“As capital markets shift from using classical computation for algorithms and are shifting to use inference, the same systems we offer can be used to provide very high-frequency calculations,” he said.
The technology is broadening Google Cloud’s base through new channels, crossing from IT departments over to marketing, customer service and commerce functions.
“In the past, people chose cloud primarily as a mechanism to get developer efficiency, meaning I can get infrastructure on demand and to host applications and to save money in hosting applications by consolidating compute and storage,” Kurian said. “That continues to be important, but that's not the primary driver. The big driver now is ‘I really want to transform my organization. Can you help me by bringing AI expertise and products to help me?’”
Forrest Norrod, EVP and GM of AMD’s Data Center Solutions Business Group
AMD is one of the main chipmakers currently making a run at Nvidia’s GPU dominance. The company spent $4.9 billion last year to acquire cloud architecture engineering firm ZT Systems to bolster its bench of tech talent.
The company is still bullish on traditional CPU-powered computing as AI adoption spreads across industries.
“We actually are seeing AI driving additional new incremental demand on the CPU side,” Norrod said during the Goldman Sachs conference Monday, according to a Seeking Alpha transcript. “There's almost a direct correlation that we're now seeing, particularly over the last three to four quarters.”
AMD’s strategy revolves around integrating chip technologies to support cloud and enterprise workloads.
“AI is, by its nature, a very distributed problem,” Norrod said. “Particularly when we get to agentic cases, deploying AI systems means really deploying a number of different workloads, a number of different models supported by other applications across a very large network.”
Beyond the race to manufacture and distribute data center hardware, infrastructure buildouts face another roadblock, Norrod cautioned.
“The pace right now, quite candidly, is modulated more by data center and power availability than anything else.”
John Pitzer, corporate VP of corporate planning and investor relations at Intel
Intel’s chip business hit several road bumps in the race to deploy AI. The company installed new leadership and announced layoffs after revenues stalled earlier this year. CEO Lip-Bu Tan, who took the helm in March, instituted a 50% reduction in management layers and pledged to “build what customers need, when they need it, and earn their trust through consistent execution,” during a July earnings call.
More recently, Intel agreed to hand over a 10% stake of the company to the U.S. government in return for the remaining share of a $7.86 billion CHIPS and Science Act grant, Pitzer said during the Goldman Sachs conference Monday, according to a Seeking Alpha transcript.
“It put $5.7 billion of cash on the balance sheet, which we got about two weeks ago,” said Pitzer, who stressed that the federal government already had an interest in Intel.
“Even before the government owned a share of stock, they were a critically important stakeholder in the room around what they do on tariff policies, around what they do on exports and, clearly, the CHIPS Act as well.”
The company is banking on an expected enterprise refresh to buoy its PC and server sales. The server market sparked to life earlier this year, with year-over-year sales surging 134%, IDC research found.
The AI PC market has been sluggish, partially due to tariff-fueled economic uncertainty, according to Gartner. However, HP saw an uptick in shipments during the three months ending July 31.
“We're still very early on the AI PC trend,” Pitzer said. “We don't think the crossover really happens until the second half of next year.”