Analysis · April 18, 2026

The $1 Trillion AI Buildout — Who's Actually Getting Paid?

From chips to cooling, the capital cycle behind the AI boom, and the companies capturing it

Ian Gross
Chief Editor, The Big Market Report

The AI boom is not a software story. It never was.

Behind every chatbot, every AI assistant, every model that can write code, analyze contracts, or generate images, there is a physical machine. That machine sits in a building. The building needs power. The power needs cooling. The cooling needs hardware. And all of it needs to be connected, maintained, and expanded, constantly.

The numbers are staggering. In 2025, the four largest hyperscalers (Amazon, Microsoft, Google, and Meta) spent a combined $416 billion on capital expenditures, up 66% from $251 billion the year before. Projections for 2026 put that figure closer to $700 billion. McKinsey estimates that data center infrastructure globally will require $7 trillion in capital investment by 2030.
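
The growth rates are easy to verify. A minimal sketch, in Python, using only the figures cited above (the variable names are ours):

    # Sanity check on the capex growth figures cited above.
    # Amounts are in billions of dollars and come from this article,
    # not from audited filings; the 2026 number is a projection.

    capex_2024 = 251       # combined Amazon, Microsoft, Google, Meta
    capex_2025 = 416
    capex_2026_est = 700   # projection cited above

    yoy_2025 = capex_2025 / capex_2024 - 1
    yoy_2026_est = capex_2026_est / capex_2025 - 1

    print(f"2024 -> 2025 growth: {yoy_2025:.0%}")       # ~66%, matching the figure above
    print(f"2025 -> 2026 growth: {yoy_2026_est:.0%}")   # ~68% if the $700B projection holds

If the $700 billion projection holds, the growth rate barely decelerates from 2025. That is the point: this is compounding spend, not a one-off spike.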

This is not a software cycle. This is a capital cycle. And the question every investor should be asking is simple: who actually captures the money?

WHAT THE AI BUILDOUT REALLY IS

Strip away the hype and the AI buildout is a construction project. A massive, multi-year, multi-trillion-dollar construction project.

The core components are straightforward. Data centers are the physical buildings that house the computing hardware; they can span hundreds of thousands of square feet and consume as much electricity as a small city. Inside those buildings are chips, specifically graphics processing units (GPUs) designed to handle the parallel computation that AI training and inference require. Connecting those chips are high-speed networking systems: switches, cables, and interconnects that move data between processors at extraordinary speeds. And keeping all of it from melting requires sophisticated cooling infrastructure, increasingly liquid-based rather than traditional air cooling, because the heat density of modern AI chips is far beyond what conventional systems can handle.

Each of those layers represents a distinct market. Each has its own set of winners.

LAYER 1: CHIPS, THE MOST DIRECT BENEFICIARIES

If you want to understand who is getting paid first and most, start with Nvidia (NVDA).

Nvidia's data center segment generated $194 billion in revenue in fiscal year 2026, roughly 90% of the company's total revenue. That is not a typo. A company that most people knew as a gaming chip maker a decade ago now derives nearly all of its revenue from selling AI accelerators to the largest technology companies on earth.

The reason is straightforward: training and running large AI models requires a specific type of chip, and Nvidia's CUDA software ecosystem has made its GPUs the default choice for AI workloads. Switching costs are high. Supply has been constrained. And pricing power has been extraordinary: Nvidia's H100 GPU was selling for upward of $30,000 per unit at peak demand.

Advanced Micro Devices (AMD) is the credible alternative. Its MI300X accelerator offers competitive memory capacity at a lower price point, and AMD has been gaining ground. But Nvidia still controls roughly 80% of the AI accelerator market. For now, the chip layer is effectively a duopoly with one dominant player.

LAYER 2: INFRASTRUCTURE, THE HYPERSCALERS

The companies spending the most on AI infrastructure are also, in many cases, the companies building the products that run on it. Amazon (AMZN), Microsoft (MSFT), and Alphabet (GOOGL) are simultaneously the largest buyers of AI chips and the largest providers of AI cloud services. Meta (META) is building infrastructure primarily for its own AI products.

Microsoft committed $80 billion to data center construction in 2025 alone. Amazon's AWS capital expenditures have been running at a pace that suggests the company is building the equivalent of a new major data center campus every few weeks. Google has been expanding its custom chip program (TPUs) while also buying Nvidia GPUs in volume.

These companies are not spending out of optimism. They are spending because falling behind in AI infrastructure is an existential competitive risk. The logic is: if you do not have the capacity, you cannot offer the service; if you cannot offer the service, you lose the customer; and in cloud computing, a lost customer is very hard to win back.

The data center developers, companies that build and lease facilities to hyperscalers, are also benefiting. Equinix (EQIX) and Digital Realty (DLR) have seen demand for AI-ready colocation space outpace supply.

LAYER 3: THE SECONDARY SUPPLIERS

This is where the less obvious money flows.

Networking is a critical bottleneck in AI infrastructure. When thousands of GPUs need to communicate with each other at high speed, the switches and cables connecting them become as important as the chips themselves. Arista Networks (ANET) reported $9 billion in revenue for full-year 2025, up 29% year over year, driven almost entirely by AI data center demand. Broadcom (AVGO) is supplying custom AI networking chips to hyperscalers and reported its AI revenue more than doubled in its most recent fiscal year.

Cooling is the other constraint. Modern AI chips generate heat at densities that air cooling cannot handle efficiently. Vertiv (VRT), which makes power and cooling infrastructure for data centers, saw organic orders surge 252% year over year in Q4 2025. Modine Manufacturing (MOD), a smaller player in data center thermal management, has seen its data center revenue grow 50-70% annually. The liquid cooling market, valued at $6.6 billion in 2025, is projected to reach $61.8 billion by 2034.
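
Those endpoints imply a compound annual growth rate of roughly 28%. A minimal sketch of the arithmetic, assuming the 2025 and 2034 figures cited above:

    # Implied compound annual growth rate (CAGR) for the liquid cooling
    # market, using the $6.6B (2025) and $61.8B (2034) endpoints cited above.

    start_value, end_value = 6.6, 61.8   # billions of dollars
    years = 2034 - 2025                  # 9-year horizon

    cagr = (end_value / start_value) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")   # ~28.2% per year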

Power infrastructure is the final constraint. Data centers are now competing with cities for electricity. Companies like Eaton (ETN) and Schneider Electric supply the power distribution and management equipment that keeps data centers running.

WHERE THE MONEY FLOWS

The capital flow is sequential and predictable.

Hyperscalers commit capital. That capital flows to chip manufacturers (primarily Nvidia), to data center construction (developers, contractors, equipment suppliers), and to the secondary layer of networking, cooling, and power vendors. Each layer captures a portion of the spend.

The chip layer captures the highest margin. Nvidia's gross margins have been running above 70%. The infrastructure layer (data center construction, cloud services) captures enormous volume but at lower margins. The secondary supplier layer captures smaller dollar amounts but often at attractive margins, because these companies are selling specialized products with limited competition.
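
To make the margin-versus-volume point concrete, here is a deliberately stylized sketch. The spending splits and the non-chip margins are hypothetical assumptions chosen for illustration; the 70%+ chip margin is the one figure cited above:

    # Stylized illustration: how $100 of hyperscaler capex might split across
    # layers, and how much gross profit each layer captures. The splits and
    # the non-chip margins are hypothetical assumptions; the ~70% chip margin
    # is the one figure cited in this article.

    capex = 100.0  # dollars of hyperscaler spend

    layers = {
        # layer name            (share of spend, gross margin)
        "chips":                (0.50, 0.70),  # highest margin (cited above)
        "construction & cloud": (0.35, 0.20),  # high volume, lower margin (assumed)
        "networking & cooling": (0.15, 0.35),  # smaller dollars, solid margin (assumed)
    }

    for name, (share, margin) in layers.items():
        revenue = capex * share
        gross_profit = revenue * margin
        print(f"{name:>21}: ${revenue:5.1f} revenue -> ${gross_profit:5.1f} gross profit")

Under these assumed splits, the chip layer captures more gross profit than the other two layers combined, which is exactly why margin compression at that layer (discussed under Risks below) matters so much.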

RISKS: WHAT COULD BREAK THIS

Three risks are worth taking seriously.

The first is overbuilding. The history of infrastructure booms (fiber optic cable in the late 1990s, shale oil in the 2010s) is a history of overinvestment followed by painful corrections. If AI demand does not scale as fast as capacity is being built, utilization rates fall, pricing comes under pressure, and capital expenditure plans get cut.

The second is margin compression. As AI infrastructure becomes more standardized, the pricing power that Nvidia currently enjoys could erode. AMD is gaining share. Custom chips from hyperscalers (Google's TPUs, Amazon's Trainium) are reducing dependence on third-party suppliers. If the chip market becomes more competitive, the most profitable layer of the stack gets squeezed.

The third is demand disappointment. The entire buildout is predicated on the assumption that enterprises will pay for AI services at scale. If AI adoption in enterprise software proves slower or more uneven than expected, the revenue that justifies $700 billion in annual capital expenditure may not materialize on schedule.

WHAT IT MEANS FOR MARKETS

AI infrastructure stocks have dominated market returns for a reason: the spending is real, the numbers are large, and the beneficiaries are identifiable. Nvidia's market capitalization reflects the market's belief that it will remain the dominant chip supplier for years. Arista and Vertiv trade at premium valuations because their order books are full and their customers are committed.

The theme can persist as long as hyperscaler capital expenditure keeps growing. The key indicators to watch are the quarterly earnings calls from Amazon, Microsoft, Google, and Meta: specifically, their capex guidance and commentary on AI demand. When those numbers start to slow, the entire supply chain feels it.

THE BOTTOM LINE

This is not a story about software valuations or price-to-earnings multiples on speculative growth companies. This is a story about physical infrastructure, capital allocation, and supply chains.

The AI boom is a capital cycle. Capital is flowing from the largest technology companies on earth into chips, buildings, networking, and cooling systems. The companies positioned in the path of that spending (Nvidia most directly, then the hyperscalers, then the secondary suppliers) are capturing real revenue from real contracts.

Capital cycles end. But they rarely end quickly, and they rarely end before the infrastructure is built. The buildout is still in the early innings. The money is still moving.

Watch the capex. Follow the spending. That is where the story is.


This article is for informational purposes only and does not constitute financial advice. Past performance is not indicative of future results.

About the author
Ian Gross
Chief Editor, The Big Market Report

Ian Gross is the founder and chief editor of The Big Market Report. With over a decade of experience in equity research, he writes analysis that cuts through the noise to explain the "why" behind every major market move.
