Large data centers often have an oddly artificial feel to their air: cold hallways, blinking lights, and a constant, low-pitched mechanical hum. Places like these are where Meta's latest wager starts to make sense. The company's multi-year agreement with AMD, which secures up to six gigawatts of AI processing power, is not just another supplier contract. It reads more like a dependency insurance policy.
Nvidia has been at the forefront of AI computing for years. From recommendation engines to research labs, its GPUs became the default engine of the current AI boom. But months-long waiting lists, rising costs, and recurring shortages have quietly shifted how Silicon Valley executives think. Meta's agreement signals that the company is unwilling to wait any longer.
| Category | Details |
|---|---|
| Companies | Meta Platforms, Inc. & Advanced Micro Devices (AMD) |
| Agreement Scope | Up to 6 gigawatts of AI compute capacity |
| Equity Component | AMD to issue ~160 million shares tied to milestones |
| First Deployment | MI450 GPUs in Helios rack-scale systems (2026 H2) |
| Additional Hardware | EPYC CPUs, Venice & next-gen Verano processors |
| Meta AI CapEx | Up to $135 billion through 2026 |
| Industry Context | Big Tech AI spending projected near $650 billion |
| Strategic Goal | Secure supply chain & scale AI infrastructure |
| Headquarters | Meta (Menlo Park, CA); AMD (Santa Clara, CA) |
| Reference | https://www.amd.com |
AMD will issue about 160 million shares under the agreement, tied to performance milestones, with the first tranche triggered when one gigawatt of chips ships. That structure is telling: it encodes incentives as well as uncertainty, acknowledging that delivering AI hardware at this scale is a logistical challenge as much as a technological one.
Starting in the second half of 2026, AMD's MI450 GPUs, paired with EPYC CPUs, will form the foundation of Meta's Helios rack-scale systems. Meta is also purchasing the next-generation Venice and Verano processors, a sign that the center of gravity in AI infrastructure is shifting. Model training is still crucial, but the working load is moving toward inference: continuously operating AI systems for users. Quietly but noticeably, CPUs are becoming more relevant.
Investors responded rapidly. AMD shares jumped sharply in early trading, a sign that the market is eager to see any serious contender to Nvidia's hegemony. Wall Street seems to want a second pillar in AI hardware, if only to keep supply and prices stable. Investors are uneasy with monopoly dynamics, even technological ones.
But Meta is hedging, not defecting. Only days earlier, the company had reiterated its commitment to Nvidia's Blackwell and Rubin GPUs, and its infrastructure will also house Nvidia's Grace CPU servers. Watching this play out, the strategy is hard to miss: diversify suppliers, avoid lock-in, and maintain leverage.
The sums involved are almost unbelievable. Through 2026, Meta intends to invest up to $135 billion in AI infrastructure, and total spending across Big Tech may approach $650 billion. These figures seem abstract until translated into the physical world: miles of server halls, transmission lines spanning deserts, and water systems designed to keep racks cool day and night.
One gigawatt of compute draws roughly as much electricity as a small city. Multiply that by six and the metaphor changes: Meta is purchasing not just chips but land use, energy demand, and long-term operational commitments. The AI race is starting to resemble an industrial expansion more than a software revolution.
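The city comparison can be made concrete with some back-of-envelope arithmetic. The 6 GW figure comes from the agreement itself; the household consumption number below is an assumed ballpark (roughly 10,500 kWh per US household per year), and running flat-out at full capacity is an upper bound, so treat this as a rough illustration rather than a forecast:

```python
# Back-of-envelope: what 6 GW of continuous compute means in energy terms.
# Assumption: ~10,500 kWh/year per average US household (rough ballpark).

CAPACITY_GW = 6
HOURS_PER_YEAR = 24 * 365  # 8,760

# Annual energy if the full capacity ran continuously (an upper bound).
twh_per_year = CAPACITY_GW * HOURS_PER_YEAR / 1_000  # GWh -> TWh
print(f"Annual energy at full load: {twh_per_year:.1f} TWh")  # ~52.6 TWh

# Equivalent number of average households at the assumed usage rate.
kwh_per_year = CAPACITY_GW * 1e9 * HOURS_PER_YEAR / 1e3  # Wh -> kWh
households = kwh_per_year / 10_500
print(f"Roughly {households / 1e6:.0f} million households")  # ~5 million
```

By this rough measure, six gigawatts running around the clock is on the order of the annual electricity use of several million homes, which is why the deal reads as much like a utility commitment as a hardware purchase.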
Investors appear conflicted. Meta's stock has remained comparatively stable, in contrast to Microsoft, Amazon, and Google, whose shares fell more sharply after revealing comparable spending plans. There is optimism and exhaustion in equal measure. Some question whether the returns will justify the scale. Others fear the contours of a slow-motion bubble.
Meanwhile, rumors of custom silicon continue to circulate. Google's TPUs, Amazon's Trainium chips, and Meta's in-house designs all point to a time when hyperscalers will be less dependent on outside suppliers. Analysts remain doubtful that these alternatives can fully replace general-purpose GPUs. Nevertheless, the direction of travel seems clear.
This story also has a consumer-facing component. Beyond servers, Meta wants to build AI-powered wearables and interfaces. Demands for real-time processing may reshape edge hardware requirements, tightening the link between personal devices and data centers. Devices not yet announced may well figure into today's push to secure chip supply.
Energy is the broader tension. In 2024, data centers used about 1.5% of the world's electricity, a share that is projected to rise. Engineers now discuss cooling systems, power distribution, and efficiency improvements with the same urgency once reserved for transistor density. AI is becoming as much a power problem as a computing one.
Last year, at a new data center construction site in the American Southwest, dust swirled across concrete pads as rows of steel framing stretched toward the horizon. It looked less like the software of the future than the infrastructure of a new utility. Watching Meta's AMD pact, I get the same impression: this is cloud computing on an industrial scale.
It's unclear whether this marks the beginning of a post-Nvidia era; Nvidia's ecosystem advantage remains formidable. But Meta's move suggests the days of relying on a single vendor are ending. The companies building tomorrow's AI seem determined to control their own supply chains, energy requirements, and perhaps eventually their own silicon.
It’s still unclear if these wagers will result in long-term benefits or merely drive up industry costs. But the race is on, with gigawatts as the unit of measurement rather than megabytes or model parameters.

