Meta has signed a deal to use millions of Amazon's Graviton AI chips through AWS, marking a major win for Amazon's homegrown silicon and a significant shift in how AI companies are sourcing the compute they need to run the next generation of AI agents.
Why CPUs, Not GPUs
The deal centers on AWS Graviton — an ARM-based CPU, not a GPU. While GPUs remain the chip of choice for training large AI models, workloads are changing as the industry shifts from training models toward running AI agents in production. Agents generate compute-intensive tasks — real-time reasoning, code writing, search, and multi-step coordination — that are better handled by CPUs optimized for general-purpose AI workloads.
Amazon's latest Graviton chip was designed specifically for these AI-related compute needs. For Meta, which is projecting $115 billion to $135 billion in capital expenditure for 2026 as AI spending surges, the deal provides access to purpose-built silicon at scale without the supply constraints that affect Nvidia hardware.
A Strategic Coup for Amazon
The timing of Amazon's announcement was deliberate — dropping the news just as Google Cloud Next wrapped up, where Google had unveiled its own new TPU chips. The move read as a direct competitive jab at Google, which signed a $10 billion six-year cloud deal with Meta just last August.
The Meta deal brings more of Meta's spending back to AWS after that Google Cloud agreement had shifted the balance. Meta had historically been primarily an AWS customer before diversifying to Google Cloud and Microsoft Azure. Amazon is using its custom chips as the wedge to reclaim that business.
Amazon CEO Andy Jassy signaled this strategy earlier this month in his annual shareholder letter, where he took aim at both Nvidia and Intel, arguing that enterprises want better price-performance ratios for AI and that Amazon intends to win deals on exactly that basis.
The Anthropic Connection
The Meta deal also highlights the increasingly complex web of chip deals that Amazon is managing. Earlier this month, Anthropic signed a $100 billion ten-year agreement to run its workloads on AWS, with a particular focus on Amazon's Trainium AI accelerator chips. Amazon invested another $5 billion in Anthropic as part of that deal, bringing its total stake to $13 billion.
The Anthropic deal effectively locked up much of Amazon's Trainium capacity for years to come. The Meta deal showcases a different part of Amazon's chip portfolio — Graviton CPUs rather than Trainium accelerators — demonstrating that Amazon now has multiple custom silicon products serving different segments of the AI market.
Together, these deals position Amazon as something more than just a cloud provider. It is becoming an AI chip company in its own right, competing with Nvidia not by making the most powerful individual processors but by offering an integrated stack of custom silicon, cloud infrastructure, and pricing that its rivals cannot match.
The Nvidia Angle
Amazon's Graviton chips compete directly with Nvidia's new Vera CPU, which is also ARM-based and designed for agentic AI workloads. The critical difference is distribution: Nvidia sells its chips to enterprises and cloud providers, while Amazon offers Graviton only through its AWS cloud service.
For Meta, the decision to use Graviton over Nvidia's alternative likely came down to economics and integration. Running AI workloads on AWS with native Graviton chips eliminates the complexity and cost of managing third-party hardware, and Amazon's pricing is designed to undercut Nvidia on a per-compute basis.
But the deal does not mean Meta is abandoning Nvidia. Like most major AI infrastructure buyers, Meta uses a mix of chip providers depending on the workload: GPUs for training, custom silicon for inference and agentic workloads, and traditional CPUs for everything else. The Graviton deal adds another layer to that mix.
The Bigger Picture
The Meta-Amazon chip deal is the latest evidence that the AI chip market is fragmenting in ways that would have seemed unlikely just two years ago. Google is building TPUs. Amazon has Graviton and Trainium. Microsoft is developing Maia. And all three hyperscalers are simultaneously buying more Nvidia hardware than ever.
The shift from GPU-dominated training workloads to CPU-heavy agentic workloads represents a structural change in what the AI industry needs from its chips. As AI agents become more prevalent — handling tasks from coding to customer service to scientific research — the demand for always-on, cost-efficient compute will only grow. And that is exactly the market Amazon's Graviton was designed to serve.
For Amazon, winning Meta as a marquee Graviton customer validates years of investment in custom silicon. For the AI industry, the deal signals that the era of Nvidia's unchallenged dominance may be giving way to something more complex — a multi-chip world where the right processor depends on the workload, and where cloud providers' own chips increasingly compete with the suppliers they also depend on.