Amazon Web Services posted its fastest growth in 15 quarters. Revenue climbed 28 percent year-over-year to $37.6 billion. CEO Andy Jassy attributed the surge to AI demand, saying he has never seen a technology grow as rapidly. But the growth comes at a cost — Amazon is spending enormous amounts on infrastructure that will take years to monetize.
The AI Revenue Engine
Jassy put the numbers in perspective. Three years after AWS launched in 2006, it had a $58 million revenue run rate. Three years into the AI wave, AWS's AI revenue run rate has reached over $15 billion — nearly 260 times larger. The comparison illustrates just how different the AI era is from anything that came before it in cloud computing.
AWS is the largest cloud provider in the world. Its role as the infrastructure backbone for the AI industry — powering companies from Anthropic to OpenAI to Meta — is driving growth at a pace that Jassy said is very unusual for a business this large.
The Capital Spending Problem
The growth has a price tag. Amazon is spending aggressively on the infrastructure that supports AWS — land, power, buildings, chips, servers, and networking gear. All of it must be purchased and deployed before it can generate revenue.
Free cash flow dropped to $1.2 billion for the trailing twelve months. The squeeze is driven by capital expenditure growth outpacing revenue growth. Jassy acknowledged the strain, saying that in periods of high growth where capex meaningfully outpaces revenue, free cash flow is challenged in the early years.
He framed the spending as a long-term investment. Data centers last more than 30 years. Chips and servers have a useful life of five to six years. The infrastructure being built now will generate returns for decades. But in the near term, the cash burn is significant.
The AI Infrastructure Arms Race
Amazon's earnings arrive on the same day as Google Cloud's $20 billion quarter and Microsoft's Copilot milestone of 20 million paid users. All three hyperscalers are reporting surging AI revenue — and all three are spending billions to build the infrastructure needed to keep up.
The numbers paint a clear picture. Google Cloud's backlog doubled to $462 billion. AWS is growing at its fastest rate in nearly four years. And Microsoft's AI business is generating billions in new revenue through Copilot and Azure. The AI infrastructure market is not slowing down — it is accelerating.
Amazon's custom chips are central to its competitive strategy. Graviton CPUs and Trainium AI accelerators give AWS pricing and performance advantages that competitors relying solely on Nvidia hardware cannot match. The recent deals with Anthropic and Meta validate that approach — major AI companies are choosing Amazon's custom silicon alongside or instead of Nvidia hardware.
The Dual Bet
Amazon is uniquely positioned in the AI economy. It is both the largest infrastructure provider and one of the largest AI investors. Its $13 billion total investment in Anthropic makes it one of the company's biggest backers, alongside Google. Its $50 billion deal with OpenAI gives it a stake in both sides of the AI rivalry. And its custom chips give it a hardware advantage that no other cloud provider can fully replicate.
Jassy compared the current cycle to AWS's first major growth wave. Back then, heavy capex preceded years of massive revenue and free cash flow generation. He expects the same pattern with AI — short-term pain for long-term dominance.
The Bigger Picture
Amazon's Q1 earnings confirm that the AI boom is real, measurable, and accelerating. AWS alone generated more revenue in one quarter than most technology companies generate in a year. An AI revenue run rate of more than $15 billion within three years of the wave's start is unlike anything the industry has seen before.
But the free cash flow squeeze is a reminder that building the physical infrastructure for AI is extraordinarily expensive. The companies winning the AI race are spending tens of billions per quarter on land, power, and chips — a pace that only the largest companies in the world can sustain. For everyone else, the message is clear: the AI infrastructure era belongs to the hyperscalers.