Google has signed a new multi-billion-dollar agreement with Thinking Machines Lab, the AI startup founded by former OpenAI executive Mira Murati. The deal gives Thinking Machines access to Google Cloud's latest AI infrastructure, including systems powered by Nvidia's new GB300 chips. It marks the secretive lab's first cloud provider partnership and Google's latest move to lock in fast-growing frontier AI companies.
What the Deal Includes
The agreement, valued in the single-digit billions, covers access to Google's latest AI systems built on Nvidia's GB300 chips, along with infrastructure services to support model training and deployment. Thinking Machines is among the first Google Cloud customers to access GB300-powered systems, which Google says offer a 2x improvement in training and serving speed compared to prior-generation GPUs.
The deal is not exclusive, meaning Thinking Machines may use multiple cloud providers over time. But it signals that Google is aggressively pursuing frontier labs as anchor customers for its cloud platform, bundling compute with storage, Kubernetes, and its Spanner database product to create stickier relationships.
A Competitive Cloud Landscape
The partnership arrives amid intense competition among cloud providers to secure AI infrastructure deals with leading AI labs. Earlier this month, Anthropic signed an agreement with Google and Broadcom for multiple gigawatts of TPU capacity. This week, Anthropic also struck a separate deal with Amazon to secure up to 5 gigawatts of capacity for training and deploying Claude, a $5 billion investment that brings Amazon's total Anthropic stake to $13 billion.
Google's strategy differs from Amazon's. Where Amazon is investing directly in AI companies and tying those investments to cloud commitments, Google is competing on infrastructure quality and speed, offering early access to the latest Nvidia hardware as a differentiator.
Who Is Thinking Machines Lab?
Mira Murati left her role as OpenAI's chief technologist and founded Thinking Machines in February 2025. The company raised a $2 billion seed round at a $12 billion valuation shortly after and has remained highly secretive about its research direction.
Its first product, launched in October, is called Tinker, a tool that automates the creation of custom frontier AI models. The Google Cloud deal provides some insight into the lab's technical approach: Tinker's architecture relies heavily on reinforcement learning, the training method behind recent breakthroughs at labs including DeepMind and OpenAI. Reinforcement learning is extremely compute-intensive, which explains why Thinking Machines needs cloud infrastructure measured in billions of dollars.
The lab also partnered with Nvidia earlier this year in a deal that included an investment from the chipmaker. The Google Cloud agreement represents a natural next step, moving from securing GPU access to securing the full cloud infrastructure stack needed to train and deploy models at scale.
Why Google Wants Frontier Labs
For Google, signing deals with frontier AI labs serves multiple strategic purposes. Each agreement generates significant cloud revenue, locks customers into Google's ecosystem of tools and services, and demonstrates that Google Cloud can compete with AWS and Azure for the most demanding AI workloads.
The approach mirrors what Microsoft did with OpenAI: securing a deep cloud partnership that generated billions in Azure revenue while giving Microsoft preferential access to OpenAI's technology. Google is now running the same playbook with multiple labs simultaneously: Anthropic, Thinking Machines, and others.
The Bigger Picture
The Thinking Machines deal underscores a structural shift in the AI industry. The frontier labs building the most powerful models are no longer scrappy startups operating on modest budgets. They are multi-billion-dollar operations that require data-center-scale infrastructure to train each generation of models.
The cloud providers who secure these relationships early and deliver the compute, storage, and networking these labs need will capture a disproportionate share of the AI economy's infrastructure spending. Google, Amazon, and Microsoft are all racing to be that provider. And former OpenAI executives like Murati, who understand exactly what it takes to build frontier models, are the customers everyone wants to win.