Anthropic built a classifieds-style marketplace where AI agents negotiated deals on behalf of real humans, buying and selling actual goods for real money. The experiment, called Project Deal, offers a small but provocative glimpse into what commerce could look like when AI agents represent both sides of every transaction.
How Project Deal Worked
Anthropic recruited 69 of its own employees and gave each a $100 budget paid out via gift cards. Each participant was represented by a Claude-powered AI agent that acted as both buyer and seller on their behalf. The agents listed items, browsed listings from other agents, negotiated prices, and closed deals, all without direct human involvement in any individual transaction.
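To make the mechanics concrete, here is a minimal, hypothetical sketch of an alternating-offer negotiation between two automated agents. The function, the concession rule, and the numbers are illustrative assumptions for this article, not Anthropic's implementation, which used Claude-driven agents rather than a fixed formula.

```python
def negotiate(buyer_bid: float, seller_ask: float,
              buyer_limit: float, seller_limit: float,
              concession: float = 0.2, max_rounds: int = 10) -> float | None:
    """Alternate concessions; a side accepts once the other's offer clears its private limit."""
    for _ in range(max_rounds):
        if seller_ask <= buyer_limit:      # buyer accepts the current ask
            return round(seller_ask, 2)
        if buyer_bid >= seller_limit:      # seller accepts the current bid
            return round(buyer_bid, 2)
        # Each side concedes a fraction of the remaining gap, never past its own limit.
        buyer_bid = min(buyer_limit, buyer_bid + concession * (seller_ask - buyer_bid))
        seller_ask = max(seller_limit, seller_ask - concession * (seller_ask - buyer_bid))
    return None  # no deal within the round limit


# A buyer willing to pay up to $50 meets a seller whose floor is $45.
price = negotiate(buyer_bid=30.0, seller_ask=70.0,
                  buyer_limit=50.0, seller_limit=45.0)
print(price)  # a deal between the two limits, about 46.4 with these settings
```

Giving each side its own concession rate in a sketch like this exposes the obvious asymmetry: the side that concedes more slowly ends up with a price closer to its own target, a crude analogue of the capability gap described below.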
The result: 186 completed deals totaling more than $4,000 in value. Anthropic said it was struck by how well the system worked: agents successfully identified mutual interests, haggled over prices, and executed transactions that both parties honored after the experiment ended.
The company ran four separate versions of the marketplace. One was the real marketplace, in which all participants used Anthropic's most advanced model and deals were actually enforced. The other three were designed for research purposes, testing different model capabilities and configurations.
Better Models, Better Deals
One of the most significant findings was that participants represented by more advanced AI models got objectively better outcomes. Their agents negotiated lower purchase prices and higher selling prices, extracting more value from every transaction.
But here is the troubling part: users on the losing end of those negotiations did not realize they were getting worse deals. Anthropic flagged this as a potential agent quality gap, a scenario in which the quality of your AI agent determines your economic outcomes without you even knowing it.
The implication is significant. In a future where AI agents handle routine commerce on behalf of consumers, the sophistication of your agent could quietly determine whether you pay fair prices or get systematically disadvantaged. People using cheaper or less capable AI tools could be unknowingly exploited by those using better ones.
Instructions Did Not Matter Much
Another surprising finding: the initial instructions participants gave to their agents had little effect on outcomes. Whether users told their agent to be aggressive, conservative, or neutral, the actual sale likelihood and negotiated prices remained roughly the same.
This suggests that in agent-on-agent commerce, the model's underlying capability matters more than user preferences, a dynamic that could reduce consumer agency even as it appears to increase convenience. If telling your agent to drive a hard bargain does not actually change results, the user's sense of control may be largely illusory.
Why This Matters
Project Deal was deliberately small: 69 participants, a limited budget, a controlled environment. Anthropic acknowledged it was only a pilot experiment with a self-selected pool. But the implications extend far beyond an internal company test.
The concept of agent-on-agent commerce, in which AI agents on both sides of a transaction negotiate, agree on terms, and execute deals without human intervention, is one of the most discussed possibilities in the agentic AI space. If scaled to millions of users, it could transform how everything from consumer goods to enterprise services is bought and sold.
Companies are already building the infrastructure for this future. World's recent partnerships with Tinder and Zoom include an agent delegation feature that lets users attach their verified human identity to an AI agent, enabling websites to confirm that a human authorized the agent's actions. The combination of verified identity and autonomous commerce could create the foundation for an entirely new economic layer.
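As a rough illustration of what agent delegation can mean in practice, here is a minimal, hypothetical sketch of a signed delegation token: an identity provider attests that a verified human authorized a specific agent, and a merchant checks that attestation before transacting. The field names and the HMAC stand-in for a real signature scheme are assumptions made for this example; this is not World's actual protocol or API.

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared secret between the identity provider and the merchant.
# A production system would use asymmetric signatures and a public verification key.
PROVIDER_SECRET = b"demo-only-secret"

def issue_delegation(agent_id: str, human_id: str, scope: str, ttl_s: int = 3600) -> dict:
    """Identity provider attests that a verified human authorized this agent."""
    claim = {
        "agent_id": agent_id,
        "human_id": human_id,                     # opaque identifier for the verified human
        "scope": scope,                           # what the agent is allowed to do
        "expires_at": int(time.time()) + ttl_s,   # short-lived by default
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(PROVIDER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def verify_delegation(token: dict) -> bool:
    """Merchant-side check: the signature is valid and the token has not expired."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(PROVIDER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"]) and \
           token["claim"]["expires_at"] > time.time()

token = issue_delegation("agent-42", "human-7f3c", scope="negotiate-purchases")
print(verify_delegation(token))  # True while the token is unmodified and unexpired
```

Whatever the exact mechanism, the core idea is the same: the agent carries a verifiable, scoped, expiring claim that a real person stands behind its actions.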
The Trust Problem
Project Deal also highlights a fundamental trust question. When an AI agent represents you in a negotiation, how do you know it is actually optimizing for your interests? The agent's creator, in this case Anthropic, designs the model's behavior. If that model systematically advantages one side of a transaction, users on the other side may never know.
This concern becomes more acute as agent-on-agent commerce scales beyond controlled experiments. In a world where your AI assistant buys your groceries, negotiates your insurance renewal, and books your travel, the entity that controls the agent effectively controls your economic life, a concentration of power that makes the current platform-dominance debate look quaint by comparison.
For now, Project Deal is a research curiosity. But Anthropic is clearly thinking seriously about a future where AI agents do not just assist humans but transact on their behalf. Whether that future empowers consumers or creates new forms of invisible exploitation will depend on choices the industry is only beginning to make.