The software development world has undergone a dramatic transformation in recent months. Developers are increasingly relying on AI tools that accept plain-language instructions and churn out large volumes of code in seconds, a practice widely known as "vibe coding." While this approach has accelerated the pace of building software, it has also introduced a wave of new bugs, security vulnerabilities, and code that developers themselves may not fully understand.
Peer review has always been a cornerstone of good software engineering. It catches errors early, keeps codebases consistent, and lifts overall quality. But when AI is writing more code than ever, the old review process starts to buckle under the pressure. That's the gap Anthropic is now stepping in to fill.
What Is Code Review and How Does It Work?
Anthropic on Monday launched a new product called Code Review, built directly into its Claude Code developer tool. The feature is designed to act as an AI-powered reviewer that automatically examines code changes before they are merged into a project's main codebase.
According to Cat Wu, Anthropic's head of product, enterprise customers have been asking a key question: with Claude Code generating so many pull requests, how can teams review them efficiently? Pull requests are the standard mechanism through which developers submit changes for peer review before those changes go live. The sheer volume of AI-generated pull requests has created a serious bottleneck.
Once Code Review is enabled, it integrates with GitHub and automatically analyzes incoming pull requests. It leaves comments directly in the code, explaining what issues it found and how they might be fixed. Engineering leads can turn on the feature as a default for every developer on their team.
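Anthropic has not published the internals of this integration, but conceptually an automated reviewer that leaves inline comments maps onto GitHub's pull request review API (POST /repos/{owner}/{repo}/pulls/{number}/reviews). The sketch below shows one hypothetical way to shape findings into that payload; the finding fields and function name are invented for illustration.

```python
# Illustrative sketch only, not Anthropic's implementation: building the
# payload an automated reviewer might send to GitHub's pull request
# review endpoint to leave inline comments on changed files.

def build_review_payload(findings):
    """Turn a list of findings into a GitHub review payload.

    Each finding is a dict with 'path', 'line', and 'message' keys,
    a hypothetical shape chosen for this example.
    """
    comments = [
        {"path": f["path"], "line": f["line"], "body": f["message"]}
        for f in findings
    ]
    return {
        # "COMMENT" leaves feedback without approving or blocking the merge
        "event": "COMMENT",
        "body": f"Automated review: {len(comments)} issue(s) found.",
        "comments": comments,
    }
```

A real integration would send this payload with an authenticated HTTP client; the structure here is what matters, not the transport.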
Focused on Logic, Not Style
One of the most notable design decisions behind Code Review is its focus on logical errors rather than stylistic preferences. Wu explained that developers have often been frustrated by automated feedback tools that flag trivial style issues instead of meaningful bugs. Anthropic deliberately chose to prioritize catching high-impact logic errors: the kind that can break functionality or introduce subtle problems into production.
The AI provides step-by-step reasoning for each flagged issue: what the problem is, why it matters, and how it could be resolved. A color-coded severity system helps developers quickly triage feedback: red signals critical issues, yellow marks potential problems worth a second look, and purple highlights bugs tied to pre-existing or historical code.
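The three-tier triage described above amounts to a simple priority ordering. As a minimal sketch (the enum names and finding shape are assumptions for illustration, not Anthropic's data model), it might look like this:

```python
from enum import Enum

class Severity(Enum):
    """The three color-coded tiers described for Code Review."""
    RED = "critical issue"
    YELLOW = "potential problem worth a second look"
    PURPLE = "bug tied to pre-existing or historical code"

# Sort order: critical first, historical-code bugs last.
_PRIORITY = {Severity.RED: 0, Severity.YELLOW: 1, Severity.PURPLE: 2}

def triage(findings):
    """Order findings so the most urgent surface first.

    Each finding is a dict with a 'severity' key holding a Severity
    member (a hypothetical shape for this example).
    """
    return sorted(findings, key=lambda f: _PRIORITY[f["severity"]])
```

The point of the sketch is only that a fixed severity ranking lets developers scan the most urgent feedback first.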
A Multi-Agent Architecture Under the Hood
Behind the scenes, Code Review relies on a multi-agent system. Multiple AI agents examine the codebase simultaneously, each analyzing it from a different angle. A final aggregation agent collects all the findings, removes duplicates, and ranks issues by importance.
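The fan-out-then-aggregate pattern described here can be sketched in a few lines. This is a toy illustration under stated assumptions, not Anthropic's architecture: the specialist "agents" are stub functions, and deduplication keys on the issue text, with the highest-scored duplicate kept.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical specialist reviewers, each examining the diff from one angle.
def logic_agent(diff):
    return [{"issue": "off-by-one in loop bound", "score": 9}]

def security_agent(diff):
    return [{"issue": "unvalidated user input", "score": 8}]

def api_agent(diff):
    # Finds the same bug as logic_agent, with lower confidence: a duplicate.
    return [{"issue": "off-by-one in loop bound", "score": 7}]

def aggregate_review(diff, agents):
    """Run agents in parallel, then dedupe and rank their findings."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda agent: agent(diff), agents)
    best = {}
    for finding in (f for agent_findings in results for f in agent_findings):
        key = finding["issue"]
        if key not in best or finding["score"] > best[key]["score"]:
            best[key] = finding  # keep the highest-scored duplicate
    # Rank remaining findings by importance, highest score first.
    return sorted(best.values(), key=lambda f: -f["score"])
```

Real agents would each be an LLM call over the diff, which is why the parallel step is where the token cost concentrates.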
This parallel approach allows the tool to be thorough without wasting developers' time on repetitive feedback. However, Wu acknowledged that the architecture is resource-intensive. Pricing is token-based, and the cost varies with code complexity, though she estimated each review would run between $15 and $25 on average, positioning it as a premium but necessary investment as AI-generated code volumes continue to grow.
Enterprise-First Rollout
Code Review is launching first for Claude for Teams and Claude for Enterprise customers in a research preview. Wu described the product as specifically aimed at large-scale enterprise users: companies like Uber, Salesforce, and Accenture that already use Claude Code and need help managing the flood of pull requests it produces.
The tool also includes a light security analysis layer, and engineering leads can customize additional checks based on their team's internal coding standards. For deeper security scanning, Anthropic pointed to its recently released Claude Code Security product as a complementary offering.
A Pivotal Moment for Anthropic
The launch comes at a significant time for the company. On the same day, Anthropic filed two lawsuits against the U.S. Department of Defense over a supply chain risk designation. This legal dispute may push the company to lean even more heavily on its fast-growing enterprise business. Claude Code's run-rate revenue has reportedly surpassed $2.5 billion since launch, and enterprise subscriptions have quadrupled since the beginning of 2026.
The Bigger Picture
As AI continues to reshape the software development lifecycle, tools like Code Review represent a logical next step. The same technology that is accelerating code creation must also be deployed to ensure quality and safety at scale. Anthropic is betting that developers who build faster with AI will also need AI to review faster, and that enterprises will pay a premium for that peace of mind.