Code Review for Claude Code
- #AI Agents
- #Software Development
- #Code Review
- Claude Code introduces a new Code Review feature, modeled after Anthropic's internal review system and now in research preview for Team and Enterprise plans.
- The Code Review system uses a team of agents to deeply analyze PRs, catching bugs that human reviewers might miss and raising the share of substantive review comments from 16% to 54%.
- Agents work in parallel to identify, verify, and rank bugs by severity, providing a high-signal overview and in-line comments on PRs.
- Review depth scales with PR size and complexity, with average reviews taking about 20 minutes.
- Internal testing shows that 84% of large PRs (over 1,000 lines) receive findings, averaging 7.5 issues each, while only 31% of small PRs (under 50 lines) do, averaging 0.5 issues.
- A case study highlights a critical bug caught in a one-line change that would have broken authentication, demonstrating the system's value.
- Early-access customers such as TrueNAS report that reviews surfaced latent bugs in code adjacent to the changes being refactored.
- Code Review is more thorough and expensive than lighter solutions, averaging $15–25 per review, with controls for admins to manage spend and usage.
- Available now in research preview for Team and Enterprise plans, with detailed documentation available.
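The identify → verify → rank flow described above can be sketched as a toy multi-agent pipeline. This is a hypothetical illustration, not Anthropic's implementation: the `identify` and `verify` functions below are keyword-matching stubs standing in for model-backed agents, and `Finding` is an invented structure.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    severity: int          # 1 (nit) .. 5 (critical); higher sorts first
    summary: str
    verified: bool = False

def identify(chunk: str) -> list[Finding]:
    # Stub "identifier" agent: scan one diff chunk for candidate issues.
    # A real system would prompt a model here instead of keyword matching.
    findings = []
    if "auth" in chunk:
        findings.append(Finding("auth.py", 5, "possible auth bypass"))
    if "TODO" in chunk:
        findings.append(Finding("util.py", 1, "leftover TODO"))
    return findings

def verify(finding: Finding) -> Finding:
    # Stub "verifier" agent: confirm the candidate is a real bug.
    # Toy heuristic; a real verifier would re-examine the code in context.
    finding.verified = finding.severity >= 2
    return finding

def review(diff_chunks: list[str]) -> list[Finding]:
    # Fan identification and verification out in parallel, then rank
    # confirmed findings by severity for the final high-signal overview.
    with ThreadPoolExecutor() as pool:
        candidates = [f for fs in pool.map(identify, diff_chunks) for f in fs]
        checked = list(pool.map(verify, candidates))
    confirmed = [f for f in checked if f.verified]
    return sorted(confirmed, key=lambda f: f.severity, reverse=True)

report = review(["auth token check removed", "TODO cleanup"])
for f in report:
    print(f"[{f.severity}] {f.file}: {f.summary}")
```

The low-severity TODO candidate is filtered out at the verification step, mirroring the announcement's emphasis on high-signal output over raw finding volume.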