Anthropic: AI agents find $4.6M in blockchain smart contract exploits
- #Smart Contract Exploits
- #Economic Impact
- #AI Cyber Capabilities
- AI models including Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 demonstrated the ability to exploit smart contracts, collectively generating $4.6 million in simulated stolen funds from contracts whose real-world exploits occurred after March 2025, i.e., after the models' training cutoffs, so the attacks could not simply have been memorized.
- A new benchmark, SCONE-bench, evaluates AI agents' ability to exploit smart contracts by measuring the dollar value of simulated stolen funds. It comprises 405 contracts that were actually exploited between 2020 and 2025 (a minimal measurement sketch follows this list).
- AI agents were also tested on 2,849 recently deployed contracts with no known vulnerabilities. Both Sonnet 4.5 and GPT-5 uncovered two novel zero-day vulnerabilities, producing exploits worth $3,694; GPT-5 did so at an API cost of $3,476, putting autonomous zero-day discovery roughly at break-even.
- The exploit revenue achievable by AI models has been doubling roughly every 1.3 months, indicating rapid advancement in AI cyber capabilities, while the cost of identifying and developing new exploits keeps falling, making autonomous exploitation increasingly feasible (see the extrapolation sketch after this list).
- Smart contracts are an ideal testing ground for AI exploitation due to their public nature and direct financial impact. Vulnerabilities in smart contracts can lead to direct theft, allowing for precise measurement of economic harm.
- The study highlights the dual-use nature of AI capabilities, emphasizing the need for proactive adoption of AI for defense to mitigate potential economic harm from autonomous exploitation.
- More broadly, the research suggests that the same capabilities used to find and exploit vulnerabilities can be turned toward finding and patching them, and it urges defenders to update their strategies accordingly.
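
To make the benchmark's measurement concrete, here is a minimal sketch, not Anthropic's actual SCONE-bench harness, of how the dollar value of a simulated exploit can be computed: replay an agent-generated exploit against a fork of the chain and take the attacker's balance delta. The fork URL, attacker address, `run_exploit` callable, and fixed ETH/USD price are illustrative assumptions; a real harness would also track drained ERC-20 balances and use historical prices.

```python
# Minimal sketch (not Anthropic's harness): value a simulated exploit as the
# attacker's balance gain on a forked chain, priced in USD.
# Assumptions (hypothetical): a local mainnet fork at FORK_RPC_URL (e.g. anvil),
# an agent-produced `run_exploit` callable, and a fixed ETH/USD price.
from decimal import Decimal
from web3 import Web3

FORK_RPC_URL = "http://127.0.0.1:8545"  # hypothetical local fork of mainnet state
ATTACKER = "0xf39Fd6e51aad88F6F4ce6aB8827279cffFb92266"  # default anvil test account, used as the attacker
ETH_USD = Decimal("3000")  # assumed price; a real harness would use historical prices

def simulated_exploit_value_usd(w3: Web3, attacker: str, run_exploit) -> Decimal:
    """Return the attacker's ETH balance gain from the exploit, priced in USD.

    Only native ETH is tracked here for simplicity; drained ERC-20 tokens
    would need separate balance queries and price lookups.
    """
    before = w3.eth.get_balance(attacker)
    run_exploit(w3, attacker)  # agent-generated exploit runs only against the fork
    after = w3.eth.get_balance(attacker)
    gained_eth = Decimal(w3.from_wei(after - before, "ether"))
    return gained_eth * ETH_USD

if __name__ == "__main__":
    w3 = Web3(Web3.HTTPProvider(FORK_RPC_URL))
    # `run_exploit` would be supplied by the agent under evaluation; stubbed out here.
    value = simulated_exploit_value_usd(w3, ATTACKER, run_exploit=lambda w3, a: None)
    print(f"Simulated stolen funds: ${value:,.2f}")
```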
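
The reported trend also invites a back-of-the-envelope extrapolation. The sketch below simply applies the stated 1.3-month doubling time to the article's $4.6 million figure as an illustrative baseline; it is a naive projection, not a forecast from the study.

```python
# Back-of-the-envelope sketch of the reported trend: exploit revenue doubling
# roughly every 1.3 months, i.e. revenue(t) = R0 * 2 ** (t / 1.3), t in months.
# R0 = $4.6M is taken from the article purely as an illustrative baseline.
DOUBLING_MONTHS = 1.3
R0 = 4.6e6  # dollars

def projected_revenue(months_ahead: float, r0: float = R0) -> float:
    """Extrapolate the exponential trend `months_ahead` months forward."""
    return r0 * 2 ** (months_ahead / DOUBLING_MONTHS)

for months in (0, 3, 6, 12):
    print(f"t = {months:>2} months: ${projected_revenue(months):,.0f}")
```

If costs hold steady or keep falling as reported while revenue follows anything like this curve, the near break-even economics above ($3,694 in exploit value against $3,476 in API cost) would tip clearly toward profitability within a few doublings.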