AI Agents Autonomously Exploit Smart Contracts, Simulate $550M Crypto Theft: Anthropic Study

Autonomous artificial intelligence agents have demonstrated the ability to exploit smart contract vulnerabilities, potentially leading to cryptocurrency thefts totaling hundreds of millions of dollars, according to new research from Anthropic.

The study reveals that advanced AI models can autonomously identify flaws in blockchain projects and execute simulated attacks. Researchers achieved a 51.11% success rate in tests, resulting in a simulated theft of more than $550 million in cryptocurrency.

Models including Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 were evaluated for their capacity to discover and exploit vulnerabilities. They successfully drained funds in more than half of the simulated scenarios.

In a more concerning test, these AI agents uncovered two previously unknown "zero-day" vulnerabilities among 2,849 recent smart contracts. These exploits generated nearly $3,700 in simulated profits at a minimal operational cost of just $1.22 per execution.
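Some back-of-the-envelope arithmetic shows why these numbers matter. This is purely illustrative, based only on the figures quoted above; it assumes "per execution" means one analysis run per contract, which the study's own accounting may define differently:

```python
# Rough economics of the zero-day scan described above.
# All figures come from the article; attributing one $1.22
# execution per contract is an assumption for illustration.
contracts_scanned = 2_849
cost_per_execution = 1.22   # USD per run, per the article
simulated_profit = 3_700.0  # USD, "nearly $3,700"

total_cost = contracts_scanned * cost_per_execution
net_profit = simulated_profit - total_cost

print(f"Total scan cost:      ${total_cost:,.2f}")
print(f"Net simulated profit: ${net_profit:,.2f}")
```

Under that assumption, scanning all 2,849 contracts costs roughly $3,476, so the simulated haul only barely clears break-even today; the concern is how quickly that balance tips as token costs fall.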

Anthropic’s report states that AI agents not only identify bugs but also generate complete scripts to drain liquidity in real time. The researchers estimate that more than half of the blockchain exploits recorded in 2025, presumably carried out by human attackers, could have been performed autonomously by current AI models.

The research utilized SCONE-bench, a dataset of 405 vulnerable smart contracts exploited between 2020 and 2025. Tests were conducted in simulated blockchain environments using tools like Docker and Foundry.

The rising profitability of these AI-driven attacks is a key concern. Profits are projected to double every 1.3 months in 2025, fueled by a 70% decrease in the cost of AI tokens.
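To see what that doubling rate implies, the projection can be sketched as simple exponential growth. The starting value and horizons here are arbitrary; only the 1.3-month doubling time comes from the report:

```python
# Growth implied by "profits double every 1.3 months".
# Illustrative only; assumes the doubling rate holds constant.
DOUBLING_MONTHS = 1.3

def growth_factor(months: float) -> float:
    """Multiplicative growth after `months`, given the doubling time."""
    return 2 ** (months / DOUBLING_MONTHS)

print(f"Growth over 6 months:  {growth_factor(6):.0f}x")
print(f"Growth over 12 months: {growth_factor(12):.0f}x")
```

At that pace, profitability would grow roughly 25-fold in six months and around 600-fold in a year, which is why even marginally profitable attacks today are treated as a serious forward-looking risk.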

This capability poses a direct threat to decentralized finance (DeFi) projects, which manage billions of dollars across various blockchains. The time available to patch vulnerabilities before exploitation could shrink dramatically.

Anthropic warns that as costs decrease, attackers will deploy more AI agents to probe any code path to valuable assets. This extends the risk beyond blockchain to conventional software and critical infrastructure.

However, Anthropic also highlights AI’s potential for defense. The report urges defenders to adopt AI for security, updating their approach to match the evolving threat landscape.

Recommendations for the industry include auditing contracts against SCONE-bench before deployment, enforcing strict validation rules, monitoring deployed contracts continuously, and collaborating with security communities on rapid patching. Anthropic plans to open-source the dataset to foster global testing.

The study’s limitations include its reliance on simulated environments, meaning no real-world blockchain impact occurred. Despite this, the findings underscore a new era of automated cyberattacks in the cryptocurrency world, demanding a swift and prepared response from developers.
