The digital world is under siege. Attackers are using AI to find holes in software faster than defenders can patch them. To fight back, Anthropic launched a massive defensive project called Project Glasswing on April 7, 2026. This isn’t just a small update. It is a huge shield for the code that keeps our world running.
At the center of this is a new model called Claude Mythos Preview. The model is so capable that Anthropic says it is too dangerous to release publicly. It scored a massive 93.9% on the SWE-bench Verified test, well above the 80.8% posted by the company’s current best public model. Because it is so good at finding vulnerabilities, Anthropic is keeping it locked away for trusted defenders only.
The Star-Studded Defense Coalition
Anthropic isn’t doing this alone. They have teamed up with the biggest names in tech. Companies like AWS, Google, NVIDIA, and Microsoft are all on board. Even security giants like CrowdStrike and Cisco are joining the fight. These partners get early access to Mythos to scan their own systems before the bad guys do.
This initiative is fueled by Anthropic’s massive compute deals that provide the raw power needed for such advanced reasoning. The company is also putting its money where its mouth is. They are giving away $100 million in Mythos usage credits. On top of that, they gave $4 million in cash to open-source groups like the Linux Foundation.
Finding 16-Year-Old Secrets in Code
The power of this AI is already showing. In one test, Mythos found a bug in FFmpeg, a tool used by almost every video app on Earth. The bug had been hiding there for 16 years, and conventional security scanners had missed it the entire time. Mythos found it and showed how to fix it.
This kind of speed is a game-changer. What usually takes humans weeks or months, finding and patching a flaw, can now happen in seconds. According to a report by the Linux Foundation, this gives maintainers the upper hand they have desperately needed. It turns the tide against AI-powered attacks like the recent LiteLLM breach.
Anthropic is also trying to set a new standard. They want other AI companies to be more careful. By restricting access to Mythos, they are showing that “safety first” isn’t just a slogan. As reported by Silicon Republic, this project uses advanced AI specifically to stop AI-driven hacks before they start. It is a proactive wall against the next wave of cyber warfare.
The project is also reaching into the open-source community. More than 40 different groups are getting help to secure the “boring” code that everything else sits on. This is huge because one tiny bug in an open-source library can take down banks or hospitals. According to the Indian Express coverage, this is one of the biggest collaborative efforts in tech history. It’s a lot of people working together to make sure the AI era doesn’t break the internet.
How Claude Mythos Changes the Open-Source Security Paradigm
Project Glasswing marks a sharp turn in how we think about open-source code. For years, the rule was “given enough eyeballs, all bugs are shallow.” But those eyeballs were human. Now, they are AI. This shifts the focus from manual review to automated, intelligent hardening. It effectively ends the era where security relied on attackers simply not noticing a mistake in a mountain of code.
This is a direct response to the “AI-Insecurity” trend. When attackers have Claude-level tools, defenders need something even stronger. By providing the Linux Foundation with $4 million and specialized AI access, Anthropic is making the biggest push for automated defense since the 2016 DARPA Cyber Grand Challenge. The result is a move toward a “secure by default” world where the AI patches the holes before the human programmer even knows they made a mistake. It is the first time a major AI lab has prioritized defensive utility over the profit of a public release.
