
Anthropic Announces Mythic-Level Model: Claude Mythos, Crushing Opus 4.6 in Coding and Hacking, Not Open to Public!
Anthropic announced the launch of the Claude Mythos Preview model, which, owing to its immense capabilities, is restricted to internal use within approved organizations and not open to the public. Part of Project Glasswing, an initiative launched by 12 organizations including Amazon, Apple, and Alphabet, the model has already discovered thousands of high-risk zero-day vulnerabilities. Anthropic plans to develop safety mechanisms on the Claude Opus model to gradually let users safely access equivalent capabilities.
Anthropic today announced a new initiative: Project Glasswing. The trigger is that Anthropic has trained a brand-new, ultra-powerful model, Claude Mythos Preview, which turns out to be the model mentioned in the CC source code leak from a few days ago.

Project participants include Amazon AWS, Apple, Broadcom, Cisco, CrowdStrike, Alphabet, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, Palo Alto Networks, and Anthropic itself: 12 organizations in total jointly launching the project.
Put simply, because the model is so powerful, it will operate in a safety-testing mode, remaining internal to approved organizations and closed to the public. How powerful? Look directly at the data: its coding and reasoning capabilities crush Opus 4.6:
Coding:

Reasoning:

Search and computer use:

"Opus" literally means masterpiece, and "Mythos" literally means myth. Anthropic's CEO and a host of heavyweights from partner organizations have come forward to endorse this plan.
Anthropic explicitly stated that it does not intend to open Claude Mythos Preview to the public. The long-term goal, however, is to let users safely use models at this level of capability. To that end, Anthropic plans to first develop and validate the relevant safety mechanisms on the upcoming Claude Opus model, iterating under controlled risk before gradually expanding access. A new version of Opus offering the corresponding capabilities may be released soon.
Let's take a detailed look at what Project Glasswing actually is.
What has this model discovered?
Over the past few weeks, Anthropic used Claude Mythos Preview to scan the world's mainstream operating systems, browsers, and other critical software.
Results: Thousands of previously undiscovered zero-day vulnerabilities were found, with a large number rated as high-risk.
A few specific cases:
- A 27-year-old vulnerability in OpenBSD. OpenBSD is known for its security and is used to run critical infrastructure such as firewalls. This vulnerability allows an attacker to remotely crash a target machine simply by connecting to it.
- A 16-year-old vulnerability in FFmpeg. FFmpeg is used by countless software applications for video encoding and decoding. The line of code where the model found the vulnerability had previously been scanned 5 million times by automated testing tools without ever being discovered.
- In the Linux kernel, the model autonomously discovered and chained multiple vulnerabilities, allowing an attacker to escalate from ordinary user privileges to full control of the entire machine.
All of the above vulnerabilities have been reported to the relevant software maintainers and are now fully fixed. For the remaining vulnerabilities, Anthropic has first published cryptographic hashes and will disclose the specific details once fixes are in place.
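The publish-the-hash-first approach is a standard commitment scheme: release a cryptographic digest of the vulnerability report today, reveal the full report after the fix ships, and anyone can verify that the revealed report matches what was committed to. A minimal sketch, assuming SHA-256 as the hash (the report text below is hypothetical, and a real scheme would also add a random salt so short reports cannot be brute-forced from the digest):

```python
import hashlib

def commit(report: bytes) -> str:
    """Publish this digest now; it reveals nothing practical about the report."""
    return hashlib.sha256(report).hexdigest()

def verify(report: bytes, published_digest: str) -> bool:
    """After disclosure, anyone can check the report against the digest."""
    return hashlib.sha256(report).hexdigest() == published_digest

# Hypothetical vulnerability report
report = b"CVE-XXXX-YYYY: heap overflow in example_parse(), fixed in v2.1"
digest = commit(report)

assert verify(report, digest)                 # the real report checks out
assert not verify(b"a tampered report", digest)  # any edit breaks the match
```

The digest binds Anthropic to the report's exact contents at publication time without leaking them before the fix is ready.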
Why is this being done?
Anthropic's assessment is that AI models' ability to discover and exploit software vulnerabilities has already surpassed all but a handful of top human experts.
The diffusion of this capability is a matter of when, not if.
Economic losses caused by global cybercrime are estimated at approximately $500 billion annually. Attacks targeting healthcare systems, energy infrastructure, and government agencies have already caused substantial damage and pose a continuous threat to civilian and military infrastructure.
AI significantly lowers the cost, barrier to entry, and level of expertise required to launch such attacks.
Anthropic's logic is: rather than waiting for others to use this capability for offense first, it is better to proactively use it for defense.
How exactly will the plan be executed?
Project Glasswing currently consists of two layers.
The first layer includes the 12 founding partners, who will gain access to Claude Mythos Preview to scan and fix vulnerabilities in their own core systems. Key directions include local vulnerability detection, binary black-box testing, endpoint security, and penetration testing.
The second layer involves more than 40 other organizations that build or maintain critical software infrastructure, who will also receive access to the model to scan their own and open-source systems.
Anthropic has committed to providing up to $100 million in model usage credits for this. After the research preview period, Claude Mythos Preview will be available for commercial access to participants, priced at $25/$125 per million input/output tokens, with access supported via Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.
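At the quoted rates, the cost of a scan is simple arithmetic: tokens in at $25 per million, tokens out at $125 per million. A back-of-the-envelope sketch (the token counts in the example are made up for illustration):

```python
# Quoted Claude Mythos Preview pricing
INPUT_PRICE = 25.0    # USD per 1M input tokens
OUTPUT_PRICE = 125.0  # USD per 1M output tokens

def cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Total API cost for a job, given token counts in each direction."""
    return (input_tokens / 1e6) * INPUT_PRICE + (output_tokens / 1e6) * OUTPUT_PRICE

# Hypothetical scan: 200M tokens of source code in, 8M tokens of findings out.
print(f"${cost_usd(200_000_000, 8_000_000):,.2f}")  # prints "$6,000.00"
```

Against the $100 million credit pool, even large codebase scans at this price point are comfortably covered many times over.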
Additionally, Anthropic is donating $2.5 million to Alpha-Omega and OpenSSF and $1.5 million to the Apache Software Foundation through the Linux Foundation, totaling $4 million to support open-source software maintainers in responding to this new situation. Open-source software maintainers can apply for access through the Claude for Open Source program.
Next Steps
In terms of information sharing, partners will exchange information and best practices as much as possible. Anthropic pledges to publicly release a research progress report within 90 days, covering the number of vulnerabilities found, issues fixed, and improvements that can be disclosed.
In terms of policy recommendations, Anthropic will cooperate with major security agencies to form practical recommendations in the following directions: vulnerability disclosure processes, software update processes, open-source and supply chain security, secure software development lifecycle, regulated industry standards, scalability and automation of vulnerability classification, and patch automation.
AI Cambrian
