The AI model that Anthropic deemed too dangerous to release publicly has reportedly been accessed by unauthorized users. Bloomberg reported Monday that a private group gained access to Mythos — Anthropic's powerful cybersecurity AI tool — through a third-party vendor, undermining the company's carefully controlled release strategy and raising serious questions about whether restricted AI models can truly be kept contained.
How They Got In
The unauthorized group, whose members have not been publicly identified, reportedly gained access to Mythos on the same day it was publicly announced. According to Bloomberg, the group made an educated guess about the model's online location based on the naming format Anthropic has used for its other models, a surprisingly low-tech way into what was supposed to be one of the most restricted AI tools in the world.
The group also exploited the access of an employee at a third-party contractor that works for Anthropic. Its members belong to a Discord channel devoted to tracking down information about unreleased AI models, and they provided Bloomberg with screenshots and a live demonstration of the software as evidence.
Anthropic confirmed it is investigating the report. A spokesperson told reporters the company is looking into claims of unauthorized access through one of its third-party vendor environments, but said it has found no evidence that the activity affected Anthropic's own systems.
The Irony of Restricted Release
Anthropic's entire justification for withholding Mythos from the public was that its cybersecurity capabilities were too powerful and too dangerous in the wrong hands. The company released the model to a select group of major corporations — including Apple — under an initiative called Project Glasswing. The limited release was specifically designed to prevent bad actors from weaponizing the tool.
The fact that unauthorized users accessed the model almost immediately after its announcement exposes a fundamental tension in Anthropic's restricted release strategy. Critics had already questioned whether limiting access to enterprise customers was genuinely about safety or primarily about creating premium contracts. The breach adds a third concern: whether restricted distribution is even technically enforceable.
Not Malicious — But That Misses the Point
Bloomberg reported that the group is interested in experimenting with new models rather than causing harm. They have been using Mythos regularly since gaining access, apparently treating it as a research curiosity rather than a weapon.
But the intent of this specific group is almost irrelevant to the larger issue. If a Discord community can guess the model's location and access it through a contractor's credentials within hours of its announcement, the security posture around Mythos is far weaker than Anthropic's public messaging suggested.
The incident echoes the Axios supply chain attack that hit OpenAI earlier this month and the Vercel breach that originated through a third-party AI tool. In each case, the weak link was not the primary company's infrastructure but a vendor or contractor that provided an entry point.
What This Means for Anthropic
The unauthorized access could create significant complications for Anthropic at a critical moment. The company recently briefed the Trump administration on Mythos and has been working to repair its relationship with Washington following the Pentagon's supply-chain risk designation. A security failure around the very model it presented as requiring government-level secrecy could undermine that effort.
It also complicates Anthropic's enterprise sales pitch. The company has warned that the Pentagon designation could cost it billions in lost enterprise revenue, with over 100 customers expressing concerns. If Mythos — the crown jewel of Anthropic's security portfolio — cannot be kept out of unauthorized hands, enterprise customers may question the company's ability to protect their own data and deployments.
The Bigger Question
The Mythos breach raises a question that extends well beyond Anthropic: can any AI model truly be restricted once it exists? The history of software security suggests that controlled distribution is difficult to maintain indefinitely. Models get leaked, APIs get discovered, credentials get compromised, and determined communities find ways in.
Anthropic's co-founder Jack Clark had previously warned that competitors — including Chinese firms — would likely develop comparable cybersecurity capabilities within six to twelve months. The unauthorized access to Mythos suggests the timeline for containment failure may be even shorter than that.
For now, Anthropic says its investigation is ongoing and its own systems remain uncompromised. But the incident serves as a reminder that in the AI era, the gap between "restricted" and "available" may be measured in hours rather than months.