Anthropic’s new AI model, Claude Mythos, is reportedly so effective at finding security flaws that experts say it is too dangerous to release to the public. It has already uncovered thousands of previously undetected bugs in major operating systems and web browsers, including one in OpenBSD that had gone unnoticed for 27 years.

What Mythos Can Do
Anthropic gave early access only to about 40 large tech companies, such as Apple, Amazon, and Microsoft, so they could use it to scan their systems for weaknesses.

Mozilla used Mythos to find and fix 271 security problems in the Firefox browser.

A former top U.S. cyber official warned that Mythos “can hack nearly anything” and that the country is not ready for this kind of threat.

Security Breach and Global Warnings
A report says a small private Discord group got into Mythos without permission through a third-party contractor’s access, and Anthropic is investigating.

Some security experts argue that if an informal Discord group could get in, foreign governments may already have access too.

South Korea’s intelligence agency warned that new AI models like Mythos can find security holes, build attack chains, and carry out hacks on their own, and it listed “AI‑powered hacking” as a top cyber risk for 2026.

How Leaders Are Reacting
The World Economic Forum said Anthropic is limiting Mythos mainly for safety reasons, not just business reasons.

Senior U.S. financial officials met with Wall Street CEOs to discuss the cyber dangers posed by Mythos and similar models.

OpenAI is developing a similarly powerful security model. Its CEO dismissed Anthropic’s warnings as “fear‑based marketing,” but many security professionals counter that defenders must adopt these tools too, or they will fall behind attackers who do.