
Claude Mythos Just Made Frontier AI a Systemic Financial Risk
I've been watching the Claude Mythos story unfold for a week now, and the thing that keeps nagging at me isn't the model itself. It's the reaction.
When the Bank of England, the Financial Conduct Authority, and the Treasury start holding emergency calls with the National Cyber Security Centre about an AI model, something has shifted. When the US Treasury Secretary and Fed Chair pull bank CEOs into a room to warn them about a single company's product, we're in new territory.
This is not a model launch story. This is the moment frontier AI became a financial stability question.

What actually happened
Let me piece together the timeline, because it matters.
On April 7, Anthropic announced Project Glasswing. The pitch: give select organizations early access to Claude Mythos Preview so they can use it for defensive cybersecurity work. Partners include Amazon, Microsoft, Apple, CrowdStrike, Palo Alto Networks, Google, and Nvidia. Anthropic committed up to $100 million in usage credits and $4 million in donations to open-source security groups.
The reason for the controlled rollout? Mythos had already identified thousands of major vulnerabilities across operating systems, web browsers, and other widely used software. Anthropic said it would extend access to about 40 additional organizations responsible for critical software infrastructure.
Two days later, on April 9, Reuters reported that Treasury Secretary Scott Bessent and Fed Chair Jerome Powell convened an urgent meeting with bank CEOs. The subject: cyber risks from the same model. Anthropic had been in ongoing discussions with US government officials about Mythos's offensive and defensive capabilities. The model can identify and exploit weaknesses across every major operating system and every major web browser, according to Anthropic.
Then on April 12, UK regulators entered the picture. The Bank of England, the FCA, and Treasury officials started urgent talks with the NCSC and are expected to brief major British banks, insurers, and exchanges within the next two weeks.

Why banks care about this
If you're a founder or executive reading this and thinking "this is a cybersecurity story, not a finance story," you're missing the point.
Banks run on software. Every major bank runs on layers of legacy systems, patched together over decades, sitting behind firewalls and access controls that were designed for a world where finding zero-days required significant human expertise and time. A model that can systematically find and exploit vulnerabilities across every major operating system and browser doesn't just threaten one bank. It threatens the infrastructure that all of them share.
That's the word the regulators keep using: systemic. This is the same framing they use for contagion risk in financial crises. When the Bank of England starts treating an AI model the way it treats a potential market crash, pay attention.
The concern isn't that Anthropic will attack banks. The concern is that the capabilities exist, the model exists, and the question of who else builds something similar (or steals it) is now a matter of when, not if. Canada's Finance Ministry and the Bank of Canada also held discussions with bank executives about the same topic. This is a coordinated, multi-country response. That doesn't happen for press releases.

The Glasswing calculation
Here's where it gets interesting, and where I think Anthropic is making a genuinely difficult bet.
Project Glasswing is Anthropic's answer to the obvious question: if your model can find thousands of critical vulnerabilities, what do you do with it? Their answer is controlled disclosure. Give the model to the companies that own the vulnerable software, let them patch before anything goes public, and commit serious money to make it happen.
It's a reasonable strategy. Maybe the only responsible one, given what the model can apparently do. But it also creates a strange dynamic. The launch partners and roughly 40 additional organizations now have access to a tool that the rest of the world doesn't. Those companies include Microsoft and Google. The competitive advantage of knowing about vulnerabilities before anyone else is enormous, even if the stated purpose is purely defensive.
Anthropic says its eventual goal is for users to safely deploy Mythos-class models at scale. That "eventual" is doing a lot of work. Right now, they've built something too dangerous for broad release, and they're managing access through a combination of partnerships, government coordination, and what amounts to a gentleman's agreement about responsible use.

What this tells us about the next twelve months
I keep coming back to the speed of the government response. The Glasswing announcement was April 7. The Bessent-Powell meeting with bank CEOs came on April 9. Reuters reported the UK regulator story on April 12. That's five days from product announcement to the central bank of a G7 country treating it as a threat to financial stability.
Compare that to the speed at which governments responded to ChatGPT in early 2023. It took months for anyone official to say anything meaningful. The gap between those two responses tells you how much has changed.
A few things I think are worth watching:
The regulatory response will set precedent. How the Bank of England and the FCA decide to handle this will probably become a template for how other countries regulate frontier AI capabilities. If they require banks to treat Mythos-class models as a systemic risk factor in their stress testing, that changes the economics of both AI development and financial technology.
The "40 organizations" list matters more than people realize. Who gets early access to Mythos is a power question, not a technical one. Anthropic is essentially deciding which companies get a head start on patching (and on understanding what offensive AI capabilities look like). That's a lot of influence for a private company to hold, even one acting in good faith.
And the limited release model itself is the real story. Anthropic stopped short of a broad release because of the risk. That decision, more than any benchmark or capability demo, tells you where we are. We've built models that their own creators think are too risky to ship normally. The question for the industry now is whether "controlled access plus government coordination" is a model that scales, or whether it's a temporary fix that falls apart the moment a competitor decides to skip the caution.

The bottom line
Frontier AI just crossed a line that it can't uncross. Financial regulators treat it as a systemic risk now. Not a hypothetical, not a "what if" scenario from a think tank report. An actual, present-tense risk that requires emergency coordination between central banks, cybersecurity agencies, and the private sector.
For founders building in this space: your compliance landscape just changed. For executives at large companies: your board should be asking about this. For everyone else: the conversation about AI safety just stopped being abstract.
The Bank of England doesn't hold emergency meetings about things that might matter someday. They hold them about things that matter right now.