7–17 April 2026 — Artificial-intelligence developers including Anthropic and OpenAI have begun restricting access to their most advanced systems after Anthropic confirmed on 7 April that its new “Mythos” model would be released only to a limited group of organisations under “Project Glasswing”. The move prompted urgent discussions among finance ministers, central banks and major lenders at International Monetary Fund meetings in Washington, amid growing concern that the technology could expose critical vulnerabilities across global financial and digital infrastructure, The WP Times reports.
The model is being tested under controlled conditions by selected partners, including JPMorgan Chase. Access has been deliberately limited ahead of any wider rollout while regulators and industry leaders assess the risks linked to its reported ability to identify weaknesses in core systems, from banking networks to operating systems and web browsers.
Mythos AI risks: why governments and banks are responding now
Senior policymakers say the level of concern escalated rapidly following early briefings. François-Philippe Champagne, Canada’s finance minister, confirmed the issue was discussed extensively during IMF meetings: “Certainly it is serious enough to warrant the attention of all the finance ministers… the issue that we’re facing with Anthropic is that it’s an unknown, unknown” (IMF meetings, Washington DC, April 2026).
Anthropic has stated that Mythos has already identified “thousands of high-severity vulnerabilities”, including flaws that had remained undetected despite years of human review and automated testing. The system is described as capable of analysing complex codebases and generating exploit pathways, marking a shift in how cybersecurity risks can be discovered and potentially weaponised. Banking leaders have warned that the implications are immediate. C.S. Venkatakrishnan, chief executive of Barclays, said: “It’s serious enough that people have to worry. We have to understand it better, and we have to understand the vulnerabilities that are being exposed and fix them quickly” (BBC interview, April 2026).
Central banks are also examining the potential impact on financial stability. Andrew Bailey, governor of the Bank of England, said: “We are having to look very carefully now what this latest AI development could mean for the risk of cyber crime… bad actors could seek to exploit them” (BBC, April 2026). The US Treasury has confirmed it has raised the issue directly with major banks, encouraging them to test resilience ahead of any broader release. The coordinated response reflects concern that once such tools become widely available, the barrier to sophisticated cyberattacks could fall significantly.

Project Glasswing and AI access limits: how Anthropic controls Mythos rollout
Anthropic has positioned Project Glasswing as a defensive collaboration aimed at securing critical infrastructure. The initiative includes major technology and infrastructure partners such as Amazon Web Services, Microsoft, Google and NVIDIA, alongside more than 40 organisations responsible for maintaining essential software systems. Participants are using early access to scan for vulnerabilities and strengthen defences before similar capabilities become more widely available. Anthropic has committed up to $100m in usage credits and additional funding for open-source security initiatives as part of the programme.

The restricted rollout has also intensified competition. Financial institutions not included in the initial group have reportedly sought ways to secure access, highlighting the commercial value of advanced AI systems. Exclusivity has become part of the strategy, creating a divide between organisations with early defensive capabilities and those without.
Industry analysts note that this approach is increasingly common. Companies including OpenAI are limiting access to their most powerful models through staged releases and enterprise partnerships, balancing safety concerns with commercial incentives. The broader context is a rapid acceleration in AI capability. Models are now able to analyse and interpret code at scale, identifying vulnerabilities that have persisted for years. Some flaws identified by Mythos had survived decades of review, suggesting existing security processes may be insufficient against AI-driven analysis.
The potential impact extends beyond finance to sectors including healthcare, energy, transport and government systems. Global cybercrime is already estimated at around $500bn annually, and more advanced AI tools could increase both the frequency and effectiveness of attacks.
At the same time, policymakers emphasise that the technology could strengthen defences if deployed responsibly. The same capabilities that expose vulnerabilities can also be used to fix them, making controlled access central to current regulatory thinking. Industry sources indicate that another major US AI developer may release a similarly powerful model without equivalent safeguards, raising concerns about uneven standards across the sector. For now, access to Mythos remains tightly controlled, with testing under way across selected institutions. But developments between 7 April and the IMF meetings that followed point to a wider shift: control over access to advanced AI models is becoming a defining issue for both global cybersecurity and competition in the technology sector.