- US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with major bank CEOs to address cybersecurity risks linked to a new AI model by Anthropic.
- The upcoming Claude Mythos model has identified thousands of software vulnerabilities—some over 25 years old—raising concerns about its potential impact on cybersecurity and financial stability.
- Anthropic has restricted access to the model, granting it only to select companies such as Amazon, Apple, and Microsoft, amid fears the model could be misused for cyberattacks.
US Treasury Secretary Scott Bessent has convened a meeting with top executives from major American banks in Washington to address growing concerns over cybersecurity risks linked to a new artificial intelligence model developed by Anthropic.
The meeting, which was also attended by Federal Reserve Chair Jerome Powell, comes in response to reports about the capabilities of Anthropic’s upcoming model, “Claude Mythos.” The model, which has not yet been publicly released, is said to represent a significant leap in identifying and exploiting software vulnerabilities.
According to the company, the model has uncovered thousands of previously undetected vulnerabilities across widely used systems and applications, some dating back more than 25 years. This has raised serious concerns about potential implications for cybersecurity, financial stability, and national security.
Attendees included several CEOs of systemically important US banks, among them David Solomon of Goldman Sachs, Brian Moynihan of Bank of America, Jane Fraser of Citigroup, Ted Pick of Morgan Stanley, and Charlie Scharf of Wells Fargo. JPMorgan Chase CEO Jamie Dimon was invited but did not attend.
In his annual letter to shareholders, Dimon emphasized that cybersecurity remains one of the most critical risks facing the banking sector, warning that advancements in AI are likely to intensify these challenges.
In an unusual move, Anthropic has restricted access to its new model, making it available only to a limited group of organizations, including Amazon, Apple, and Microsoft, along with selected technology partners.
These developments come amid increasing concerns that advanced AI tools could be misused for cyberattacks, including breaking encryption systems or compromising sensitive data—posing a growing threat to global digital infrastructure.