Anthropic launches Project Glasswing AI cyber coalition with industry partners
Anthropic has launched Project Glasswing, a cross-industry cybersecurity initiative built around its new frontier AI model, Claude Mythos Preview. The project brings together major technology, cloud, financial and security firms to help protect critical software.
The coalition includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, The Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Participants plan to use Anthropic's unreleased Claude Mythos Preview model to find software vulnerabilities across what they describe as some of the world's most critical infrastructure.
Anthropic describes Mythos Preview as a general-purpose AI system that can already outperform most human specialists at finding and exploiting software flaws. It has positioned Project Glasswing as both a response to that leap in capability and a testbed for using such models in coordinated cyber defence.
CrowdStrike is one of the initiative's founding security members. Ahead of the launch, it assessed the security implications of Mythos Preview and is contributing data from its Falcon platform. CrowdStrike says Falcon collects a trillion endpoint events a day, tracks more than 280 adversary groups and provides visibility into more than 1,800 AI applications already discovered in customer environments.
CrowdStrike's involvement reflects a broader shift in how enterprises view frontier AI. Vendors now describe large models as a new layer of infrastructure that touches endpoints, data, and operational workflows, as AI agents automate tasks across software development, IT, and business processes.
In CrowdStrike's view, this new class of models expands the attack surface but also gives defenders analytical tools that did not exist a year ago. It links frontier AI to vulnerability discovery, threat detection, and incident response, as both attackers and defenders increasingly rely on AI-driven automation.
Elia Zaitsev, Chief Technology Officer, CrowdStrike, said, "The window between a vulnerability being discovered and being exploited by an adversary has collapsed: what once took months now happens in minutes with AI. Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities. That is not a reason to slow down; it's a reason to move together, faster. If you want to deploy AI, you need security. That is why CrowdStrike is part of this effort from day one."
George Kurtz, CrowdStrike's Founder and Chief Executive Officer, connected Anthropic's work on Mythos to existing commercial AI products. He said Anthropic's Claude Code developer tool and its OpenClaw automation system show how deeply these systems are beginning to operate at the endpoint, where business users access data, make decisions and create new risks for organisations.
"The more capable AI becomes, the more security it needs. That's why Anthropic chose CrowdStrike as a founding member of their security coalition for Claude Mythos Preview. A technical partnership. Falcon secures AI where it executes. AI is creating the largest security demand driver since enterprises moved to the cloud. Claude Code is changing how people use computers. OpenClaw is set to reshape how enterprises automate. Mythos may be the most capable frontier model yet. It won't be the last. All of these AI innovations meet enterprises at the endpoint. That's where they access data, make decisions, and also create risk," said Kurtz.
Several security specialists outside the core launch group described Project Glasswing as an important but early step in a much broader AI-driven realignment of cybersecurity. They also warned that the same techniques now being tested defensively are already being used by threat actors.
DigiCert, which focuses on digital trust and certificate management, pointed out how fast-moving AI systems are changing long-held assumptions about software integrity and assurance.
Paul Holt, EMEA Group Vice President, DigiCert, said, "It's encouraging to see such a broad coalition of industry leaders coming together to address increasingly sophisticated software threats. Initiatives like Project Glasswing reflect a growing recognition that safeguarding the digital ecosystem is a shared responsibility. However, this also underscores how quickly the trust landscape is shifting. AI is not just accelerating how vulnerabilities are discovered, it's challenging our assumptions about the integrity of the systems we rely on every day. Responding faster is important, but trust can't be built on reaction alone."
"The real opportunity here is to move beyond identifying and fixing flaws, towards establishing trust in software from the outset by ensuring systems are verifiable, resilient, and secure by design throughout their lifecycle. Sure, collaboration at this scale is a positive step forward, but the next step is about making trust a foundational principle, not just an outcome. In today's environment, resilience depends not only on how quickly you respond, but on how confidently you can trust what's already in place."
Others highlighted the geopolitical context. Governments in multiple regions have invested in offensive and defensive AI for cyber operations, often outside public view, while regulators are moving ahead with new rules governing and securing AI models and applications.
David Warburton, Director of F5 Labs Threat Research, said, "There's no doubt that collaboration between private companies and major technology providers on initiatives like Project Glasswing is a positive development. However, many nation states are already investing heavily in both defensive and offensive cyber capabilities, often beyond public visibility."
"What is changing meaningfully is the pace. Advances in AI are accelerating both vulnerability discovery and exploitation, while most organisations are still struggling to keep up with the growing volume of known risks. The longer-term concern isn't a single catastrophic event, but a gradual erosion of trust in digital systems."
"We're already seeing early signs of an internet increasingly shaped by automation, where content is generated, interactions are driven, and attacks are carried out by bots. Without stronger resilience, that trajectory risks undermining human trust and usability on the web," said Warburton.
Speed is a recurring theme in early reactions to Mythos and Project Glasswing. Security vendors describe a shift from AI models that identify isolated bugs in code to agents that can chain weaknesses together into full attack paths, doing so at a pace that exposes long-standing problems in enterprise remediation.
Julian Totzek-Hallhuber, Senior Solutions Architect, Veracode, said, "What's really striking here is the pace. Project Glasswing is about connecting vulnerabilities into far more complex attack paths in a fraction of the time it used to take. In some cases, that's already surfacing issues that have been missed for years, which shows how quickly risk can build. Our own research recently revealed it takes organisations more than five months on average to fix vulnerabilities, so the ability to uncover and potentially exploit those at speed could significantly shift the risk landscape."
"But most organisations can't actually use this yet, as access is restricted to a curated set of launch partners. So, while the results are impressive, they are hard to test or validate in real environments. There are also early signals that shouldn't be overlooked, including reports of the model stepping outside its expected boundaries, like attempting to communicate externally without authorisation."
"Crucially, this doesn't rewrite what a good application security programme looks like, as it only addresses vulnerability discovery. Teams still need the governance, process and expertise to fix things properly and reduce risk over time. What it does change is the pace and the pressure. As these capabilities become more widely available, both attackers and defenders will be working with much more powerful tools, and organisations need to be thinking about that now," said Totzek-Hallhuber.