SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers

OpenAI broadens AI cyber tools as arms race heats up

Thu, 16th Apr 2026

OpenAI is making advanced AI cybersecurity tools available to verified users, in contrast to Anthropic's more limited release of similar technology.

The different approaches have intensified debate in the security industry over whether broader access to AI-driven vulnerability discovery will help defenders move faster or give attackers new ways to find software flaws. The issue comes as ransomware groups continue to put heavy pressure on US organisations.

Researchers at Check Point found that around 40% of ransomware attacks linked to the United States in March 2026 were tied to three groups: Qilin at 20%, Akira at 12% and DragonForce at 8%. Those figures add to concern that tools designed to identify software weaknesses could further intensify the race between security teams and criminal gangs.

Roger Grimes, CISO Advisor at KnowBe4, said the industry is already seeing a marked shift in how vulnerabilities are discovered and addressed as AI systems become more widely used.

"There is no doubt that AI is significantly and permanently changing vulnerability discovery and patch management. The time needed to find and mitigate exploitable vulnerabilities has been decreasing throughout the cybersecurity industry's history.

AI has absolutely accelerated the vulnerability discovery and response cycle. It has also, at least temporarily, increased the discovery of zero-day vulnerabilities. All of this means cyber defenders need to use the same AI to identify and resolve vulnerabilities faster and beat criminals to the punch.

We do not yet know whether AI is truly saving humans time, resources and money, because we do not know the rate of false positives. In the vulnerability space, a false positive is when a tool says it has found an exploitable vulnerability and it has not. Previously released findings involving vulnerability-hunting AI had false-positive rates of around 95%, which means that for every 20 vulnerabilities the AI said it found, only one was verified. The other 19 consumed additional time and effort, mostly human time and effort, only to be disproved.
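The arithmetic behind the 95% figure quoted above can be sketched in a few lines. The numbers here are illustrative only, taken from the rates Grimes cites, not from any published dataset:

```python
def triage_breakdown(reported: int, false_positive_rate: float) -> tuple[float, float]:
    """Split a batch of AI-reported vulnerabilities into verified and false findings.

    Assumes every report must be manually checked; the rate itself is an input,
    not something this sketch measures.
    """
    verified = reported * (1 - false_positive_rate)
    false_hits = reported * false_positive_rate
    return verified, false_hits

# At a 95% false-positive rate, a batch of 20 reports yields roughly
# 1 verified vulnerability and 19 false leads that still had to be triaged.
verified, false_hits = triage_breakdown(20, 0.95)
print(round(verified), round(false_hits))
```

The point of the sketch is that precision, not raw volume, determines how much of the triage burden lands on human analysts.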

Anthropic has yet to share its false-positive rate. It found thousands of verified vulnerabilities, and that is a good thing. But how many claimed vulnerabilities were not really vulnerabilities? If it is anything close to 95%, then it might not be a cost-effective expenditure. That said, false-positive rates will come down, likely to near zero over time, and probably quickly. But we do not know what the false-positive rate is right now. If it is 95% or close to that, fewer people will use it. If it is far lower, it will be used by millions. The devil is in the details."

False Positives

His comments point to a central issue in the emerging market for AI-led security tools. Speed in finding bugs matters, but so does precision, because security teams still need to verify whether a reported weakness is real, exploitable and urgent enough to patch immediately.

That trade-off is likely to shape adoption. If systems can surface large numbers of real vulnerabilities with a low error rate, companies may broaden access quickly. If they continue to produce large volumes of inaccurate alerts, security teams could struggle to justify the extra workload needed to investigate each claim.

Anthropic's approach has drawn attention because its Mythos AI cybersecurity tools were considered powerful enough to warrant restricted access for a select group of companies. OpenAI has taken the opposite route, aiming to make similar tools available to thousands of verified users and hundreds of security teams.

Arms Race

The gap between those strategies reflects a broader divide in the industry over how tightly such systems should be controlled. One side argues that limiting access reduces the chance that malicious actors can misuse AI to uncover zero-day flaws before vendors can respond. The other argues that defenders need broad access so they can match or outpace attackers already experimenting with the same methods.

Security specialists increasingly describe the contest as an AI arms race, with both defenders and criminals seeking faster ways to identify weaknesses in code, test exploitability and prioritise targets. In that environment, the release model chosen by major AI companies could influence who gains the advantage first.

For businesses, the immediate question is less whether AI will alter vulnerability management and more how quickly reliable tools can be integrated into existing security operations. Even where AI helps reduce discovery times, companies still need patching processes, verification steps and staff able to triage findings under pressure.

The economics are also likely to matter. A tool that identifies thousands of valid vulnerabilities could still be hard to justify if it also generates a much larger stream of false leads that consume analyst time. Grimes said uncertainty over error rates remains one of the most important unanswered questions as the technology moves into wider use.

"If it is far lower, it will be used by millions. The devil is in the details," Grimes said.