SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers

CISOs warn AI adoption outpaces ability to secure it

Thu, 19th Feb 2026

Security leaders at large US organisations say artificial intelligence is spreading through their environments faster than they can map and secure it, according to a new survey of chief information security officers and senior security executives.

The study, commissioned by Pentera, surveyed 300 CISOs and security leaders in North America. It suggests the biggest barriers are visibility, skills, and tooling rather than a lack of funding.

Two-thirds of respondents said they have limited visibility into how AI is being used across their environment. Meanwhile, 44% said their AI security posture is already lagging behind the rest of their security programme.

The results reflect how AI is being adopted within existing IT estates rather than deployed as a clean, standalone system. Many organisations are embedding AI across identity systems, cloud platforms, applications, and data stores, increasing the number of touchpoints security teams must monitor and test.

Visibility gaps

Limited understanding of where AI is deployed emerged as a central theme. Respondents pointed to fragmentation across systems and inconsistent ways of validating security controls, problems that predate AI but become more acute when AI features are added to workflows.

The most common challenge was a lack of internal expertise, cited by 50% of respondents, followed by limited visibility into AI usage at 48% and insufficient AI-specific security tools at 36%.

These findings suggest a mismatch between the pace of adoption and security teams' ability to build new processes. AI introduces new interfaces for data access and automation, and it changes how decisions are made inside applications. Together, these shifts can expand the range of failure modes and attack paths security teams must consider.

Legacy controls

Most CISOs said they are relying on existing security controls rather than deploying AI-specific systems. In the survey, 75% reported extending controls originally designed for other attack surfaces to cover AI-driven workflows and infrastructure.

Only 11% reported having tools specifically designed to protect AI systems. This gap suggests many security programmes still treat AI as an extension of application, cloud, and identity security rather than a distinct area requiring dedicated safeguards.

That approach may reduce short-term disruption, but it can create blind spots. Traditional controls often focus on known assets and predictable interfaces. AI features can introduce new data flows, dependencies, and interaction patterns, requiring new methods to validate how controls behave when AI is involved in a workflow.

Budget signals

The survey suggests AI security is being funded, but rarely ringfenced. Some 78% of organisations said they fund AI security through existing security budgets, while only 1% reported having a dedicated AI security budget.

At the same time, 21% said they plan to introduce a dedicated AI security budget, suggesting some organisations expect AI to become a separate line item as programmes mature and accountability for AI risks becomes clearer.

The findings also indicate that spending alone is not the decisive factor at this stage. A lack of internal expertise and limited visibility both ranked higher than tooling gaps, pointing to organisational and operational constraints such as unclear ownership of AI deployments, inconsistent documentation, and limited ability to test AI-related changes in environments that mirror production conditions.

Consolidation debates

AI is also shaping discussions about rationalising security tools, though it has not yet driven widespread consolidation. In the survey, 58% of CISOs said AI is influencing their security-stack consolidation strategy.

Despite that, only 3% said they are actively consolidating because of AI. Another 11% said they are consolidating for reasons unrelated to AI.

The gap between intent and action may reflect ongoing efforts to determine which security functions AI should change first. It may also reflect procurement cycles and the difficulty of replacing established tools while teams are still learning what AI introduces in practice.

One respondent noted that AI is affecting multiple layers of the enterprise simultaneously, making it harder to define the problem's boundaries.

"AI represents a fundamental shift because it touches every part of the enterprise. It's changing how data and systems interact, expanding organizational exposure beyond what most security programs have fully mapped," said Pentera chief executive Amitai Ratzon.

Pentera focuses on security validation and adversarial testing. It argues that attacker-perspective testing can help organisations validate controls and prioritise remediation according to business impact as AI becomes more embedded across systems.