SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers

Strong AI governance drives faster secure adoption

Thu, 25th Dec 2025

Organisations with mature governance structures report stronger progress on artificial intelligence deployment and security than peers with less developed oversight, according to new global research from the Cloud Security Alliance and Google Cloud.

The study links comprehensive AI policies with faster adoption of advanced systems, greater experimentation with AI in security operations and higher levels of staff training. It also highlights a widening gap between executive enthusiasm for AI and confidence that organisations can manage the associated risks.

The State of AI Security and Governance Survey Report draws on responses from 300 IT and security professionals. Respondents work in organisations of different sizes and across multiple regions.

Governance as differentiator

The survey finds that governance maturity is a key factor in how quickly organisations move beyond pilots and experiments. Organisations with comprehensive AI policies are nearly twice as likely to report early adoption of agentic AI, at 46 per cent. Only 25 per cent of those with partial guidelines report early use of agentic AI. Among organisations where policies are still in development, the figure falls to 12 per cent.

The pattern is similar in security experimentation. Seventy per cent of organisations with comprehensive governance say they have tested AI in security use cases. That compares with 43 per cent among those with partial governance and 39 per cent among organisations still developing policies.

Training and awareness also correlate with governance maturity. Among organisations whose boards are fully aware of AI security implications, 55 per cent have comprehensive governance policies in place. The report says 65 per cent of organisations with comprehensive AI governance already train staff on AI tools.

Hillary Baron, Senior Technical Research Director at Cloud Security Alliance, said the latest findings point to a transition in how organisations use AI.

"This year's survey confirms that organizations are shifting from experimentation to meaningful operational use. What's most notable throughout this process is the heightened awareness that now accompanies the pace of deployment. Even as organizations continue to grapple with foundational challenges in risk understanding, data protection, staffing, and policy, there are encouraging signs in the progress they're making," Baron said.

Security out in front

The survey reports that security teams are among the earliest adopters of AI inside organisations. More than 90 per cent of security functions are exploring AI for detection, investigation or response processes. Nearly half, at 48 per cent, say they have already tested AI tools within security operations. A further 44 per cent plan to test AI in security within the next year.

Security teams are also taking on formal responsibility for AI protection. In a majority of organisations, at 53 per cent, security teams lead work on AI safeguards. The report states that this positions security as a central part of responsible AI implementation efforts.

Dr Anton Chuvakin, Security Advisor at the Office of the CISO at Google Cloud, said governance quality now marks out more advanced adopters.

"As organizations shift from experimentation to full operational deployment, strong security practices and mature governance are emerging as the critical differentiators for successful AI adoption," said Chuvakin.

Concentration on big models

The research indicates that organisations are increasingly using more than one AI system. Respondents often combine multiple models in a single organisation. Despite this, a small group of large providers dominates current deployments.

Seventy per cent of respondents use GPT, 48 per cent use Gemini, 29 per cent use Claude and 20 per cent use Llama. The report notes that this concentration suggests growing operational maturity, but it also points to industry concerns about resilience, interoperability and vendor lock-in when use clusters around a limited number of model families.

Confidence gap

Senior executives show strong interest in AI, according to the survey. Seventy per cent of respondents say their leadership teams are moderately to fully aware of AI's security implications, but organisational confidence in secure implementation lags behind that awareness.

A majority, at 73 per cent, describe themselves as neutral or lacking confidence in their organisation's ability to execute an AI security strategy. The report links part of this gap to constraints in staffing and in-house expertise, along with evolving threat models.

New risks underweighted

Many organisations continue to treat AI security as an extension of established privacy, data protection and governance controls. The survey finds that 52 per cent of respondents cite data exposure as their top AI security concern. AI-specific risks attract far lower attention.

Only 16 per cent of respondents list regulatory compliance issues stemming from AI as a primary concern. Twelve per cent prioritise model integrity compromise. Ten per cent rank data poisoning risks highest. The report suggests that this pattern reveals a gap between traditional data protection priorities and wider AI safety governance.

Google commissioned the Cloud Security Alliance to design the survey and report. Google financed the project and co-developed the questionnaire with analysts from the association. Cloud Security Alliance researchers conducted the online survey and carried out the data analysis.

The organisations state that they will continue to track AI security and governance trends as deployments expand and regulatory scrutiny increases.
