SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers
Flux result b75b6c52 41bc 48f4 a421 ca157511836e

NSS Labs backs AI guardrail tests amid security fears

Wed, 25th Mar 2026

NSS Labs has published a white paper on testing AI security guardrails, with support from F5, AWS and Microsoft. The paper argues that many enterprise AI systems fail basic adversarial guardrail tests.

It focuses on runtime security testing rather than the underlying model alone, arguing that businesses need ways to assess how guardrails perform under real-world conditions. It sets out a framework for independent validation, adversarial testing and governance checks for organisations using AI in complex operational settings.

The central concern is that AI systems are creating new attack surfaces even as companies embed them more deeply in day-to-day operations. The paper highlights risks including data exfiltration, prompt injection and broader operational failures, and argues that buyers need a clearer way to assess whether security controls work as claimed.

Testing Framework

The main recommendations cover three areas: input threat detection, controls to manage data leaving systems through outputs, and safeguards for agentic AI. The framework also calls for validation testing that is independent, adversarial and grounded in real-world use, alongside governance measures to help businesses manage failures, stress and degraded system performance.

The report presents these steps as a way to bring more structure to a market where security claims can be hard to verify. That matters as boards and regulators pay closer attention to the safety, accountability and resilience of AI deployments across large organisations.

NSS Labs said the work was intended to offer a repeatable approach for enterprises assessing AI security products and internal controls. It framed the issue as one of evidence rather than marketing claims, particularly where AI is used in higher-stakes environments.

"Our research underscores the importance of independent validation for AI guardrails," said Ian Foo, Chief Technology Officer and EVP of Product at NSS Labs.

"AI is transforming global enterprises, and without rigorous, repeatable validation tests, security claims are just empty promises. We believe this framework will empower enterprises to make informed decisions and set new standards for AI safety," he said.

F5 said its contribution drew on its background in application delivery and security, particularly across layers 4 to 7 of network and application infrastructure. It described the project as a joint effort by security, cloud and AI specialists to give enterprises a more practical basis for evaluating guardrails before deploying systems at scale.

That emphasis reflects a broader shift in the enterprise market, where AI security is increasingly being treated as an operational control problem rather than only a model development issue. In practice, that means testing how systems behave when exposed to hostile prompts, unusual workloads or attempts to extract sensitive information through outputs.

"This collaborative effort reflects the critical importance of bringing together diverse expertise from across the industry to address the complex challenges of securing AI systems," said Jeanette Hur, Global Solutions Architect at F5.

"At F5, we bring decades of layer 4-7 application delivery and security expertise to this collaboration with NSS Labs, AWS and Microsoft. This framework will empower organizations to rigorously evaluate AI guardrails, ensuring that enterprises can confidently deploy AI innovations while maintaining resilience, security and accountability across their application ecosystems," she said.

Microsoft said the document was intended to help organisations of different sizes evaluate AI systems using vendor-neutral guidance. It linked the work to its broader focus on ethical and secure AI use, arguing that customers need practical tools to assess systems regardless of which model or provider they choose.

Buyer Pressure

The paper arrives as technology buyers face pressure to show that AI deployments are subject to meaningful oversight. Security teams are increasingly being asked not only whether systems are useful, but also whether they can withstand manipulation, prevent unauthorised disclosure of data and continue to operate safely when controls fail or degrade.

The framework is designed to help buyers move beyond broad vendor claims by asking more specific questions about how controls were tested, by whom and under what conditions. It also places governance alongside technical testing, signalling that accountability structures and failure management are part of AI security rather than separate compliance exercises.

"At Microsoft, we're very passionate about advancing AI in an ethical and secure way," said Zachary Riffle, Security Architect at Microsoft.

"This white paper is a step forward, and we hope it gives buyers of all sizes the right tools and vendor-agnostic guidance to secure whichever AI model or system they use while maintaining transparency and accountability," he said.