SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers
Cinematic sf socat night ai dashboards it team analyzing agents

Vijil launches platform to harden enterprise AI agents

Wed, 11th Mar 2026

Vijil has launched a platform that connects testing, runtime controls, and post-incident learning for AI agents used in business-critical work. A new module, Darwin, uses production telemetry from attacks and failures to recommend changes to an agent's instructions, configuration, and source code.

The release addresses a common barrier for enterprises trying to move AI agents from pilots into production: proving agents will behave reliably under stress. Security and governance teams also want evidence that agents follow policy and can withstand common manipulation attempts.

AI agents can fail in several ways once they face real users and real data. Some produce incorrect output and present it as fact. Others can be influenced through prompt injection, which steers behaviour through crafted inputs. Jailbreak attempts can also push an agent to ignore constraints. These risks have led some organisations to pull projects back from production after early deployments.

Three modules

The platform has three modules-Diamond, Dome, and Darwin-covering an agent's lifecycle from development to live operation and post-deployment improvement.

Diamond generates an evaluation framework based on an agent profile, user personas, and organisational policies. It produces a set of tests and checks to run before deployment and sets measurable expectations for behaviour.

Dome applies evaluation results at runtime. It enforces policy over agent behaviour and provides observability into performance against defined standards, making governance part of operational controls rather than a separate review step.

Darwin uses production telemetry to suggest targeted improvements, including updates to instructions, configuration, and source code. Developers review and apply the changes, then re-run evaluations to measure impact.

Governance focus

The platform is designed to give technical and business teams a shared view of agent behaviour. Developers can run tests tailored to each agent and its environment. Business owners, application security teams, and governance, risk, and compliance groups can set behaviour standards and monitor performance as agents evolve.

The approach reflects a broader shift in enterprise AI deployments. Early agent pilots often sit within a single product team and rely on informal monitoring. Wider rollout brings stricter policy and audit expectations, along with more integrations and user journeys that can trigger unexpected behaviour.

Framework integration

Vijil integrates with several agent development frameworks, including Google Agent Development Kit, LangGraph, CrewAI, and Strands, and supports deployment on AWS AgentCore.

The onboarding process focuses on establishing a baseline level of trust in an agent and then tracking progress over time. Vijil says organisations can see improvements within weeks as Darwin proposes changes based on real-world failures.

"Developers building agents for critical workflows must prove to business owners that their agents behave as expected under normal, noisy, and nasty conditions," said Vin Sharma, co-founder and CEO of Vijil.

Sharma described the platform as an end-to-end layer spanning deployment and operations.

"Vijil provides a layer of trust across the agent lifecycle, from discovery to deployment. With Vijil Darwin adapting agents to production telemetry, we close the loop from operations back to development, so agents can improve continuously," said Sharma.

Market context

Enterprises have increased investment in agent-style systems that can plan tasks, call tools, and act across workflows. These systems also raise questions about accountability. Agents can interact with internal systems, external services, and customer-facing channels, increasing the potential impact of a single failure.

Security teams have warned that prompt injection and jailbreak techniques can bypass controls that work in controlled tests. Many organisations now require explicit evaluation and runtime monitoring before allowing AI agents to handle sensitive data or execute actions.

Vijil will present the platform at the RSA Conference in San Francisco. Founded in 2023 by former AWS leaders, the company is backed by Brightmind, Gradient, and Mayfield.

Future development will focus on deepening how production telemetry feeds into development workflows, with Darwin positioned to turn incidents and failures into repeatable fixes that can be evaluated and rolled out across agents.