DigiCert launches AI Trust architecture for agents

Thu, 30th Apr 2026
Joseph Gabriel Lagonsin, News Editor

DigiCert has introduced an AI Trust architecture to secure AI agents, models and digital content, adding new capabilities to its DigiCert ONE platform.

The architecture is aimed at organisations seeking cryptographic verification for AI systems and the material they generate. It is designed to cover identity, governance, integrity and provenance across the AI lifecycle.

The rollout is grouped into three areas: AI Agent Trust, AI Model Trust and Content Trust. Each addresses a different part of the AI stack as businesses expand their use of autonomous software, large models and synthetic or AI-assisted media.

AI Agent Trust focuses on discovery, identity, governance and lifecycle management for AI agents. It enables organisations to authenticate, authorise and audit autonomous systems through cryptographic identities and policy-based controls.

The approach is intended to help businesses treat AI agents as accountable digital actors within corporate systems. Organisations need to know what an agent is, what it is allowed to do and how its actions can be traced.
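
In the abstract, that model resembles standard public-key authentication paired with an allow-list. The Python sketch below illustrates the general pattern only, not DigiCert's implementation: the agent name, registry and policy table are hypothetical, and a production deployment would use CA-issued certificates rather than raw keys held in memory.

```python
# A minimal sketch of agent authentication and authorisation, assuming
# hypothetical names (AGENT_REGISTRY, POLICY, "invoice-bot"); not DigiCert's API.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issue a cryptographic identity for an agent. In practice this would be a
# CA-issued certificate with a managed lifecycle, not an in-memory key.
agent_key = Ed25519PrivateKey.generate()
AGENT_REGISTRY = {"invoice-bot": agent_key.public_key()}             # identity store
POLICY = {"invoice-bot": {"read:invoices", "create:payment-draft"}}  # allowed actions

def authorise(agent_id: str, payload: bytes, signature: bytes) -> bool:
    """Authenticate the request signature, then check the requested action
    (the first field of the payload) against the agent's policy."""
    public_key = AGENT_REGISTRY.get(agent_id)
    if public_key is None:
        return False                              # unknown agent: fail closed
    try:
        public_key.verify(signature, payload)     # authenticate: proves key possession
    except InvalidSignature:
        return False                              # payload altered or key mismatch
    action = payload.split(b"|", 1)[0].decode()
    return action in POLICY.get(agent_id, set())  # authorise: policy-based control

request = b"create:payment-draft|supplier=acme;amount=120.00"
print(authorise("invoice-bot", request, agent_key.sign(request)))  # True: in policy
rogue = b"delete:ledger|all"
print(authorise("invoice-bot", rogue, agent_key.sign(rogue)))      # False: outside policy
```

Auditability follows naturally from this pattern: because every request is signed, each decision can be logged against a verifiable identity rather than a mutable configuration entry.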

AI Model Trust focuses on the model supply chain. It provides secure packaging, signing and runtime validation so organisations can verify that models have not been altered and are operating in trusted environments, including third-party infrastructure.
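
The underlying pattern is familiar from code signing. As a rough illustration (a generic sketch under assumed key handling, not DigiCert's packaging format), a publisher signs a digest of the model artifact at build time, and a loader refuses any artifact whose digest no longer verifies:

```python
# A generic code-signing sketch; keys and file contents are invented for
# illustration and do not reflect DigiCert's packaging format.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

publisher_key = Ed25519PrivateKey.generate()

def package(model_bytes: bytes) -> bytes:
    """Build time: hash the model artifact and sign the digest."""
    return publisher_key.sign(hashlib.sha256(model_bytes).digest())

def validate_before_load(model_bytes: bytes, signature: bytes) -> bool:
    """Runtime, possibly on third-party infrastructure: recompute the digest
    and verify the publisher's signature before the model is loaded."""
    digest = hashlib.sha256(model_bytes).digest()
    try:
        publisher_key.public_key().verify(signature, digest)
        return True
    except InvalidSignature:
        return False          # altered in transit or at rest: refuse to load

weights = b"\x00fake-model-weights\x00"
signature = package(weights)
print(validate_before_load(weights, signature))              # True: intact
print(validate_before_load(weights + b"tamper", signature))  # False: altered
```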

Content Trust, which is available now, cryptographically signs and verifies digital content using the C2PA standard, providing tamper-evident provenance for digital media. This can help establish where content came from, how it was created and whether it has been changed.
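
A real C2PA manifest is considerably richer than any short example (it is embedded in the asset as a JUMBF container and carries a certificate chain), but the core idea condenses to a signed manifest binding provenance claims to a hash of the content, so any edit breaks verification. The sketch below is illustrative only and does not use the actual C2PA libraries; the claim fields are invented for the example.

```python
# Illustrative only: a condensed stand-in for a C2PA-style manifest check.
# The claim fields and key handling here are invented for the example.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

creator_key = Ed25519PrivateKey.generate()

def sign_content(asset: bytes, claims: dict) -> dict:
    """Bind provenance claims to the asset's hash, then sign the manifest."""
    manifest = {"asset_sha256": hashlib.sha256(asset).hexdigest(), "claims": claims}
    body = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": creator_key.sign(body)}

def verify_content(asset: bytes, signed: dict) -> bool:
    """Check the manifest signature, then check the asset still matches it."""
    body = json.dumps(signed["manifest"], sort_keys=True).encode()
    try:
        creator_key.public_key().verify(signed["signature"], body)
    except InvalidSignature:
        return False                              # manifest forged or edited
    return signed["manifest"]["asset_sha256"] == hashlib.sha256(asset).hexdigest()

image = b"...jpeg bytes..."
record = sign_content(image, {"generator": "studio-cam-01", "ai_assisted": False})
print(verify_content(image, record))            # True: provenance intact
print(verify_content(image + b"edit", record))  # False: content has changed
```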

The move reflects a wider shift in how organisations assess digital trust as AI systems take on more autonomy. In DigiCert's view, conventional security and compliance processes are not enough when agents can act at machine speed, models move through complex supply chains and content is harder to authenticate by sight alone.

"AI has created a new trust challenge," said Amit Sinha, Chief Executive Officer, DigiCert. "Organisations are relying on agents, models, and content they can't always verify. At DigiCert, our purpose is to give people confidence in the security, privacy, and authenticity of their digital interactions. With our AI Trust solution, we help organisations confirm what's real, secure, and approved so AI can be used with confidence."

DigiCert has built the architecture as a unified trust layer rather than a set of separate controls. The aim is to let organisations apply cryptographic checks across agents, models and content within one framework instead of relying on disconnected manual processes.

Growing concern

Businesses and regulators have been paying closer attention to the risks around AI deployment, particularly in decision-making, data handling and generated media. Concerns have centred on provenance, unauthorised access, model tampering and the use of content that may be manipulated or falsely attributed.

DigiCert's announcement places public key infrastructure (PKI) principles at the centre of that debate, extending them to AI systems so organisations can establish verifiable identity, tamper-evident integrity and ongoing validation.

That framing positions cryptography as a control point for AI governance. Rather than relying only on policies or platform-level permissions, the proposed model allows identity and approval to be independently verified.

Industry analysts have argued that this kind of verification is likely to become more important as AI systems move deeper into business operations. The issue is not only whether a system is useful, but whether its actions, inputs and outputs can be trusted and audited.

"AI is forcing organisations to rethink trust from the ground up," said Jennifer Glenn, Research Director for IDC Security and Trust Group. "Bringing cryptographic assurance to AI systems gives enterprises the ability to independently verify identity, integrity, and provenance of content, enabling these organisations to build trustworthy AI at scale."

Platform expansion

The launch also broadens DigiCert ONE beyond its established roles in PKI, DNS and certificate lifecycle management. By adding AI-focused controls, DigiCert is seeking to position the platform more directly in the governance of machine-driven systems and the verification of digital media.

AI Agent Trust and AI Model Trust are being introduced as preview offerings. Content Trust is already available.

The intended outcome is to help organisations reduce reputational and regulatory exposure while improving auditability. DigiCert argues that as AI adoption grows, the ability to verify systems and outputs will become a core operational requirement rather than an optional security layer.

For companies deploying AI across internal workflows, customer interactions and media production, that could make provenance and identity controls more central to procurement and governance decisions. The architecture is intended to help verify who or what produced an action or piece of content, whether a model remains intact and whether an autonomous agent is acting within approved limits.
