SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers

Umanitek launches Guardian Agent to combat AI deepfakes

Tue, 3rd Feb 2026

Umanitek has launched Guardian Agent, a subscription service that it says focuses on identity protection and the detection of harmful synthetic content. Alongside the launch, the company announced the formation of an advisory board and the close of a funding round backed by investors including Bill Tai and a Piëch family office investment arm.

The Swiss-based company describes its work as "AI harm reduction". It positions Guardian Agent as a response to deepfakes, impersonation and misrepresentation generated by large language models. Umanitek also claims it can address content and intellectual property infringement across AI tools and social platforms.

The company said it has attracted backing from several institutional and angel investors. It named Bill Tai, Magnus Grimeland of Antler, KBW Ventures and the investment arm of a Piëch family office. Umanitek did not disclose the size of the round.

Data sharing

Umanitek said it aims to tackle what it calls a structural issue in online safety. It argues that major social platforms and law enforcement agencies struggle to share information because of privacy, competitive dynamics and geopolitical sensitivities. It said harmful content still travels quickly across services and devices.

Guardian Agent uses a decentralised infrastructure that, according to Umanitek, allows platforms and law enforcement bodies to signal and verify whether content has been flagged as harmful. The company said the system does not require sharing the underlying data.

Umanitek framed the approach as a form of shared signalling layer. The company said that such a layer could allow coordinated action across multiple parties while keeping data under the control of the organisation that holds it.
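Umanitek has not published technical details, but a signalling layer of this kind is commonly built around content fingerprints: each party publishes only a cryptographic hash of flagged material, and others can check for a match without the material itself ever being exchanged. A minimal sketch in Python, with all class and method names hypothetical:

```python
import hashlib


class SignalRegistry:
    """Hypothetical shared registry of flagged-content fingerprints.

    Parties publish only a SHA-256 digest of harmful content, so the
    underlying data stays with the organisation that holds it.
    """

    def __init__(self):
        self._flags = {}  # digest -> set of reporting parties

    @staticmethod
    def fingerprint(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def flag(self, content: bytes, reporter: str) -> str:
        digest = self.fingerprint(content)
        self._flags.setdefault(digest, set()).add(reporter)
        return digest

    def is_flagged(self, content: bytes) -> bool:
        return self.fingerprint(content) in self._flags


registry = SignalRegistry()
registry.flag(b"deepfake-video-bytes", reporter="platform-a")

# Another party can verify the flag without seeing the original data.
print(registry.is_flagged(b"deepfake-video-bytes"))  # True
print(registry.is_flagged(b"benign-video-bytes"))    # False
```

Exact-hash matching is only an illustration; real systems typically use perceptual hashes so that re-encoded or lightly edited copies still match.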

Three areas

Umanitek said Guardian Agent targets three "risk surfaces". It listed hallucinations and misrepresentations from AI models including ChatGPT, Claude, Grok, Gemini and Perplexity. It also listed fake accounts and impersonation across social networks, starting with TikTok and X. The third area covers content and intellectual property infringement across those AI systems and platforms.

The company said its platform monitors hundreds of millions of accounts for fake profiles that use a person's images, usernames or bios. It also said it flags sudden narrative shifts and coordinated attacks in posts and mentions.

Users receive a live risk score, Umanitek said. The score aims to quantify exposure to online threats across the categories it tracks.
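Umanitek has not described how the score is computed. One common approach is a weighted aggregate over per-category signals; the sketch below uses hypothetical category names, weights and scale purely for illustration:

```python
# Hypothetical weights over the three risk surfaces the company names;
# the categories, weights and 0-100 scale are illustrative only.
WEIGHTS = {
    "ai_misrepresentation": 0.4,
    "impersonation": 0.4,
    "ip_infringement": 0.2,
}


def risk_score(signals: dict) -> float:
    """Combine per-category signals (each clamped to 0..1) into a 0-100 score."""
    total = sum(
        WEIGHTS[cat] * min(max(signals.get(cat, 0.0), 0.0), 1.0)
        for cat in WEIGHTS
    )
    return round(100 * total, 1)


print(risk_score({"impersonation": 0.9, "ip_infringement": 0.5}))  # 46.0
```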

Evidence packs

Umanitek said it generates "verifiable digital evidence packs" when it identifies harmful content. It said the material records when and where content appeared, what it contained and how it changed over time.
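The company has not published a schema, but its description maps naturally onto an append-only log: when and where content was seen, what it contained (recorded as a hash), and whether it changed between sightings. A hypothetical sketch:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib


@dataclass
class Observation:
    """One sighting of a piece of content at a given time and location."""
    seen_at: datetime
    url: str
    content_hash: str


@dataclass
class EvidencePack:
    """Hypothetical 'evidence pack': an append-only log of sightings."""
    subject: str
    observations: list = field(default_factory=list)

    def record(self, url: str, content: bytes) -> None:
        self.observations.append(Observation(
            seen_at=datetime.now(timezone.utc),
            url=url,
            content_hash=hashlib.sha256(content).hexdigest(),
        ))

    def changed_over_time(self) -> bool:
        """True if the content differed between any two recorded sightings."""
        return len({o.content_hash for o in self.observations}) > 1


pack = EvidencePack(subject="impersonation-report")
pack.record("https://example.com/post/1", b"original fake profile")
pack.record("https://example.com/post/1", b"edited fake profile")
print(pack.changed_over_time())  # True
```

Making such a record "verifiable" in practice would also require signing or timestamp-anchoring each observation, which this sketch omits.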

The company said this evidence can support takedown requests and legal action. It also said the process reduces investigative and legal costs for users. Umanitek did not provide pricing details, beyond saying it will offer monthly and annual subscription options.

Umanitek also positioned Guardian Agent as a way to detect misinformation embedded in AI language models. The service monitors what those models say about an individual, brand, piece of content or organisation, generates evidence when it detects misrepresentation, and implements a mechanism for correction requests, the company said.

Advisory board

Umanitek said it has assembled an advisory board spanning Silicon Valley, Europe and Asia. It listed Bill Tai, HRH Prince Khaled bin Alwaleed bin Talal Al Saud of KBW Ventures, Joe Betts-LaCroix of Retro Biosciences, Yalda Aoukar, Jeremy Achin, Mark Love, Victoria Vysotina, Anastasios Economou, Magnus Grimeland and Ismael Sassi.

The company also detailed its founding team. It named Chris Rynning, Aza Raskin, Tomaz Levak, Žiga Drev and Branimir Rakić.

Umanitek said Rynning has worked with the Piëch family office for more than 20 years and directs investments at AMYP Ventures, which it described as a Piëch and Porsche investment vehicle. It said Raskin co-created the documentary "The Social Dilemma" and co-founded the Center for Humane Technology.

Umanitek also referenced OriginTrail, which it said Levak, Drev and Rakić helped to build. It said organisations including the British Standards Institution, Swiss Federal Railways and the Supplier Compliance Audit Network use OriginTrail technology for physical assets. Umanitek said it has incorporated OriginTrail technology into its decentralised infrastructure for digital assets.

"We are entering a race toward intimacy where people will talk more to machines than to humans," said Chris Rynning, Co-Founder, Umanitek. "In this new reality, data isn't gold anymore - trust is the new gold. When we can no longer distinguish what's human or real from what's not, we need a digital immune system for the web. If we don't build decentralized infrastructure now, which gives everyone the ability to signal and verify content, gather evidence, and request takedowns without forcing people to give up control of their data, we never will. 2026 will be the year we will fully realize that we are living fake lives online without even knowing it."

Umanitek said it will first make Guardian Agent Version 1 available to an initial cohort of high-profile users. It said a waitlist will open for general access and that it expects to reach full commercial scale during Q1 2026.

"We're mid-crisis where AI systems can fabricate worlds of information about anyone at scale, eroding the basis of trust," said Aza Raskin. "The problem will only accelerate from here. Umanitek represents the kind of infrastructure we need - giving individuals and institutions the tools they need to fight back."

"Umanitek's decentralized approach creates an immune system for the internet that allows platforms or law enforcement to verify if content has been flagged as harmful without ever sharing the underlying data," said Tomaz Levak. "This respects everyone's concerns about privacy and control while still enabling coordinated action against deepfakes, impersonation and abuse."