‘BodySnatcher’ flaw lets hackers hijack ServiceNow AI agents
AppOmni has disclosed a security flaw in ServiceNow that it said could allow attackers to impersonate users and trigger AI agent actions under those identities, including accounts with administrative permissions.
The issue, tracked as CVE-2025-12420, centres on the ServiceNow Virtual Agent API and the Now Assist AI Agents application. AppOmni said an attacker could start from a position with no credentials and use only an employee email address to impersonate that user during a conversation routed through Virtual Agent.
AppOmni described the vulnerability as "BodySnatcher". It said the exploit chain could lead to privileged actions inside a ServiceNow environment, including the creation of backdoor accounts and access to sensitive data stored in ServiceNow workflows and records.
What's affected
AppOmni said the vulnerability affected on-premises ServiceNow instances running specific versions of two components. For Now Assist AI Agents, it listed affected versions 5.0.24 to 5.1.17 and 5.2.0 to 5.2.18, with fixes in 5.1.18 and 5.2.19. For the Virtual Agent API, it listed affected versions 3.15.1 and earlier and 4.0.0 to 4.0.3, with fixes in 3.15.2 and 4.0.4.
AppOmni said ServiceNow's cloud-hosted customers required no action. It said customers using the on-premises product should upgrade to the patched versions listed for the affected applications.
How it worked
ServiceNow Virtual Agent provides a conversational interface for tasks such as filing tickets and resetting passwords. Organisations often connect Virtual Agent to external tools such as Slack and Microsoft Teams through an API. ServiceNow uses "providers" and "channels" inside the platform to map incoming messages to a workflow and to associate a request with a user identity.
AppOmni said a set of AI agent channel providers introduced with the Now Assist AI Agents application used a message authentication mechanism based on a static secret. It said those providers shipped with the same secret across ServiceNow instances. AppOmni also said the associated "auto-linking" logic accepted an email address as sufficient to link an external requester to a ServiceNow account, without enforcing multi-factor authentication or single sign-on checks.
AppOmni said an attacker could use the combination to impersonate any ServiceNow user during a chat session. AppOmni said the attacker could then route requests through internal workflows that run AI agents, and do so as the impersonated user.
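The class of weakness AppOmni describes can be sketched in a few lines. The code below is a hypothetical illustration, not ServiceNow's actual implementation: a message handler authenticates inbound chat messages with a static secret that is identical across deployments, then "auto-links" the requester to an internal account based solely on an email claim. All names (`STATIC_PROVIDER_SECRET`, `USER_DIRECTORY`, the message fields) are invented for the example.

```python
import hashlib
import hmac
import json

# Hypothetical static secret shipped identically to every instance.
# Any attacker can recover it from their own trial deployment.
STATIC_PROVIDER_SECRET = b"same-on-every-instance"

# Stand-in for the platform's user table.
USER_DIRECTORY = {
    "admin@example.com": {"sys_id": "a1b2", "roles": ["admin"]},
    "alice@example.com": {"sys_id": "c3d4", "roles": ["itil"]},
}

def sign(body: bytes, secret: bytes) -> str:
    """HMAC-SHA256 signature over the message body."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def handle_inbound_message(body: bytes, signature: str) -> dict:
    # The signature check proves only knowledge of the static secret,
    # which is the same everywhere, so it authenticates nothing useful.
    if not hmac.compare_digest(sign(body, STATIC_PROVIDER_SECRET), signature):
        raise PermissionError("bad signature")
    msg = json.loads(body)
    # Auto-linking flaw: the email claim alone selects the acting
    # identity, with no MFA or SSO assertion required.
    user = USER_DIRECTORY.get(msg["requester_email"])
    if user is None:
        raise LookupError("unknown user")
    return {"acting_as": user["sys_id"], "roles": user["roles"],
            "text": msg["text"]}

# An attacker who knows only an email address forges a valid request
# and the session runs with the impersonated user's privileges.
forged = json.dumps({
    "requester_email": "admin@example.com",
    "text": "create user backdoor with role admin",
}).encode()
session = handle_inbound_message(forged, sign(forged, STATIC_PROVIDER_SECRET))
```

Under these assumptions, the handler accepts the forged message and returns a session acting as the administrator, which matches the impersonation path AppOmni describes: a signature anyone can compute plus identity linking keyed on an attacker-supplied email.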
In its write-up, AppOmni described an internal topic that it said executed AI agents through Virtual Agent. It said this created an unexpected path for executing AI agent workflows outside typical deployment constraints.
Potential impact
AppOmni said the exploit chain could allow an attacker to run ServiceNow AI agent tasks as a targeted user, including administrators. It said that could allow actions such as creating new user records and assigning roles, which could result in persistent administrative access.
AppOmni said the resulting access could extend broadly inside an organisation, depending on how it uses ServiceNow. The company cited examples including "customer Social Security numbers, healthcare information, financial records, or confidential intellectual property".
AppOmni also described a phishing scenario. It said an attacker could send a message that appeared to originate from a trusted internal user and force a hand-off to a live support agent, if the organisation configured that option.
"BodySnatcher is the most severe AI-driven vulnerability uncovered to date: Attackers could have effectively 'remote controlled' an organization's AI, weaponizing the very tools meant to simplify the enterprise," said Aaron Costello, chief of security research at AppOmni.
AppOmni also said the affected ServiceNow AI applications are widely used in large enterprises. "The ServiceNow AI applications susceptible to this flaw are used by nearly half of AppOmni's Fortune 100 customers," said Costello.
Broader questions
AppOmni framed the issue as a case where traditional authentication weaknesses become more severe when connected to AI agent workflows. It said organisations increasingly grant AI agents authority over configuration changes and account management. It said that raises the impact of errors in identity linking and provider configuration.
"As AI agents are granted more autonomy to manage accounts and modify configurations, they become high-value targets that can be manipulated if robust guardrails aren't in place," said Costello.
AppOmni said the immediate changes included rotating provider credentials and removing a built-in example AI agent that it described as powerful. It also said organisations should review provider configurations, enforce stronger controls around account linking, and adopt processes for approving and de-provisioning agents.
"Our mission is to ensure that the adoption of agentic AI remains an asset for productivity rather than a security liability," said Costello.