SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers

Organisations struggle with non-human identity risks & AI demands

Sat, 22nd Nov 2025

Omada has released nine predictions outlining changes and persistent risks affecting identity governance and administration. The analysis warns that inadequate oversight of non-human identities (NHIs) continues to leave organisations vulnerable to disruptive cyber incidents and regulatory penalties.

Non-human identities

Organisations are experiencing a surge in machine identities, such as service accounts, application credentials, and API keys, that now outnumber human identities. These non-human identities are often excluded from existing governance frameworks, making them difficult to monitor and to decommission when no longer needed.

"Traditional identity governance and administration (IGA) was built for humans. We're discovering huge numbers of machine identities that have never been governed. OWASP released their Top 10 Non-Human Identity Risks for 2025, and 'improper offboarding' is number one. The fact that 'improper offboarding' ranks as the number one risk reveals a fundamental gap: organizations have no systematic process for deprovisioning machine identities when services are deprecated, applications are sunset, or integrations are discontinued," said Paul Walker, field strategist, Omada.
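The "improper offboarding" gap Walker describes can be made concrete with a small sketch. The inventory fields, names, and the 90-day staleness window below are illustrative assumptions, not anything Omada or OWASP prescribes; a real implementation would pull this data from an IGA platform or cloud audit logs.

```python
from datetime import datetime, timedelta

# Hypothetical inventory of machine identities; all names are illustrative.
nhi_inventory = [
    {"name": "svc-build-agent", "owner": "ci-team", "last_used": datetime(2025, 11, 1)},
    {"name": "api-key-legacy-crm", "owner": None, "last_used": datetime(2023, 4, 12)},
    {"name": "svc-etl-nightly", "owner": "data-team", "last_used": datetime(2025, 11, 20)},
]

STALE_AFTER = timedelta(days=90)  # assumed policy window

def flag_for_offboarding(inventory, now):
    """Flag identities that are ownerless or unused beyond the stale window."""
    flagged = []
    for identity in inventory:
        reasons = []
        if identity["owner"] is None:
            reasons.append("no owner")
        if now - identity["last_used"] > STALE_AFTER:
            reasons.append("stale")
        if reasons:
            flagged.append((identity["name"], reasons))
    return flagged

print(flag_for_offboarding(nhi_inventory, datetime(2025, 11, 22)))
```

Even this trivial check surfaces the orphaned credential in the sample data; the hard part in practice is assembling a complete inventory in the first place, which is exactly what ungoverned NHIs prevent.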

Privilege creep

The proliferation of NHIs has led to an increase in "privilege creep", where machine identities accumulate excessive permissions as projects shift and staff turnover occurs. These access rights are rarely reviewed or rescinded, especially when the original creators have moved on. This dynamic leaves access controls opaque and error-prone.

"With human users, we at least have some natural forcing functions. People change roles, leave companies, trigger offboarding workflows. Not ideal, but it's something. Over-permissive access is the norm, with identities being granted more permissions than necessary, increasing the likelihood of privilege abuse and unauthorized actions. Unlike humans where we might notice someone has 'Finance Analyst + HR Admin + Sales Manager' roles, machine identities accumulate permissions across platforms in ways that are completely opaque," said Walker.
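One common way to surface the privilege creep Walker describes is to diff granted permissions against permissions actually exercised, as recorded in audit logs. The permission names below are made-up placeholders; this is a minimal sketch of the idea, not a vendor workflow.

```python
# Illustrative sketch: compare permissions granted to a machine identity
# against permissions it actually exercised over a review window.
granted = {"s3:read", "s3:write", "db:admin", "queue:publish"}
used_last_90_days = {"s3:read", "queue:publish"}  # e.g. from audit logs

# Set difference yields candidates for revocation.
unused = granted - used_last_90_days
print(sorted(unused))  # ['db:admin', 's3:write']
```

Human role anomalies can be eyeballed, as Walker notes; for machine identities spread across platforms, this kind of automated granted-versus-used comparison is usually the only way the excess becomes visible.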

Real-world breaches

The financial consequences of neglect in identity governance can be severe. Recent major breaches, including production shutdowns at Jaguar Land Rover and disruptions at Marks & Spencer, originated from compromised machine credentials and unmonitored third-party access. These incidents underscore the risks associated with "orphaned" service accounts and poorly managed API keys.

"2025 marked an inflection point where non-human identity security transitioned from niche concern to mainstream crisis. It is surprising that in late 2025, mature organizations with significant security investments could still be completely paralyzed by compromised machine credentials that hadn't been rotated in years and social engineering attacks on third-party helpdesks," said Walker.

Regulatory impact

Legal frameworks such as the EU AI Act and California's transparency legislation are beginning to mandate that organisations provide detailed audit trails of automated decision-making by AI agents. The focus is shifting to ensure that decisions made by autonomous agents are fully explainable to regulators and impacted individuals.

"The EU AI Act and California's transparency laws now mandate that organizations document every decision made by AI agents, justify its reasoning, and maintain complete audit trails of what systems agents accessed and what actions they took," said Walker.
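The audit-trail requirement Walker cites amounts, in engineering terms, to append-only logging of what an agent did, on what, and why. The record schema below is an assumption for illustration; neither the EU AI Act nor California's statutes mandate any particular format.

```python
import json
from datetime import datetime, timezone

def record_agent_action(agent_id, action, resource, justification):
    """Build one append-only audit entry: who (which agent), what, on what, and why."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "resource": resource,
        "justification": justification,
    }
    return json.dumps(entry)

# Hypothetical agent and resource names, for illustration only.
log_line = record_agent_action(
    "invoice-agent-7", "approve_payment", "invoice/4812",
    "amount below auto-approval threshold",
)
print(log_line)
```

Capturing the justification alongside the action is what makes the decision explainable to a regulator or an affected individual after the fact, rather than reconstructable only from scattered system logs.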

Persistent sprawl

Growth in digital identities, both human and non-human, continues to strain legacy identity and access management practices. This identity sprawl raises the risk of credential-based threats and increases the attack surface for cybercriminals.

"With organizations struggling to govern an expanding mesh of digital identities across human, machine, and AI entities, over-permissioned roles, shadow identities, and disconnected IAM systems will continue to expose organizations to credential-based attacks and lateral movement. AI will also reshape traditional social engineering: synthetic voices, deepfakes, and adaptive phishing will erode the reliability of static authentication, forcing organizations to adopt continuous and context-aware verification as the new baseline," said Benoit Grange, chief product and technology officer, Omada.

Sector adaptation

Industries newly subject to Europe's NIS2 directive, including manufacturing, logistics, and certain digital services, are anticipated to face a steep learning curve in adopting stricter cybersecurity controls and reporting requirements.

"The NIS2 directive has ushered in stricter cybersecurity measures and reporting for a wider range of critical infrastructure and essential services across the European Union. For industries newly brought under this directive, including manufacturing, logistics and certain digital services, 2026 will bring new growing pains. The sectors, many long accustomed to minimal compliance oversight, now face strict governance and reporting requirements. In contrast, mature sectors like finance and healthcare will adapt more smoothly. The disparity will expose structural weaknesses in organizations unfamiliar with continuous compliance, making them attractive targets for attackers exploiting regulatory confusion," said Niels Fenger, advisory practice director, Omada.

Data classification

Data governance remains a foundational issue. Many organisations lack structured data classification frameworks, which limits their ability to secure information and undermines efforts to use AI securely and effectively.

"In 2026, organizations will continue to struggle with foundational data governance. Despite the widespread adoption of AI-driven tools, most enterprises still lack formal data classification frameworks, which is a prerequisite for risk-based security and trustworthy AI. Without structured and governed input, AI systems will only amplify existing weaknesses, not fix them. The result: 'Shaky Input, Shaky Output.' Until organizations align with standards like ISO 27002 and NIST and treat classification as strategic, AI will potentially be more of a liability than an advantage," said Fenger.
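The "classification as a prerequisite" point can be illustrated with a toy rule set. The labels and patterns below are assumptions invented for this sketch; they are not drawn from ISO 27002 or NIST, which define classification obligations rather than concrete matching rules.

```python
import re

# Illustrative sensitivity tiers and matching rules, checked most-sensitive first.
RULES = [
    ("restricted", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),    # SSN-like pattern
    ("confidential", re.compile(r"salary|contract", re.I)),
    ("internal", re.compile(r"meeting notes", re.I)),
]

def classify(text):
    """Return the first matching sensitivity label, defaulting to public."""
    for label, pattern in RULES:
        if pattern.search(text):
            return label
    return "public"

print(classify("Employee SSN: 123-45-6789"))  # restricted
print(classify("Q3 roadmap blog post draft"))  # public
```

The point of even a crude scheme like this is Fenger's "shaky input, shaky output" warning: until data carries a sensitivity label, neither risk-based access controls nor AI pipelines can distinguish what they must protect from what they may freely consume.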
