Straiker predicts shadow AI & agentic cyber threats in 2026
Straiker has released a set of predictions for enterprise AI security and adjacent technology trends expected to shape the sector in 2026, highlighting concerns ranging from emerging cyber threats to changes in workforce expectations.
Shadow AI risks
Straiker anticipates that so-called "shadow AI" projects will become a flashpoint for security incidents in the coming year. According to the company, employees at large organisations are increasingly able to build AI-powered "mini-apps" that connect enterprise data and systems using natural-language prompts. This trend could boost productivity, but it also exposes companies to unapproved data flows and other risks.
"Shadow AI isn't just prompts; it's homegrown agents wiring into your crown-jewel systems, and organizations will have to take proactive steps to avoid the limelight - and significant losses," said Girish Chandrasekar, Head of Product, Straiker.
Autonomous cyber threats
Researchers at Straiker predict the rise of AI-powered Persistent Threats (AiPTs). These are described as autonomous, persistent agents developed for economic gain that can adapt and evade existing cyber defences in enterprise settings. Unlike traditional advanced persistent threats (APTs), which require human oversight, AiPTs are envisioned as highly adaptable and self-sustaining.
"A new class of threats will emerge: AI-powered Persistent Threats (AiPTs). These are autonomous malicious agents that can replicate, adapt, and re-plan against defenses. As AI tool-calling, memory, and multi-agent orchestration mature, attackers will weaponize autonomy just as defenders have. Enterprise security teams will need to assume adaptable, self-healing adversaries that live between tools and APIs, and they will need to adopt agent-level telemetry, containment, and deception as part of their overall strategy. APT was human-directed; AiPT will be goal-directed. Watch for multi-stage LLMs, botnets using autonomous planning, and red-team findings citing agentic persistence," said Daniel Regalado and Amanda Rousseau, Principal AI Security Researchers, Straiker.
Transformation of security roles
Straiker expects AI-based agents to accelerate all key cybersecurity functions within five years. This shift is projected to affect workflows in areas such as detection engineering, digital forensics and incident response (DFIR), and vulnerability management. AI agents are predicted to serve as narrow specialists in structured, rule-driven workflows, enabling quicker triage and containment and reducing manual errors.
"Every core cybersecurity function, from detection engineering to digital forensics and incident response (DFIR) and vulnerability management, will be AI agent-accelerated. Wherever workflows have structured inputs, known decision trees and measurable service-level objectives (SLOs), agents can be embedded as narrow specialists. The result is faster triage and containment, fewer manual errors, and human attention reserved for novel investigations. Security won't be replaced by AI; instead, it will have tighter coverage thanks to security AI agents, measured by outcomes, and governed by policy guardrails," said Phimm Phonpaseuth, Head of AI Security Solutions Engineering, Straiker.
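The "narrow specialist" pattern Phonpaseuth describes — structured inputs, a known decision tree, measurable outcomes — can be sketched in a minimal, hypothetical form. All names below (the `Alert` fields, the routing labels) are illustrative, not from any Straiker product:

```python
# Hypothetical sketch: an agent embedded as a narrow specialist in a
# structured triage workflow. The decision tree is known and auditable;
# novel cases are routed to humans rather than handled autonomously.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str          # e.g. "edr", "waf"
    severity: int        # 1 (low) .. 5 (critical)
    asset_tier: str      # "crown-jewel" or "standard"

def triage(alert: Alert) -> str:
    """Route an alert through a fixed decision tree; the agent only routes."""
    if alert.asset_tier == "crown-jewel" and alert.severity >= 3:
        return "contain-and-escalate"
    if alert.severity >= 4:
        return "escalate-to-human"
    return "auto-close-with-note"

# Every alert receives a decision, so coverage can be measured as an outcome.
decisions = [triage(a) for a in (
    Alert("edr", 5, "crown-jewel"),
    Alert("waf", 2, "standard"),
)]
```

Because the tree is explicit, each routing decision can be logged and measured against a service-level objective, which is what lets such agents be "governed by policy guardrails" rather than trusted blindly.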
Formalising AI defence
Standard operating procedures for AI red-teaming and blue-teaming are set to become more structured. As AI agents become a regular part of live enterprise workflows, organisations are predicted to formally integrate pre-deployment agent penetration testing and runtime safeguards. Shifts in procurement policies and the integration of runtime agent control frameworks are also anticipated.
"For enterprises, this will mean adding AI scenarios to existing attack simulations; it will also require pre-production AI penetration testing and deploying guardrails at the tool/API boundary with policy as code and audit trails. We should expect to see new AI red-team budgets in the coming year as a result, and we expect to see control frameworks adding 'agent runtime sections,' as well as procurement ask-lists for guardrails and AI incident response," said Phonpaseuth.
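The "policy as code with audit trails" idea in the quote can be sketched in a minimal, hypothetical form: a check enforced at the tool/API boundary before an agent's call goes through, with every decision appended to an audit log. The policy fields, tool names, and function signatures here are illustrative assumptions, not a real framework's API:

```python
import json
import time

# Hypothetical policy expressed as data ("policy as code"): which tools an
# agent may invoke and which argument patterns are out of bounds.
POLICY = {
    "allowed_tools": {"search_tickets", "read_runbook"},
    "blocked_arg_substrings": ["DROP TABLE", "prod-credentials"],
}

AUDIT_LOG = []  # every decision, allow or deny, is recorded here

def guarded_tool_call(tool_name, args, tool_fn):
    """Enforce the policy at the tool/API boundary and append an audit record."""
    record = {"ts": time.time(), "tool": tool_name, "args": args}
    if tool_name not in POLICY["allowed_tools"]:
        record["decision"] = "deny:tool_not_allowed"
        AUDIT_LOG.append(record)
        raise PermissionError(f"tool '{tool_name}' not permitted by policy")
    if any(s in json.dumps(args) for s in POLICY["blocked_arg_substrings"]):
        record["decision"] = "deny:blocked_argument"
        AUDIT_LOG.append(record)
        raise PermissionError("arguments blocked by policy")
    record["decision"] = "allow"
    AUDIT_LOG.append(record)
    return tool_fn(**args)

# Usage: an allowed call runs and is audited; a tool outside the policy is refused.
result = guarded_tool_call("search_tickets", {"query": "login failures"},
                           lambda query: f"3 tickets match '{query}'")
```

The point of the sketch is that the guardrail sits between the agent and the tool, so the audit trail exists even when a call is denied — the kind of record an "agent runtime section" in a control framework would ask for.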
AI-native engineers
Straiker believes the coming generation of software engineers will predominantly work in AI-orchestrated environments, focusing less on coding from scratch and more on managing the architecture and safety of AI-driven agents. This transition is expected to prompt changes in the software development lifecycle and in the skills sought by employers.
"They'll be incredibly productive, but they'll fall into pitfalls if they start relying too much on AI. AI isn't infallible. Already, we're starting to see applications designed for the agentic future - orchestrating agents, tools, and business policies. For enterprises, this change will mean updating SDLC to AADLP (agentic app dev lifecycle) and will impact design tool scopes, abuse cases, and runtime policy tests. Hiring will shift from 'lines of code' to 'agent architecture and safety.' Tomorrow's 10x engineer is a 1x coder and a 10x agent orchestrator," said Amy Heng, Head of Marketing, Straiker.