Akto widens AI agent security with new integrations
Akto has announced partnerships and native integrations with LangChain, Portkey, TrueFoundry, Arcade and LiteLLM, extending its AI agent security coverage across a wider range of development and deployment platforms.
The integrations are designed to place security controls inside the tools engineering teams already use to build and run AI agents. The controls focus on runtime monitoring and policy enforcement as companies move AI systems from testing into live business workflows.
Akto's announcement comes as businesses increase their use of AI agents in production, where software can make decisions, call tools and interact with internal systems with limited direct human supervision. That expansion has widened the attack surface across model gateways, orchestration frameworks, deployment platforms and tool runtimes.
According to Akto, 79% of enterprises have limited or no visibility into what their AI agents are executing at runtime. The integrations are intended to address risks including prompt injection, uncontrolled tool access, privilege escalation, sensitive data leakage, shadow AI agents and ungoverned usage.
Each partnership covers a different layer of the AI agent stack. Portkey and LiteLLM operate at the gateway layer, where requests and responses are routed across models. TrueFoundry serves as a control plane for production AI systems, while Arcade focuses on the runtime used for agent tools. LangChain is widely used by developers building agent-based applications and workflows.
For Portkey, Akto's guardrails will be embedded within the gateway so traffic moving through the system is automatically checked for prompt injection, exposure of sensitive data and policy breaches. The TrueFoundry integration is intended to secure traffic in real time and apply controls to agent interactions and MCP tool calls before actions are taken.
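To make the gateway-layer idea concrete, the sketch below shows what an inline guardrail check might look like in principle. This is not Akto's or Portkey's actual implementation; the function names, the regex patterns and the blocking behaviour are all illustrative assumptions about how a gateway could screen traffic before forwarding it to a model.

```python
import re

# Illustrative patterns only; production guardrails use far more
# sophisticated detection than simple regexes.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your system prompt",
]
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",    # US SSN-like pattern
    r"\b(?:\d[ -]?){13,16}\b",   # card-number-like digit run
]

def inspect_request(prompt: str) -> list[str]:
    """Return a list of policy violations found in an outbound prompt."""
    violations = []
    lowered = prompt.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            violations.append(f"prompt_injection:{pattern}")
    for pattern in SENSITIVE_PATTERNS:
        if re.search(pattern, prompt):
            violations.append(f"sensitive_data:{pattern}")
    return violations

def gateway_forward(prompt: str, upstream) -> str:
    """Forward the request to the model only if the guardrail check passes."""
    violations = inspect_request(prompt)
    if violations:
        raise PermissionError(f"blocked by guardrail: {violations}")
    return upstream(prompt)
```

The key design point is placement: because the check sits in the gateway, every request is screened automatically, with no changes to the agent code that produced it.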
With Arcade, the partnership secures tool interactions at runtime and records actions before results are returned to the language model. The LiteLLM integration applies security controls to requests and responses flowing through the proxy, while the LangChain integration is intended to provide continuous visibility and policy enforcement across the broader LangChain ecosystem.
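The runtime pattern described above, checking a tool call against policy and recording it before results reach the model, can be sketched roughly as follows. Again, this is a hypothetical illustration, not any vendor's API: the policy format, the deny-by-default rule and the audit-log structure are assumptions for the example.

```python
import json
import time
from typing import Any, Callable

# Hypothetical allow-list policy; a real runtime would load this
# from a central control plane rather than hard-coding it.
POLICY = {
    "search_docs": {"allowed": True},
    "delete_record": {"allowed": False},  # destructive action blocked
}

AUDIT_LOG: list[dict] = []

def guarded_call(tool_name: str, tool_fn: Callable[..., Any], **kwargs) -> Any:
    """Check policy, record the attempted action, then run the tool and log its result."""
    rule = POLICY.get(tool_name, {"allowed": False})  # deny unknown tools by default
    entry = {"ts": time.time(), "tool": tool_name, "args": kwargs}
    if not rule["allowed"]:
        entry["outcome"] = "denied"
        AUDIT_LOG.append(entry)
        raise PermissionError(f"tool '{tool_name}' denied by policy")
    result = tool_fn(**kwargs)
    entry["outcome"] = "allowed"
    entry["result_preview"] = json.dumps(result)[:200]
    AUDIT_LOG.append(entry)
    return result
```

Wrapping every tool invocation this way yields exactly the kind of auditable record the article describes: what the agent tried to call, whether it was allowed, and what came back.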
Developer Workflow
Akto is aiming to solve a practical problem for enterprise security teams: how to add controls without forcing developers to rebuild agent architectures or carry out separate instrumentation work. In some environments, the native integrations do not require changes to existing agent code.
That approach reflects a broader shift in enterprise software buying, as security teams increasingly want controls introduced through existing development workflows rather than through stand-alone systems. In AI environments, that need has become more urgent because agents often operate across several software layers at once.
"Security has to be embedded where developers build, not where security teams wish they would build. The enterprises deploying AI agents today are betting their most critical workflows on these platforms. Our job is to make sure security is never the reason those AI workloads slow down or get blocked. By partnering with the platforms teams already use to build and operate AI agents, we're making agentic runtime protection the default, not an afterthought," said Ankita Gupta, Chief Executive Officer and Co-Founder of Akto.
Stack Coverage
The partnerships also highlight how fragmented the AI agent market has become. A single production deployment may involve a framework for orchestrating agent behaviour, a gateway for model traffic, a platform for governance and access control, and a toolkit for connecting to third-party or internal tools.
That fragmentation has made it harder for security teams to maintain a clear view of what systems are doing in real time. It also raises operational concerns for companies that need auditable records of what an AI agent was allowed to access, what actions it took and what information it returned.
LangChain's inclusion is notable because of its broad adoption among developers building agentic applications and multi-step workflows. LiteLLM adds an open-source gateway that has gained significant developer traction, while Portkey and TrueFoundry target organisations operating AI systems at scale. Arcade extends coverage to the runtime layer, where agents discover and use tools under policy controls.
Akto says the collective aim of the integrations is to secure AI workloads across the full path from development through deployment and live operation. More broadly, enterprise AI security appears to be moving towards controls built into the stack by default rather than added after deployment, as businesses seek to govern AI agents and connected MCP environments in production.