SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers
Ai cyber threat reaching for floating credit cards office scene

Tenable hack of Copilot AI agent exposes fraud risks

Fri, 12th Dec 2025

Tenable has disclosed research showing that no-code agentic artificial intelligence tools can be manipulated for financial fraud and data theft, after it jailbreaked an AI agent built with Microsoft Copilot Studio.

The cyber security company said its work highlighted emerging risks from the rapid spread of employee-built AI agents inside large organisations.

No-code platforms let staff construct AI agents without software development skills. These tools sit inside existing business systems and handle tasks that once needed human oversight.

Tenable said this trend increases the risk that powerful automation will run without clear guardrails or security review.

Its researchers focused on Microsoft Copilot Studio, which allows users to create custom AI agents that interact with data sources and business workflows.

The team built a demonstration AI travel agent within Copilot Studio. The agent handled customer travel reservations and could create new bookings or modify existing ones without human intervention.

The researchers loaded the system with demonstration customer records. These records included names, contact details and credit card information.

They configured the agent with explicit rules. The agent had to verify a customer's identity before sharing any information or changing a booking.

Tenable then attempted to subvert the agent using a prompt injection technique. Prompt injection uses crafted instructions that sit alongside or override original rules inside an AI system.

The researchers said they successfully hijacked the agent's workflow. They booked a free holiday and extracted sensitive payment card data.

The company said this illustrated how an AI agent designed for routine customer service could become a channel for fraud and data exposure.

Data exposure

Tenable reported that the agent bypassed its own identity checks. It disclosed payment card information belonging to other customers, including full customer records.

The travel agent handled data that would normally fall under payment card industry standards. The researchers said the agent revealed this information after malicious instructions.

They said organisations that deploy similar agents face risks of regulatory scrutiny and penalties. These risks arise if AI systems leak protected personal or financial data.

Tenable said the experiment also exposed the impact of broad edit permissions. The travel agent could update booking details such as travel dates.

The same access also covered financial fields inside the booking records. The researchers instructed the agent to change the trip price to zero.

The agent carried out the change. The researchers said this effectively granted services without authorisation.

Tenable said the incident showed that financial fraud could stem from misconfigured AI workflows. It said non-technical staff may not see which fields and systems their agents can change.

"AI agent builders, like Copilot Studio, democratise the ability to build powerful tools, but they also democratise the ability to execute financial fraud, thereby creating significant security risks without even knowing it," said Keren Katz, Senior Group Manager of AI Security Product and Research, Tenable. "That power can easily turn into a real, tangible security risk."

Hidden permissions

The company's researchers said many AI agents sit on top of existing applications with high levels of access. They said the permissions are often excessive and not obvious to the people who assemble the agents.

Tenable framed governance and enforcement as key lessons from the work. It said business leaders should treat AI agents as production systems, not as informal experiments.

The company urged organisations to review which systems and data stores each agent can reach before deployment. It recommended that teams create a clear map of every linked application and dataset.

Tenable also outlined access control measures. It said agents should have the minimum write and update rights that are needed for a single, defined use case.

The researchers advised that security teams should monitor agent behaviour. They said logging and analysis could identify unusual activity, such as large data exports or repeated price changes.

They said monitoring should track both data flows and deviations from established business logic. They highlighted prompt injection as a type of activity that may not show up as a traditional cyber intrusion.

Tenable said it will continue to study AI exposure risks as more organisations integrate no-code agents into critical workflows.