SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers

UK financial firms lack shared AI governance standard

Wed, 29th Apr 2026

Zango has published research warning that UK financial firms lack a shared standard for AI governance. The report includes contributions from senior leaders at banks, fintechs and payment companies, including Ecommpay.

The findings point to a gap between the pace of AI adoption in financial services and the controls used to oversee it. Business and technology teams are deploying AI tools faster than risk and compliance functions can track them, the research says, and some institutions cannot identify all the AI systems in use across their organisations.

The report draws on interviews with 27 C-suite and senior executives responsible for risk, compliance and AI governance at UK and European financial institutions, as well as four industry roundtables involving 60 additional senior practitioners. Contributors included executives from Santander, Stripe, St James's Place, Standard Chartered, Lloyds Banking Group, Monzo, Allica Bank, Commerzbank, Revolut and Ecommpay, alongside John Glen, a member of the Treasury Committee.

The warning comes as UK authorities examine risks linked to Anthropic's Mythos model. The Bank of England is preparing to convene the Treasury, the Financial Conduct Authority and the National Cyber Security Centre to assess the threat.

Governance gap

AI use in finance is shifting from systems that produce predictable outputs to generative and agentic models with context-dependent results, the report says. That shift changes governance demands because institutions cannot fully validate every outcome in advance.

This has created a widening oversight gap, it argues. Without a common operating standard, firms are tackling similar governance questions independently, resulting in uneven controls across the sector.

That, in turn, raises the risk of weaknesses being exploited at scale. The research cites global fraud losses of USD $579 billion in 2025 and says 90 per cent of financial professionals reported an increase in AI-enabled attacks.

Ritesh Singhania, chief executive of Zango, set out the concern in direct terms. "Compliance teams are trying to keep pace with AI systems their own colleagues have deployed, and with criminal networks scaling faster than anyone's defences. Weak governance doesn't just create individual risk; it creates systemic vulnerability across the entire sector. What's missing is a shared implementation standard that gives firms a consistent basis for governing AI as they adopt it," he said.

Industry model

A central theme in the report is that the UK lags the US in turning broad regulatory principles into operational guidance for firms. Zango points to a Financial Services AI Risk Management Framework published in the US through a Treasury-led public-private collaboration involving 108 financial institutions, with input from agencies including NIST. It also notes that Singapore's regulator has issued a similar framework.

According to the report, no equivalent standard exists in the UK or the EU. That absence has prompted senior industry figures to argue that the sector should take the lead in developing practical rules for day-to-day use.

Lord Clement-Jones, Liberal Democrat spokesperson for science, innovation and technology in the House of Lords and co-chair of the All-Party Parliamentary Group on AI, wrote in the foreword: "What is immediately missing is the translation of high-level regulatory principles into day-to-day operational practice. We cannot simply wait for the aftermath of the first major AI-fuelled financial scandal to force us into action."

Dean Nash, adviser to Zango and global chief operating officer (legal) at Santander, said the structure of oversight itself needs to change. "Closing the accountability gap requires a fundamental rethink of governance architecture. We must shift from controlling and auditing a system's internal logic to governing the dynamic environment in which it operates. Risk management is no longer about predicting every result, but about orchestrating an ecosystem of continuous, real-time guardrails," he said.

Ecommpay view

Among the contributors was Willem Wellinghoff, chief compliance officer at Ecommpay, who argued that financial institutions should not wait for formal regulation before establishing practical standards for AI use.

"Artificial intelligence is no longer just a technological discussion topic; it is becoming a fundamental part of the engine driving the future financial ecosystem. But to truly harness its potential, we must take responsibility for its governance today.

"It is critical that we, as an industry, take the lead in defining AI standards. The technology is evolving at significant pace. If we wait for government and regulators to mandate legislation, we risk being constrained by static frameworks that fundamentally cannot keep pace with innovation."

"Instead, we must champion an industry-led approach that is dynamic, nuanced and continuously adaptable. By establishing robust guidelines that earn the trust and endorsement of governments and regulators, we build a collaborative model that helps us embed ethical, transparent practices across our organisations, protecting consumers while keeping us agile enough to innovate. Ultimately, we must be the architects of our own compliance," Wellinghoff said.

The report proposes a sector-specific model shaped by practitioners and developed with regulator engagement. It points to the Joint Money Laundering Steering Group as a precedent for an industry-built standard that carries government endorsement without a direct regulatory mandate.

Its core message is that AI governance in finance remains fragmented as adoption spreads across institutions. Without a shared framework, firms must build controls individually even as regulators and cyber agencies intensify scrutiny of the risks.