AI-fuelled cyber onslaught to hit critical systems by 2026
Cyber security specialists are warning that 2026 will bring a sharper escalation in attacks on critical infrastructure and government networks, as artificial intelligence reshapes the tactics of both state-backed and criminal groups and exposes weaknesses in ageing operational technology.
The alerts follow fresh guidance from Western governments on recent intrusions into operational technology, a high-profile breach at the UK Foreign Office, and growing concern in Australia over cyber-enabled fraud and critical infrastructure risk.
OT systems
Authorities in the US, UK, Australia, Canada, France, Germany and other countries recently set out how pro-Russia hacktivists have been targeting operational technology in critical infrastructure. Operational technology underpins industrial processes in sectors such as energy, water and manufacturing.
Floris Dankaart, Lead Product Manager, Managed Extended Detection and Response at NCC Group, said the attack pattern reflects a shift in the threat landscape for industrial operators.
"Historically, operational technology cyber security incidents were the domain of nation states, or sometimes the act of a disgruntled insider. But recently, we've seen year-on-year rises in operational technology ransomware from criminal groups and hacktivists alike: all major threat actor categories have bridged the IT-OT gap. With that comes a shift from highly targeted, strategic campaigns to the types of opportunistic attacks CISA describes. These are the predators targeting the slowest gazelles, so to speak," said Dankaart.
Dankaart added that the risks go beyond temporary disruption or data theft. "Although these methods seem crude compared to APT activities, this doesn't mean they can't be dangerous. Disabled safety mechanisms in an OT environment can have a range of serious consequences, including injury and death."
Exposed remote access tools are a common entry point. "It's concerning, but not surprising, that publicly exposed VNC interfaces are one of the entry routes. VNC was never designed for secure remote access and typically lacks strong encryption," he said.
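Dankaart's point about exposed VNC can be illustrated with a minimal banner check. VNC speaks the RFB protocol, which greets clients with a version string such as `RFB 003.008`, so an interface that answers on a default VNC port with that banner is reachable without any VPN or gateway in front of it. The sketch below is illustrative only; the port range and timeout are assumptions, and real exposure assessments use dedicated scanning tooling.

```python
import socket

VNC_PORTS = range(5900, 5904)  # conventional VNC display ports 5900-5903


def looks_like_vnc(banner: bytes) -> bool:
    """RFB (the protocol VNC uses) opens with a version string
    such as b'RFB 003.008\\n'."""
    return banner.startswith(b"RFB ")


def find_exposed_vnc(host: str, timeout: float = 2.0) -> list[int]:
    """Return the default VNC ports on `host` that answer with an RFB
    banner, i.e. interfaces directly reachable from this network."""
    exposed = []
    for port in VNC_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                if looks_like_vnc(s.recv(12)):
                    exposed.append(port)
        except OSError:
            continue  # closed, filtered, or unreachable
    return exposed
```

Run against an organisation's own public address space (with authorisation), any non-empty result is a candidate for moving behind a VPN or jump host.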
Dankaart emphasised that asset management should go beyond documentation. Real-time monitoring is essential, and since active OT scanning is risky, passive tools like port-mirrored sensors can maintain inventories and detect rogue devices, such as unauthorised 4G routers.
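The passive approach Dankaart describes can be sketched as folding observed traffic metadata from a port-mirrored sensor into an inventory, with anything off the approved list flagged as rogue. This is a minimal illustration, not a product design: the MAC addresses, device names and protocol labels below are all hypothetical, and packet capture itself is assumed to happen elsewhere.

```python
from collections import defaultdict

# Hypothetical approved OT assets, keyed by MAC address.
APPROVED = {
    "00:1a:2b:3c:4d:5e": "PLC-01",
    "00:1a:2b:3c:4d:5f": "HMI-01",
}


def build_inventory(observations):
    """Fold passively observed (mac, ip, protocol) tuples from mirrored
    traffic into an asset inventory, collecting any devices that are
    not on the approved list (e.g. an unauthorised 4G router)."""
    inventory = defaultdict(lambda: {"ips": set(), "protocols": set()})
    rogues = set()
    for mac, ip, proto in observations:
        inventory[mac]["ips"].add(ip)
        inventory[mac]["protocols"].add(proto)
        if mac not in APPROVED:
            rogues.add(mac)
    return inventory, rogues


# Example mirrored-traffic metadata; no active scanning touches the OT network.
seen = [
    ("00:1a:2b:3c:4d:5e", "10.0.0.10", "modbus"),
    ("00:1a:2b:3c:4d:5f", "10.0.0.11", "https"),
    ("aa:bb:cc:dd:ee:ff", "10.0.0.99", "lte-gateway"),
]
inventory, rogues = build_inventory(seen)
```

Because the sensor only listens, the inventory stays current without the risk that active probes pose to fragile industrial controllers.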
AI and cybercrime
Security firms predict AI will increase both the volume and sophistication of attacks in 2026, including AI-assisted social engineering, deepfakes, and automated fraud, alongside more use of machine learning in defence.
Scott Morris, Managing Director for ANZ at Infoblox, said, "AI will encourage cybercriminals to pioneer new creative ways to use classic techniques, in an attempt to evade detection. However, these new methods may have the adverse effect of exposing previously unknown intrusions and longer-term threat campaigns by nation-state actors lurking in networks, bringing geopolitically driven activity to light."
Australian policymakers are expected to revise cybersecurity legislation and regulations for critical sectors. Morris added that organisations are looking at overseas case studies to reduce fraud and infrastructure-level attacks. "For example, the success of using protective DNS to reduce financial cybercrime in Ukraine will encourage Australian organisations to use similar strategies: blocking access to malicious domains and preventing cybercriminals from carrying out their attacks at the infrastructure level," said Morris.
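Protective DNS works by answering queries for known-malicious domains with a harmless sinkhole address instead of the real one, so the connection never reaches the attacker's infrastructure. The sketch below shows the core lookup logic under stated assumptions: the blocklist entries and the upstream resolver are hypothetical stand-ins for the threat-intelligence feeds and recursive resolution a real service provides.

```python
# Hypothetical blocklist; production protective DNS services draw on
# continuously updated threat-intelligence feeds.
BLOCKLIST = {"malicious-payments.example", "phish-login.example"}
SINKHOLE = "0.0.0.0"


def resolve(domain, upstream_lookup):
    """Return a sinkhole address for blocklisted domains and their
    subdomains; otherwise defer to the normal upstream resolver."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the queried name and every parent zone against the blocklist,
    # so cdn.malicious-payments.example is caught as well.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE
    return upstream_lookup(domain)


# Stand-in for a real recursive DNS lookup.
fake_upstream = lambda domain: "93.184.216.34"
```

The design point is that blocking happens at the infrastructure layer: every device that uses the resolver is protected, without any endpoint software.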
Fraud escalation
John Wojcik, Senior Threat Researcher at Infoblox, said AI will accelerate cyber-enabled fraud targeting Australian consumers and businesses. "In 2026, we will see a significant acceleration of automation within the cyber-enabled fraud industry. Though most of these groups are concentrated in Southeast Asia, they are actively targeting Australia. There will be a continuation of the push and pull scenario we have seen in recent years, in which law enforcement increases the pressure and cybercriminals respond by doubling down on AI-enabled attacks. Deepfake software and jailbroken large language models will become more prevalent, making it increasingly difficult to detect and prevent fraudulent activity."
Authorities have started exposing the financial infrastructure behind large-scale scams. "The scam ecosystem will continue to be exposed globally, raising new awareness of the many aspects of these crimes, including payment processors, geographic distribution of call centres and connected financial crimes. We should expect to see more law enforcement actions across the world as they come to understand the complexities and find ways to bring enforcement. In late 2025, we have seen both huge financial actions against Chinese actors through the largest cryptocurrency confiscation ever and a fine of AUD $256M against Canadian processors who were supporting Russian cybercriminal activities. Australia will follow this path, taking a harder, more tangible stance against cybercriminals. This is already taking shape, with the Australian Federal Police launching the National Security Investigations team to tackle increasing cybercrime," said Wojcik.
He added, "The use of AI in cyberwarfare has led to fewer barriers to entry. For young, disenfranchised people across the region, this will drive them towards the cybercrime economy, exacerbating existing cybersecurity challenges. In Australia, some states, such as Victoria, are experiencing a major uptick in alleged youth offenders. It's clear cybercriminal groups are targeting young Australians and this trend will necessitate targeted interventions to provide alternative pathways and reduce the appeal of cybercrime. Addressing the root causes of disenfranchisement will be crucial in mitigating this issue."
AI oversight
Vendors also expect changes in how organisations deploy AI in defensive roles. They forecast a greater emphasis on human accountability, the sharing of collective intelligence, and the consolidation of tools around unified platforms.
Gregor Steward, Chief AI Officer at SentinelOne, said security teams will focus on supervisory control rather than manual execution of repetitive tasks. "AI models and tools can now handle a major portion of the procedural security work that humans currently do, and the challenge will become supervision rather than execution. Even when machines do the work, humans must remain responsible for the outcomes, but reviewing the output of, say, 1,000 AI agents is impossible with traditional alert-centric methods."
"The solution will be to find the 'Goldilocks Spot' of high automation and human accountability, where AI aggregates related tasks and alerts and presents them as a single decision point for a human. Humans then make one accountable, auditable policy decision rather than hundreds or thousands of potentially inconsistent individual choices, maintaining human oversight while still leveraging AI's capacity for comprehensive, consistent work."
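The aggregation Steward describes can be sketched as grouping agent alerts that share an indicator into one decision point, so the analyst approves a single policy action instead of triaging each alert. This is a minimal illustration of the idea, not SentinelOne's implementation; the alert fields, agent names and "block" action are all assumptions.

```python
from collections import defaultdict


def aggregate_alerts(alerts):
    """Group individual AI-agent alerts that share an indicator into a
    single decision point, so a human makes one auditable choice per
    group rather than reviewing every alert."""
    groups = defaultdict(list)
    for alert in alerts:
        groups[alert["indicator"]].append(alert)
    decisions = []
    for indicator, related in groups.items():
        decisions.append({
            "indicator": indicator,
            "alert_count": len(related),
            "proposed_action": "block",  # one accountable decision per group
            "agents": sorted({a["agent"] for a in related}),
        })
    return decisions


# Three agent alerts collapse into two human decision points.
alerts = [
    {"agent": "agent-001", "indicator": "evil.example"},
    {"agent": "agent-002", "indicator": "evil.example"},
    {"agent": "agent-003", "indicator": "203.0.113.7"},
]
decisions = aggregate_alerts(alerts)
```

Each decision record carries the full list of contributing agents, which is what keeps the single human sign-off auditable.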
Steward also highlighted the growing threat of deepfakes. "In 2026, the smartest enterprises will move beyond single-layer defenses against deepfakes. The technology to replicate someone's identity in video, doing practically anything, should concern every CISO… Sophisticated attackers can iterate indefinitely at minimal cost, refining their approach until they succeed. The path forward requires combining detection tradecraft… with out-of-band verification methods… Detection remains a critical element of defense, but the organisations that will stay ahead in 2026 are those recognising that deepfakes require a fundamentally different approach to identity verification across the enterprise."
He further predicted greater sharing of threat intelligence. "In 2026, organisations will finally realise that collective security requires collective contribution… The key will be increasing customer comfort with the understanding that sharing some of their information will ultimately benefit and de-risk them. This will help customers realise, concretely, that the safety of one is the safety of all. An individual customer is not an island, and an individual alone can't defend against attackers who share information freely."
Steward expects consolidation of security products into integrated platforms. "In the next year, security's strategy of having a smorgasbord of acronyms and siloed tools is going to fall apart… If there's one system that can detect identity attacks as well as behavioral ones, why are we maintaining artificial product boundaries? That unification of systems is happening across the SaaS landscape, and security is next in line."
State-linked threats
Concerns over state-linked activity intensified after the UK Foreign Office breach.
Joseph Rooke, Director of Risk Insights at Recorded Future's Insikt Group, said, "Cyber threat groups linked to nation states often focus on long-term monitoring rather than quick financial gain. They want to quietly observe government or commercial activity, and they only need one small weakness to do it. They work much like skilled burglars: they look for the one open window or forgotten spare key that lets them quietly slip inside."
He added, "Older technology, slow upgrades, and cost-cutting can all create easy entry points for hackers. This is why cyber threat intelligence is so important… Intelligence-led security helps organisations identify vulnerabilities early, understand who might be targeting them, and take action before real harm is done. It supports scrutiny of outside contractors, spots insider risks and catches fake identities designed to fool staff."
"While the UK government believes the risk to individuals from this incident is low, the breach is another reminder that even well-resourced institutions can be vulnerable if they don't keep a close, constant eye on their digital front door," said Rooke.