SecurityBrief US - Technology news for CISOs & cybersecurity decision-makers
emma launches GPU governance tools for cloud AI ops

Fri, 15th May 2026
Joseph Gabriel Lagonsin, News Editor

emma Technologies has launched new tools for managing AI infrastructure in its cloud operations platform, extending its governance model to GPU-based workloads.

The update adds GPU compute, monitoring, cross-cloud networking and inference deployment to a platform that already manages cloud-native workloads across distributed infrastructure. The new capabilities are intended to bring AI systems under the same governance policies and operating model used for other enterprise workloads.

The changes address a common problem as AI projects move from experimentation to production. Businesses are increasingly deploying GPU-backed systems, but often manage them through separate tools and cloud-specific processes, creating gaps in oversight, cost control and operational consistency.

emma says many organisations still provision GPU infrastructure manually in each cloud environment, while governance remains split across different hardware and software layers. As a result, engineering teams may have limited visibility into spending and performance until costs appear in cloud billing reports.

The new functions are presented as a connected stack within the existing platform. GPU Virtual Machines and GPU Managed Kubernetes let organisations provision infrastructure at both the virtual machine and cluster level under the same governance rules as other workloads.
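As an illustration of what "same governance rules as other workloads" could mean at the VM and cluster level, here is a minimal Python sketch. The `GovernancePolicy` class, the workload spec fields, and the region and cost limits are all hypothetical, invented for illustration; they are not emma's actual API or schema.

```python
from dataclasses import dataclass

# Hypothetical governance policy applied uniformly to VM-level and
# cluster-level GPU requests (illustrative only, not emma's API).
@dataclass
class GovernancePolicy:
    allowed_regions: set
    max_hourly_cost_usd: float

    def permits(self, spec: dict) -> bool:
        """A request passes only if it stays inside the approved
        regions and under the cost ceiling."""
        return (spec["region"] in self.allowed_regions
                and spec["hourly_cost_usd"] <= self.max_hourly_cost_usd)

policy = GovernancePolicy(allowed_regions={"eu-west-1", "us-east-1"},
                          max_hourly_cost_usd=20.0)

# The same policy object checks both a GPU virtual machine and a
# GPU Kubernetes node pool, rather than per-cloud rules for each.
gpu_vm = {"type": "gpu-vm", "region": "eu-west-1", "hourly_cost_usd": 12.5}
node_pool = {"type": "gpu-k8s-pool", "region": "ap-south-2", "hourly_cost_usd": 8.0}

print(policy.permits(gpu_vm))     # region and cost both allowed
print(policy.permits(node_pool))  # rejected: region not approved
```

The point of the sketch is the single policy object: both provisioning paths are validated against one set of rules, which is the operating model the article describes.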

GPU Observability provides a single interface to monitor those environments instead of relying on separate dashboards from cloud providers. Cross-Cloud AI Networking connects workloads across providers over emma's private backbone, reducing manual configuration and egress charges linked to routing over the public internet.
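To make the single-interface monitoring idea concrete, the sketch below rolls per-provider GPU metrics into one fleet-wide summary. The provider names, metric fields, and values are hypothetical, assumed purely for illustration; real observability tooling would pull these figures from each cloud's metrics API.

```python
# Illustrative only: per-provider GPU metrics merged into one view,
# as a single-pane observability layer might present them.
fleet = [
    {"provider": "cloud-a", "gpu": "a100", "utilization_pct": 83, "hourly_cost_usd": 3.1},
    {"provider": "cloud-b", "gpu": "h100", "utilization_pct": 22, "hourly_cost_usd": 6.9},
    {"provider": "cloud-a", "gpu": "a100", "utilization_pct": 91, "hourly_cost_usd": 3.1},
]

def summarize(fleet):
    """Roll fleet-wide metrics up into one report instead of
    consulting a separate dashboard per provider."""
    avg_util = sum(g["utilization_pct"] for g in fleet) / len(fleet)
    total_cost = sum(g["hourly_cost_usd"] for g in fleet)
    idle = [g for g in fleet if g["utilization_pct"] < 30]
    return {"avg_utilization_pct": round(avg_util, 1),
            "total_hourly_cost_usd": round(total_cost, 2),
            "underutilized": len(idle)}

print(summarize(fleet))
```

A roll-up like this is also where the cost-visibility gap the article mentions shows up early: the underutilised `h100` instance surfaces before it appears in a monthly billing report.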

Inference Workflows adds deployment templates designed to standardise how models are put into production. The goal is to avoid rebuilding inference environments for each new model while keeping deployments within an established governance framework.
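A reusable deployment template of the kind described might look like the following Python sketch: one function stamps out a per-model deployment spec so each new model does not need a hand-built environment. Every field name here is hypothetical, not emma's Inference Workflows schema.

```python
# Minimal sketch of a reusable inference deployment template
# (field names are invented for illustration).
def inference_deployment(model_name: str, image: str,
                         gpu_type: str = "a100", replicas: int = 2) -> dict:
    return {
        "name": f"infer-{model_name}",
        "image": image,
        "resources": {"gpu": gpu_type, "replicas": replicas},
        # Governance metadata travels with every deployment by default,
        # keeping new models inside the established framework.
        "governance": {"cost_center": "ml-prod", "audit_logging": True},
    }

spec = inference_deployment("sentiment-v2", "registry.example/sentiment:2.0")
print(spec["name"])  # infer-sentiment-v2
```

Because governance settings are baked into the template rather than re-declared per model, a new deployment cannot silently drop out of the policy framework.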

Governance focus

emma argues that AI infrastructure has become production infrastructure, but governance has not kept pace. In practice, firms may have invested in GPU capacity without resolving broader operational questions, such as where workloads should run, how data should move between environments, and how to balance performance, sovereignty and cost requirements.

Research cited by the company points to broad adoption of GPU workloads while suggesting that infrastructure bottlenecks remain common. "With 76% of organizations already running GPU workloads, making high-performance parallel computing a baseline infrastructure requirement, the need for unified governance frameworks that extend to AI infrastructure, as emma now provides, has never been more urgent," said Paul Nashawaty, Practice Lead and Principal Analyst at ECI Research and theCUBE Research.

Nashawaty added: "Our research confirms that despite unprecedented investment in AI infrastructure, organizations continue to encounter bottlenecks related to data movement, orchestration and utilization efficiency, confirming that GPU capacity alone is insufficient for production AI, and that governance platforms like emma are essential to bridging that gap."

Existing platform

emma is framing the launch around integration rather than a separate layer of AI tooling. Standalone GPU tools and MLOps platforms often add another operational tier for engineering teams to manage; emma's approach instead extends a platform already used for governance and cloud operations.

That may appeal to organisations trying to avoid adding more fragmented systems as AI deployments expand across different clouds and regions. By placing GPU infrastructure and inference deployment within the same control plane, emma aims to give teams a more unified way to govern infrastructure choices and enforce policy.

Dmitry Panenkov, Founder and Chief Executive Officer of emma Technologies, said the issue is no longer experimental use of AI hardware but production control. "GPU infrastructure has been operating outside the governance frameworks that apply to everything else in the enterprise. That's not sustainable when it's running production AI. These capabilities bring GPU workloads into the platform that already governs everything else - same policies, same visibility, same operational model. We're not chasing the AI wave. We're extending the answer we already had."

The launch reflects a broader shift in enterprise technology, where the challenge is moving from access to AI hardware to managing distributed systems that can support sustained production use. For companies operating across multiple cloud providers, governance, networking and observability are becoming as important as raw GPU supply.

emma says its platform is designed to provide unified access, deployment and management across distributed infrastructure, with an emphasis on reducing operational fragmentation, controlling costs at infrastructure boundaries and maintaining governance across environments without vendor lock-in.