Nudge Security adds new tools to govern AI in SaaS
Nudge Security has expanded its SaaS and AI security platform with new tools that track, analyse and control how employees use artificial intelligence across cloud applications.
The Austin-based company now monitors chatbot conversations, browser activity, OAuth integrations and other connections inside the SaaS stack. It says the launch addresses security and governance risks that arise as employees adopt hundreds of AI tools and AI-enabled features.
The new release adds six main functions. These cover monitoring of AI conversations, browser-based policy enforcement, AI usage analytics, identification of risky integrations, summaries of vendors' data training policies and automated playbooks for ongoing governance.
New monitoring tools
Nudge Security now scans employee interactions with popular AI chatbots such as ChatGPT, Gemini, Microsoft Copilot and Perplexity. It detects sensitive data that users share through file uploads and text conversations.
Browser-based controls enforce usage rules as employees interact with AI tools. The browser presents guardrails that reflect an organisation's acceptable use policy and prompts users in real time.
The platform also tracks AI usage trends. Security and compliance teams can view daily active users broken down by department, individual user and AI product, across both sanctioned and unsanctioned tools, to support incident response and planning.
New discovery features identify data-sharing integrations and OAuth or API grants that give AI tools access to corporate systems. The platform flags permissions that place sensitive data at risk.
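The risk the discovery features target can be illustrated with a small sketch. The code below is not Nudge Security's implementation; it is a minimal, hypothetical example of flagging OAuth grants whose scopes expose sensitive corporate data, using an assumed grant-record shape and a hand-picked scope list.

```python
# Illustrative sketch: flag OAuth grants that include at least one
# scope known to expose sensitive data. The scope strings below are
# real Google/Microsoft scope identifiers; the grant-record shape is
# a hypothetical simplification.
SENSITIVE_SCOPES = {
    "https://www.googleapis.com/auth/drive",           # full Google Drive access
    "https://www.googleapis.com/auth/gmail.readonly",  # read all Gmail messages
    "Mail.Read",                                       # Microsoft Graph mailbox read
}

def flag_risky_grants(grants):
    """Return the grants whose scopes intersect SENSITIVE_SCOPES.

    `grants` is a list of dicts like {"app": name, "scopes": [scope, ...]}.
    """
    return [
        g for g in grants
        if SENSITIVE_SCOPES.intersection(g.get("scopes", []))
    ]

grants = [
    {"app": "MeetingNotesAI", "scopes": ["https://www.googleapis.com/auth/drive"]},
    {"app": "CalendarHelper", "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]
print([g["app"] for g in flag_risky_grants(grants)])  # prints ['MeetingNotesAI']
```

A production system would pull the grant inventory from identity-provider audit APIs rather than a static list, but the core check, matching granted scopes against a sensitivity policy, is the same.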
Nudge Security now provides condensed summaries of AI and SaaS vendors' data training policies. These summaries describe how each vendor uses, retains and handles customer data for model training and service operation.
Automated playbooks support ongoing governance tasks. The workflows track employee acknowledgements of acceptable use policies, revoke risky data-sharing permissions and orchestrate account removals.
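The playbook pattern described above can be sketched as a simple dispatch over findings. This is a hypothetical illustration, not the vendor's API: the finding types, field names and action labels are all assumptions chosen to mirror the three workflows the article lists.

```python
# Illustrative governance-playbook sketch (hypothetical shapes throughout):
# map each finding to an action -- nudge users who have not acknowledged
# the acceptable use policy, revoke risky data-sharing grants, and
# remove accounts for offboarded employees.
def run_playbook(findings, acknowledged_users):
    """Return a list of (action, target) tuples for the given findings."""
    actions = []
    for f in findings:
        if f["type"] == "policy_ack" and f["user"] not in acknowledged_users:
            actions.append(("nudge_user", f["user"]))        # remind to acknowledge AUP
        elif f["type"] == "risky_grant":
            actions.append(("revoke_grant", f["grant_id"]))  # cut off data sharing
        elif f["type"] == "offboarded_account":
            actions.append(("remove_account", f["account"])) # orchestrate removal
    return actions

findings = [
    {"type": "policy_ack", "user": "alice"},
    {"type": "risky_grant", "grant_id": "g-1"},
    {"type": "offboarded_account", "account": "bob@example.com"},
]
print(run_playbook(findings, acknowledged_users={"carol"}))
```

In practice each action would call out to the relevant SaaS or identity-provider API; the sketch only shows the decision logic that turns findings into governance tasks.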

Extending earlier work
The new functions build on AI security and governance features that the company has offered since 2023. The platform already discovers AI applications, users and integrations from the first day of use. It maps AI dependencies across the SaaS supply chain and maintains security profiles for thousands of AI providers.
Nudge Security positions the expanded platform as a way for customers to pursue AI projects while keeping security controls and policy compliance in place.
Productivity software company Notion uses the platform as part of its AI rollout, tracking employees' exploration of AI tools against internal rules.
"As part of Notion's commitment to secure AI adoption, we've built a governance framework that requires visibility into the tools our teams explore. Nudge Security provides this visibility and gives our compliance and legal teams aggregated data on emerging AI tools, which we then evaluate against our established privacy, security, and compliance requirements," said JJ Macias, IT Systems Engineering Manager, Notion.
Rising AI exposure
Nudge Security has published data from organisations that use its service. The company reports that it has discovered more than 1,500 unique AI tools across customer environments. Each organisation runs an average of 39 different AI products.
The data also shows that more than half of SaaS applications list at least one major large language model provider as a data subprocessor. The average employee has 70 OAuth grants, many of which allow ongoing data sharing between SaaS apps and AI tools.
The company says it is the only AI security provider that offers visibility and control across the wider SaaS ecosystem, not just dedicated AI products. It focuses on AI-linked features in productivity suites, integrations through Model Context Protocol servers and persistent OAuth grants that keep access open long after initial approval.
Jaime Blasco, Co-founder and Chief Technology Officer at Nudge Security, said employees often create unseen access paths for AI tools.
"The risk isn't just in the AI tool itself - it's in the access pathways employees create without considering the security implications," said Jaime Blasco, CTO and co-founder, Nudge Security. "A single OAuth grant can give an AI vendor continuous access to your organization's most sensitive data. Nudge Security makes these integrations visible and manageable for the first time."
Workforce focus
The company bases its approach on the idea that AI risk starts with human behaviour. Employees create AI agents, approve OAuth grants, connect APIs for MCP servers and use AI features inside other SaaS products.
Nudge Security's system engages workers directly in governance processes. It delivers prompts and guardrails at the moment users make decisions that affect an organisation's security posture.
The company offers a free trial that generates an inventory of AI assets within hours of activation. The inventory covers AI applications, accounts, integrations and supply chain dependencies, including items that existed before activation.
Nudge Security was founded in 2021 by Russell Spitler and Jaime Blasco. It is backed by investors including Cerberus Ventures, Ballistic Ventures, Forgepoint Capital and Squadra Ventures. The firm plans further work on AI-driven risk insights and behavioural engagement as customers expand their use of AI in SaaS environments.