Pangea Labs launches to boost AI security & tackle new threats
Pangea has launched Pangea Labs, a research division focused on AI security, alongside new AI Red Teaming services aimed at helping organisations assess and secure large language models (LLMs) against current and emerging attack techniques.
Focus on AI threats
The new division, led by Chief Product Officer Rob Truesdell, will conduct research into advanced AI attack techniques. Pangea Labs' remit includes red team exercises designed to test AI systems for vulnerabilities before malicious actors can exploit them. Key research areas include advanced prompt injection attacks and defences, AI model manipulation, jailbreaking methods, enterprise security best practices, and emerging threat intelligence.
"As generative AI becomes deeply embedded in enterprise workflows, the attack surface is expanding exponentially," said Oliver Friedrichs, Founder and Chief Executive Officer of Pangea. "The launch of Pangea Labs alongside our Red Teaming services represents our commitment to staying ahead of these threats through rigorous research and real-world attack simulation. Our research team's proven ability to think like an attacker - combined with our platform's defensive capabilities - creates an unmatched advantage for our customers' security postures."
Red teaming services
Pangea's newly available Red Teaming services are built on the expertise found within Pangea Labs and are designed to simulate real-world cyberattacks on AI systems. These services move beyond traditional penetration testing by specifically targeting the risks and vulnerabilities inherent to LLMs and other AI technologies. Organisations using these assessments are provided with detailed insights regarding security risks specific to their AI infrastructure, effectiveness of deployed controls, and the resiliency of their incident response processes.
The new offering is distinguished by its comprehensive scope, with simulated attacks engineered to reflect actual adversary tactics and techniques commonly aimed at AI, as opposed to generic IT systems. The methodologies employed are based on research by Dr. Jim Hoagland and practical attack experience from Joey Melo.
Joey Melo, who joins as Pangea Labs' first AI Red Team Specialist, is recognised within the AI security field as the sole contestant to complete all of Pangea's 2025 Prompt Injection Challenge virtual rooms. Melo has also achieved 100% completion in the HackAPrompt 2.0 competition, addressing 39 security challenges across multiple AI models, and holds certifications including BSCP, OSCP, and OSCE3.
"Traditional security frameworks weren't designed for the unique challenges of AI systems," said Rob Truesdell, Pangea's Chief Product Officer. "Our taxonomy provides teams with the structured knowledge they need to identify vulnerabilities before attackers do. By understanding the full spectrum of AI attack methods, development teams can build more resilient systems from the ground up."
AI Attack Taxonomy
Along with the launch of the new research division and service line, Pangea has released what it describes as the industry's most up-to-date AI Prompt Injection Attack Taxonomy. This taxonomy is based on the foundational research of Dr. Jim Hoagland, further informed by Melo's practical offensive experience, and is designed to provide a comprehensive classification framework that enables security teams to map and mitigate the diverse attack vectors associated with AI systems.
The taxon characterised as a 'living' framework, meaning it will be regularly updated to adapt as new attack scenarios and countermeasures are developed. The goal is for organisations to continue protecting their AI investments as threat actors evolve their tactics.
According to Pangea, current strategies for securing AI components lack maturity, even as generative AI and related technologies become more deeply integrated into day-to-day operations. Recent incidents, such as the EchoLeak vulnerability affecting Microsoft Copilot, have highlighted the urgency with which businesses must address these challenges.
Pangea's AI security tools are reported to provide protections against a range of AI attack techniques, including prompt injection and sensitive data leakage, addressing a majority of the risks identified by the OWASP Top Ten Risks for LLM applications.
Pangea has been recognised by more than 150 Chief Information Security Officers in its sector as a top cybersecurity company and already supports clients seeking to secure their AI tools and workflows.