Pangea Rolls Out AI Guardrails to Secure AI Applications
In the rapidly evolving world of artificial intelligence, security is becoming a top priority. Pangea has launched two new products, AI Guard and Prompt Guard, designed to enhance the security of AI applications and protect against growing threats like prompt injection and the unintended disclosure of sensitive information. These innovative tools complement Pangea’s existing offerings, including AI Access Control and AI Visibility, to deliver a robust suite of guardrails that fortify AI applications.
“Businesses are racing to develop AI applications using Retrieval-Augmented Generation (RAG) and agentic frameworks. This integration of large language models (LLMs) with user data poses significant security risks,” said Oliver Friedrichs, CEO of Pangea. “New vulnerabilities appear daily, which means we must deploy countermeasures just as swiftly. Pangea is committed to identifying and addressing generative AI threats before they escalate into damaging issues.”
Kevin Mandia, founder of Mandiant and strategic partner at Ballistic Ventures, added, “I’ve seen firsthand how weaknesses in computer systems can result in serious consequences if left unaddressed. The potential for AI to act autonomously can magnify these effects. Pangea’s security protocols draw on years of cybersecurity knowledge, providing crucial protections for organizations developing AI software.”
Enhancing Safe AI Software Development
Pangea AI Guard primarily aims to prevent sensitive data leaks and block harmful or inappropriate content, including profanity, self-harm, and violence. With an extensive array of detection technologies, it evaluates AI interactions and safeguards against over 50 forms of confidential and personally identifiable information. Additionally, industry-leading partners like Crowdstrike, DomainTools, and ReversingLabs supply millions of threat intelligence data points, allowing users to monitor files, IPs, and domains for threats.
This system isn’t just reactive; it’s proactive. It can redact, block, or neutralize harmful content while also utilizing unique format-preserving encryption. This means data protection is upheld without disrupting database formats or structures—an essential feature for any organization looking to maintain seamless operations.
Meanwhile, Pangea Prompt Guard focuses on analyzing prompts from users and systems. It seeks to thwart jailbreak attempts and breaches of organizational limits by employing a multi-layered defense strategy. Through advanced heuristics, classifiers, and tailored large language models, the system detects prompt injection attacks with over 99% accuracy. This includes identifying token smuggling and indirect prompt injections, ensuring that even the most sophisticated attacks have a hard time getting past these safeguards.
One notable case is Grand Canyon Education, which chose Pangea to secure its internal AI chatbot platform. Mike Manrod, CISO at Grand Canyon Education, praised the service, saying, “What I love about Pangea is the ability to implement an API-centric solution that automatically redacts sensitive information at lightning speed—without any adverse effects on user experience. Rather than blocking AI utilization, we established a straightforward way to foster secure AI development.”
Karim Faris, General Partner at GV, emphasized the importance of Pangea’s initiatives, stating, “The launch of Pangea’s new security offerings marks a significant advancement in AI security, especially as the need for reliable guardrails continues to grow. The team has skillfully approached the OWASP Top Ten Risks for LLM Applications and has a strong track record of security innovation.”
Conclusion
In a digital landscape where AI applications are increasingly woven into the fabric of daily life and business, prioritizing security is essential. With the launch of AI Guard and Prompt Guard, Pangea sets a new standard for safety in AI application development. Their dual approach not only defends against present threats but also anticipates future vulnerabilities, allowing organizations to innovate with confidence.
The AI Buzz Hub team is excited to see where these breakthroughs take us. Want to stay in the loop on all things AI? Subscribe to our newsletter or share this article with your fellow enthusiasts.