Generative AI has taken the tech world by storm, especially since the launch of ChatGPT two years ago. As the technology continues to evolve, major companies like Microsoft are leveraging OpenAI’s foundation models while addressing how AI impacts cybersecurity. Siva Sundaramoorthy, a senior cloud solutions security architect at Microsoft, provides insight into this landscape, particularly regarding the security implications of generative AI.
What security risks can come from using generative AI?
At the ISC2 conference in Las Vegas on October 14, Sundaramoorthy outlined several key risks associated with generative AI. He pointed out that while generative AI primarily functions as a predictor of likely answers, it may not always provide accurate results depending on context. Cybersecurity experts must analyze AI from three perspectives: usage, application, and platform.
“Understanding the specific use case you aim to protect is crucial,” Sundaramoorthy emphasized. Many companies are already in the application phase, building their unique bots or utilizing pre-trained AI systems within their environments.
Once organizations identify their usage, applications, and platform concerns, they can begin to secure AI systems akin to traditional systems; however, generative AI poses unique challenges. Sundaramoorthy highlighted seven significant risks:
- Bias
- Misinformation
- Deception
- Lack of accountability
- Overreliance
- Intellectual property rights
- Psychological impact
These risks translate into distinct threats across the three angles:
- Usage: Exposing sensitive information, enabling shadow IT from third-party tools, and enhancing insider threats.
- Application: Vulnerabilities to prompt injection attacks, data leaks, or internal threats.
- Platform: Risks from data poisoning, potential denial-of-service attacks, model theft, model inversion, or “hallucinations” in outputs.
Sundaramoorthy warned that attackers could utilize prompt converters or jailbreaking methods to circumvent content filters, potentially manipulating AI systems to create backdoors or leak sensitive data.
Must-read security coverage
Security teams must balance the risks and benefits of AI
Sundaramoorthy acknowledged the tremendous value of tools like Microsoft’s Copilot for enhancing productivity. Yet, he stated, “The value proposition is too high for hackers not to target it.” Security teams should remain vigilant against several other challenges presented by AI:
- New technology can introduce vulnerabilities through its initial integration.
- User training is essential to adapt to AI capabilities.
- The processing and access of sensitive data through AI systems could give rise to novel risks.
- Transparency and control throughout the AI lifecycle are paramount.
- The AI supply chain can potentially harbor vulnerable or harmful code.
- The absence of clear compliance standards makes securing AI a nuanced challenge.
- Leaders should create a structured pathway to accessing generative AI applications.
- Unique challenges like AI hallucinations merit careful consideration.
- The real-world ROI of AI implementations remains unverified.
He further explained that both malevolent and innocent AI failures exist. For instance, an attacker might masquerade as a security researcher to bypass safeguards, extracting passwords. Conversely, biased training data could result in inaccurate outputs, posing a risk even without malicious intent.
Trusted ways to secure AI solutions
Despite the evolving AI landscape, established security measures can help organizations safeguard their AI systems. Renowned organizations like NIST and OWASP have drafted frameworks to address generative AI risks, while MITRE offers the ATLAS Matrix to outline known tactics attackers exploit against AI.
Moreover, Microsoft and Google deliver governance tools and secure AI frameworks to assist security teams in evaluating their AI solutions effectively. Organizations should ensure that user data doesn’t mistakenly incorporate into training datasets by adhering to stringent sanitation practices. Applying the principle of least privilege when fine-tuning models and ensuring strict access controls when connecting models to external data sources is also crucial.
Ultimately, Sundaramoorthy stated, “Best practices in cyber are best practices in AI.”
To use AI — or not to use AI?
In light of the risks, some may argue against using AI at all. Janelle Shane, an AI researcher speaking at the ISC2 Security Congress, highlighted that for some security teams, this might be a valid approach. However, Sundaramoorthy presented an alternative view: if AI has access to sensitive documents, the issue lies not within AI itself but in inadequate access controls.
In conclusion, the rapid adoption of generative AI has prompted a re-evaluation of security practices. It’s crucial to engage with these systems cautiously, balancing their benefits against potential risks and ensuring robust protective measures.
The AI Buzz Hub team is excited to see where these breakthroughs take us. Want to stay in the loop on all things AI? Subscribe to our newsletter or share this article with your fellow enthusiasts.