Exploring Generative AI in Information Security Training: Is It Effective?
Professionals across industries are turning to generative AI for all manner of tasks, including drafting information security training materials. But does AI-generated training actually produce effective outcomes?
At the ISC2 Security Congress in Las Vegas this past October, Brian Callahan, senior lecturer and graduate program director in information technology and web sciences at Rensselaer Polytechnic Institute, and undergraduate student Shoshana Sugerman presented findings from their study of AI-generated cybersecurity training.
The Study: Prompt Engineering and Cybersecurity Training
The primary question driving their research was: "How can we empower security professionals to effectively prompt an AI for realistic cybersecurity training?" Underlying that is a second question: do security professionals need to be skilled prompt engineers to generate useful AI-driven training content?
To explore this, the researchers recruited three groups: ISC2-certified security experts, self-described prompt engineering experts, and people with expertise in both areas. Each group used ChatGPT to create cybersecurity awareness training, which was then delivered to the campus community to gauge its effectiveness.
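The participants worked in the ChatGPT web interface, so the study itself involved no code. Purely as an illustration of the prompting gap the study probes, here is a minimal sketch of a naive prompt versus a more deliberate one, written against the OpenAI Python SDK; the model name, prompt wording, and helper function are assumptions, not details from the study.

```python
# Illustrative sketch only: the study's participants worked in the ChatGPT
# web interface, not against the API. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_training(prompt: str) -> str:
    """Ask the model to draft a cybersecurity awareness module."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You draft cybersecurity awareness training "
                        "for a university audience."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# A bare prompt, roughly what a first attempt might look like:
naive = generate_training("Write a training module about phishing.")

# A more deliberate prompt: audience, scope, format, and guardrails.
engineered = generate_training(
    "Write a 10-minute awareness module on spotting phishing emails for "
    "university staff. Include three realistic examples and a short quiz, "
    "in plain language. Do not invent institution-specific contacts or URLs."
)
```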
The researchers initially hypothesized that the quality of the training would not vary significantly between the groups. However, they were curious to find out which skills—security knowledge or prompt engineering—would yield more effective training outcomes.
Feedback from Training Participants
After distributing the AI-assisted training materials (which underwent some editing), feedback from participants revealed some interesting insights:
- Training designed by prompt engineering experts led participants to feel more skilled at thwarting social engineering attacks and securing passwords.
- Participants in training crafted by security experts felt more confident in recognizing and avoiding social engineering attacks and phishing attempts.
- Those trained by individuals with expertise in both areas reported higher confidence in understanding various cyber threats, especially phishing detection.
Interestingly, although the security experts felt confident in their prompt engineering abilities, none of the creators deemed the initial AI-generated output adequate. "No one felt their first attempt was good enough," Callahan remarked, underlining the need for extensive revisions.
In one instance, ChatGPT delivered a polished guide on reporting phishing emails, but to the researchers’ surprise the content was rife with inaccuracies: it invented processes and a fictitious IT support contact. When the prompt instead directed it to Rensselaer’s security portal, ChatGPT produced accurate information, underscoring how much careful prompting matters. Alarmingly, none of the training participants caught the erroneous information in their materials.
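The remedy the researchers describe, steering the model toward Rensselaer’s real, published policies, is essentially prompt grounding: give the model the authoritative text and forbid it from improvising. Continuing the illustrative sketch above and reusing its generate_training helper (the stand-in URL, fetching code, and prompt wording are all assumptions, not the study’s method):

```python
# Illustrative sketch of grounding the prompt in real policy text so the
# model has nothing to gain by inventing procedures or contacts. The URL,
# fetching code, and prompt wording are assumptions, not the study's method.
import urllib.request

POLICY_URL = "https://example.edu/security"  # stand-in for the real portal

def fetch_policy_text(url: str) -> str:
    """Download the institution's published security guidance (raw HTML;
    a real pipeline would extract the readable text first)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

policy_text = fetch_policy_text(POLICY_URL)

grounded = generate_training(
    "Using ONLY the policy text below, write a short guide on how staff "
    "report phishing emails. Quote the real reporting contact and link to "
    f"{POLICY_URL}. If a detail is missing from the policy, say so rather "
    "than inventing it.\n\n--- POLICY ---\n" + policy_text
)
```

The final instruction, asking the model to flag gaps rather than fill them, targets exactly the failure mode the study observed: invented processes and a fictitious support contact.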
The Importance of Transparency in AI-Generated Training
“ChatGPT can be your best friend when prompted correctly,” Callahan explained, noting that Rensselaer’s policies are publicly accessible online. However, the team disclosed that the content was AI-generated only after the training sessions were completed. Reactions were mixed: some students were indifferent, some suspicious, and some amused by the irony of using AI to teach information security.
Callahan argued that any IT team using AI to create real training materials should be transparent about it. “We’ve gathered preliminary evidence that generative AI can be a worthy tool,” he shared, while cautioning that, like any tool, it carries risks and limitations, particularly around accuracy.
Limitations of the Experiment
Callahan identified a few limitations in the study. It relied on self-assessments, even though there is evidence that tools like ChatGPT can lead users to overestimate their own understanding; skill-based testing would have been more rigorous, but administering it was too time-consuming for the research timeline.
The team briefly considered a control group that received entirely human-written training, but differentiating cybersecurity experts from prompt engineers took precedence, and too few participants identified as prompt engineering experts to staff an adequate control group. The initial study involved just 51 participants and three training creators, though the researchers plan to expand both in their final publication.
Conclusion
The findings point to real potential at the intersection of generative AI and information security training, but they also show that human oversight and subject-matter expertise remain essential to keep the material accurate and effective.
The AI Buzz Hub team is excited to see where these breakthroughs take us. Want to stay in the loop on all things AI? Subscribe to our newsletter or share this article with your fellow enthusiasts.