DCSA Embraces AI to Modernize Security Clearance Processes, but Transparency is Key
By Emily Baker-White and Rashi Shrivastava, Forbes Staff
Before his team of over 13,000 at the Pentagon accesses sensitive data on American citizens, David Cattler, the director of the Defense Counterintelligence and Security Agency (DCSA), prompts them to consider a crucial question: “Would my mom be comfortable with the government doing this?” This concept, dubbed “the mom test,” reflects the agency’s commitment to accountability and transparency as it navigates the complexities of security clearance management using artificial intelligence.
DCSA oversees 95% of federal government employee security clearances, a responsibility that requires millions of investigations annually. Facing an ever-growing volume of private data to process, the agency began adopting AI tools in 2024 to enhance its workflows and better organize vast amounts of information.
However, Cattler emphasizes that DCSA’s approach differs significantly from that of popular generative AI models such as ChatGPT and Bard. The agency uses AI for data mining and organization, helping it prioritize already-identified threats rather than unearth new risks.
“We must trust the tools we use; they can’t be ‘black boxes,’” Cattler stated in an interview with Forbes. He highlighted the importance of understanding AI algorithms’ functionalities to ensure they maintain objectivity and compliance in their operations. One exciting initiative he mentioned is developing a real-time heatmap to visualize risks across facilities that DCSA oversees, helping streamline responses to potential threats based on existing data.
According to Matthew Scherer, a senior policy counsel at the Center for Democracy and Technology, AI can be useful for organizing previously validated information, but applying it to critical decision-making, such as flagging concerns during background checks or scraping social media, poses significant risks. AI systems often struggle to distinguish between individuals who share common names, for example, which can lead to serious misidentifications.
“I would be wary if an AI system started making recommendations or influencing outcomes for specific applicants,” Scherer cautioned, noting that such applications could venture into controversial automated decision-making territory.
DCSA remains cautious, steering clear of using AI for identifying new risks. Yet, even in prioritization, the potential for privacy violations and biases exists. As the agency partners with AI companies, Cattler acknowledges the importance of scrutinizing which data they share and how it may be processed. High-profile breaches in the tech industry illustrate the dangers of mishandling data, especially when it involves sensitive Pentagon information.
Furthermore, AI systems risk perpetuating biases that exist in the data they’re trained on, a concern Cattler recognizes. The agency relies on oversight from various governmental bodies to safeguard against such biases. A 2022 report from RAND Corporation specifically warned about the possibility of AI introducing biases into the security clearance vetting process, influenced by historical racial disparities or biases of the developers.
Cattler remarked on the evolving societal values that shape algorithms. For instance, perceptions of addiction and views on LGBTQ+ individuals have significantly transformed over time. He noted, “In many places in the U.S., being gay was literally illegal not too long ago. That bias needed to be addressed.”
Conclusion
DCSA’s use of AI tools aims to enhance the efficiency and integrity of security clearance processes while prioritizing transparency and ethical considerations. As these advancements unfold, the landscape of government data security is set to evolve. The AI Buzz Hub team is excited to see where these breakthroughs take us. Want to stay in the loop on all things AI? Subscribe to our newsletter or share this article with your fellow enthusiasts.