Unveiling the Hidden AI Landscape in Healthcare
As healthcare systems strive for efficiency and innovation, they’re increasingly turning to artificial intelligence (AI) tools. However, many leaders are unaware of the myriad AI applications that have quietly found their way into their operations.
The Eye-Opening Findings
Itamar Golan, co-founder and CEO of Prompt Security, a New York-based cybersecurity company, revealed some startling insights during a recent interview with Newsweek. When his team conducts audits at healthcare organizations, they typically uncover around 70 different AI applications in use. Most security teams expect to find only one to five. “When they see the real number, it’s like a eureka moment,” Golan stated, highlighting the disconnect between perception and reality.
AI’s rapid integration into healthcare tools is a game-changer. “This market is becoming fragmented, and AI is being woven into nearly every application,” Golan explained. The implications of this integration are profound, especially in environments where safeguarding patient data is paramount.
The Silent AI Revolution
Surprisingly, many of these AI tools aren’t actively sought out by staff. Instead, everyday applications—think Microsoft Office, Adobe Acrobat, and Salesforce—are embedding AI functionalities behind the scenes. While this might be of little concern for individual users on personal accounts, it poses significant risks in healthcare. Golan stressed the importance of controlling access to AI, particularly where sensitive patient information is involved, to prevent unintentional leaks.
Healthcare organizations often mistakenly believe that restricting access to known AI platforms, such as ChatGPT or Gemini, is sufficient to manage AI usage. However, “many are not yet aware that their existing tools—like Salesforce—may already leverage AI or another language model in the background,” Golan warned.
Potential Risks of Embedded AI
This unnoticed integration could spell trouble for health systems both internally and externally. If sensitive patient data is inadvertently shared with an external language model, it could become part of that model’s training data. Golan put it starkly: “Once the information is embedded in the model’s brain, it’s a lost battle.” The result? Anyone interacting with that model could potentially access the leaked information.
Moreover, integrating AI into legacy systems can disrupt traditional permission settings. Golan shared anecdotes in which junior staff unexpectedly accessed confidential information, such as executive salaries or strategic plans, simply by querying their AI-enhanced tools. This breakdown of data access controls can expose sensitive information to the wrong people.
A Call for Transparency and Oversight
While Golan encourages healthcare executives to embrace AI adoption, he emphasizes the necessity of maintaining oversight of AI applications. “You need to better understand which AI is being utilized, who’s accessing it, and what information is being shared,” he advised. Gaining visibility into existing AI usage is critical for crafting informed policies that protect both patients and employees.
For those interested in deepening their understanding of the intersection between AI and healthcare security, consider attending Newsweek’s upcoming virtual panel, “Is Your Hospital Cyber-Safe?” Hear from industry leaders at Zoom, Kyndryl, and LevelBlue on April 10 at 2 p.m. ET.
The integration of AI into healthcare systems holds immense promise but warrants careful governance. By taking the time to scrutinize the AI landscape within their organizations, healthcare leaders can foster an environment that harnesses the power of AI while safeguarding patient confidentiality.
The AI Buzz Hub team is excited to see where these breakthroughs take us. Want to stay in the loop on all things AI? Subscribe to our newsletter or share this article with your fellow enthusiasts.