The Role of AI in Cancer Care: Insights and Challenges
Preparing cancer patients for difficult medical decisions is a crucial responsibility that oncologists must balance with the many other demands of treatment. At the University of Pennsylvania Health System, an artificial intelligence (AI) algorithm predicts patients’ likelihood of death in order to prompt timely discussions about treatment options and end-of-life preferences. Relying on such technology without oversight can be misleading, however: a routine evaluation revealed that the algorithm’s performance degraded during the COVID-19 pandemic, its accuracy at predicting which patients would die dropping by seven percentage points, according to a 2022 study.
Impact of AI Failures on Patient Care
Ravi Parikh, an oncologist at Emory University and the study’s lead author, emphasized the real-world consequences of this decline: the tool’s failures meant doctors missed opportunities for vital conversations that could have averted unnecessary chemotherapy for patients nearing the end of life. “Many institutions are not routinely monitoring the performance of their AI products,” Parikh said, pointing to a broader problem that affected many healthcare algorithms during the pandemic.
Nor is this an isolated case. Hospital executives and healthcare researchers are increasingly confronting the reality that AI systems require regular monitoring and maintenance to remain effective. “Everybody thinks that AI will help us with our access and capacity and improve care,” said Nigam Shah, chief data scientist at Stanford Health Care. “But if it increases the cost of care by 20%, is that viable?”
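The kind of routine check that surfaced the Penn algorithm’s decline can be scripted. Below is a minimal Python sketch of a monthly drift check for a deployed risk model; the baseline AUROC, alert threshold, and data feed are illustrative assumptions, not details of the Penn system.

```python
# Minimal sketch of routine performance monitoring for a clinical risk model.
# Assumes a monthly feed of (observed outcomes, model scores); all thresholds
# here are illustrative, not taken from any real deployment.
from sklearn.metrics import roc_auc_score

BASELINE_AUROC = 0.80  # assumed discrimination measured at deployment
ALERT_DROP = 0.05      # flag if AUROC falls this far below baseline

def monthly_drift_check(monthly_batches):
    """Recompute AUROC on each month's (outcomes, scores) and flag degradation."""
    for month, (y_true, y_score) in sorted(monthly_batches.items()):
        auroc = roc_auc_score(y_true, y_score)
        status = "degraded, review model" if auroc < BASELINE_AUROC - ALERT_DROP else "ok"
        print(f"{month}: AUROC {auroc:.3f} ({status})")
```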
Validating AI in Healthcare
As AI becomes more integrated into healthcare, government officials worry that hospitals lack the resources necessary to adequately evaluate these technologies. FDA Commissioner Robert Califf put the doubt plainly: “I do not believe there’s a single health system in the United States that’s capable of validating an AI algorithm that’s put into a clinical care system.” AI is already commonplace, helping clinicians predict patient risks, suggest diagnoses, and streamline documentation, yet questions about the reliability of these tools loom large.
Recent research at Yale Medicine assessed six “early warning systems” designed to alert clinicians about patients likely to deteriorate quickly. The study revealed significant performance discrepancies among the systems, raising further doubts about the criteria hospitals use to select the most effective algorithms for their needs.
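To make the comparison concrete, here is a hedged sketch of how such systems might be evaluated head to head on one retrospective cohort; the metrics and the fixed alert budget are illustrative choices, not the Yale study’s actual design.

```python
# Hedged sketch of a head-to-head evaluation of deterioration early-warning
# systems on a single retrospective cohort. Metric choices are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score

def compare_systems(y_true, system_scores, alert_rate=0.10):
    """Rank candidate systems by AUROC and by precision at a fixed alert budget."""
    y_true = np.asarray(y_true)
    n_alerts = max(1, int(len(y_true) * alert_rate))
    results = {}
    for name, scores in system_scores.items():
        scores = np.asarray(scores, dtype=float)
        threshold = np.sort(scores)[-n_alerts]  # alert only on the riskiest cases
        alerts = scores >= threshold
        results[name] = {
            "auroc": roc_auc_score(y_true, scores),
            "precision_at_budget": precision_score(y_true, alerts),
        }
    return results
```

Fixing the alert budget matters because a system that looks strong on AUROC alone can still flood clinicians with false alarms at the volume a ward can realistically act on.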
Navigating Uncertainty in AI Performance
The absence of universal standards makes it challenging for medical professionals to gauge which AI tools deliver the best results. Jesse Ehrenfeld, a past president of the American Medical Association, lamented the lack of benchmarks for evaluating and monitoring AI algorithms in clinical environments.
Ambient documentation AI tools, designed to assist physicians by summarizing patient visits, have attracted considerable investment this year. However, Ehrenfeld pointed out that no standard exists for assessing the accuracy and reliability of these products. Errors, particularly in a medical context, can have severe ramifications: a Stanford University team found that large language models, the type of AI behind tools like ChatGPT, made errors 35% of the time when summarizing patients’ medical histories.
The Human Element in AI Oversight
Some AI failures can be attributed to changes in the underlying data, while others occur with no apparent cause. Sandy Aronson, a tech executive at Mass General Brigham, described an application intended for genetic counselors that returned different results when fed the same input. Despite these challenges, there is optimism that AI can deliver on its promise, provided such problems are fixed.
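A basic safeguard against the failure mode Aronson describes is a repeatability test. The sketch below assumes a hypothetical classify_case function standing in for the genetic counseling application; it is not Mass General Brigham’s actual tool.

```python
# Simple repeatability check: run the same input several times and verify the
# outputs agree. classify_case is a hypothetical stand-in for the real app.
def check_repeatability(classify_case, case, n_runs=5):
    """Return True if repeated runs on the same case produce identical output."""
    outputs = [classify_case(case) for _ in range(n_runs)]
    if all(out == outputs[0] for out in outputs):
        return True
    print(f"Inconsistent outputs over {n_runs} runs: {sorted(set(map(str, outputs)))}")
    return False
```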
With few metrics to gauge AI performance and errors surfacing unpredictably, one suggested remedy is investing in people to both improve and supervise AI tools. Shah remarked that auditing AI models for fairness and reliability at Stanford took months of effort and a considerable amount of staff time.
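As one illustration of why such audits consume so much labor, the sketch below computes a single slice of a fairness check, per-subgroup discrimination and alert rates; the column names and the 0.5 cutoff are assumptions for illustration, not Stanford’s methodology.

```python
# One slice of a fairness-and-reliability audit: compare a deployed model's
# behavior across patient subgroups. Column names and cutoff are illustrative;
# assumes each subgroup contains both outcome classes.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_audit(df, group_col):
    """Report per-subgroup size, AUROC, and alert rate."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "auroc": roc_auc_score(sub["outcome"], sub["risk_score"]),
            "alert_rate": float((sub["risk_score"] >= 0.5).mean()),
        })
    return pd.DataFrame(rows)
```

Even this narrow check presupposes labeled outcomes and clinician review to interpret the numbers, which is precisely the staffing cost Shah points to.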
The Future of AI in Healthcare
As the healthcare landscape continues to evolve with AI technologies, one potential future scenario includes AI systems monitoring each other—with human analysts overseeing both. However, this vision raises questions about the additional resources that hospitals will need, complicating an already strained budget situation.
As AI usage accelerates in medicine, the balance between innovative technology and human oversight will be crucial to ensuring quality patient care.