Washington Struggles to Keep Pace with the Growing Influence of AI in Healthcare
Artificial intelligence (AI) in healthcare presents a multifaceted challenge that policymakers and regulators are only beginning to grapple with, and concern is growing within the AI industry that regulatory missteps could stifle innovation. Bob Wachter, chair of the Department of Medicine at the University of California-San Francisco, describes it as an "incredibly daunting problem," emphasizing the risk of overregulation.
The pervasiveness of AI in healthcare is already evident: the Food and Drug Administration (FDA) has greenlit some 692 AI products. These algorithms are streamlining processes from patient scheduling to staffing decisions in emergency rooms, and they are helping radiologists interpret diagnostic images such as MRIs and X-rays. Even Wachter occasionally consults GPT-4, a large language model, on complex medical cases.
Despite AI's transformative potential, government entities are playing catch-up. Michael Yang, a senior managing partner at OMERS Ventures, notes that policymakers lag in their understanding of how quickly AI applications in healthcare are advancing. Meanwhile, investment in the sector is booming, with significant capital flowing into digital health firms specializing in AI, according to Rock Health.
One key challenge for regulators is that AI systems are dynamic: unlike a traditional drug, whose chemistry stays constant over time, an algorithm can change as it is updated or retrained on new data. Still, efforts are underway to establish governance frameworks that ensure transparency and protect privacy, and congressional interest is rising, with the Senate Finance Committee recently holding a hearing on AI in healthcare.
As regulatory discussions intensify, lobbying is ramping up as well, reflecting the industry's push to shape the rules. Bob Kocher, a partner at Venrock and a former Obama administration official, stresses the difficulty of regulating AI while the technology is still at a nascent stage of development. He also underscores potential hurdles to adoption within the healthcare system, where concerns about liability may deter physicians from embracing unfamiliar technology.
While the adoption of AI in healthcare is growing, it is not without risks. Like any medical product, AI systems can pose dangers to patients, including the propagation of misinformation. Wachter recounts an instance in which an AI-generated prior authorization letter was alarmingly convincing despite being entirely fabricated.
Moreover, there is a risk that AI will perpetuate biases already embedded in the healthcare system. Studies have documented disparities in care based on race, and algorithms trained on that biased data may entrench or even exacerbate those disparities. For instance, Wachter's team at UCSF found that an AI tool designed to predict appointment no-shows disproportionately flagged people of color.
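To make the concern concrete, here is a minimal, hypothetical sketch of the kind of audit that can surface such a disparity. It is not the UCSF tool, and the column names ("race", "no_show", "flagged") are assumptions made for illustration.

```python
# Purely illustrative sketch (not the UCSF tool): one way to audit a
# no-show prediction model for demographic disparities in who gets flagged.
# Column names ("race", "no_show", "flagged") are hypothetical.
import pandas as pd

def audit_flag_rates(df: pd.DataFrame, group_col: str = "race") -> pd.DataFrame:
    """For each group, report how often patients are flagged as likely
    no-shows and how often those flags are wrong (false positives)."""
    rows = []
    for group, g in df.groupby(group_col):
        showed_up = g[~g["no_show"]]  # patients who actually kept the appointment
        rows.append({
            group_col: group,
            "n_patients": len(g),
            "flag_rate": g["flagged"].mean(),
            # False-positive rate: share of patients who showed up
            # but were still flagged as likely no-shows.
            "false_positive_rate": showed_up["flagged"].mean(),
        })
    return pd.DataFrame(rows)

# Tiny synthetic example:
example = pd.DataFrame({
    "race":    ["A", "A", "A", "B", "B", "B"],
    "no_show": [False, True, False, False, False, True],
    "flagged": [False, True, False, True,  True,  True],
})
print(audit_flag_rates(example))
```

A large gap in flag rates or false-positive rates between groups would be the signal that the model, or the data it learned from, deserves scrutiny.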
Addressing these risks requires sustained attention from policymakers, regulators, and researchers. Transparency in algorithm development and ongoing monitoring by human experts are crucial safeguards. Policymakers must invest in robust systems to track AI's evolution over time, recognizing that the most significant breakthroughs may be unforeseen.
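As a rough sketch of what that ongoing tracking might look like in practice, the snippet below computes a deployed model's monthly accuracy and flags drift below a baseline. The baseline, tolerance, and column names are assumptions chosen for illustration, not a regulatory standard.

```python
# Illustrative monitoring sketch: track a deployed model's monthly accuracy
# against a baseline and raise an alert when performance drifts.
import pandas as pd

def monthly_drift_report(df: pd.DataFrame,
                         baseline_accuracy: float = 0.85,
                         tolerance: float = 0.05) -> pd.DataFrame:
    """Expects columns: 'date', 'prediction', 'outcome'."""
    df = df.assign(
        month=pd.to_datetime(df["date"]).dt.to_period("M"),
        correct=(df["prediction"] == df["outcome"]),
    )
    report = df.groupby("month")["correct"].mean().rename("accuracy").to_frame()
    # Flag any month whose accuracy falls below the allowed band.
    report["drift_alert"] = report["accuracy"] < (baseline_accuracy - tolerance)
    return report
```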
In navigating the complex terrain of AI regulation in healthcare, policymakers must strike a delicate balance between fostering innovation and safeguarding patient welfare. As Katherine Baicker, provost of the University of Chicago, puts it, "The biggest advance is something we haven't thought of yet." That sentiment underscores the need for flexible, forward-looking regulatory approaches to harness the full potential of AI in healthcare.