At a House subcommittee hearing on Wednesday, lawmakers expressed concerns about whether artificial intelligence is being appropriately used in healthcare and called for stronger guardrails to supervise the quickly evolving technology.
“With all these innovative advancements being leveraged across the American healthcare ecosystem, it is paramount that we ensure proper oversight is being applied, because the application of AI and machine learning will only increase,” said Energy and Commerce subcommittee Chair Rep. Morgan Griffith, R-Va.
While House Democrats and Republicans said oversight of the technology was needed for applications ranging from mental health chatbots to prior authorization reviews, they proposed few concrete plans for future regulation or guardrails.
The hearing, called “Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies,” comes at a pivotal time for AI in healthcare. Although most healthcare leaders say the technology holds promise, the majority of providers still aren’t using it, citing concerns about data privacy and reliability.
A lack of federal oversight has contributed to that “foundational trust deficit,” according to Michelle Mello, professor of law and health policy at Stanford University.
The Biden administration took some steps toward regulating AI in healthcare, including creating a task force to build regulations, but the Trump administration halted those efforts. In July, the Trump administration unveiled its AI adoption plan, but the plan is light on healthcare details and favors deregulation — an approach that’s out of step with the recommendations of most witnesses at Wednesday’s hearing.
“This rule-free space has left hospitals, clinicians and patients apprehensive about the risks of AI, and that fear is chilling adoption,” Mello said.
Concerns grow over prior authorization
Several lawmakers raised concerns about the role of AI in prior authorization, especially for services covered in Medicare Advantage.
Payers have faced growing scrutiny for automating their claims review process. A Senate report last year found the country’s three largest MA insurers — UnitedHealthcare, Humana and CVS — leveraged predictive intelligence to limit access to post-acute care and boost profits.
However, the federal government has recently proposed bringing AI into the claims review process. In July, the CMS unveiled a program to pilot prior authorization in traditional Medicare for some services that the Trump administration says are prone to abuse.
The federal government said it will contract with companies in the pilot program to use AI for prior authorizations. Although Stanford’s Mello told representatives that the pilot program will require a human to review claims denials, she worries those reviewers could be “primed” by AI to accept denials.
Some lawmakers at the hearing expressed concerns that contracted companies would receive financial incentives for reducing care.
Rep. Greg Landsman, D-Ohio, called for the program to be “shut down” until there was more information about what guardrails would be placed on technology companies to ensure they weren’t improperly denying care to eke out higher returns.
“You get more money if you're that AI tech company if you're able to deny more and more claims. That is going to lead to people getting hurt,” Landsman said.
A push to rein in therapy bots
Much of the hearing focused on regulating AI for mental healthcare, following media reports of AI-induced “psychosis” and a June advisory from the American Psychological Association warning that protections are needed for adolescents using AI.
Rep. Raul Ruiz, D-Calif., said some chatbots were actively harmful to those seeking care, including direct-to-consumer chatbots like ChatGPT.
Ruiz referenced the death of 16-year-old Adam Raine, who his parents say was encouraged by ChatGPT to take his own life. The Democrat worried that similar products might offer users accurate baseline information, like how to seek out local support resources, but also indulge users’ darker thoughts, a boundary a human professional would never cross.
“Imagine you’re admitting your suicidal thoughts and the clinician just holds up a poster board with a hotline number, and then continues with more deep conversations about how to actually complete suicide,” Ruiz said.
Part of the problem is that the market is teeming with unregulated chatbots making “deceptive and dangerous” claims, said Vaile Wright, senior director of healthcare innovation at the American Psychological Association.
In one case, Wright said, an entertainment bot presented itself as a psychologist and engaged in millions of chats. In another, a chatbot validated a user’s violent thoughts toward family members.
“This is unacceptable,” Wright said. “The APA has formally requested investigations by the [Federal Trade Commission] and the Consumer Product Safety Commission to address these potentially harmful products to realize AI's promise while protecting patients.”
The APA further asked Congress to enact legislation prohibiting the misrepresentation of AI as licensed professionals, and mandating transparency and human oversight over clinical decisions. The organization also wants to see legislation that requires age-appropriate safeguards and limits access to harmful or inaccurate health content.
However, Wright acknowledged that more robust solutions may take time and independent research to develop, given how quickly AI is evolving.
“What we actually need is independent research, looking at what the problem is and what the solutions ought to be. And those should be empirically driven, not just us throwing spaghetti at the wall because it makes sense at the time,” she said.
More oversight is needed
Witnesses expressed support for several reforms to beef up oversight of AI.
Mello advocated for modernizing the Food and Drug Administration’s authority to regulate AI. She also called for making post-deployment monitoring of AI a requirement for Medicare funding.
Andrew Ibrahim, chief clinical officer of Viz.ai, a health AI company, echoed calls for reviewing FDA policies.
“Right now, [the FDA is] using laws that are decades old, or statutes or authorities or frameworks that don’t really fit the pace or the way that AI is being used,” said Ibrahim.
Although lawmakers mostly declined to back legislative proposals for enhancing AI oversight, Rep. John Joyce, R-Pa., said he planned to introduce the Health Tech Investment Act in the House. The legislation was originally introduced in the Senate in April.
Some Democrats said they couldn’t focus on modernizing the FDA while the HHS was in upheaval. The agency has been contending with a restructuring and staff reorganization. Last week, Secretary Robert F. Kennedy Jr. fired the director of the Centers for Disease Control and Prevention, a month after she was confirmed by the Senate. Several other high-ranking CDC officials resigned in response.
“Rome is burning,” said Rep. Diana DeGette, D-Colo. “Rome is burning. More important than this is the peril that the Trump administration's policies have put my constituents and the constituents of every single person on this panel in.”