Artificial intelligence could have significant benefits for healthcare, but Congress needs to keep an eye on potential risks as the rapidly evolving technology becomes more prevalent in the sector, lawmakers noted in a House subcommittee hearing on Wednesday.
Legislators questioned witnesses during an Energy and Commerce health subcommittee hearing on a wide range of benefits and dangers that could arise — including risks to patient privacy due to the amount of data needed to train AI models and whether current privacy legislation is up to the task.
“[We have] a lot of very engaged members who want to understand it, and are working to understand it, so we can act appropriately to protect, but without impinging the great things that could come from this,” said Rep. Brett Guthrie, R-Ky.
Data privacy — already a challenge for healthcare amid an increase in data breaches and cyberattacks — is one concern for lawmakers. Strong federal data privacy protections for consumers are key to AI regulation, said Rep. Frank Pallone, D-N.J.
“AI cannot function without large quantities of data. And we must ensure that this increased data demand does not come at the expense of consumers' right to privacy,” he said.
High-quality, diverse and large datasets are necessary to train and improve AI models, said Michael Schlosser, senior vice president of care transformation and innovation at for-profit hospital chain HCA Healthcare. Providers have been operating under the Health Insurance Portability and Accountability Act for decades, and it has provided a solid roadmap for managing privacy concerns, he added.
However, not all entities collecting health data are covered by the HIPAA privacy law. Consumer apps frequently collect health information provided directly by patients, and they could build algorithms on that data without transparency into how it will be used, said Christopher Longhurst, chief medical officer and chief digital officer at UC San Diego Health.
The Federal Trade Commission has already taken some steps to crack down on digital health companies for sharing data. Earlier this year, regulators fined drug cost platform GoodRx, Teladoc-owned virtual mental healthcare provider BetterHelp and fertility app Premom for allegedly sharing consumers’ health data for advertising purposes.
But in a growing ecosystem of apps and wearable devices, lawmakers need to consider legislation that would set the groundwork for how data is collected, used and shared, said Rep. Greg Pence, R-Ind.
“We should do that before we can look at regulating AI in healthcare and find the balance in simultaneously encouraging private innovation,” he said. “Our increasingly digital world leaves Hoosiers and all Americans in the dark about who has access to their information.”
Access to quality data to train healthcare AI can also be a challenge, said David Newman-Toker, director of the division of neuro-visual and vestibular disorders at the Johns Hopkins University School of Medicine.
The most reliable datasets exist for radiology and laboratory medicine, but data contained in electronic health records is often missing or inaccurate, he said. AI trained on that data could replicate human biases or provide incorrect answers altogether, so research funding for “gold-standard” datasets should be a priority.
“We have to start coordinating data architectures, and we have to start developing and curating good datasets that can be used at a large scale to train these AI models,” Newman-Toker said. “And so I do think that that's going to take a big effort, and one that would be best coordinated federally.”