Industry data miners are increasingly using non-traditional data to predict health outcomes, raising new challenges in complying with global privacy rules.
With the growth in use of AI and machine learning, experts at a Health Datapalooza panel said using data that don't come straight from an EHR — like social media or demographics — also opens up a new box of ethical issues.
Technologists developing AI tools for healthcare must "completely re-engineer" their data flows around de-identified data to avoid regulatory hurdles, Stanley Crosley, an attorney who chairs the data privacy and health information governance team at Drinker Biddle, said.
That presents technologists with a challenge, as de-identifying healthcare data from a patient's pacemaker or insulin pump, Crosley argued, strips that data of revelatory information.
"As a friend of mine said, often in the U.S., when you are looking at data, it's either de-identified or it's useful," he said. "How do we amass enough information, enough data and the right data sets to actually do artificial intelligence? The data sets aren't easy to aggregate in the U.S. in a HIPAA-covered environment."
A rising acknowledgment of the impact of social determinants of health is also playing a role.
"We've gotten to the point where we're realizing a healthcare outcome is influenced by so many things, not just the healthcare data," Deven McGraw, chief regulatory officer at Ciitizen, a startup that seeks to help patients better access their healthcare information, said. "So, many more researchers are getting smarter and curious about taking in some of that data and coming up with an answer that provides you a better insight into what's going to work for patients."
In 2011, FICO released a controversial HIPAA-compliant medication-adherence tool using predictive analytics to approximate a patient's likelihood to adhere to their medications. While the company never revealed what data the tool relied on, McGraw guessed FICO was leveraging a combination of socioeconomic, demographic and geographic data, avoiding HIPAA altogether.
"The problem with thinking about this as a privacy issue is that privacy doesn't regulate outcomes," McGraw said. "It's more of an ethics issue ... How do we make sure the uses of this are for the good and not for the bad, and who gets to decide that?"
Earlier this year, scientists at Stanford University developed an AI algorithm that predicts patient mortality by studying health records from about a million patients. The technology had a 90% success rate, but the bigger problem, according to Crosley, was that scientists didn't know exactly why the algorithm performed so well.
"They were worried, appropriately so, that until they could figure out how it could be done, this wasn't a system that was ready to be loosed upon the hospital," Crosley said.
Even so, anything less than a 100% success rate carries risk for patients and physicians. Providers are just beginning to warm up to AI and its potential, specifically in precision medicine and routine tasks. But as AI's role in an episode of care grows, so does the gray area around liability if, for example, a palliative care physician takes into account an inaccurate mortality prediction from Stanford's algorithm.
Michelle Dennedy, vice president and chief privacy officer at Cisco, argued that AI needs to be used to help industry professionals make decisions rather than make decisions for them.
And patients would agree. A recent Accenture survey found that while most patients are comfortable with using an AI-powered service to access information or better navigate services, the majority are not comfortable with AI being used to help facilitate care.
Before technologists dive into social determinants and social media data to develop new AI solutions, Dennedy said, the industry needs more clarity around AI's potential for harm and for improving the efficiency of existing systems.
"As complex as medical care and systems are, it isn't impossible to create an AI system that is useful," Dennedy said.