Dive Brief:
- Healthcare workers are using artificial intelligence tools that haven’t been approved by their organizations — a potential patient safety and data privacy risk, according to a survey published Thursday by Wolters Kluwer Health.
- More than 40% of medical workers and administrators said they were aware of colleagues using “shadow AI” products, while nearly 20% reported having used an unauthorized AI tool themselves, according to the survey by the information services and software firm.
- Those unapproved tools might be useful to individual workers, but their health systems haven’t vetted the products’ risks or considered governance processes, according to Dr. Peter Bonis, chief medical officer at Wolters Kluwer. “The issue is, what is their safety? What is their efficacy, and what are the risks associated with that?” he said. “And are those adequately recognized by the users themselves?”
Dive Insight:
Shadow AI, the practice of employees using AI products that haven’t been authorized by their companies, can pose a serious security risk across industries, experts say. Because of shadow AI’s furtive nature, organization leaders and IT teams don’t have insight into how the tools are used, creating opportunities for cyberattacks and data breaches.
Cybersecurity is already a challenge for healthcare organizations, as cybercriminals frequently target the sector given its valuable data stores and the high stakes of care delivery.
Risks can be more serious in healthcare, Bonis said. Accuracy is a concern, since AI tools can give misleading or inaccurate information — potentially harming patients. About a quarter of providers and administrators ranked patient safety as their top concern surrounding AI in healthcare, according to the survey, which included more than 500 respondents at hospitals and health systems.
“There’s a whole variety of ways in which — even though the intention is for humans to be in that loop at some point or another — these tools misfire, and those misfires may not be adequately intercepted at the point of care,” Bonis said.
Still, AI has become one of the most exciting technologies for the healthcare sector, given hopes the tools can help sift through vast amounts of data, speed administrative work, and assist doctors with documenting care and finding medical information.
Some workers are turning to shadow AI tools for these tasks, according to the survey by Wolters Kluwer, which offers its own AI-backed clinical decision support product.
More than 50% of administrators and 45% of care providers said they used unauthorized products because the tools offered a faster workflow. Additionally, nearly 40% of administrators and 27% of providers reported using shadow AI because a tool had better functionality or because no approved products were available.
Finally, more than 25% of providers and 10% of administrators cited curiosity or experimentation as the reason behind shadow AI use.
Many healthcare workers aren’t aware of their organization’s AI policies. Administrators are more likely than care providers to be involved in AI policy development, according to the survey. But 29% of providers said they’re aware of the main AI policies at their organization, compared with 17% of administrators.
Many providers have likely encountered some AI policies related to AI scribes — which typically record clinicians’ conversations with patients and draft a clinical note — as these tools have been widely adopted at health systems, Bonis said.
“So that might be why they are saying that they are aware, but they may not be fully aware of all the things that could be considered an AI policy,” he said.