LAS VEGAS — Artificial intelligence tools are improving rapidly, leaving regulators and lawmakers the challenging task of determining how — and when — to oversee the technology in healthcare, experts said Monday at the HIMSS conference.
Regulations for AI use in the sector are fragmented. The Trump administration has largely moved to limit rules that could slow the adoption of the technology, leaving healthcare organizations with less guidance on how to implement AI.
Meanwhile, some states are attempting to legislate how AI is used in healthcare, creating a convoluted regulatory environment for providers and payers who operate across state lines.
“It remains a very complex environment, with few guardrails for the use of AI in healthcare, and still a lot of work to do to really create an environment that is going to produce safe, reliable artificial intelligence for clinical use,” Tina Joros, the vice president of policy and innovation at health IT company Veradigm, said in an interview.
Still, the HHS has done a lot of work to understand what the sector needs from the federal government, Arman Sharma, the deputy chief AI officer and senior advisor in the office of the secretary at the HHS, said during a panel discussion Monday.
The department has moved to align AI work across its agencies and make sure they’re working in the same direction, Sharma said. Previously, agencies might start their own separate AI projects without clear visibility into what their colleagues were doing.
To help the massive department figure out how it could advance AI in clinical care across its divisions, the HHS released a request for information in December.
The RFI, which received nearly 450 comments, asked the industry how regulations might need to change to include AI tools, how the department could simplify payment to encourage the use of the technology and what research and development investments could offer best practices for adoption.
“We recognize that this is a space that is very volatile. It’s moving very quickly,” Sharma said. “It’s very important that the industry gets clear signals from the agency about what we care about.”
Adapting to a fast-moving technology
Inside the HHS, the Food and Drug Administration is trying to set policy that works in 2026 but will also adapt as the technology evolves, Jared Seehafer, senior advisor in the office of the commissioner at the agency, said during a panel.
The agency has approved more than 1,300 AI medical devices since 1995, according to a tracker maintained by MedTech Dive. The number of submissions has spiked in recent years as investment in the technology has increased.
But the advent of agentic AI, or AI systems that could act more autonomously and potentially improve themselves, adds new challenges to the FDA’s regulatory framework, given developers typically have to notify the agency about their update plans, Seehafer noted.
Regulating AI tools might also require a more balanced approach between premarket review and monitoring devices once they're in operation, he added. The performance of AI models can degrade over time, so the tools could need continuous oversight to ensure they remain accurate.
“That is going to require of us a new regulatory framework, but it’s also going to require from the entire community — and all stakeholders of different types — data sharing in a way that heretofore does not really happen,” Seehafer said.
Another challenge is that an agentic system relying fully on human oversight likely won't be able to scale, given the amount of human resources needed to monitor all of the AI's output, said Dr. Haider Warraich, a program manager at the Advanced Research Projects Agency for Health, or ARPA-H.
For example, ARPA-H, a relatively new agency that supports biomedical and health research, is working on a project that aims to develop clinical AI agents that can autonomously guide cardiovascular disease care, Warraich said.
The program also plans to build a supervisory agent that can oversee the clinical tools to ensure they continue to perform safely and effectively.
“I think we’re still in the early innings. So a system that has no human oversight would be unacceptable, I think, not just to patients and health systems, but also to clinicians like myself,” Warraich said. “And really the goal of the supervisory agent is, how do you create a system that can optimize human oversight, especially in the absence of ground truth?”
A complex regulatory environment
Meanwhile, as AI technology advances, the current regulatory scheme remains complicated. For example, few states have one regulatory body responsible for AI, so rules could fall under a number of agencies, like state departments of insurance or attorneys general.
Plus, state legislation can be slow-moving, so guardrails could become stale and out-of-date by the time the bill advances, Veradigm’s Joros said.
For now, the adoption of AI agents or tools for diagnosis and treatment might take time, Danny Sama, vice president of digital platform and strategy for information services at Northwestern Medicine, said in an interview.
“Frankly, I think a lot of it’s going to depend on regulatory and legal,” he said. “At some point, some agent is going to make a mistake, and there will be a malpractice lawsuit, and the question is, who’s liable?”