- The White House has directed the Department of Health and Human Services (HHS) to create a safety program focused on harmful or unsafe healthcare practices involving artificial intelligence.
- President Joe Biden outlined the planned program as part of an executive order — released Monday — on “safe, secure, and trustworthy” AI. The program will receive reports of harms or unsafe practices and act to fix them.
- Other sections of the order call for the development of a strategic plan for the responsible use of AI in the health and human services sector, including medical device safety. An HHS AI Task Force will develop the plan. The executive order does not go into detail about medical devices and technology, but the Food and Drug Administration has recently prioritized AI projects, like creating a database for AI- and machine learning-enabled devices.
Regulatory scrutiny of AI has increased over the past year. The European Union is developing the AI Act, legislation that it claims will be the first comprehensive AI law, and the FDA is drafting guidance on AI-enabled device software functions. Now, the White House has set out its stance on AI in an executive order.
The order gives the HHS 365 days to establish an AI safety program. The program should create a “common framework” for approaches to identifying and capturing clinical errors resulting from AI, as well as specifications for a central tracking repository for “incidents that cause harm, including through bias or discrimination, to patients, caregivers or other parties.”
Through the program, the HHS will analyze the captured data and use the resulting evidence to develop and share best practices, recommendations and other informal guidance for avoiding AI-related harms.
The order also requires the HHS to work with the Secretary of Defense and the Secretary of Veterans Affairs to create an AI Task Force within 90 days. Once the task force is in place, it will have 365 days to create a strategic plan, which the order envisages including policies and frameworks on the responsible deployment and use of AI.
Specific objectives include monitoring the long-term safety and real-world performance of AI-enabled technology, “including clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers and users.”
The order also calls for the incorporation of equity principles and security standards into AI development, as well as the incorporation of safety, privacy and security standards across the software-development lifecycle.