- CMS on Thursday named the 25 participants selected for the first stage of its Artificial Intelligence Health Outcomes Challenge. The cohort includes a mix of providers, tech companies, consulting firms and other organizations.
- The agency said it received entries from more than 300 organizations for the challenge, which aims to develop AI tools that can predict unplanned hospital admissions and adverse events, as well as methods to help clinicians use data to improve quality. Those selected have about a week to confirm their participation.
- From the 25 in the first stage, seven will be selected in April for the second round and awarded $60,000 each. Later next year, one finalist will be selected for the $1 million prize, and a runner-up will receive $230,000.
The organizations making it to the first stage ranged from big names like IBM, Merck and the Mayo Clinic to academic institutions like North Carolina State University and the Columbia University Department of Biomedical Informatics.
Provider projects advancing include Mayo's Claims-based Learning Framework and Northwestern Medicine's Human-Machine Solution to Enhance Delivery of Relationship-Oriented Care. Jefferson Health put forward a project on using AI to improve the population health of Medicare beneficiaries and optimize ambulatory scheduling.
Broad opportunities exist for the application of AI in healthcare, but the technology is only beginning to emerge. Still, recent research has shown it can be as accurate as human doctors in detecting diseases from medical imaging.
Last week at the HLTH conference in Las Vegas, Google Health head and former Geisinger CEO David Feinberg laid out plans for using AI tools to help doctors predict disease and cut red tape in hospital administrative tasks.
Google is developing products intended to let doctors spend more time with patients. Providers in the hospital are "simply becoming data clerks entering information," Feinberg said.
And as use of AI grows, so do ethical concerns. In October, major U.S. and European radiology societies called for more guidelines on ethical use of AI in imaging, including holding developers to the same "do no harm" standard as physicians.