Dive Brief:
- A Google artificial intelligence tool may predict breast cancer better than doctors, according to a study published Wednesday in the journal Nature.
- The algorithm, which was trained on almost 91,000 mammogram scans from women in the U.S. and U.K., produced almost 6% fewer false positives and 9% fewer false negatives than standard clinical practice when used in the U.S. In the U.K., those figures were 1% and 3%, respectively.
- Yet the evidence for AI's value-add in diagnostics is still shaky. Developers of such algorithms stress they should be used as tools on top of a clinician's judgment, not a stand-alone diagnostic in place of it.
Dive Insight:
Roughly one in eight women in the U.S. will develop invasive breast cancer over the course of her lifetime, according to the American Cancer Society. Early detection of abnormal tissue or tumors is critical for treatment, but mammograms — the best screening tool currently available to doctors — have their limitations.
Clinicians fail to catch about a fifth of all breast cancer cases, and over a decade of annual mammograms, half of U.S. women will at some point be incorrectly told they have breast cancer when they don't, the cancer group says.
Google Health's new tool, developed in tandem with its British subsidiary DeepMind, Northwestern Medicine, Imperial College London and the University of Cambridge, reduced both types of error. False positives fell by 5.7% and 1.2%, and false negatives by 9.4% and 2.7%, in the U.S. and U.K., respectively.
U.K. test scans were collected from 2012 to 2015 from almost 26,000 women at two screening centers in England, where women are screened on average every three years. The U.S. test set came from more than 3,000 women at one academic medical center between 2001 and 2018; American women are screened every one to two years.
The study, funded by Google, also found the algorithm was more accurate in identifying breast cancer from almost 500 patient reports than six doctors evaluating the same cases.
Clinicians remain skeptical of AI's efficacy in the exam room. In October, a cadre of major international radiology societies called for more oversight of the use of AI-based intelligent and autonomous systems in imaging, noting their widespread use could potentially increase errors and worsen health disparities without ethical and operational guardrails.
Such algorithms, however, have been a major prong of ongoing efforts by tech giants like Google to disrupt U.S. healthcare. AI company DeepMind, acquired by Google in 2014, said last year its technology could identify acute kidney injury two days before clinicians with 90% accuracy.
Additionally, a machine learning algorithm from Verily, Alphabet's life sciences unit, is being used to analyze images for eye diseases such as diabetic retinopathy at Aravind Eye Hospital in Madurai, India.
A September study published in The Lancet Digital Health suggested AI can detect diseases from medical imaging with accuracy similar to that of healthcare professionals, though the tech didn't outperform them. Research and development of AI solutions in healthcare has gained steam, galvanized by similar reports of such algorithms' successes.
Radiology is a logical area for the technology to saturate, as machine learning algorithms can readily be trained on the reams of available imaging data. According to healthcare consultancy Frost & Sullivan, the majority of the more than 100 medical imaging AI startups in 2018 focused on image analysis.
And the need is there: Radiology, like many other medical arenas, is seeing increasing physician burnout and shortages as the volume of medical images starts to outpace the number of specialists available to analyze them, especially in low- and middle-income countries.