Google seeking talent in voice tech to improve doctor-patient experience
- Google is looking to build up its Google Brain team with the aim of developing the “next gen clinical visit experience,” CNBC reports, citing internal job postings at the Silicon Valley company.
- The team is already working with Stanford Medicine to evaluate artificial intelligence and voice recognition in generating electronic health records (EHRs). The new push focuses on using voice recognition to help doctors take notes and would likely draw on Google’s existing voice technologies in its Home, Assistant and Translate products.
- Steven Lin, the Stanford physician leading the study with Google, told CNBC that the major challenge is accuracy. “But if solved, it can potentially unshackle physicians from EHRs and bring providers back to the joys of medicine: actually interacting with patients,” he said.
Solutions that cut down on administrative tasks and increase doctor-patient face time are in huge demand, and speech recognition has been high on the list for a long time.
Studies have found that primary care doctors spend more than half their workday on EHRs and computer tasks, and a recent Medscape survey found that seven in 10 doctors feel burned out, depressed or both due to increasing administrative burdens.
Last month, digital health startup Suki announced it had raised $20 million in a Series A round to advance its AI-based voice assistant for doctors. The tool is currently being tested in 12 pilots at internal medicine, ophthalmology, orthopedics and plastic surgery practices in California and Georgia, with early results showing up to 60% less time spent on medical notes when the voice assistant is used.
Suki claims its digital assistant can also search and retrieve patient data and create action plans based on the doctor-patient visit, the doctor’s known preferences and clinical practice guidelines.
Google’s interest in health applications for voice technology is not new. In November, researchers at the company said speech recognition technology could be practical for transcribing medical conversations, citing two different speech recognition models that achieved word error rates of 20.1% and 18.3%.
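For context on those figures, word error rate (WER) is the standard accuracy metric for speech recognition: the number of word-level substitutions, deletions and insertions needed to turn the system’s transcript into the reference transcript, divided by the number of words in the reference. Here is a minimal Python sketch of the calculation, using invented example sentences rather than any data from the Google study:

```python
# Minimal sketch of word error rate (WER), computed as word-level
# Levenshtein distance divided by reference length. The sentences
# below are hypothetical examples, not data from the Google study.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution
            dp[i][j] = min(dp[i - 1][j] + 1,       # deletion
                           dp[i][j - 1] + 1,       # insertion
                           dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)] / len(ref)

reference = "patient reports chest pain radiating to the left arm"
hypothesis = "patient reports chest pain radiating to left arm"  # one dropped word
print(f"WER: {word_error_rate(reference, hypothesis):.1%}")  # WER: 11.1%
```

By this measure, a WER around 20% means roughly one word in five is transcribed incorrectly, which underscores why Lin points to accuracy as the major hurdle for clinical use.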
The researchers said they had already addressed two of the harder problems: identifying multiple speakers and grasping medical terminology. “[O]ur research shows that it is possible to build an [automated speech recognition] model which can handle multiple speaker conversations covering everything from weather to complex medical diagnosis,” they wrote at the time.