Crowdsourced hospital ratings diverge from CMS on clinical quality, align on patient experience
- Ratings on Yelp and other crowdsourcing sites tend to mirror Hospital Compare on patient experience but diverge on clinical quality and patient safety, a study in Health Services Research finds.
- The researchers looked at five measures from 2016 Hospital Compare data and compared scores for nearly 3,000 hospitals with ratings on Yelp, Google and Facebook. Hospitals with fewer than five reviews were excluded from the study.
- Between 50% and 60% of hospitals ranked best on crowdsourcing sites were also ranked best on Hospital Compare's overall and patient experience scores. However, 30% to 37% of hospitals ranked best on crowdsourced sites were deemed worst on clinical quality measures, and 26% to 34% of those ranked best scored worst on patient safety (sketched below).
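For readers curious how such a best-versus-worst comparison might be computed, here is a minimal sketch. It is not the study's code; the dataset, column names (yelp_rating, hc_overall_star, readmission_rate, review_count) and quintile cut points are assumptions.

```python
# Illustrative sketch only, not the study's actual analysis.
# The file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("hospital_ratings.csv")

# Exclude hospitals with fewer than five crowdsourced reviews, as the study did.
df = df[df["review_count"] >= 5]

# Bucket each measure into quintiles; the top and bottom fifths stand in for
# "best" and "worst" (the study's exact cut points may differ).
for col in ["yelp_rating", "hc_overall_star", "readmission_rate"]:
    df[col + "_q"] = pd.qcut(df[col].rank(method="first"), 5, labels=False)

# For readmission rates, higher is worse, so quintile 4 is "worst".
crowd_best = df[df["yelp_rating_q"] == 4]
pct_worst = (crowd_best["readmission_rate_q"] == 4).mean() * 100
print(f"{pct_worst:.0f}% of crowd-best hospitals rank worst on readmissions")
```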
Hospitals have a lot to gain, or lose, from online rating systems, which can influence patient choice and an organization's bottom line. The American Hospital Association and others have questioned the accuracy and transparency of the government's star rating system and have urged CMS to delay updating it until some of the perceived quirks are ironed out.
A recent analysis also suggested the CMS formula favors specialty hospitals over major teaching hospitals, because specialty hospitals are less likely to report the heavily weighted mortality measure.
Meanwhile, providers are wary of the increasing "Yelpification" of health. "We're moving to a health system where patient ratings are becoming more important, where top-down ratings are really inaccessible to patients and probably not that useful," Yevgeniy Feyman, coauthor of the Manhattan Institute report Yelp for Health, told attendees at this spring's Health Datapalooza conference.
The study looked at five Hospital Compare metrics: overall hospital star rating, overall patient experience, all-cause unplanned 30-day readmission rates, 30-day pneumonia mortality rates and intestinal infection rates.
Of the three crowdsourcing sites, Yelp correlated least with Hospital Compare ratings. The tendency for crowdsourced ratings to diverge from Hospital Compare on clinical quality metrics could be due to several factors, the authors write. For example, patients who experienced poor outcomes may be less likely to post online reviews. Patients may also care more about how they are treated and rate hospitals on that basis rather than on clinical outcomes.
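A Spearman rank correlation is one standard way to make this kind of site-by-site comparison. The sketch below assumes the same hypothetical dataset and column names as above; it illustrates the general technique, not the authors' method.

```python
# Hedged sketch: Spearman rank correlation between each crowdsourcing site's
# ratings and the Hospital Compare overall star rating. Columns are assumed.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("hospital_ratings.csv")
for site in ["yelp_rating", "google_rating", "facebook_rating"]:
    rho, p = spearmanr(df[site], df["hc_overall_star"], nan_policy="omit")
    print(f"{site}: rho = {rho:.2f} (p = {p:.3f})")
```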
The findings have implications for how information about hospital quality is conveyed online.
"Our results suggest a warning to consumers," the authors write. "While crowdsourced sites may provide similar information to patient experience surveys, patients should be encouraged to seek out other sources of information about clinical quality and not just focus on the simpler aggregate five-star ratings."
For providers, the takeaway is that crowdsourcing sites focus on nonclinical qualities such as wait times and the patient-doctor interaction. Hospitals could counter that by better communicating clinical quality to patients, the authors say.
For example, rather than forcing patients to try to weigh myriad individual metrics, Hospital Compare could provide a synthesized measure of clinical quality.
"[F]or patient broadly interested in dimensions of clinical quality and safety … a synthesized summary score of these dimensions would be valuable for consumers because these attributes of hospitals were not reflected in crowdsourced ratings," the authors conclude.