Healthcare’s embrace of artificial intelligence increasingly resembles a digital gold rush as new capabilities roll out fast.
But beneath the hype sits a more urgent truth: AI holds real promise to improve care, while also introducing new and underappreciated risks.
AI without clinical literacy is not innovation. It is a fast way to scale the very problems healthcare has spent decades trying to fix: flawed documentation, uneven clinical reasoning and bias embedded in data. As these tools move from pilots into daily clinical workflows, the question is no longer whether AI will be used, but whether clinicians are equipped to use it safely.
ECRI, an independent nonprofit focused on healthcare research and patient safety, put it plainly in its 2026 Top 10 Patient Safety Concerns report: the leading concern is what it calls “Navigating the AI Diagnostic Dilemma” — the growing risk that overreliance on AI tools will drive diagnostic error, amplify bias and erode judgment.
And this is no longer theoretical, with two‑thirds of physicians reporting AI use in 2024 (a 78% increase from prior surveys). The tools are already in practice. The window to get this right is narrow.
The Risks We Underestimate: Flawed Data and Automation Bias
The AI risk I worry about most is automation bias, the very human tendency to trust a confident machine over the patient sitting in front of us.
At best, AI tools are sophisticated pattern‑recognition engines that can ease cognitive load by summarizing charts and surfacing trends — freeing clinicians to spend more time with patients rather than screens.
However, AI has a fundamental limitation: It does not know what it does not know. It can miss context, misinterpret data or inherit errors embedded in the medical record. And those records used to train these systems reflect decades of encoded bias, variable documentation quality and disparities.
When AI learns from that imperfect version of history, it scales those embedded flaws and compounds the potential dangers for the very patients who stand to benefit most if we get this right.
Because AI outputs sound fluent and authoritative even when they are wrong, plausible-sounding errors or misinterpretations can be incredibly difficult to detect. A rushed clinician may be tempted to simply accept the AI-generated response because it appears highly reliable.
This is why governance cannot be treated as a formality — it is a clinical necessity. Our irreducible value as physicians was never data retrieval or typing speed. It is the relationship, the judgment and the translation of complex information. AI shouldn't change that. Done right, it protects it.
Operating at Scale with Complex Patients
This is not an abstract debate. At DaVita, we care for more than 200,000 patients living with end-stage kidney disease (ESKD), many also managing diabetes, heart disease and social vulnerability. At this scale, an algorithmic error is not a data anomaly. It could inadvertently influence care guidance repeated across thousands of complex clinical interactions.
This reality means passive adoption is not an option.
We scale AI only where use cases are clear, outcomes are measurable and safety and reliability are non-negotiable. And we assume from the start we will have to iterate. Real‑world performance teaches us where a model succeeds — and where it fails.
This is why we are building structural guardrails. By grounding AI responses in verified clinical guidelines through Retrieval-Augmented Generation (RAG), actively debiasing algorithms to protect our diverse patient population and rigorously monitoring for model drift, we are working to support, rather than compromise, clinical judgment.
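For readers curious what guideline grounding looks like mechanically, here is a minimal sketch of the RAG pattern: retrieve the most relevant passages from a vetted guideline corpus, then constrain the model to answer only from those passages, with citations. Everything here is illustrative. The corpus strings, the toy bag-of-words retriever and the function names are stand-ins, not DaVita's implementation; a production system would use curated clinical sources, embedding-based retrieval and audited prompts.

```python
# Illustrative sketch of guideline-grounded generation (RAG).
# All guideline text below is placeholder content, not clinical guidance.
from collections import Counter
import math

GUIDELINES = [
    "Verify the patient's dry weight before adjusting the fluid removal goal.",
    "Review the medication list at every visit and flag renally dosed drugs.",
    "Escalate access-site concerns to the vascular team within one day.",
]

def tokenize(text):
    return text.lower().split()

def cosine(a, b):
    # Bag-of-words cosine similarity; real systems use learned embeddings.
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, k=2):
    # Rank guideline passages by similarity to the question; keep the top k.
    q = tokenize(question)
    ranked = sorted(GUIDELINES, key=lambda s: cosine(q, tokenize(s)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(question):
    # Constrain the model to the retrieved passages and require citations,
    # so every answer stays traceable to a verified source.
    passages = retrieve(question)
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the numbered guideline passages below, citing them. "
        "If they do not cover the question, say the guidelines are silent.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("When should access-site concerns be escalated?"))
```

The point of the pattern is traceability: the model's answer maps back to a specific, citable passage, which gives the clinician something concrete to interrogate rather than a free-floating assertion.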
The AI Literacy Gap
Even strong governance does not solve the hardest part of implementation: the human element.
Many clinicians have not been given the conceptual vocabulary to evaluate these tools. Medical board exams do not typically cover topics like model drift, probabilistic outputs or bias monitoring in non‑deterministic systems. Yet clinicians are being asked — implicitly or explicitly — to use tools shaped by those concepts.
This is a solvable gap. Clinicians are better positioned than they may realize because the mental habits required for AI literacy are already familiar. Clinicians ask: Does this lab value make sense? Does the patient match the study population? What is the pre‑test probability? What are the confounders? AI literacy uses the same muscles — applied to algorithmic output.
At DaVita, we treat this as an opportunity to strengthen clinical reasoning, not replace it. We encourage caregivers to interrogate AI the way they would interrogate a consultant:
- Where did this output come from?
- What data was it fed to generate this answer?
- What does it tend to get wrong?
Those questions are part of the antidote to blind reliance. They are teachable. And they are increasingly essential.
Building the Clinician the Future Requires
Clinical roles are shifting. Where we were once data gatherers — hunting, gathering, typing — we are increasingly becoming editors of a synthesized narrative.
Editing well requires knowing when the draft is wrong. It means recognizing when a patient falls outside the algorithm’s reliable range or when the recommendation is not applicable despite being issued by a highly “intelligent” system.
We should approach AI integration with the same discipline we apply to a clinical study. We ask who was included, who was excluded, what the endpoints were, what the limitations are and whether results generalize.
AI deserves the same rigor — paired with strong governance, continuous education and a serious commitment to interoperability and data quality.
I am genuinely optimistic about the future AI can enable. By offloading administrative drudgery, AI can ease the “drinking from a firehose” reality of modern medicine. It can create the space to reclaim the human role of caregiving as we spend more time making eye contact, building rapport and reinforcing behaviors that support better outcomes.
Paired with accountable oversight, AI can become what it should be: a powerful, well-monitored extension of human clinical expertise.