For years, our industry treated documentation accuracy as the finish line. If the code was correct, you were compliant. If the chart supported the diagnosis, you were audit-ready. That assumption was never entirely true, and it's becoming dangerous to hold on to now.
During my time conducting RADV audits, I reviewed thousands of charts across dozens of plans. What stood out wasn't whether individual codes were right or wrong. It was the patterns. Plans that submitted thousands of additions and almost no deletions. Systems that surfaced every possible diagnosis but never questioned whether one should come off. To the plan, that looked thorough. To us on the audit side, it looked like a process designed to find more codes, not to find clinical truth.
That pattern is exactly what investigators are trained to spot today.
Explainability Is Not the Same as Defensibility
More plans now use AI that can show how a coding recommendation was made. That's a step forward. But the $556M Kaiser settlement and the ongoing DOJ investigation into UnitedHealth have raised a harder question: Does your process reflect clinical reality, or is it built to maximize codes?
Explainability shows what happened inside the system. Defensibility shows that the system was designed to surface truth, including diagnoses that shouldn't have been submitted. One demonstrates transparency. The other demonstrates intent.
When investigators examine your risk adjustment program, they don't just look at codes. They examine the incentives behind them. Was volume rewarded? Were warnings ignored? Were deletions caught as often as additions? If a system only ever adds, that tells a story about what it was built to do.
I've seen plans absorb significant compliance costs even when their codes were technically accurate, because the process told a different story. The documentation existed, but the clinical reasoning behind accepting each code was thin or unclear. That gap is real, and auditors know how to find it.
Retrospective Isn't the Problem. One-Way Retrospective Is.
There's a growing instinct to blame retrospective review whenever enforcement actions make headlines. I understand the reaction, but the retrospective itself isn't the liability. The structure is.
When retrospective review only adds codes and never removes unsupported ones, it signals revenue pursuit, not clinical discipline. Plans see thoroughness. Auditors see a system that was never designed to test its own accuracy. When retrospective review is structured to validate in both directions (confirming what belongs and flagging what doesn't), it becomes one of the strongest compliance tools available. It catches weak documentation early, exposes risky coding trends and builds the kind of evidence trail that holds up under scrutiny.
A balanced, two-way process shows clinical judgment. A one-directional one raises doubt.
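To make the one-directional pattern concrete, here is a minimal sketch of the kind of balance check a plan could run against its own retrospective review data. The log schema (`action` values of `add`, `delete`, `confirm`) and the 5% threshold are hypothetical illustrations, not a regulatory standard:

```python
from collections import Counter

def review_balance(review_log):
    """Summarize a retrospective review log and flag a
    one-directional pattern (hypothetical schema: each entry
    carries an 'action' of 'add', 'delete', or 'confirm')."""
    counts = Counter(entry["action"] for entry in review_log)
    adds, deletes = counts["add"], counts["delete"]
    total = adds + deletes
    # A review stream that only ever adds codes and almost never
    # deletes one is exactly the pattern auditors are trained to spot.
    one_directional = total > 0 and deletes / total < 0.05  # illustrative threshold
    return {
        "adds": adds,
        "deletes": deletes,
        "confirms": counts["confirm"],
        "one_directional": one_directional,
    }

# Example: four additions, one confirmation, zero deletions.
log = [
    {"action": "add"}, {"action": "add"}, {"action": "add"},
    {"action": "confirm"}, {"action": "add"},
]
print(review_balance(log))
```

The point of a check like this is not the threshold itself but the habit: measuring deletions alongside additions turns "thoroughness" into evidence that the process tests its own accuracy in both directions.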
The Details That Cost Plans Millions
When I conducted audits, I wasn't following a simple checklist. I was looking for discrepancies between what was submitted and what the documentation actually supported. The findings that cost plans the most were rarely about blatant errors. They were about gaps in the process.
Carry-forward diagnoses that were never reassessed during an actual patient visit. Codes that didn't match documented severity: the submission said one thing, the clinical note said something less precise. AI-recommended codes accepted without any documented clinical judgment: the algorithm flagged a diagnosis, someone accepted it, and no one recorded the clinical reasoning behind that decision.
And plans with strong accuracy rates that could show what they submitted, but never what they rejected. That one-directional pattern tells auditors everything about intent.
CMS now evaluates these patterns across your entire membership. A weakness in one area can't be hidden by strength in another.
The Window to Act Is Open, But It Won't Stay Open
CMS is now looking back six years. Audit cycles are intensifying and the financial exposure per finding continues to rise. Risk adjustment is no longer a revenue function with compliance guardrails bolted on. It's regulated clinical compliance, held to the same standards as everything else in your organization.
For plans that haven't built defensibility into their coding process, the time to start is now. Not because the sky is falling, but because the organizations that build two-way, evidence-grounded, audit-ready processes today will be the ones that aren't scrambling when the next round of enforcement actions arrives.
Defensibility isn't a product you buy. It's a discipline you build. And the foundation of that discipline is simple: every diagnosis must be encounter-linked, clinically evidenced and supported by a process that proves you were looking for clinical truth, not just more codes.