Big Leap Health · Senior Product Manager · 2023–2024

Claim Automation

Healthcare · AI/ML · 0→1

Big Leap Health's RCM (Revenue Cycle Management) platform processed thousands of medical claims daily. Claims were submitted to payers with missing or incorrect fields — but the system gave no signal when something was wrong. Submissions went through silently, only to be denied weeks later. By then, the context was lost, the rework was expensive, and operators had no trust in the tooling.

The root issue wasn't just technical — it was a trust problem. Operators had been burned so many times by silent failures that they'd developed workarounds: manual spreadsheets, double-checking every claim by hand, flagging issues on sticky notes. The tooling was supposed to help them, but they didn't believe it would.

I joined as the Senior PM to own the claim submission pipeline end-to-end — from the moment a claim was created to the moment it was accepted (or denied) by the payer. My job was to figure out why the error rate was so high and ship a solution that operators would actually trust.

"Flag as missing" over auto-correction. The engineering instinct was to build AI that auto-filled missing fields. I pushed back. Operators didn't trust the system — auto-correcting silently would make the trust problem worse. Instead, I designed a "flag as missing" flow: the system would catch errors before submission and surface them to the operator with clear context on what was wrong and why it mattered. The operator stayed in control. This was the single most important decision in the project.
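The flow described above can be sketched as a pre-submission check that reports problems instead of silently correcting them. This is a minimal illustrative sketch; the claim shape and the required-field list are assumptions, not Big Leap Health's actual schema.

```python
# Minimal "flag as missing" sketch: the check never modifies the claim,
# it only reports what is missing so the operator stays in control.
# REQUIRED_FIELDS and the claim shape are illustrative assumptions.

REQUIRED_FIELDS = ["patient_id", "payer_id", "diagnosis_code", "service_date"]

def check_claim(claim: dict) -> list[str]:
    """Return the names of required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not claim.get(f)]

claim = {"patient_id": "P-1001", "payer_id": "", "diagnosis_code": "E11.9"}
flags = check_claim(claim)
# flags == ["payer_id", "service_date"] — surfaced to the operator
# before submission instead of being silently auto-filled.
```

The key design property is that `check_claim` is read-only: an auto-correcting version would mutate the claim, which is exactly what the operators had learned not to trust.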

Trust-building through transparency. Every flagged issue showed the operator exactly which field was problematic, what the expected format was, and what would happen if they submitted anyway (likely denial, with estimated revenue impact). We didn't hide the AI's reasoning — we made it visible. Over time, operators started trusting the flags because they could verify them.
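One way to make that context concrete is a flag record that carries the problematic field, the expected format, and the predicted consequence of submitting anyway. This is a sketch under assumptions; the field names and the example values are hypothetical, not the production data model.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """One surfaced issue. All names here are illustrative, not the
    production schema."""
    field: str                  # which field is problematic
    expected_format: str        # what a valid value looks like
    if_submitted: str           # predicted outcome if submitted anyway
    est_revenue_at_risk: float  # estimated denial impact, in dollars

    def render(self) -> str:
        # The visible reasoning an operator could verify for themselves.
        return (f"{self.field}: expected {self.expected_format}; "
                f"if submitted anyway: {self.if_submitted} "
                f"(~${self.est_revenue_at_risk:,.0f} at risk)")

flag = Flag("diagnosis_code", "ICD-10 code, e.g. E11.9",
            "likely denial", 1250.0)
print(flag.render())
```

Because every flag exposes its own reasoning rather than a bare warning, a skeptical operator can check it against the claim and confirm (or dispute) it, which is how the flags earned trust over time.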

Incremental rollout with the most skeptical team first. Rather than a big-bang launch, I partnered with the operations team that had the lowest trust in our tooling. If we could win them over, the rest would follow. We did weekly check-ins, incorporated their feedback into every sprint, and gave them direct access to flag false positives.

Eliminated silent submission errors. Claims with missing or incorrect fields were caught before they left the system. The denial rate from preventable errors dropped to near-zero for participating teams.

Operators went from maintaining manual workarounds to relying on the flagging system as their primary quality check. The team that was most skeptical became the system's biggest advocate internally — they started requesting features instead of filing complaints.

The biggest lesson: when users don't trust your product, shipping more automation makes things worse. You have to earn trust first, and the way you earn it is by giving users visibility and control. "Flag as missing" was a less ambitious feature than auto-correction — but it was the right one.

I also learned that the best way to validate a product in a low-trust environment is to start with the harshest critics. If the most skeptical users adopt it, you know it works.