Getting Healthcare AI Right: Lessons From Building It, Buying It And Watching It Break

    Originally published on Forbes Councils on January 21, 2026.

    There's a particular kind of failure that shows up in healthcare technology. It doesn't just live in the code, the clinic or the compliance checklist. It happens well before any of those stages — in the meeting room, in the decision-making process, in the rush to roll out a “solution.”

    Six months of implementation, dozens of training sessions and a vendor's promise of transformation. Then, three weeks after launch, a call I've come to recognize: “We're turning it off.” The truth is, most healthcare AI doesn't fail because the tech is flawed. It fails because the implementation wasn't designed for the real world.

    I've seen this again and again, not just from the outside but from the inside as an operator and lead implementer. Over the past several years, I've built and scaled health technology across U.S. healthcare facilities. My team and I have navigated multiple implementations, learned from each and continued to iterate. The answer to what makes AI succeed or unravel is rarely in the code; it lives in clarity, culture and operational fit.

    When AI Doesn't Deliver: A Common Story

    A few months ago, a hospital system reached out to us. Its current AI vendor had promised streamlined scheduling and pre-visit workflows. However, the staff was frustrated, adoption was lagging and the project was quietly failing.

    This scenario is more common than people realize. The technology might work in theory, but if it's not built with actual clinical workflows in mind, or if it demands that teams relearn too much too quickly, it won't take hold. Even a promising solution can wither without the right support, training and alignment.

    AI: Not Just A Tool But A Species

    One of the most valuable lessons we've learned is the importance of understanding the type of AI being purchased before implementation begins. Here's a framework we use to evaluate solutions:

    • Level 0 (Administrative AI): Manages structured, repetitive tasks such as scheduling, reminders and documentation.
    • Level 1 (Assistive AI): Guides staff or patients through processes but doesn't make decisions.
    • Level 2 (Supervised Autonomy): Makes recommendations or decisions with oversight from clinicians.
    • Level 3 (Autonomous Care Delivery): Mostly theoretical in today's environment. Not where most organizations should start.

    Most tools today are Level 0, and this isn't a problem unless an organization purchases them with expectations of Level 3 outcomes. Misaligned expectations are often where failure begins.

    Six Lessons From Implementations That Actually Work

    Across dozens of deployments I've worked on, these six lessons have consistently made the difference between scalable success and silent failure:

    1. Tie AI to the metrics that already matter. If a vendor can't map its product to something you already track — such as intake delays, claim denial rates or message backlog — it isn't solving your problem. It's selling a narrative, and that doesn't serve the buyer or the vendor.

    2. Align internal expectations. CFOs are often focused on ROI and cost reduction, while clinical teams care more about time to care and patient experience. Intake delays may not register for finance leaders, but they're a daily challenge for clinicians. Alignment starts with recognizing that different teams care about different outcomes and that both are valid.

    3. Start small, but keep it real and measurable. Don't pilot a solution in a sanitized environment. Instead, choose one workflow at one location and define success with specific metrics. If it can't succeed in a focused test, it won't succeed at scale. You can't improve what you don't measure. If your team isn't tracking patient conversion or intake cycle time, you won't know whether AI is delivering impact.

    4. Don't skip the integration conversation. Some of the most visually polished tools fail when plugged into real-world systems. Ask vendors to share data flow diagrams before contracts are signed. Make sure to clarify what integrates today, not what might be ready in a future release.

    5. Train for adoption, not just usage. Healthcare teams are already overwhelmed. If a tool requires significant workflow changes without clear benefits to front-line staff, it won't be adopted. Adoption happens when people experience immediate value in their day-to-day work.

    6. Build an ROI model before going live. Estimate minutes saved, denials avoided and time reclaimed. Then, ask the vendor what happens if results fall short. Good partners are willing to tie success to contract terms. If possible, align incentives with shared outcomes such as revenue protection or cost savings. Coming from a consulting background, I've always believed that when all parties share the same table, better results follow.
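The kind of pre-launch ROI model described above can be as simple as a few lines of arithmetic. The sketch below is purely illustrative — every number in it is a hypothetical assumption a buyer would replace with their own intake volumes, staffing costs and denial rates, not data from any real implementation:

```python
# Hypothetical pre-launch ROI sketch. All inputs are illustrative
# assumptions to be replaced with an organization's own figures.

minutes_saved_per_intake = 12    # assumed staff minutes saved per patient intake
intakes_per_month = 400          # assumed monthly intake volume
hourly_staff_cost = 30.0         # assumed loaded hourly staff cost (USD)

denials_avoided_per_month = 8    # assumed claim denials prevented per month
avg_denied_claim_value = 250.0   # assumed average value of a denied claim (USD)

monthly_vendor_fee = 2000.0      # assumed AI vendor subscription (USD)

# Minutes saved -> hours -> dollars of staff time reclaimed
time_savings = minutes_saved_per_intake * intakes_per_month / 60 * hourly_staff_cost

# Revenue protected by avoiding denials
denial_savings = denials_avoided_per_month * avg_denied_claim_value

# Net monthly value after the vendor fee
net_monthly_value = time_savings + denial_savings - monthly_vendor_fee

print(f"Staff time reclaimed: ${time_savings:,.2f}/month")
print(f"Denials avoided:      ${denial_savings:,.2f}/month")
print(f"Net value vs. fee:    ${net_monthly_value:,.2f}/month")
```

Running a model like this before go-live gives both sides a concrete baseline: if results fall short of the assumed inputs, that gap is what the vendor conversation about contract terms should center on.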

    Technology That Frees People, Not Just Processes

    From my own experience, I've found that the most powerful benefits of healthcare AI aren't abstract; they're deeply human:

    • A clinic we worked with cut its intake time in half and now calls patients back within a day instead of a week.
    • A speech therapist who is a client of ours once spent hours chasing authorizations. They now use that time to plan better sessions.
    • We worked with an autism therapy center that now reaches more families, more quickly, because documentation and scheduling no longer bottleneck care.

    These are real-world results from real implementations — not pilot dreams, not vaporware. They're what happens when technology is designed for, and with, care delivery in mind.

    Implementation Is The Strategy

    Success with healthcare AI isn't about choosing the flashiest solution. It's about asking sharper questions and treating implementation as a core strategy.

    The clinics and systems that win with AI are those that move with purpose — defining clear problems, testing solutions with measurable pilots and choosing long-term partners over short-term vendors.

    AI isn't a replacement for care. It's a multiplier, but only when introduced with discipline, collaboration and honest measurement.

    The technology is here. The difference will be in how we bring it to life.


    JP Montoya

    Founder & CEO, Solum Health

    JP Montoya builds and scales healthcare administrative automation at Solum Health, working with ABA, PT, and ST/OT therapy practices across the US. He writes about healthcare operations, AI implementation, and practice management from direct experience building and running clinical workflows.
