
Your Weekly Dose of AI in Health
🚨 Epic reportedly debuting an AI scribe — startups take notice
Multiple reports say Epic is preparing to launch its own ambient AI scribe product, a move that would put the EHR giant directly into a market currently dominated by well-funded startups and established players. The entry of an incumbent with Epic’s install base could reshape competition and adoption timelines for scribe and ambient documentation tools.
Why it matters: This could accelerate mainstream adoption of ambient documentation — and force startups to specialize or integrate more tightly with customers to maintain differentiation.
🗣️ Oracle launches an AI-powered, voice-first EHR for ambulatory care
Oracle Health unveiled a next-generation EHR designed around embedded AI and voice-first interactions for ambulatory providers, positioning itself as a competitor in the emerging “agentic EHR” space and emphasizing reduced clicking and faster clinician workflows. The product is being rolled out to ambulatory settings first, with broader launches planned later.
Why it matters: When large platform vendors bake agentic/voice capabilities into core EHRs, the locus of innovation shifts — hospitals will evaluate platform-level agent features alongside best-of-breed point solutions.
🩺 Study: Regular AI use may erode endoscopists’ standalone detection skills
A multicenter observational study reported in The Lancet Gastroenterology & Hepatology found that after routine deployment of AI assistance in colonoscopies, endoscopists’ adenoma detection rate without AI fell from 28.4% (before AI exposure) to 22.4% (after AI exposure) — a relative drop of roughly 20% (6 percentage points). In AI-assisted procedures the detection rate was 25.3%; the authors and commentators warn this is the first real-world evidence suggesting continuous AI use can produce “deskilling” of experienced clinicians. The study is observational and the authors note limitations (possible confounders, experienced-operator sample), so further research — especially across experience levels — is needed.
Why it matters: This finding highlights a critical unintended consequence of agentic and decision-support tools: while AI can raise immediate performance, continuous reliance may quietly degrade clinicians’ independent skills—underscoring the need for deliberate evaluation, training, and governance when integrating AI into clinical workflows.
🏭 Philips to invest $150M+ in U.S. manufacturing and R&D to scale AI-enabled imaging tech
Philips announced more than $150 million in new U.S. investments to expand production and R&D for AI-enabled ultrasound and imaging systems, including manufacturing expansions and job creation tied to delivering AI-enabled products faster to hospitals.
Why it matters: Supply chain and on-shore manufacturing investments show that AI in medical devices is maturing from lab prototypes into scalable, regulated products — a necessary step for broad clinical deployment.
🤝 Stanford Health + Qualtrics — building artificial patient-experience agents
Stanford Health is partnering with Qualtrics to embed AI agents into patient experience and operational workflows, aiming to translate predictive insights into targeted actions that reduce administrative burden and improve care coordination. Press coverage and releases outline agent use cases for outreach, triage, and workflow automation.
Why it matters: Academic medical centers pairing operational data with agentic platforms is a testbed for practical, measurable benefit — if agents can reduce friction across the patient journey, the ROI case becomes much clearer.

Each week, we select a critical topic for an in-depth exploration.

Deploying Agents in Healthcare: People and Process First, Technology Second
In healthcare, AI agents are still in the early stages of deployment. Voice agents lead adoption, but their success depends on more than the technology. From my experience building and scaling a GI practice and creating a surgical scheduling automation platform, I have learned that people, process, and data form the real foundation. Technology amplifies what already works, not the other way around. In IT this is called value stream mapping — understanding where value is created or lost and how resources flow through a process. This approach forces us to study the full workflow before introducing automation.
This leads to a point that can be controversial: deploy AI agents into controlled production environments early rather than delaying for perfect data readiness. In many healthcare settings, greater value comes from seeing how an agent performs with your actual data, workflows, and people. This provides real performance metrics, surfaces weaknesses faster, and allows for targeted refinement through continuous AI evaluations.
A Blueprint for Agent Deployment
Map and understand the workflow you want to enhance. Identify the people involved, the handoffs, and where waste or delays occur.
Quantify the value of the process. Measure time spent on each task, the cost of that time, and the tools used to complete it.
Deploy into a safe but realistic environment using your actual data. Capture feedback early, and make AI Evals the center of your improvement cycle.
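To make the second step concrete, here is a minimal sketch of quantifying a workflow’s cost before automating it. The steps, times, and rates below are hypothetical placeholders for illustration, not figures from any real practice:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    minutes_per_case: float   # staff time spent on this step
    hourly_cost: float        # fully loaded cost of the role performing it

def annual_cost(steps: list[Step], cases_per_year: int) -> float:
    """Rough annual dollar cost of a workflow, step by step."""
    return sum(s.minutes_per_case / 60 * s.hourly_cost for s in steps) * cases_per_year

# Hypothetical surgical-scheduling workflow (illustrative numbers only)
workflow = [
    Step("insurance verification", 12, 35.0),
    Step("patient phone outreach", 8, 28.0),
    Step("chart prep", 5, 45.0),
]
print(f"${annual_cost(workflow, cases_per_year=5000):,.0f}")
```

Even a back-of-the-envelope figure like this gives you a baseline to compare against once an agent takes over part of the workflow.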
What an AI Evaluation Really Means
An AI evaluation is the systematic process of measuring how a generative AI application behaves in a real-world context. It involves:
Running the agent against actual production data
Monitoring for errors, latency, and workflow misalignments
Feeding those findings back into the application to improve performance
These are not benchmark tests done in ideal conditions. AI evaluations are ongoing maintenance and part of the continuous development and integration lifecycle.
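As a rough illustration of that loop, here is a minimal eval-harness sketch in Python. The agent interface, case format, and metric names are assumptions for illustration, not a real eval framework:

```python
import statistics
import time

def run_eval(agent, cases, latency_budget_s=0.8):
    """Run an agent against real (de-identified) cases and collect the
    error and latency signals described above. `agent` is any callable
    that maps an input to an answer; `cases` carry an id, input, and
    expected output. All names here are illustrative."""
    errors, latencies = [], []
    for case in cases:
        start = time.perf_counter()
        try:
            answer = agent(case["input"])
        except Exception as exc:
            errors.append({"case": case["id"], "error": repr(exc)})
            continue
        latencies.append(time.perf_counter() - start)
        if answer != case["expected"]:
            errors.append({"case": case["id"], "error": "mismatch"})
    return {
        "error_rate": len(errors) / len(cases),
        "p50_latency_s": statistics.median(latencies) if latencies else None,
        "over_budget": sum(l > latency_budget_s for l in latencies),
        "errors": errors,
    }
```

Wiring a report like this into the deployment pipeline, and re-running it on fresh production data, is what turns evaluation into the ongoing maintenance discipline described above.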
The 4×4 Matrix for Selecting Agentic Solutions
I use a simple decision framework with two axes: clinical risk and technical debt. The sweet spot for early deployment lies in the quadrant with low clinical risk and low technical debt. These projects can move to production faster and scale more easily once proven.
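A minimal sketch of how that framework might be encoded, assuming a 1–4 score on each axis (the scale, threshold, and recommendation labels are illustrative, not part of the original matrix):

```python
def quadrant(clinical_risk: int, technical_debt: int, threshold: int = 2) -> str:
    """Place a candidate project on the risk/debt grid.

    Scores run 1 (low) to 4 (high) on each axis; anything at or below
    the threshold counts as "low". The recommendations are illustrative.
    """
    low_risk = clinical_risk <= threshold
    low_debt = technical_debt <= threshold
    if low_risk and low_debt:
        return "deploy early"  # the sweet spot described above
    if low_risk:
        return "simplify integration first"
    if low_debt:
        return "pilot under close clinical oversight"
    return "defer"
```

Scoring candidate projects this way makes the prioritization conversation explicit instead of intuitive: a low-risk, low-debt project gets the green light to move toward production first.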
The Evaluation Challenge in Voice Agents
Evaluating a voice agent goes beyond checking if the answer is correct. Latency must be low enough for natural conversation—ideally under 800 milliseconds. Reliability, adaptability, compliance, and accuracy all have to be measured continuously, especially when working with unpredictable real-world data.
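To illustrate, here is a minimal sketch of checking conversational latency against that 800-millisecond budget. The `respond` callable stands in for a full voice pipeline (speech recognition, generation, speech synthesis), and all names are illustrative:

```python
import statistics
import time

LATENCY_BUDGET_S = 0.8  # ~800 ms turn-taking budget from the text

def measure_turn_latency(respond, utterances):
    """Time each simulated conversational turn and report whether the
    95th-percentile latency stays within the budget. `respond` stands
    in for the agent's end-to-end pipeline."""
    samples = []
    for utterance in utterances:
        start = time.perf_counter()
        respond(utterance)
        samples.append(time.perf_counter() - start)
    # p95 needs at least two samples; fall back to the single sample otherwise
    p95 = statistics.quantiles(samples, n=20)[-1] if len(samples) >= 2 else samples[0]
    return {"p95_s": p95, "within_budget": p95 <= LATENCY_BUDGET_S}
```

Tracking a tail percentile rather than the average matters here: a conversation feels broken when even one turn in twenty stalls, no matter how fast the median response is.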
The Agentic AI Space in Healthcare
The agentic AI space in healthcare is already producing measurable results. Nuance DAX Copilot now serves over 400 provider organizations with documented burnout reductions of more than 80 percent. Abridge is live in more than 150 health systems, powering tens of millions of patient conversations. Notable Health automates over one million workflows daily across 12,000 care sites, while Artisight’s smart hospital agents have delivered a 39 percent reduction in patient fall risk in published trials. Platforms like Bayesian Health show peer-reviewed clinical impact, with 89 percent adoption and improved outcomes in Johns Hopkins hospitals. Builders are leveraging HIPAA eligible cloud stacks such as AWS HealthScribe, Azure OpenAI, and Google MedLM to create compliant, integrated agents at scale.
Lessons from Enterprise AI
Across industries, the most successful AI teams:
Start small with one defined use case
Deploy quickly into a controlled production setting
Evaluate continuously with real data and refine
Expand only after proving value
A Final Note
Deploying agents in healthcare requires more than technical skill. It needs input from clinical leaders, operations experts, technical teams, and patient experience specialists. When people and process come first, and AI evaluations are treated as a continuous discipline, agents can move from pilots to scaled solutions that truly enhance care delivery.

Stay informed on frontier research on the future of AI and health.
🔬 BBC / broader coverage — AI designing potential antibodies / biologics (reported) Recent reporting highlights AI systems being used to design candidate antibodies and other biologics against pathogens — an accelerating trend where computational design shortens the early discovery cycle and generates candidate molecules for lab validation. (Multiple outlets and preprints document AI-driven antibody and protein design breakthroughs.)
Why it matters: AI-driven molecular design can compress discovery timelines — but lab validation and safety testing remain essential gating items before clinical use.
🗂️ New health data repository aimed at AI researchers — better data, less “garbage in, garbage out” A newly released curated health data repository is targeted at AI researchers to improve reproducibility and reduce common dataset issues that hamper model generalization. The resource emphasizes de-identification and structured labeling for common clinical tasks.
Why it matters: High-quality, well-curated datasets are the single biggest lever for trustworthy model performance — investing in them short-circuits many downstream evaluation headaches.
🔊 Diagnostic acoustics: AI may detect early voice-box lesions from voice patterns New research shows acoustic analysis combined with AI can help distinguish vocal fold lesions, suggesting non-invasive screening tools for early detection of certain laryngeal conditions. Early results point to promising diagnostic signal in voice features.
Why it matters: Non-invasive, acoustic-based screening could enable earlier referrals and monitoring — but datasets, external validation, and prospective clinical trials are required before clinical rollout.
🧠 MedGemma: Google’s vision-language models for medical imaging — technical deep dive
Google’s MedGemma family (vision + language) is presented as a medically tuned multimodal model for radiology, pathology, and other imaging tasks, with deployment guidance (Vertex AI, model garden variants) and sample app patterns for integrating image + text workflows. The post walks through model variants, MedSigLIP (medical vision encoder), and practical deployment tips.
Why it matters: Domain-specific multimodal models lower the barrier to building clinically useful imaging assistants — but they also raise the stakes for governance, evaluation, and careful integration with radiology workflows.

Mark your calendars for essential industry gatherings and educational opportunities.
| Event | Date & Location | Sponsor |
|---|---|---|
| | October 10, 2025, 1 p.m. – 4 p.m., San Diego, CA | American Medical Association |
| | October 19–21, 2025, Pittsburgh, Pennsylvania | The University of Pittsburgh |
Reach out at [email protected] if you have an event you’d like to promote.
Help us grow our community and spread the word about the exciting advancements in AI and Health.
If you found this newsletter valuable, please share it with your network with this Referral Link!
Thank you for reading The AI Pulse Weekly. I’d love your feedback, so please drop me a note at [email protected] with thoughts, suggestions or feedback. I'll be back in your inbox next Friday with more Health and AI insights!
Warmly,
Sean
Copyright © Root Note Ventures LLC, All rights reserved.