Patients ask AI about lab results: What are the risks?

athenahealth
March 20, 2026
8 min read

What happens when curious patients ask AI about their lab results

The moment lab results hit a patient portal, a new trend is emerging. Instead of waiting for a call from their clinician, some patients may open a generative AI (GenAI) tool and ask, “What does this mean?”

Past studies have shown that 96% of patients ideally want immediate access to their medical records.1 That access might come before clinical review, sometimes resulting in large language models (LLMs) like ChatGPT, Claude, and Gemini becoming a default “second opinion.” Generative AI is fast, conversational, and available around the clock, filling the gap between when results are released and when follow-ups occur.

This trend doesn't reflect a breakdown of trust in care teams. Patients aren't turning to AI because they distrust their providers. Rather, they're turning to it because the healthcare ecosystem hasn't always met them at the moment they need answers. Factors like overwhelmed clinical staff and operational difficulties can make it hard to balance clinical workflows and data ingestion with timely patient outreach.

For practices, these rising expectations for timely, thorough answers invite reflection on not only clinical accuracy but also patient experience. It could be an opportunity to re-evaluate how and when lab information reaches patients — and whether current tools meet patients at their moment of uncertainty. If answers aren't readily available within your platform, where are patients going to find them, and how can you limit negative outcomes?

Why patients are turning to AI for lab analysis

If patients turn to chatbots for insight, it's not necessarily because they want to. In fact, most people still prefer that results be explained directly by their care teams,2 and only 37% are comfortable with AI diagnosing conditions.3 Patients turn to AI for a simpler reason — lab information arrives quickly, but the interpretation often comes later.

Lab reports are frequently released through patient portals before a clinician has reviewed or explained them. When patients see a page of numbers, reference ranges, and flags — especially after hours — questions may arise immediately. Is this serious? Is this normal? Do I need to act now? GenAI can offer instant, plain-language responses at exactly that moment.

What most patients are really seeking isn't a diagnosis. Turning to AI may reflect a desire for immediate guidance and comfort, not clinical replacement. Yet an August 2024 poll found that most adults (56%) could not tell whether an AI chatbot was supplying truthful or false information.4

Despite these statistics, discouraging patients from using GenAI isn't realistic — or patient-friendly. The better approach is to ensure those questions are answered within the healthcare system, using tools designed for clinical care rather than public platforms. When implemented responsibly, AI can help improve patient understanding while easing clinician workload.

The risks with unsecured, public AI tools in healthcare contexts

Makers of public, consumer-facing AI tools have introduced health-specific features aimed at handling healthcare context and answering patient queries. But early indications underscore how much output quality depends on data availability and access to the medical record. This matters for several reasons:

When lab results lose context, accuracy suffers

Lab values rarely mean much in isolation. Extensive background regarding age, medications, medical history, and prior results all matter — context that public AI tools don't have. Without it, these systems may overemphasize minor abnormalities, misread reference ranges, or miss meaningful patterns.
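
To make the point concrete, here is a toy sketch — with purely illustrative numbers, not clinical guidance — of the kind of context-free range check behind the "H"/"L" flags a patient sees on a portal report, and the historical context a public chatbot never has:

```python
# Illustrative only: how the same value reads differently with history.

def flag(value: float, low: float, high: float) -> str:
    """Context-free check, like the 'H'/'L' flags on a portal lab report."""
    if value < low:
        return "L"
    if value > high:
        return "H"
    return ""

# A creatinine-style value just above its reference range...
current, low, high = 1.4, 0.7, 1.3
print(flag(current, low, high))  # "H" -- alarming in isolation

# ...but prior results (unavailable to a public AI tool) show the value
# has been essentially stable, which changes how a clinician reads it.
history = [1.4, 1.5, 1.4, 1.4]
stable = max(history) - min(history) <= 0.2
print(stable)  # True
```

The flag is technically correct, but without the trend, a public tool can only react to the "H" — which is exactly how minor abnormalities get overemphasized.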

Research has shown that LLMs can generate inaccurate or “hallucinated” explanations,5 often delivered with total confidence. In one study, ChatGPT sometimes overemphasized or downplayed findings in radiology reports, and patients reported greater confusion after reading AI-generated summaries of clinical findings.6

The false-confidence conundrum

GenAI responses often sound confident — even when they get it wrong. That's part of the problem: Because LLMs are built to be helpful, these tools will often go along with incorrect or illogical medical questions and produce authoritative-sounding answers, even when the question or premise is flawed.7

In one study, AI models accepted nearly all incorrect medical assumptions they were given, prioritizing helpful-sounding responses over factual accuracy.8 That false confidence can do real damage, leading patients to overestimate the significance of certain results and complicating follow-up conversations with care teams.

Anxiety, unnecessary care, and inequity

Poorly contextualized explanations can increase anxiety, triggering extra messages, visits, or testing. Bias is another concern. When AI tools are trained on non-representative data, disparities can surface in how results are explained or prioritized. Notably, 33% of recently surveyed Black adults expect AI to increase bias in healthcare, compared with 21% of white adults.9

Possible strain on clinician-patient dialogue

Instead of saving time, inaccurate interpretations may strain clinician-patient dialogue, shifting visits toward correction rather than care planning.

The benefits of secure, platform-integrated AI

When generative AI can connect to patient portals, messaging tools, and clinical workflows, it can help deliver explanations at the moment lab results are released because it has access to more data. That access to historical, structured datasets improves the chances of a better GenAI output while keeping questions, conversations, and follow-up inside trusted channels. Those factors can serve the patient experience while helping to reduce administrative burden.

An MCP server is one way to establish a connection to the medical record. This software can act as a secure and intelligent bridge that ingests requests from consumer AI and helps govern what information the models can use in their response to a patient query. An MCP server can also log and audit these requests for further evaluation and ongoing development.
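
As an illustration of that gatekeeping idea — not athenahealth's implementation, and with a field allowlist and function names that are entirely hypothetical — the core pattern looks something like this sketch in Python:

```python
import datetime
import json

# Hypothetical allowlist: the only fields a consumer AI may receive
# for a lab-results query. Everything else is withheld by default.
ALLOWED_FIELDS = {"test_name", "value", "unit", "reference_range", "collected_at"}

AUDIT_LOG = []  # in practice: durable, access-controlled audit storage

def handle_lab_request(requester: str, record: dict) -> dict:
    """Filter a lab record down to governed fields and audit the request."""
    released = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    AUDIT_LOG.append({
        "requester": requester,
        "released_fields": sorted(released),
        "withheld_fields": sorted(set(record) - ALLOWED_FIELDS),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return released

record = {
    "test_name": "Hemoglobin A1c",
    "value": 5.9,
    "unit": "%",
    "reference_range": "4.0-5.6",
    "collected_at": "2026-03-18",
    "patient_ssn": "REDACTED",       # never released to consumer AI
    "clinician_notes": "internal",   # withheld from patient-facing responses
}
released = handle_lab_request("consumer-ai-client", record)
print(json.dumps(released, indent=2))
```

The design point is that governance and auditing live on the server side of the bridge: the consumer AI only ever sees what the policy releases, and every request leaves a trail.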

When AI is secure, platform-integrated, and designed for both patient engagement and care delivery, it becomes an asset rather than a workaround. Practices leveraging MCP servers can help consumer AI tools deliver context-aware explanations within the framework of their own ecosystem — where privacy, governance, and clinical oversight are built in.

The result is fewer fragmented conversations, which can contribute to a more seamless experience for both patients and staff.

Built-in privacy protections for lab data

When patients review lab results, they're often sharing highly personal information — sometimes without realizing how easily it can travel beyond their control. Platform-integrated AI helps keep those conversations inside secure systems, so sensitive health information doesn't leave trusted channels.

Instead of copying results into public tools with unclear data policies, patients can receive explanations through secure portals and messaging tools connected to their care team. That continuity helps protect privacy and can reinforce confidence that their information is being handled responsibly.

Context-aware explanations grounded in the patient record

Lab results rarely tell the whole story. Platform-integrated AI can draw from available patient information to help provide more relevant context when supporting lab communication. Two-way texting and secure webchat capabilities help enable patients to ask basic questions and receive timely responses within the same trusted channels they already use.

If additional follow-up is needed, conversations can be handed off to staff without losing context. By embedding these capabilities into existing workflows, practices can work to streamline communication and reduce phone volume and backlog, placing emphasis on the patient experience through all phases of the care journey. 

Clear boundaries between education and diagnosis

Healthcare-grade AI should help support communication — not replace clinical judgment. When embedded within secure patient communication channels, it can help patients understand lab-related information and next steps without moving into diagnosis or treatment decisions.

That distinction keeps care teams responsible for clinical decisions while helping to offer patients clear and timely guidance.

Clinically aligned outputs that support care teams

When secure AI operates within established patient engagement and practice workflows, communication remains consistent with how information is delivered and documented. This helps reduce fragmented messaging and unnecessary back-and-forth, keeping interactions focused and organized.

Reduced administrative burden after labs are released

When patients get clearer explanations upfront, it can help reduce questions and concerns that may pile up later. Within athenaOne®, AI triage embedded in Patient Conversations can serve as a first filter for incoming lab-related questions — addressing routine inquiries through AI agents and routing more complex concerns to staff when appropriate.

By managing communication at the point of release, practices can reduce phone volume, inbox backlog, and repetitive administrative work, while remaining responsive to patients.
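
The "first filter" idea can be sketched in a few lines. This is illustrative only — keyword rules stand in for the clinical models a real triage system would use, and none of the names reflect how athenaOne's AI triage actually works:

```python
# Toy first filter for incoming lab-related messages (illustrative only).
# A production system would use clinical models, not keyword matching.

ROUTINE_TOPICS = ("fasting", "when will", "what does", "reference range")
ESCALATE_TOPICS = ("chest pain", "bleeding", "emergency", "severe")

def triage(message: str) -> str:
    text = message.lower()
    if any(t in text for t in ESCALATE_TOPICS):
        return "route_to_staff_urgent"   # complex or concerning: humans first
    if any(t in text for t in ROUTINE_TOPICS):
        return "answer_with_ai_agent"    # routine: AI agent can respond
    return "route_to_staff"              # default: a person reviews anything unclear

print(triage("What does the H flag next to my result mean?"))   # answer_with_ai_agent
print(triage("I have severe chest pain since the blood draw"))  # route_to_staff_urgent
```

The key design choice is the default branch: anything the filter cannot confidently classify goes to staff, so automation only absorbs the routine volume.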

A trusted tool clinicians can use to improve lab communication at scale

When embedded within secure patient engagement platforms, AI can help practices manage lab-related communication more consistently and efficiently. By supporting routine inquiries and guiding patients within trusted channels, these tools help enable teams to respond at scale without overwhelming staff. In the context of patient engagement, that can help create reliable communication workflows while keeping clinical decisions with providers.

How athenahealth helps deliver safer, smarter patient understanding

Too often, practices lack the operational capacity to meet patient demands and communicate with them at the right time. athenahealth's AI approach includes consistent innovation to address those issues: Patients who feel the pull for immediacy shouldn't have to leave the healthcare ecosystem to understand their lab results, and clinicians shouldn't have to consistently chart after hours.

By embedding secure, healthcare-grade AI directly into athenaOne, athenahealth can help support clear lab communication for patients through "always-on" patient engagement tools that provide 24/7 access to AI assistants. These capabilities help patients receive fast responses, with the ability to transition seamlessly to care teams when clinical input is needed.

  • Lab explanations stay inside secure communication through portal messaging and other patient-facing channels.
  • Direct communication via two-way texting in Patient Conversations enables care teams to communicate with patients and help them understand abnormal flags, ranges, and trends without unnecessary alarm. Through AI triage, routine questions can be addressed, and patients can receive guidance about when follow-up is needed.
  • Contextual guidance is included to help reflect prior results, medications, and clinician intent — without diagnosing.
  • Clinician-only AI tools can support documentation and chart review upstream.
  • Privacy-first design helps keep lab data within HIPAA-compliant systems.
  • For more immediate follow-up, athenaOne will have a Model Context Protocol (MCP) server that will act as a secure bridge between the platform and consumer AI tools.

The result is a connected experience that helps limit misunderstandings and reduces the reasons for patients to turn to unsecured AI tools.

A better path forward: Empowerment without exposure

Patients will continue asking AI about their lab results. The real question is where those questions get answered.

When results arrive before explanations, public AI tools fill the gap because they're fast — not because they're appropriate. Trustworthy lab interpretation belongs inside healthcare, grounded in context, privacy, and care teams.

That's the opportunity ahead: Lab-focused AI built for healthcare, not scraped from the open web. In this future, AI doesn't replace the patient-provider relationship — it helps strengthen it. 



1. https://pmc.ncbi.nlm.nih.gov/articles/PMC10028486/
2. https://pmc.ncbi.nlm.nih.gov/articles/PMC12374212/
3. United States of Care and athenahealth report: Artificial Intelligence (AI) in Healthcare: Patient and Physician Perspectives
4. https://www.kff.org/public-opinion/kff-health-misinformation-tracking-poll-artificial-intelligence-and-health-information/#cdc7ee85-54db-4b29-aa9b-625afeb4c03c
5. https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html
6. https://preprints.jmir.org/preprint/76097
7. https://www.ncbi.nlm.nih.gov/books/NBK615587/
8. https://www.nature.com/articles/s41746-025-02008-z
9. https://pmc.ncbi.nlm.nih.gov/articles/PMC12665710/