Physicians embrace AI help with administrative burden

How is AI use in healthcare changing in 2025?
Artificial intelligence is beginning to deliver on its promise to ease physician burnout by shouldering administrative burdens and giving providers back the time to focus on patient care.
In athenahealth’s 2024 Physician Sentiment Survey (PSS), 93% of surveyed physicians reported feeling burned out regularly, with 49% saying their workload had become unsustainable.1
The 2025 PSS shows reason for optimism: Nearly 10% fewer physicians report burnout. And only 28% now say they’re considering leaving the medical field, down from 62% in 2024.2
As AI’s role in healthcare grows, physicians are eager to understand the full scope of its capabilities, but many are already embracing AI for administrative relief.
From AI skepticism to strategic adoption
In 2025, 68% of physicians report increased use of AI for documentation. The rise signals growing comfort with AI for clinical documentation, administrative support, and patient education.
“The industry is experiencing a shift with AI, from hype to reality,” says Nele Jessel, M.D., Chief Medical Officer at athenahealth. “Physicians are starting to see immediate benefits, particularly with alleviating administrative burden.”
The shift reflects a growing willingness to engage with AI, especially where it relieves the burden of documentation and administrative work.
Physicians remain wary of AI in clinical decision-making
Physicians are embracing AI for administrative relief but remain cautious about its clinical reliability, raising questions like ‘How accurate is AI?’ and ‘Is AI reliable enough for diagnosis?’ In the 2025 survey, 58% express concern about overreliance on AI for diagnosis, and 53% worry about potential misdiagnoses.
Additionally, in 2025, 61% of physicians cite concern over loss of human touch with the growing role of AI in care delivery.
These concerns suggest that many physicians aren’t fully confident in AI’s accuracy or its ability to apply clinical judgment appropriately, especially in high-stakes situations like diagnosis or treatment decisions.
Confidence in AI also has a generational component, with younger physicians more likely to see immediate benefits of artificial intelligence in their day-to-day practice than their older counterparts.
Share of physicians who see AI’s potential to reduce administrative work, by age group:
- 68% of those under 40
- 56% of those aged 40–64
- 35% of those 65 and older
AI in practice
Today’s most accepted AI use cases in healthcare are focused on administrative and communication tasks—areas where efficiency matters, but clinical risk is minimal.
While physician attitudes toward AI are mixed, the technology is advancing fastest on administrative tasks: AI-supported ambient listening and note generation are helping reduce documentation fatigue, and generative and agentic AI can assist in managing patient portal messages and non-urgent inquiries.
The takeaway? AI is welcomed when it handles routine, low-risk tasks, but skepticism remains when it encroaches on clinical judgment or patient interaction.
Still needed: guardrails, standards, and better integration
While enthusiasm for AI has grown, the 2025 Physician Sentiment Survey makes it clear: physicians still want clear boundaries, safeguards, and smarter integration before fully trusting AI in clinical care. Their top concerns reveal where support is still needed.
Interoperable EHRs that filter and summarize
Physicians remain overwhelmed by fragmented data. Despite the push for interoperability, only 28% say exchanging patient information across systems is easy. Many point to the urgent need for EHR platforms that not only connect but also deduplicate, extract, and summarize patient information, so they can spend less time sifting through redundant records and more time focused on care.
Chad Dodd, Vice President of Product Development at athenahealth, agrees with physician sentiment about interoperability challenges. “While the industry has made great strides in EHR adoption, the lack of interoperability continues to create bottlenecks, resulting in wasted time and missed opportunities for improving patient care,” he explains.
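To make the deduplication-and-summarization step concrete, here is a minimal Python sketch. Everything in it is a hypothetical stand-in: the `Record` schema, the field names, and the name-plus-birth-date matching rule. Real EHR matching relies on standardized exchange formats such as FHIR and far more robust identity resolution.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    """One patient record as received from an external system (hypothetical schema)."""
    source_system: str
    patient_name: str
    birth_date: str   # ISO 8601, e.g. "1980-04-02"
    note: str

def dedup_key(rec: Record) -> tuple[str, str]:
    # Normalize the fields most likely to vary in formatting across systems.
    return (rec.patient_name.strip().lower(), rec.birth_date)

def merge_records(records: list[Record]) -> dict[tuple[str, str], list[str]]:
    """Group records that appear to describe the same patient and drop repeated notes."""
    merged: dict[tuple[str, str], list[str]] = {}
    for rec in records:
        notes = merged.setdefault(dedup_key(rec), [])
        if rec.note not in notes:  # skip verbatim duplicates arriving from other systems
            notes.append(rec.note)
    return merged

if __name__ == "__main__":
    incoming = [
        Record("hospital_a", "Jane Doe", "1980-04-02", "Allergic to penicillin."),
        Record("clinic_b", "JANE DOE ", "1980-04-02", "Allergic to penicillin."),  # duplicate
        Record("clinic_b", "Jane Doe", "1980-04-02", "BP 128/82 at last visit."),
    ]
    for key, notes in merge_records(incoming).items():
        print(key, "->", notes)
```

Collapsing repeated notes at ingestion is what lets a summary view show each fact once, rather than once per source system.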
Guidelines for safe and effective AI use
Physicians want reassurance that AI won’t be used indiscriminately in clinical settings. More than half express concern that AI could lead to misdiagnosis or undermine the human connection in care. Many are asking: When is it appropriate to use AI for diagnosis? How much should a clinician rely on AI-generated suggestions? The growing call for regulatory action, reflected in the rise from 10% to 15% of physicians who want elected officials to step in, signals an increasing urgency to establish national guidelines and standards for clinical AI use.
Clarity around accountability
As AI tools become more embedded in care decisions, physicians need to know where responsibility lies. Who’s accountable when an AI-generated recommendation goes wrong? How do we ensure clinicians remain in control, without being overburdened by constant oversight? The survey reinforces that without clear governance structures and transparent accountability frameworks, physicians may hesitate to use AI beyond low-risk administrative support.
Until clinicians are confident in how accurate and reliable AI truly is, they’re unlikely to trust it in critical care settings.
What healthcare leaders should do now
The 2025 PSS doesn’t just highlight what physicians are feeling; it points the way forward. For healthcare leaders, the message is clear: AI can be part of the solution to burnout and inefficiency, but only if it’s implemented with care, purpose, and transparency.
“Practices need to continually make investments that move beyond basic data exchange into purpose-built workflows and usability so physicians can deliver best-in-class care,” says Sam Lambson, Vice President of Product Management, Data & Ecosystem at athenahealth.
Start with high-impact, low-risk use cases
AI adoption should begin where the benefits are immediate and the clinical risk is low. According to the survey, physicians overwhelmingly support using AI to reduce administrative work: tasks like documentation, coding support, and scheduling. These are the functions that eat up time and contribute to cognitive overload and burnout.
By automating these tedious tasks, physicians can focus on higher-value work and see firsthand that AI is a helpful partner, not a disruptive force.
Improve patient communications without overwhelming the inbox
Physicians also support the use of automated and agentic AI to handle routine patient messages, such as appointment requests, lab follow-ups, or basic triage questions. The key is giving physicians control over what AI handles and what gets escalated, while setting clear expectations with patients about response times and communication channels.
Done well, this use of AI protects physician time and improves the patient experience without adding noise to already overloaded inboxes.
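To illustrate the escalation idea, here is a minimal Python sketch of that routing pattern. The categories, keyword rules, and `AI_HANDLED` allow list are hypothetical stand-ins; a real system would use a trained classifier and a practice-configured policy.

```python
# Hypothetical categories a practice might let AI handle automatically;
# everything else is escalated to the care team.
AI_HANDLED = {"appointment_request", "lab_followup"}  # physician-configurable allow list

def classify(message: str) -> str:
    """Stand-in for a model-based classifier; keyword rules keep the sketch self-contained."""
    text = message.lower()
    if "appointment" in text or "reschedule" in text:
        return "appointment_request"
    if "lab" in text or "results" in text:
        return "lab_followup"
    return "clinical_question"

def route(message: str) -> str:
    category = classify(message)
    if category in AI_HANDLED:
        return f"auto-reply ({category})"
    return f"escalate to clinician ({category})"

for msg in ["Can I reschedule my appointment?",
            "Are my lab results in?",
            "I've had chest pain since yesterday."]:
    print(route(msg))
```

The key design choice mirrors the survey finding: the allow list, not the model, decides what AI may answer, so clinicians keep control of the boundary.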
Invest in smart, interoperable tools
Technology should reduce complexity, not add to it. The 2025 PSS reinforces the need for interoperable EHRs that surface relevant patient data in a clear, concise format, without duplication or excess clicks. Tools that summarize, highlight trends, and present the right information at the right time help physicians make faster, more confident decisions.
Build trust through thoughtful, phased implementation
Perhaps most importantly, AI adoption must be measured and intentional. Trust takes time, and reckless implementation can set progress back.
Leaders should:
- Involve clinicians early in decision-making
- Offer clear guardrails and training
- Track outcomes and continuously improve
The future of AI in healthcare is promising, but only if it evolves in partnership with the people who use it every day.
Efficiency now, accuracy next
Physicians are becoming cautiously optimistic about AI. The hype cycle is giving way to practical, real-world applications. Early signs suggest that AI can meaningfully reduce administrative burden and improve workflows when thoughtfully deployed.
“When leveraged appropriately, the right tools and technology can be advantageous and enable better patient care, and, as an industry, we must work together to keep the momentum,” says Jessel.
Full trust will depend on proving how accurate and reliable AI can be, especially in high-stakes clinical use. For now, the safest and most successful AI use cases in healthcare are those that support — not replace — clinical judgment.
Learn how your organization can implement AI that works for your clinicians, not against them.