What healthcare leaders are running into with AI adoption
Many healthcare leaders aren’t debating whether to adopt AI; they’re trying to determine which tools to choose and whether they can afford them.
AI expansion has been rapid as practices look for potential new use cases. Ambient documentation tools have become a staple in many exam rooms, with recent survey data showing a 38 percent increase in usage from last year alone.1 In back offices, staff may use generative AI to draft appeals, summarize policies, and keep work moving. Vendors are adding AI capabilities to existing platforms on accelerated timelines to handle daily tasks such as chart summaries, structuring and parsing incoming data from other clinicians, appointment scheduling, and patient communications.
Meanwhile, governance moves at a very different speed.
Policies are still being written. Legal teams are interpreting new and often vague state guidance. Compliance leaders are trying to define when patients must be informed, which tools are acceptable, and who is ultimately responsible. In many organizations, technology has moved into production faster than the rules meant to govern it.
The tension between fast-moving implementation and slower-moving governance was the most consistent theme raised in focus groups with healthcare leaders and frontline operators at athenahealth’s 2025 Thrive customer conference. CEOs of Federally Qualified Health Centers (FQHCs), practice administrators, IT leaders, practice managers, and AI leads described different pressures, but the same underlying challenge: AI is already here, and the operating model hasn’t caught up.
What follows is a synthesis of the real barriers slowing AI adoption today and the accelerators helping organizations move forward responsibly.
Considerations and accelerators for AI adoption
AI adoption in healthcare is rarely a single decision or a linear rollout. Leaders navigating this transition must balance innovation with risk tolerance, operational capacity, and stakeholder confidence.
Insights from the Thrive focus groups suggest that progress often depends less on sweeping mandates and more on practical enablers, including incremental testing, credible internal advocacy, evolving governance, and clear accountability.
Staged adoption that respects risk and capacity
Organizations making progress tended to approach adoption incrementally. For leaders in Thrive focus groups, this meant piloting tools with a small subset, validating impact, and expanding only after confidence grew.
This approach allowed leaders to manage regulatory uncertainty, budget constraints, and internal skepticism without stopping progress entirely. Pilot programs created space to learn, adjust policies, and build internal alignment before scaling. They afforded practices and larger healthcare organizations the chance to test new AI tools in practice before they’re widely available.
athenahealth actively recruits practices of all sizes for its Alpha/Beta testing program through the Success Community portal. This opt-in process provides a low-risk environment where organizations can test emerging AI solutions without committing to full implementation. athenahealth intentionally selects diverse practice types and service tiers to help capture the full spectrum of healthcare organizations, not just early tech adopters.
Participating practices provide real-time feedback directly to development teams, actively shaping solutions that address their specific concerns. This collaborative approach can help transform skeptical stakeholders into informed partners who witness AI's practical impact while maintaining control over their adoption journey. Importantly, it also gives participating practices the chance to start small and build alongside vendors – which has served as an AI adoption unlock for representative members of Thrive focus groups.
Champions matter more than mandates
Joining pilot programs made it easier for leaders to select a small group of champions, assess the efficacy of the tools, gradually build comfort ahead of wider adoption, and provide feedback. That test-and-learn approach, from the early stages through general availability (GA), gave leaders the chance to shape each AI tool’s development for maximum ease and utility.
Across organizations represented in our Thrive focus groups, AI adoption accelerated when early users became advocates. Clinician champions, practice managers, or operational leaders who could speak credibly about real benefits helped shift sentiment far more effectively than top-down directives or formal training programs.
Importantly, champions emerged at all career stages. While participants agreed that younger practitioners tended to be somewhat more eager and comfortable with new AI tools, familiarity and enthusiasm ultimately tracked firsthand experience and perceived value rather than age.
The latter point is reflected in industry-wide trends. According to the 2026 Physician Sentiment Survey (PSS), 54 percent of all respondents feel comfortable using AI tools, an 8-percentage-point increase from 2025, including 65 percent of Millennials (up 15 points from 2025) and 49 percent of Boomers and Gen X (up 3 points from 2025).2
Increasing adoption and comfort with AI tools allows healthcare organizations to choose from a larger champion pool. This, in turn, can help maximize feedback so that all pain points are noted and addressed prior to practice-wide adoption.
Pragmatic policy development — not perfection
Organizations that moved forward did not wait for perfect regulatory clarity. Instead, they developed working policies that could evolve. That meant defining approved tools, acceptable use, and boundaries for patient data while acknowledging that guidance would change.
These policies were paired with education, helping staff understand not only what was allowed but why. This helped reduce anxiety, limit informal workarounds, and make expectations clearer across teams.
Clear ownership for vendor vetting and infosec review
According to focus group participants, AI adoption accelerated when organizations clarified who was responsible for evaluating AI vendors and managing third-party risk. Review processes moved more predictably when ownership was local and clearly defined.
Leaders emphasized that vetting AI vendors required deeper scrutiny than traditional software, particularly around data access, training practices, and breach response. Clear accountability helped teams move forward with greater confidence.
Early and ongoing IT involvement
IT teams were most effective when involved early rather than brought in at the end of decision-making. Early engagement allowed for more thorough back-end research, clearer security expectations, and better integration planning.
When IT had visibility into the broader AI strategy — rather than reacting to individual tool requests — resource strain was more manageable.
Acknowledging grassroots use instead of ignoring it
Several organizations described a shift in mindset: acknowledging that informal AI use was already happening and addressing it directly.
Rather than attempting to shut it down entirely, leaders brought use cases into the open, clarified what was and wasn’t acceptable, and offered approved tools that met real operational needs. This reduced unmanaged risk while supporting productivity.
Financial pressure as a catalyst, not just a constraint
AI tools are not free, though some (such as AI-native capabilities in athenaOne®) roll out automatically to users without upgrade fees. For many organizations, the constraint is not interest, but funding structure. Annual budgets, grant-dependent financing, and declining grant availability limit flexibility, particularly for FQHCs and safety-net practices. Leaders described having to weigh AI investments against staffing, access, and compliance needs, even when the operational case for adoption was strong.
While cost was a barrier, financial pressure also acted as an accelerator. These same financial challenges made AI adoption feel necessary rather than optional, as tools promising efficiency gains and reduced administrative burden offered a path to sustainability. For these leaders, technology was viewed as a way to maintain access and operations while reducing reliance on additional human capital, especially in high-turnover administrative roles.
However, this urgency reinforced the need for standardization in assessment prior to adoption. The most successful organizations balanced this tension by starting with targeted pilots that demonstrated clear ROI before scaling, allowing them to harness financial pressure as motivation while avoiding costly missteps from rushing.
Workforce anxiety and staffing pressure
Participants described concern among front-office and administrative staff about potential job loss as AI tools are introduced, particularly in areas like scheduling, call handling, and billing support. While most leaders did not believe AI would replace clinical roles, anxiety about reduced staffing was real and, in some cases, slowed adoption.
At the same time, several leaders — especially at FQHCs — framed AI adoption as necessary to operate with leaner teams. Shrinking budgets, declining grant funding, and margin pressure made it difficult to maintain current staffing levels. In these settings, AI was viewed less as a replacement for existing staff and more as a means to avoid backfilling high-turnover roles and to continue delivering care with fewer resources.
Organizational governance is a prerequisite for wider AI rollout
There is a moment every healthcare leader recognizes: the point at which innovation stops being optional and starts being operational.
AI has crossed that threshold.
What emerged most clearly from the conversations at Thrive is that AI adoption in healthcare is already underway. But adoption can happen unevenly, sometimes informally, and often ahead of governance structures. The organizations moving forward are working to establish governance by reducing friction where it predictably appears, clarifying responsibility, and creating room to learn as technology, regulation, and policy evolve.
For healthcare leaders, the work ahead is less about deciding whether to adopt AI and more about building operating models that can keep pace — responsibly — with how quickly AI is becoming part of everyday care.
More AI in healthcare resources
Continue exploring
1. 2026 Physician Sentiment Survey
2. Ibid.