Primary care AI adoption stalls due to deep structural gaps
In primary healthcare, where most care actually happens, the adoption of artificial intelligence is still slow and uneven. A new international study finds that the limiting factor is not the technology itself but deep structural gaps in governance, financing, workforce capacity, and service organization that prevent AI from moving beyond pilot projects into routine care.
The study, titled Integrating Artificial Intelligence (AI) in Primary Health Care (PHC) Systems: A Framework-Guided Comparative Qualitative Study, was published in the journal Healthcare. The research compares AI readiness in the primary healthcare systems of Quebec, Canada, and Iran, revealing how systemic conditions shape whether AI becomes an asset or a stalled ambition.
Governance and financing gaps block AI scale-up
The study identifies stewardship and financing as the most decisive determinants of AI readiness in primary healthcare. In both Quebec and Iran, fragmented governance structures were found to undermine coordination, accountability, and long-term planning for AI deployment.
In Quebec, a high-income system with relatively advanced digital health infrastructure, governance challenges stem from complexity rather than absence. Multiple authorities oversee health policy, digital health strategy, professional regulation, and data governance. This fragmentation creates uncertainty over who is responsible for approving, funding, monitoring, and scaling AI applications in primary care. As a result, many initiatives remain confined to small pilots without clear pathways to system-wide adoption.
Participants in Quebec also highlighted the lack of stable financing models for AI-enabled services. While innovation grants and research funding support experimentation, there are limited reimbursement mechanisms for AI-supported clinical activities. Without incentives aligned to primary care delivery, providers face little motivation to adopt tools that may increase workload or disrupt established workflows. The study shows that even in resource-rich settings, unclear financial incentives can stall AI integration.
In Iran, governance challenges are more foundational. The study finds no unified national strategy for AI in primary healthcare, with responsibility scattered across ministries, regulatory bodies, and regional authorities. Managerial instability and frequent leadership changes further weaken continuity and long-term planning. Regulatory frameworks are often outdated or misaligned with digital health realities, creating legal uncertainty for AI developers and healthcare providers.
Financing constraints in Iran are even more pronounced. Limited public funding, weak insurance incentives, and competing health system priorities restrict investment in AI infrastructure. Without clear economic justification and reimbursement pathways, AI is perceived as an optional add-on rather than a core component of primary care reform.
Across both contexts, the study concludes that governance and financing failures are not secondary issues but structural barriers. AI readiness, the authors argue, begins with institutional clarity and economic alignment, not software acquisition.
Data systems and workforce capacity define readiness
Resource generation, particularly data infrastructure and human capital, is a shared bottleneck across health systems at different income levels.
In Quebec, digital health records are widespread, but data fragmentation remains a serious obstacle. Primary care data are often siloed across clinics, hospitals, and regional systems, limiting interoperability and reducing the quality of data available for AI training and deployment. Participants expressed concern that inconsistent data standards and variable data quality undermine the reliability of AI tools, particularly in clinical decision support.
Cybersecurity and data privacy concerns also emerged as major issues. Integrating AI into primary care raises fears about data breaches, misuse of sensitive patient information, and erosion of trust. Without robust safeguards and transparent data governance, clinicians and patients remain cautious, slowing adoption.
Workforce readiness presents a parallel challenge. In Quebec, clinicians often lack training in AI literacy, leading to uncertainty about how tools function, how outputs should be interpreted, and where responsibility lies when AI-supported decisions influence care. This knowledge gap fuels skepticism and resistance, especially when AI systems are perceived as black boxes rather than supportive tools.
In Iran, data and workforce challenges are more severe. Many primary care facilities lack comprehensive digital records, and where data exist, they are often incomplete, unstandardized, or inaccessible. Limited broadband access and uneven digital infrastructure further constrain AI feasibility, particularly in rural and underserved areas.
Human resource shortages compound the problem. The study finds a lack of professionals trained at the intersection of healthcare, data science, and AI. Without investment in education and capacity building, AI initiatives depend heavily on external vendors or short-term projects, reducing sustainability and local ownership.
The authors emphasize that AI readiness depends on the slow, cumulative work of building data ecosystems and human expertise. Technology cannot compensate for weak foundations, and premature deployment risks reinforcing inefficiencies rather than solving them.
Service delivery risks and unequal outcomes
At the service delivery level, the study reveals how misaligned AI integration can produce unintended consequences, including increased workload, reduced care quality, and widening inequities.
In Quebec, clinicians expressed concern that AI tools introduced without workflow redesign may add administrative tasks rather than reduce them. Poor integration with existing systems forces providers to navigate multiple platforms, increasing cognitive burden and time pressure. There is also apprehension that overreliance on AI could weaken clinical judgment or disrupt the patient–provider relationship, particularly in primary care settings where trust and continuity are central.
Ethical concerns loom large. Participants questioned how accountability should be assigned when AI systems influence diagnoses or care pathways. Without clear guidelines, clinicians fear liability risks and professional exposure, further discouraging adoption.
In Iran, service delivery challenges are closely tied to access and equity. AI tools developed for urban, well-equipped clinics often fail to translate to rural or low-resource settings. The study warns that uneven AI deployment could exacerbate existing disparities, concentrating benefits in already advantaged areas while leaving vulnerable populations behind.
Across both systems, the study finds that AI can only improve service delivery once upstream issues in governance, financing, infrastructure, and workforce capacity are addressed. Introducing AI into fragile systems risks amplifying dysfunction rather than correcting it.
FIRST PUBLISHED IN: Devdiscourse

