AI in pharmacy requires human oversight and ethical safeguards
A new study sheds light on how regulators in Canada and the United States are navigating the challenges of integrating artificial intelligence (AI) into pharmacy practice. The research, titled “Responsible Adoption of Artificial Intelligence (AI) in Pharmacy Practice: Perspectives of Regulators in Canada and the United States” and published in Pharmacy, reveals that regulators prefer a principle-based framework over rigid rulemaking to ensure AI technologies in healthcare remain ethical, transparent, and safe.
The findings underscore a critical juncture in the evolution of pharmacy practice, as AI tools, from automated dispensing systems to predictive analytics, become more prevalent in clinical decision-making. Regulators, the study finds, are grappling with how to oversee a technology that is rapidly reshaping the pharmacist’s role without compromising patient safety, professional accountability, or trust in healthcare systems.
Balancing innovation and accountability in AI-driven pharmacy
The study is based on semi-structured interviews with 12 pharmacy regulators across both countries, uncovering a consensus that traditional regulatory structures are ill-suited to govern emerging AI systems. Unlike drugs or medical devices, AI applications evolve continuously, making fixed regulatory frameworks difficult to enforce.
Participants highlighted a key distinction between human-in-the-loop (HiL) and human-out-of-the-loop (HoL) AI systems. In HiL models, pharmacists retain ultimate decision-making authority, allowing existing professional standards to remain applicable. In contrast, HoL systems, where AI operates autonomously, fall outside traditional oversight mechanisms, creating what the authors describe as a “regulatory gray zone.”
The concern, according to the study, is that unchecked use of autonomous AI could blur lines of accountability in clinical settings. If an AI-driven dispensing system makes an error, it is unclear whether responsibility lies with the developer, the pharmacy operator, or the pharmacist. This lack of clarity, regulators warned, could erode public confidence in healthcare technologies if not addressed through proactive governance.
Despite these challenges, most regulators interviewed rejected the notion of imposing strict rules or licensing requirements for AI technologies. Instead, they advocated for a guidance-based approach grounded in professional ethics and adaptive principles. This approach, they argued, would enable innovation while ensuring that human oversight, transparency, and safety remain central to AI integration in healthcare.
Ethical frameworks over formal regulation
The research reveals that regulators overwhelmingly support principle-driven guidance as a more flexible and future-proof alternative to traditional regulation. Participants identified several foundational principles essential for responsible AI use in pharmacy: transparency, redundancy, audit and feedback, quality assurance, data privacy, ethical alignment, and interoperability.
Transparency was viewed as the cornerstone of trust between patients and healthcare providers. Regulators stressed that patients should be made aware when AI is used in decision-making processes, even if explicit consent is not required for every digital intervention. This approach reflects a balance between ethical ideals and the operational realities of modern healthcare, where AI assists in routine tasks like drug-interaction checks and prescription validation.
Redundancy, meanwhile, ensures continuity of care in case of system failure. Regulators emphasized the need for fallback mechanisms that maintain patient safety when AI systems malfunction or data networks go offline. Audit and feedback loops were highlighted as critical tools for continuous performance monitoring, allowing pharmacists and developers to identify errors, improve algorithms, and strengthen system reliability.
The study also draws attention to data governance and privacy, which remain major concerns for both Canadian and U.S. regulators. The potential for data breaches, algorithmic bias, and vendor monopolization poses long-term risks to public trust. By calling for interoperability standards, regulators aim to prevent dependence on single proprietary systems that may limit adaptability or compromise patient confidentiality.
Closing the regulatory gap before it widens
The study acknowledges that current regulations lag behind the pace of AI innovation. While most regulators agree that formal policies will eventually be necessary, the majority prefer to start with non-binding guidance and evolve as technologies mature.
The authors warn that waiting too long to establish oversight mechanisms could replicate the “social media problem”—where technologies outpace ethical and legal frameworks, leading to unanticipated harms. By adopting early, principle-based guidance, regulators hope to mitigate risks before AI becomes too deeply embedded in pharmacy operations.
However, the study also exposes tensions between ethical ideals and operational feasibility. Some participants argued that requiring patient consent for every AI-driven decision would be impractical and could slow down healthcare delivery. Others maintained that transparency should remain absolute, even if it complicates workflows. These differing perspectives highlight the ongoing struggle to balance innovation with moral responsibility.
The authors note that effective AI governance will require collaboration between regulators, practitioners, policymakers, and developers. They advocate for ongoing dialogue to establish shared standards for auditing, risk assessment, and professional accountability. This collaborative model would allow regulatory guidance to evolve alongside technological progress, ensuring that oversight remains both relevant and enforceable.
The authors also call for public engagement in shaping AI governance. Patients, they argue, must understand how AI influences their care and have opportunities to express concerns or preferences. Building public literacy around healthcare AI is seen as vital to maintaining trust in systems that increasingly rely on algorithmic decision support.
FIRST PUBLISHED IN: Devdiscourse

