Medical education faces integrity and competency questions as AI adoption accelerates

CO-EDP, VisionRI | Updated: 03-01-2026 17:43 IST | Created: 03-01-2026 17:43 IST

With the rapid adoption of artificial intelligence (AI), educators and policymakers are struggling to understand whether these tools are improving learning outcomes or quietly introducing new risks related to overreliance, ethics, and professional competence.

In a study titled ChatGPT in Health Professions Education: Findings and Implications from a Cross-Sectional Study Among Students in Saudi Arabia, published in International Medical Education in late 2025, researchers assess how healthcare students are actually using ChatGPT. Based on survey data from more than one thousand undergraduate students across Saudi Arabia, the research offers a detailed picture of adoption patterns, perceived benefits, and growing concerns surrounding generative AI in health professions education.

ChatGPT adoption becomes widespread across healthcare programs

The study shows that ChatGPT usage among healthcare students in Saudi Arabia is no longer experimental or marginal. Nearly seven out of ten surveyed students reported prior use of the tool, indicating that generative AI has become a mainstream study aid across medicine, pharmacy, nursing, dentistry, and allied health disciplines. Adoption was observed across all years of study, with higher usage rates among students in more advanced academic stages, reflecting increasing academic pressure and more complex learning demands.

Students reported using ChatGPT most frequently for summarizing academic articles, preparing assignments, studying for examinations, and generating study materials. These uses align closely with the structural challenges of health professions education, where students must process large volumes of technical information under tight time constraints. Many respondents indicated that ChatGPT helped them manage workloads more efficiently, allowing them to focus on understanding core concepts rather than spending excessive time on information retrieval.

Time efficiency emerged as the single strongest driver of adoption, followed by convenience and perceived accuracy of responses. In an education system where curricula are dense and assessment stakes are high, these attributes make AI tools particularly attractive. The study also found that ChatGPT use was consistent across different healthcare programs, suggesting that its appeal is not discipline-specific but tied to broader features of professional education.

Importantly, the research highlights that ChatGPT is not primarily being used as a shortcut to avoid learning. Many students described using the tool to clarify difficult concepts, organize study plans, and rehearse clinical reasoning in a low-pressure environment. These findings challenge simplistic narratives that frame AI use in education solely as a threat to learning integrity.

Learning gains coexist with growing ethical and cognitive concerns

While the perceived benefits of ChatGPT are substantial, the study makes clear that they come with significant trade-offs. A majority of students reported that using ChatGPT improved learning efficiency and reduced study-related stress. Many also believed the tool helped enhance critical thinking by presenting information in structured, digestible formats. Communication skills and clinical preparation were cited as additional areas where ChatGPT provided support, particularly for students studying in English as a second language.

Concerns about overreliance were widespread. More than half of respondents expressed worry that frequent use of ChatGPT could weaken independent thinking and analytical skills. In health professions, where clinical judgment and decision-making are central competencies, this concern carries particular weight. Students appeared aware that while AI can assist learning, excessive dependence may undermine the development of professional autonomy.

Academic integrity was another major issue identified in the study. Many students acknowledged that ChatGPT could be misused for plagiarism or inappropriate assistance with assignments. This concern was not limited to hypothetical scenarios; respondents recognized that existing assessment structures are often poorly equipped to distinguish between acceptable AI-supported learning and unethical academic behavior.

Additional challenges reported by students included subscription costs, technical limitations, and difficulties in formulating effective prompts. These barriers highlight a digital divide even within technologically advanced education systems, where access to premium AI features may influence learning experiences. Students also noted frustration with occasional inaccuracies or superficial explanations, reinforcing the need for human oversight and critical evaluation of AI-generated content.

The study further reveals a gap between student use and institutional engagement. Most students reported rarely discussing ChatGPT with instructors, and many believed that current curriculum coverage of AI tools was low or insufficient. This disconnect suggests that AI adoption is being driven largely by students themselves, rather than guided by formal educational strategies.

Calls grow for structured integration and AI literacy

The study finds strong student support for integrating ChatGPT into health professions education in a more structured and transparent way. A majority of respondents believed that AI tools should be formally addressed in curricula and rated the need for training on appropriate use as moderate to high. Students did not call for unrestricted use, but for clear guidelines that balance innovation with ethical responsibility.

This demand reflects a broader shift in professional education. As AI becomes embedded in healthcare practice, from diagnostics to decision support, students increasingly view familiarity with AI tools as a core professional skill rather than an optional supplement. Ignoring these tools, the study suggests, risks leaving future healthcare professionals unprepared for technology-rich clinical environments.

The research also stresses that responsible integration is essential. Clear institutional policies are needed to define acceptable use, protect academic integrity, and ensure that AI supports rather than replaces critical learning processes. Faculty oversight plays a central role in this process, not only by setting boundaries but by modeling effective and ethical AI use.

The study also highlights the importance of AI literacy. Beyond basic operational skills, students need to understand the limitations, biases, and uncertainties inherent in generative AI systems. Training in prompt design, verification of outputs, and ethical decision-making can help mitigate risks associated with overreliance and misinformation.

From a policy point of view, the findings align closely with Saudi Arabia’s broader digital transformation agenda, which emphasizes AI development in both healthcare and education. The study suggests that health professions education represents a critical testing ground for how generative AI can be integrated responsibly into high-stakes professional training.

FIRST PUBLISHED IN: Devdiscourse