AI in healthcare: ChatGPT’s potential, pitfalls, and future
Artificial Intelligence (AI) is revolutionizing healthcare, improving diagnostics, streamlining administrative tasks, and enhancing patient interaction. One of the most impactful AI tools in medicine today is ChatGPT, a large language model (LLM) that generates human-like responses by drawing on the vast text corpora, including medical literature, on which it was trained.
A recent study titled "Benefits, Limits, and Risks of ChatGPT in Medicine", authored by Jonathan A. Tangsrivimol, Erfan Darzidehkalani, Hafeez Ul Hassan Virk, Zhen Wang, Jan Egger, Michelle Wang, Sean Hacking, Benjamin S. Glicksberg, Markus Strauss, and Chayakrit Krittanawong, published in Frontiers in Artificial Intelligence, explores ChatGPT’s applications in medical education, patient care, clinical decision-making, and research. The study highlights its efficiency in reducing administrative burdens, improving accessibility to medical knowledge, and enhancing remote patient monitoring, while also addressing risks related to misinformation, ethical concerns, and AI-generated biases.
ChatGPT in medical education and information retrieval
ChatGPT has emerged as a valuable educational tool, helping medical professionals and students alike by simplifying complex medical concepts, assisting with board exam preparation, and summarizing medical literature on demand. The study reports that ChatGPT achieved 60.2% accuracy on the USMLE (United States Medical Licensing Examination) and 78.2% on PubMedQA, demonstrating genuine competence on standardized medical question-answering benchmarks.
Beyond exams, ChatGPT aids in medical training by generating customized study materials, quizzes, and research summaries, making learning more interactive and accessible. However, despite these advantages, the study warns that ChatGPT lacks real-world clinical experience, which limits its ability to fully replace traditional education. Furthermore, its knowledge is restricted to the data it has been trained on, meaning it cannot interpret real-time patient cases or provide personalized medical advice without human oversight.
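To make the study-material use case concrete, here is a minimal sketch of how a quiz generator might be wired up with the OpenAI Python SDK. The model name, prompt wording, and make_quiz helper are illustrative assumptions, not anything described in the study.

```python
# Hypothetical sketch: generating board-exam practice questions with an LLM.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def make_quiz(topic: str, n_questions: int = 5) -> str:
    """Ask the model for multiple-choice questions on a medical topic."""
    prompt = (
        f"Write {n_questions} USMLE-style multiple-choice questions on {topic}. "
        "For each, give options A-D, the correct answer, and a one-line rationale."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(make_quiz("acid-base disorders"))
```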
ChatGPT in patient triage and symptom assessment
AI-driven triage systems powered by ChatGPT have been designed to assess symptoms, suggest potential diagnoses, and recommend whether medical attention is necessary. This automation has significantly reduced administrative burdens for healthcare providers, allowing professionals to focus on patient care rather than paperwork. ChatGPT’s ability to collect patient histories and organize initial assessments makes it a useful support tool, particularly in emergency care and virtual consultations.
However, the study highlights that while ChatGPT performs well in structured cases, it is susceptible to errors when dealing with ambiguous symptoms or atypical cases. It lacks clinical intuition and the ability to read physical cues, which are crucial in real-world medical assessments. As a result, the study strongly recommends that ChatGPT be used only as a supplementary triage assistant, with human physicians making the final call on diagnoses and treatment plans.
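As a rough illustration of the "supplementary assistant" pattern the study recommends, the sketch below asks a model for a structured draft assessment and marks every result as pending clinician review. The JSON schema, urgency labels, and draft_triage helper are hypothetical.

```python
# Minimal sketch of a supplementary triage flow: the model drafts a structured
# assessment, but a clinician must sign off before anything is acted on.
# The JSON keys and urgency labels below are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()

def draft_triage(symptoms: str) -> dict:
    """Ask the model for a draft assessment as JSON; never a final diagnosis."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "You are a triage intake assistant. Given these symptoms, return JSON "
                'with keys "summary", "possible_causes" (list), and "urgency" '
                '(one of "routine", "urgent", "emergency"): ' + symptoms
            ),
        }],
    )
    draft = json.loads(response.choices[0].message.content)
    draft["status"] = "pending_clinician_review"  # a human makes the final call
    return draft

print(draft_triage("chest tightness for two hours, shortness of breath"))
```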
Remote monitoring and chronic disease management
ChatGPT has shown potential in telemedicine and remote patient monitoring, particularly in managing chronic conditions such as diabetes, hypertension, and post-surgical recovery. Patients can use ChatGPT for symptom tracking, medication reminders, and receiving general health advice, which helps bridge the gap between medical visits. The study highlights successful applications of AI-driven health coaching, where ChatGPT-assisted programs have contributed to weight loss and improved post-operative care.
However, significant limitations remain, particularly ChatGPT’s inability to integrate real-time patient data from wearables and its lack of visual processing capabilities. This restricts its effectiveness in fields such as radiology, dermatology, and pathology, where image-based assessments are crucial. To improve its role in patient monitoring, the study suggests integrating ChatGPT with wearable health devices and developing AI models that can analyze multimodal data, including images and sensor readings.
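One plausible way to bridge that gap, pending true multimodal integration, is to preprocess sensor readings into text before prompting the model. The sketch below does this for continuous glucose monitor data; the reading format, thresholds, and summarize_glucose helper are illustrative assumptions rather than an approach taken from the study.

```python
# Sketch of bridging wearable data to a text-only model: numeric sensor
# readings are summarized into plain language before prompting.
# The thresholds below are assumed cutoffs for illustration only.
from statistics import mean

def summarize_glucose(readings_mg_dl: list[float]) -> str:
    """Turn raw glucose readings into a short textual summary an LLM can reason over."""
    avg = mean(readings_mg_dl)
    highs = sum(r > 180 for r in readings_mg_dl)  # assumed hyperglycemia cutoff
    lows = sum(r < 70 for r in readings_mg_dl)    # assumed hypoglycemia cutoff
    return (
        f"Past 24h: {len(readings_mg_dl)} glucose readings, mean {avg:.0f} mg/dL, "
        f"{highs} above 180 mg/dL, {lows} below 70 mg/dL."
    )

summary = summarize_glucose([95, 110, 150, 210, 188, 65, 102, 120])
print(summary)  # this summary would then be included in the chat prompt
```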
Mental healthcare assistance: A new role for AI
Mental health support is an area where ChatGPT has been explored as a virtual assistant, offering coping strategies, answering mental health queries, and providing general psychological guidance. The study finds that ChatGPT can simulate empathetic conversations and assist individuals dealing with anxiety, stress, and mild depression by suggesting relaxation techniques and self-help resources. However, the research warns that ChatGPT is not a substitute for licensed therapists or mental health professionals, as it lacks emotional intelligence, personalized therapeutic strategies, and the ability to detect suicidal ideation or severe psychiatric conditions.
In some cases, AI-generated responses may even provide incomplete or misleading psychological advice, which could be harmful. The study recommends that ChatGPT be used as a complementary tool for mental health awareness but not as a primary source of mental healthcare. Future improvements should focus on incorporating advanced sentiment analysis and AI-driven emotional recognition to enhance ChatGPT’s ability to provide meaningful and safe mental health support.
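A minimal sketch of what such a safeguard might look like is shown below: a crude pre-filter that escalates potential crisis messages to a human before any AI reply is drafted. The CRISIS_TERMS list and route_message helper are hypothetical, and a keyword match is far weaker than the sentiment analysis the study calls for.

```python
# Sketch of a safety gate a mental-health chatbot might run before letting the
# model respond. The keyword list is an illustrative assumption and is far
# too crude for real clinical use.
CRISIS_TERMS = ("suicide", "kill myself", "end my life", "self-harm")

def route_message(user_message: str) -> str:
    """Escalate potential crises; otherwise allow an AI-drafted supportive reply."""
    lowered = user_message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return "ESCALATE: connect the user to a human counselor or crisis line."
    return "OK: model may draft a supportive, non-clinical response."

print(route_message("I've been stressed about exams lately"))
print(route_message("Sometimes I think about ending my life"))
```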
ChatGPT in medical research and decision support
ChatGPT is proving to be an indispensable tool in medical research, assisting scientists and healthcare professionals in literature reviews, hypothesis generation, and clinical data analysis. The study highlights that ChatGPT streamlines research workflows by quickly summarizing medical studies, extracting key insights, and assisting in clinical trial design. However, a critical issue raised in the study is the phenomenon of "artificial hallucination," where ChatGPT generates plausible but incorrect information or fabricates references. This has raised concerns about the reliability of AI-assisted research.
To mitigate these risks, the study suggests that AI-generated research outputs must always be cross-verified by human experts, and AI systems should be trained to cite legitimate sources and recognize knowledge gaps rather than providing speculative responses. Despite these concerns, ChatGPT remains a valuable tool for accelerating the research process, particularly in areas where rapid data synthesis is required.
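One concrete form such cross-verification can take is checking that every DOI the model cites actually resolves in a bibliographic database. The sketch below queries the public CrossRef REST API; the doi_exists helper is an assumption, and the second sample DOI is deliberately fake to show how a fabricated reference surfaces.

```python
# Sketch of one cross-verification step: confirming that a cited DOI resolves
# in CrossRef before trusting the reference. Real pipelines would also compare
# titles and author lists, not just existence.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if CrossRef has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

samples = [
    "10.1038/nature14539",          # a real DOI (LeCun et al., Nature 2015)
    "10.9999/fabricated.2024.001",  # deliberately fake, mimicking a hallucination
]
for doi in samples:
    print(doi, "->", "found" if doi_exists(doi) else "not found (possible hallucination)")
```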
ChatGPT’s role in language translation and multilingual healthcare
Language barriers are a significant challenge in global healthcare, and ChatGPT has demonstrated strong real-time translation capabilities, particularly in commonly spoken languages. The study reports that ChatGPT performs better than traditional translation tools like Google Translate when interpreting complex medical terminology. This makes it a valuable resource for doctors treating non-native speakers and in international medical collaborations.
However, challenges remain when translating medical jargon into low-resource languages, where ChatGPT often produces fluent but imprecise translations. To enhance its capabilities, the study suggests training ChatGPT on specialized multilingual medical datasets and developing domain-specific translation models to improve accuracy in clinical settings.
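As a sketch of how domain-specific behavior can be encouraged through prompting alone, short of the retraining the study proposes, the example below instructs the model to preserve clinical terminology during translation. The translate_clinical helper and prompt wording are assumptions, not the study's method.

```python
# Illustrative sketch of a medical-translation prompt; the instruction to
# preserve clinical terminology is an assumed good practice, not a documented
# feature of any model.
from openai import OpenAI

client = OpenAI()

def translate_clinical(text: str, target_language: str) -> str:
    """Translate patient-facing text while keeping clinical terms precise."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{
            "role": "user",
            "content": (
                f"Translate the following into {target_language} for a patient. "
                "Keep medical terminology precise; add a plain-language gloss in "
                "parentheses after any technical term: " + text
            ),
        }],
    )
    return response.choices[0].message.content

print(translate_clinical("Take 500 mg amoxicillin orally three times daily.", "Spanish"))
```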
Ethical and regulatory concerns: The challenges ahead
The widespread use of AI in medicine raises significant ethical, legal, and regulatory concerns. The study highlights the risk of AI bias, where ChatGPT may produce inaccurate or skewed medical advice based on incomplete or biased training data. Additionally, data privacy and security remain major concerns, as AI models must comply with HIPAA (in the U.S.), GDPR (in Europe), and other data protection regulations to ensure patient confidentiality. Another challenge is accountability: who is responsible if AI-generated medical advice leads to a negative outcome? The study suggests that AI should be used strictly as a decision-support tool, with human oversight remaining essential in all medical applications.
Future directions: Where does ChatGPT in medicine go from here?
The study envisions ChatGPT evolving into a more reliable medical assistant with improvements in accuracy, multimodal data integration, and ethical compliance. Some of the key areas for future AI development include enhancing AI’s ability to process visual data for diagnostic fields like radiology, improving emotional intelligence for mental healthcare applications, and refining AI-generated research outputs to minimize misinformation risks. Additionally, creating clear regulatory frameworks will be essential to ensure safe and ethical AI deployment in healthcare settings.
First published in: Devdiscourse