Mental health AI research often fails to report informed consent

CO-EDP, VisionRI | Updated: 03-05-2025 18:25 IST | Created: 03-05-2025 18:25 IST

Artificial intelligence continues to transform mental health care, particularly through conversational agents and chatbots. But as these tools proliferate, a critical question persists: do the researchers behind these innovations consistently obtain ethical approval and informed consent when collecting data from vulnerable populations? A new scoping review published in AI & Society, titled “Ethical approval and informed consent in mental health research: a scoping review”, sheds light on how inconsistently these vital ethical practices are reported and raises questions about global standards in human-subject research involving AI technologies.

Conducted by Leona Cilar Budler and Gregor Stiglic of the University of Maribor, the study analyzed 27 peer-reviewed articles on mental health research involving AI-driven chatbots. The findings reveal a concerning discrepancy in ethical reporting, with nearly half of the studies omitting explicit declarations of ethical approval or informed consent, two foundational pillars of ethical research involving human participants.

Are AI chatbot studies respecting ethical standards?

The review applied the PRISMA-ScR methodology and covered studies from databases including PubMed, PsycARTICLES, and Web of Science. Only 13 of the 27 studies reported obtaining ethical approval from Institutional Review Boards (IRBs), while 16 confirmed collecting informed consent. Shockingly, several studies mentioned neither, despite involving large sample sizes or collecting sensitive data.
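For readers who want to check the arithmetic behind the "nearly half" figure, the sketch below works purely from the counts quoted in the review (27 studies, 13 reporting approval, 16 reporting consent); the variable names are illustrative and not taken from the paper itself.

```python
# Illustrative tally based only on the counts reported in the scoping review.
total_studies = 27
reported_irb_approval = 13      # studies explicitly declaring IRB approval
reported_informed_consent = 16  # studies explicitly declaring informed consent

# Studies omitting each declaration, implied by the reported counts.
omitted_approval = total_studies - reported_irb_approval     # 14 studies
omitted_consent = total_studies - reported_informed_consent  # 11 studies

print(f"No approval statement: {omitted_approval}/{total_studies} "
      f"({omitted_approval / total_studies:.0%})")   # ~52%
print(f"No consent statement:  {omitted_consent}/{total_studies} "
      f"({omitted_consent / total_studies:.0%})")    # ~41%
```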

The researchers reached out to the authors of 18 studies with unclear or missing ethical disclosures. Only three responded, retrospectively clarifying that they had indeed followed ethical procedures. The sparse response rate underscores the difficulty of post-publication verification and the risk of relying on implied compliance.

An analysis of the publication outlets showed that studies appearing in journals with lower impact factors reported ethical approval more consistently than those in higher-impact venues. While this pattern may appear counterintuitive, it hints at a possible assumption among high-tier journals that established authors and institutions adhere to best practices, even when such compliance is not explicitly reported.

Moreover, studies with smaller sample sizes, particularly those published in journals without an impact factor, frequently lacked ethical disclosures. Of five studies with fewer than 20 participants, none reported receiving IRB approval. This raises concerns that researchers may be underestimating the ethical implications of small-scale studies, despite dealing with sensitive mental health data.

What explains the gaps in ethical transparency?

The review found multiple structural and systemic factors contributing to the gaps in ethical reporting. Journal policies were one prominent variable. Some journals explicitly require statements confirming IRB approval and informed consent, while others provide vague or optional guidance. For instance, journals like JMIR Mental Health mandate such disclosures irrespective of study type, whereas high-impact journals such as Scientific Reports or PLOS ONE often only request this information for clinical trials.

These inconsistencies mirror broader challenges in international research regulation. Ethical guidelines vary considerably across countries. While the European Union's General Data Protection Regulation (GDPR) and the U.S. Common Rule provide stringent ethical frameworks, many low- and middle-income countries have evolving or decentralized policies. In such jurisdictions, verbal or community-based consent may be the norm, leading to potential underreporting in international publications that use Western ethical benchmarks as reference.

Study design also influenced ethical transparency. Experimental and interventional studies involving chatbots in therapeutic settings tended to report both ethical approval and informed consent. Conversely, cross-sectional and observational studies, particularly those using anonymized datasets, often omitted these declarations, possibly due to exemptions granted by local IRBs or assumptions of minimal risk.

The authors highlight that informed consent and ethical approval are distinct ethical responsibilities. The former ensures participant autonomy, while the latter evaluates the study’s risk-benefit ratio and regulatory compliance. Treating them interchangeably, or omitting one, can compromise the ethical clarity of the research. In nine studies that reported IRB approval, informed consent was still not mentioned, indicating a problematic disconnect in ethical reporting.

How can mental health research keep pace with AI and ethics?

The authors call for the creation of a global ethical framework tailored to AI-powered mental health research. Drawing from existing models like GDPR and HIPAA, such a framework should mandate explicit ethical declarations, ensure participant data privacy, and demand algorithmic explainability, especially when research involves vulnerable populations like individuals with mental illness.

Recommendations include stricter journal policies, standardized ethical reporting templates, mandatory researcher training on human-subject ethics, and the development of interdisciplinary ethical review boards with expertise in both AI and mental health. Journals must play a central role by enforcing ethical declaration requirements during submission and peer review. Furthermore, editorial oversight should not assume ethical compliance based on researcher affiliation or reputation.

The study also suggests that future research should explore why ethical practices are more rigorously reported in lower-impact journals and whether this stems from editorial policy differences or researcher self-selection. Another avenue is the examination of how local cultural norms, such as community versus individual consent, affect ethical disclosures in multinational studies.

FIRST PUBLISHED IN: Devdiscourse