Adolescents want privacy, not just accuracy, from health AI tools
A newly published study sheds light on how adolescents perceive artificial intelligence in healthcare, offering vital insight into how the next generation navigates the promises and pitfalls of AI-driven health tools. Published in the Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems under the title “Understanding Adolescents’ Perceptions of Benefits and Risks in Health AI Technologies through Design Fiction,” the research explores a demographic often overlooked in discussions about medical technology: teenagers.
Researchers from the University of California, Irvine, employed design fiction, engaging 16 adolescents aged 13 to 17 with speculative scenarios involving AI-powered health technologies. These fictional vignettes covered both clinical and personal health contexts - from ambient scribes recording medical appointments to chatbot therapists simulating human empathy. Through these interactive interviews, the study revealed nuanced opinions shaped by trust, privacy concerns, perceived benefits, and adolescent-specific expectations.
What are the perceived benefits and concerns adolescents have about health AI?
Adolescents expressed both enthusiasm and apprehension when evaluating the clinical and personal applications of health AI. In scenarios involving AI scribes like “ScribeBot,” teens welcomed the idea of AI streamlining doctor-patient communication by automatically transcribing consultations, freeing doctors to focus on the patient. Similarly, risk-prediction models like “HealthRisk AI” were appreciated for helping adolescents understand potential health outcomes and take preventive action, especially when family health history was involved.
Personal health tools such as “Wellbeing AI,” which track fitness and offer personalized health advice, were seen as empowering for establishing healthy habits early in life. Chatbots like “MedHealthGPT” were praised for helping manage minor symptoms and offering accessible guidance, particularly in situations where teens felt uncomfortable seeking human support.
However, these perceived advantages were tempered by significant concerns. Participants worried about AI misinterpreting medical conversations, especially when dealing with sensitive topics like mental health or substance use. Several feared that automated transcription tools might relay private details to parents without consent, raising questions about confidentiality. The idea of AI systems using video to record emotional or physical cues heightened discomfort, with teens describing it as excessive and invasive.
In AI chatbot scenarios, skepticism mounted around AI’s capacity to understand the emotional nuance of mental health conditions. Adolescents doubted the ability of machine-learning tools to accurately evaluate stress or diagnose emotional struggles, often preferring human therapists despite recognizing AI’s convenience. Prior negative experiences with AI platforms like ChatGPT contributed to a broader distrust of machine-generated health advice.
How do teens define trust and privacy in the context of health AI?
The study identifies a complex interplay of factors that govern adolescent trust in health AI, extending far beyond the system's technical performance. While many participants were open to using AI for general wellness and minor medical issues, trust declined significantly in cases involving complex diagnoses or deeply personal matters.
Teens emphasized the importance of human involvement in both clinical and personal AI systems - not just for oversight, but for emotional reassurance. Many viewed AI as a useful partner in a triadic collaboration among doctors, patients, and algorithms. Yet this partnership was only acceptable if it preserved human judgment and relational dynamics.
Concerns about data privacy were deeply situational. Teens differentiated between types of health data, labeling nutrition and step count information as “shallow” and not worthy of protection, while categorizing mental health disclosures and facial recordings as highly sensitive. The fear wasn’t limited to public data breaches but extended to internal risks, specifically whether AI systems might share information with parents without adolescent consent. This anxiety about secondary disclosure created an invisible barrier to honesty during medical consultations.
Interestingly, while many teens acknowledged these risks, they were willing to accept some level of data exposure in exchange for meaningful benefits such as receiving timely intervention or more accurate health advice. This utilitarian approach indicates a willingness to negotiate privacy depending on the perceived payoff. However, participants often lacked a clear understanding of how data could be misused, revealing an urgent need for improved AI and data literacy among adolescents.
What design principles should shape the future of health AI for youth?
The study’s findings underscore the necessity of youth-centered design in the future of health AI. Adolescents are not merely passive users but informed stakeholders capable of nuanced evaluations. They want technologies that are educational, participatory, and adaptive to their emotional and developmental needs.
Researchers advocate for incorporating learning features into health AI systems to support adolescents' critical thinking and reflection. These tools should not only deliver outputs but explain them in age-appropriate language, offering contextual insight that promotes health literacy. AI-driven platforms could scaffold learning in a way similar to educational chatbots, enabling teens to understand not just what to do, but why they are doing it.
Emphasizing the human-in-the-loop model, the study highlights the importance of preserving clinician presence, especially in emotionally charged or complex cases. Adolescents want reassurance, and AI is seen as a tool that enhances rather than replaces the human relationship.
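To make that human-in-the-loop principle concrete, the sketch below shows one way an escalation policy could be expressed in code. It is purely illustrative: the topic labels, confidence threshold, and function name are this article's assumptions, not details from the study.

```python
# Hypothetical escalation policy: topics, threshold, and return values
# are illustrative assumptions, not taken from the study.
SENSITIVE_TOPICS = {"mental_health", "substance_use", "complex_diagnosis"}

def route_query(topic: str, model_confidence: float) -> str:
    """Decide whether the AI answers directly or defers to a clinician.

    Sensitive or low-confidence cases are escalated, keeping a human
    in the loop for exactly the situations teens said they distrust AI.
    """
    if topic in SENSITIVE_TOPICS or model_confidence < 0.8:
        return "escalate_to_clinician"
    return "ai_answers_with_clinician_review"

# A minor-symptom question with high confidence stays with the AI;
# a mental-health question is always routed to a human.
assert route_query("minor_symptom", 0.95) == "ai_answers_with_clinician_review"
assert route_query("mental_health", 0.95) == "escalate_to_clinician"
```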
Privacy protections must evolve to reflect the distinct needs of adolescent users. That includes granular control over what data is shared, with whom, and under what circumstances. Design safeguards must ensure that adolescent patients retain agency over their health information, even as legal guardians maintain a supporting role.
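One way to picture such granular control is as an explicit, deny-by-default consent model in which each data category and each recipient gets its own rule. The sketch below is hypothetical - the category names, recipients, and methods are this article's assumptions, not the study's - but it captures the distinction participants drew between "shallow" activity data and sensitive disclosures.

```python
from dataclasses import dataclass, field
from enum import Enum


class DataCategory(Enum):
    """Categories the teens in the study treated very differently."""
    STEP_COUNT = "step_count"          # described as "shallow" data
    NUTRITION = "nutrition"
    MENTAL_HEALTH = "mental_health"    # treated as highly sensitive
    FACIAL_RECORDING = "facial_recording"


class Recipient(Enum):
    CLINICIAN = "clinician"
    PARENT_GUARDIAN = "parent_guardian"
    AI_MODEL_TRAINING = "ai_model_training"


@dataclass
class SharingRule:
    """One per-category, per-recipient decision made by the adolescent."""
    category: DataCategory
    recipient: Recipient
    allowed: bool


@dataclass
class ConsentProfile:
    """The adolescent's full set of rules; sharing is denied by default."""
    rules: list[SharingRule] = field(default_factory=list)

    def may_share(self, category: DataCategory, recipient: Recipient) -> bool:
        for rule in self.rules:
            if rule.category == category and rule.recipient == recipient:
                return rule.allowed
        return False  # no explicit rule means no disclosure


# Example: step counts may go to the clinician, but mental-health
# disclosures are never relayed to a parent without explicit consent.
profile = ConsentProfile(rules=[
    SharingRule(DataCategory.STEP_COUNT, Recipient.CLINICIAN, allowed=True),
    SharingRule(DataCategory.MENTAL_HEALTH, Recipient.PARENT_GUARDIAN, allowed=False),
])
assert profile.may_share(DataCategory.STEP_COUNT, Recipient.CLINICIAN)
assert not profile.may_share(DataCategory.MENTAL_HEALTH, Recipient.PARENT_GUARDIAN)
```

The deny-by-default rule matters here: it directly addresses the secondary-disclosure fear the participants described, since nothing flows to a parent or any other party unless the adolescent has affirmatively allowed it.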