Explainable AI must consider adolescents’ vulnerabilities


CO-EDP, VisionRI | Updated: 16-03-2026 07:38 IST | Created: 16-03-2026 07:38 IST

Digital technologies now play a key role in the lives of adolescents. Social media platforms, video-sharing services, search engines, and online learning tools rely heavily on artificial intelligence (AI) systems to personalize content and recommend information. These recommendation systems influence what users see, what they engage with, and how they interpret online environments.

For adolescents, these interactions occur during a developmental stage characterized by rapid cognitive, emotional, and social change. Teens are actively forming their identities, establishing relationships, and developing the skills needed to make independent decisions. They are also navigating digital ecosystems that operate through complex algorithms often invisible to users.

In the study “One Blind Spot of the Explainability Debate: The Specific Needs and Vulnerabilities of Adolescents,” published in the journal AI & Society, the author examines how current frameworks for explainable artificial intelligence largely overlook the developmental and social realities faced by adolescents interacting with algorithmic systems.

Adolescents in an algorithmically curated world

The research emphasizes that algorithmic systems can influence adolescents in several ways. Recommendation engines determine which content is highlighted in feeds or search results, potentially shaping the information young users encounter. Automated moderation systems determine which content is removed or promoted. Personalized advertising systems target users with tailored messages based on behavioral data.

These mechanisms may guide online behavior in subtle but powerful ways. By continuously filtering and prioritizing information, algorithms can shape the opinions, preferences, and social interactions of adolescents. Because these systems operate without clear explanations, young users may struggle to understand why certain content appears in their digital environments.

This lack of transparency can limit adolescents’ ability to critically evaluate digital platforms. Without insight into how algorithms work, users may assume that content appearing in their feeds reflects objective popularity or social consensus rather than the outcome of automated decision-making processes.

According to the study, this situation creates a vulnerability unique to young users. Adolescents are highly active online yet may lack the experience or knowledge needed to recognize how algorithmic systems influence what they see and how they interact with others.
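The gap between perceived popularity and algorithmic curation described above can be made concrete with a toy example. The sketch below is illustrative only and not drawn from the study; the post data, the engagement signal, and the weight are all hypothetical assumptions chosen to show how an engagement-optimized ranking can diverge from a simple most-liked ordering.

```python
# Illustrative sketch (not from the study): a toy feed ranker showing how an
# engagement-weighted score can reorder content away from raw popularity.
# All item names, fields, and weights here are hypothetical.

posts = [
    {"title": "science explainer", "likes": 900, "watch_time_min": 2.0},
    {"title": "viral challenge",   "likes": 400, "watch_time_min": 9.0},
    {"title": "news clip",         "likes": 700, "watch_time_min": 1.0},
]

def engagement_score(post, watch_weight=100):
    # Platforms often optimize for engagement signals such as watch time,
    # not popularity alone; this particular weight is an arbitrary assumption.
    return post["likes"] + watch_weight * post["watch_time_min"]

# What a user might assume their feed reflects: sheer popularity.
by_popularity = sorted(posts, key=lambda p: p["likes"], reverse=True)

# What an engagement-optimized ranker would actually surface first.
by_algorithm = sorted(posts, key=engagement_score, reverse=True)

print([p["title"] for p in by_popularity])
print([p["title"] for p in by_algorithm])
```

In this toy data, the least-liked post tops the algorithmic ranking because it holds attention longest, which is exactly the kind of invisible reordering a user might mistake for social consensus.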

Why explainability matters for young users

Explainable AI refers to the ability of AI systems to provide understandable information about how decisions are made or recommendations are generated. In many policy discussions, explainability is viewed primarily as a tool for transparency and accountability. Governments and regulators have promoted explainability as a way to ensure that automated decisions can be audited and evaluated.

The research expands this perspective by highlighting the developmental importance of explainability for adolescents. When young people understand how algorithms shape their digital experiences, they are better equipped to interpret online information and make informed choices.

Explainability can support adolescents’ development in several ways.

  • It can foster digital awareness by helping users recognize the role of algorithms in curating online content. This awareness encourages critical thinking about digital platforms rather than passive consumption of recommended material.
  • It can strengthen autonomy. Adolescents who understand the mechanisms behind recommendation systems are more likely to question algorithmic suggestions and explore diverse sources of information.
  • It can support the development of digital literacy. As artificial intelligence becomes more integrated into everyday life, understanding algorithmic systems becomes an essential skill for participating in modern societies.

The study also notes that adolescents’ interactions with AI systems often differ from those of adults. Young users may engage more intensively with recommendation-based platforms such as video-sharing services and social media applications. They may also be more susceptible to persuasive design strategies used by digital platforms to increase engagement.

Without clear explanations of how these systems function, adolescents may struggle to distinguish between organic content discovery and algorithmically engineered recommendations. This can affect how they evaluate information, form opinions, and interact with peers online.

The research suggests that explainability frameworks should therefore consider not only technical transparency but also the cognitive and developmental needs of younger users.

Diverse vulnerabilities among adolescents

Another key finding of the study is that adolescents cannot be treated as a single, uniform group in discussions about AI explainability. The experiences and vulnerabilities of young users vary widely depending on social, cultural, and economic factors.

Differences in socioeconomic background may influence how adolescents access digital technologies and how they interpret algorithmic systems. Young people from households with greater technological resources may have more opportunities to develop digital literacy skills. Others may rely on digital platforms without the same level of guidance or educational support.

Gender, ethnicity, and disability can also shape how adolescents experience algorithmic environments. Recommendation systems trained on biased datasets may amplify stereotypes or marginalize certain communities. Automated decision-making systems used in areas such as education or social services may also reflect structural inequalities present in training data.

These variations highlight the need for inclusive approaches to AI explainability that account for diverse user experiences. Systems designed to explain algorithmic decisions must be accessible and understandable to users with different levels of technical knowledge and different social backgrounds.

The study emphasizes that designing explainable systems for adolescents requires collaboration across multiple fields, including computer science, education, psychology, and ethics. Understanding how young people interact with AI technologies is essential for developing safeguards that protect their rights and support their development.

Role of education in building AI literacy

The study recommends integrating AI literacy into educational systems. As algorithmic technologies become more common in daily life, schools play a critical role in helping young people understand how these systems operate.

AI literacy goes beyond basic digital skills such as using software or navigating online platforms. It involves understanding the principles behind machine learning, recommendation systems, and automated decision-making. It also includes recognizing the ethical and social implications of AI technologies.

By incorporating AI education into school curricula, educators can help adolescents develop the knowledge needed to interpret algorithmic systems critically. This education may include lessons about data collection, algorithmic bias, digital privacy, and the ways platforms use engagement metrics to shape user behavior.

Such knowledge empowers young people to navigate digital environments with greater awareness. Instead of passively consuming algorithmically curated content, adolescents can learn to question how and why certain recommendations appear.

The study suggests that educational initiatives should also encourage discussions about the ethical responsibilities of technology companies and the societal impacts of AI systems. Teaching students about these issues fosters a broader understanding of how technology influences social structures and democratic processes.

Responsibilities of technology companies and policymakers

While education is essential, the study also stresses that responsibility does not rest solely with young users or schools. Technology companies and policymakers play a crucial role in ensuring that AI systems are designed with adolescents’ needs in mind.

Developers of algorithmic systems should consider the developmental characteristics of young users when designing recommendation algorithms, user interfaces, and transparency mechanisms. Systems intended for younger audiences should include explanations that are clear, accessible, and appropriate for different age groups.

Policymakers also have a role in establishing regulatory frameworks that protect adolescents from harmful algorithmic practices. These frameworks may include requirements for transparency, safeguards against manipulative design strategies, and stronger protections for young users’ data.

Regulations addressing algorithmic accountability are already emerging in several regions, but the study suggests that these policies often focus primarily on adult users. Expanding these frameworks to address the unique vulnerabilities of adolescents could strengthen digital protections for younger populations.

By incorporating youth perspectives into the design of explainable AI, developers and policymakers can help ensure that algorithmic technologies support rather than undermine adolescents’ development.

First published in: Devdiscourse