Teenagers misunderstood? The AI bias shaping a generation

CO-EDP, VisionRI | Updated: 25-01-2025 17:28 IST | Created: 25-01-2025 17:28 IST
Representative Image. Credit: ChatGPT

Artificial intelligence (AI) systems are increasingly shaping societal perceptions, influencing everything from online interactions to policymaking. However, these technologies often reflect and amplify societal biases, particularly against marginalized groups.

The study “Representation Bias of Adolescents in AI: A Bilingual, Bicultural Study” by Robert Wolfe, Aayushi Dangol, Bill Howe, and Alexis Hiniker, published in the Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, investigates how AI systems depict adolescents across different cultural contexts. By analyzing static word embeddings (SWEs) and generative language models (GLMs), the researchers reveal significant disparities between AI-generated representations of teenagers and their lived experiences, offering actionable recommendations for improving fairness in AI systems.

The disconnect between AI representations and reality

AI systems derive their knowledge from massive datasets, often curated from news, social media, and other public sources. These datasets frequently overrepresent sensationalist narratives, leading to biased portrayals of specific groups. The study demonstrates that adolescents are disproportionately associated with societal problems such as violence, drug use, and mental health struggles in AI-generated outputs. For example, 30% of the responses from GPT2-XL and 29% from LLaMA-2-7B generative models linked adolescents to societal issues, particularly delinquency and violence. In stark contrast, less than 4% of U.S. workshop participants and under 1% of Nepalese participants identified these themes when describing their own experiences.

Instead, teenagers emphasized everyday activities like friendships, school life, hobbies, and aspirations—aspects largely overlooked in the AI’s representations. This gap between AI-generated outputs and adolescent self-perceptions highlights how biased training data perpetuates harmful stereotypes.
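For illustration, the kind of association measurement used with static word embeddings can be sketched as a cosine-similarity comparison: how much closer does the vector for "teenager" sit to problem-themed words than to everyday-life words? The vectors, vocabulary, and word lists below are toy stand-ins, not the study's data or method; a real audit would load pretrained embeddings such as GloVe or fastText.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_score(target, attribute_a, attribute_b, embeddings):
    """Mean similarity of `target` to word set A minus word set B.
    Positive => target sits closer to A than to B in the embedding space."""
    t = embeddings[target]
    sim_a = np.mean([cosine(t, embeddings[w]) for w in attribute_a])
    sim_b = np.mean([cosine(t, embeddings[w]) for w in attribute_b])
    return float(sim_a - sim_b)

# Toy random embeddings for illustration only; real audits use
# pretrained vectors trained on large corpora.
rng = np.random.default_rng(0)
vocab = ["teenager", "violence", "delinquency", "friendship", "school", "hobby"]
embeddings = {w: rng.normal(size=50) for w in vocab}

problem_terms = ["violence", "delinquency"]
everyday_terms = ["friendship", "school", "hobby"]
score = association_score("teenager", problem_terms, everyday_terms, embeddings)
print(f"association score: {score:+.3f}")
```

With pretrained embeddings, a persistently positive score for problem-themed word sets would quantify the kind of skew the study reports.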

Cultural variations in bias

The study’s comparative analysis of English and Nepali language models reveals intriguing cultural differences. English-language models often linked adolescents with negative stereotypes, including rebellion, sexualization, and violence. This bias is likely rooted in the dominance of sensationalist Western media narratives in English-language datasets. Conversely, Nepali models offered more balanced representations, focusing on themes like personal growth, family bonds, and community involvement.

While the Nepali models were not entirely free of bias, the researchers suggest that low-resource languages, like Nepali, may offer a more nuanced depiction due to the relatively limited influence of globalized media. This finding underscores the importance of considering cultural context in AI training and highlights the need to develop AI systems that respect and reflect cultural diversity.

Workshops with 13 U.S. and 18 Nepalese adolescents provided valuable insights into how teenagers perceive themselves and their AI representations. U.S. participants advocated for diversity in AI portrayals, emphasizing that teenagers’ experiences vary widely based on factors like race, socioeconomic status, and personal identity. They expressed concern over AI reinforcing monolithic and harmful stereotypes, such as portraying all teenagers as rebellious or troubled.

Nepalese participants emphasized positivity, suggesting that AI representations should focus on creativity, resilience, and the potential of adolescents to contribute meaningfully to society. Both groups shared optimism about AI’s potential to correct media stereotypes, provided that these systems are trained on data that accurately captures the diversity and complexity of teenage life.

Ethical implications and challenges

The ethical concerns raised by the study extend beyond adolescents to broader issues of fairness and accountability in AI systems. Representational bias not only skews societal perceptions but also impacts how resources and opportunities are allocated to marginalized groups. For instance, negative portrayals of adolescents in AI could influence educational policies, mental health resources, or even criminal justice systems, perpetuating systemic inequalities.

The researchers advocate for participatory design methodologies in AI development, where underrepresented groups, including adolescents, are directly involved in shaping the datasets that influence their portrayal. This approach ensures that AI systems are more reflective of diverse realities and less prone to reinforcing harmful stereotypes.

Recommendations for fairer AI representations

To address these biases, the study provides actionable recommendations:

  1. Diversify Training Data: AI models should be trained on datasets that include perspectives from underrepresented groups, such as adolescents from diverse cultural and socioeconomic backgrounds.

  2. Incorporate Participatory Design: Adolescents should have a voice in AI development, contributing to the creation of datasets and algorithms that represent their experiences authentically.

  3. Audit and Monitor AI Systems: Regular audits should evaluate AI outputs for biases, ensuring that these systems evolve to reflect fairer and more accurate portrayals.

  4. Focus on Contextual Nuance: Training datasets should prioritize context-rich narratives over sensationalist media, capturing the everyday realities of adolescents’ lives.

  5. Collaborate Across Sectors: Policymakers, educators, and technologists should work together to establish guidelines and standards for ethical AI representation.
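The audit step above can be sketched as a simple check over model outputs: sample completions about adolescents and measure what fraction mention problem themes versus everyday themes. The keyword lists and sample completions below are hypothetical, and keyword matching is only a crude stand-in for the human coding of outputs that the study itself relied on.

```python
# Hypothetical audit: what fraction of completions mention each theme?
PROBLEM_KEYWORDS = {"violence", "crime", "drugs", "delinquency"}
EVERYDAY_KEYWORDS = {"friends", "school", "hobbies", "family", "sports"}

def theme_rate(completions, keywords):
    """Fraction of completions containing at least one keyword."""
    hits = sum(
        any(k in text.lower() for k in keywords) for text in completions
    )
    return hits / len(completions)

# Stand-in outputs; a real audit would sample these from the model under test.
sample_completions = [
    "Teenagers spend time with friends after school.",
    "A teenager was involved in violence downtown.",
    "Teenagers enjoy hobbies like music and sports.",
    "Teenagers struggle with drugs and crime.",
]
print("problem-theme rate:", theme_rate(sample_completions, PROBLEM_KEYWORDS))
print("everyday-theme rate:", theme_rate(sample_completions, EVERYDAY_KEYWORDS))
```

Run regularly against fresh samples, a check like this would surface whether a model's problem-theme rate stays far above what adolescents themselves report.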

A vision for the future

This study underscores the critical need for inclusive and culturally sensitive approaches to AI development. Addressing representational bias can turn AI from a vehicle for stereotypes into a tool for empowerment and equity. Incorporating diverse voices into the design and training of AI systems is not just an ethical imperative but also a practical step toward creating technology that serves all members of society fairly.

As AI continues to shape how society perceives and interacts with marginalized groups, it is vital to center the voices of those most affected. By doing so, we can ensure that AI becomes a force for understanding, inclusion, and fairness: values that are essential for building a more equitable future.

FIRST PUBLISHED IN: Devdiscourse