Generative AI literacy gaps threaten responsible and sustainable AI use


CO-EDP, VisionRI | Updated: 03-02-2026 19:09 IST | Created: 03-02-2026 19:09 IST

A new global review published in the journal Sustainability warns that without stronger literacy frameworks, generative AI risks deepening inequality, weakening ethical safeguards, and undermining sustainable adoption across key sectors of society.

Titled "Mapping the Landscape of Generative Artificial Intelligence Literacy: A Systematic Review Toward Social, Ethical, and Sustainable AI Adoption," the study systematically analyzes how academic research has approached generative AI literacy since the release of large-scale tools such as ChatGPT, offering one of the most comprehensive assessments to date of where the field is advancing and where it is falling short.

The authors argue that while awareness of generative AI has expanded quickly, the skills required to use these systems critically, ethically, and effectively remain unevenly developed.

Ethics and education dominate, while measurement lags behind

The review finds that current research on generative AI literacy is heavily focused on two areas: ethical foundations and educational use. Ethical concerns account for the largest share of academic attention, reflecting widespread anxiety about bias, misinformation, intellectual property, and the social consequences of AI-generated content. Educational studies follow closely, focusing on how students and teachers learn to use generative tools in writing, problem-solving, language learning, and creative tasks.

This concentration, the authors argue, reflects a broader consensus that generative AI literacy is no longer a purely technical skill. Instead, it is increasingly understood as a civic competence that combines knowledge of how systems work, practical ability to use them, critical evaluation of outputs, and ethical awareness of risks and responsibilities. In many studies reviewed, literacy is framed as essential for helping users distinguish AI-generated content from human-authored material, assess reliability, and avoid overdependence on automated systems.

However, the review identifies a major structural weakness in the field: the lack of robust evaluation and measurement tools. While scholars have devoted significant effort to defining what responsible generative AI use should look like, far fewer studies have developed validated instruments to measure whether users actually possess these competencies. Evaluation-focused research represents only a small portion of the literature, creating what the authors describe as a theory-to-practice gap.

This imbalance has practical consequences. Without reliable ways to assess generative AI literacy, educators, employers, and policymakers struggle to determine whether training initiatives are effective, whether ethical awareness translates into behavior, or how literacy levels differ across populations. The authors warn that this gap risks turning generative AI literacy into a largely normative concept, rich in principles but weak in empirical grounding.

The review also highlights the dominance of education as the primary research context. Nearly half of the analyzed studies focus on schools and universities, particularly higher education. While this emphasis reflects the rapid uptake of generative AI in academic settings, it leaves other sectors underexplored. Healthcare, government, and industry appear far less frequently in the literature, despite facing distinct risks related to data protection, accountability, and decision-making.

According to the authors, this skewed focus raises concerns about transferability. Literacy frameworks developed for students may not adequately address the pressures faced by professionals who rely on generative AI for clinical decisions, administrative processes, or strategic planning. Without sector-specific research, organizations risk adopting tools faster than their workforces can use them responsibly.

A global field with uneven geographic representation

The study reveals notable geographic patterns in generative AI literacy research. Half of the reviewed studies originate from Asia, with Europe and the Americas accounting for most of the remaining work. Other regions are largely absent from the indexed literature, pointing to a potential geographic bias in how generative AI literacy is being conceptualized and studied.

The authors caution that this concentration matters. Educational systems, labor markets, and regulatory environments vary widely across regions, shaping how generative AI is adopted and understood. Frameworks developed in one context may not align with the cultural norms, institutional capacities, or policy priorities of another. Without broader geographic representation, the global discourse on generative AI literacy risks reinforcing existing digital divides rather than narrowing them.

The review also traces how the concept of generative AI literacy has evolved from earlier notions of digital literacy and AI literacy. Unlike traditional AI systems that classify or predict, generative models actively produce new content, raising distinct challenges related to originality, authorship, and trust. Literacy in this context requires not only understanding model limitations, but also managing co-creation processes where human judgment and machine output interact continuously.

Several studies analyzed in the review emphasize prompt engineering as a core skill, reflecting the growing recognition that user input shapes AI output in powerful ways. At the same time, the authors note concerns about cognitive dependence, where users rely excessively on generative systems at the expense of critical thinking or professional expertise. Literacy, in this sense, is framed as a safeguard against both misuse and overuse.

Ethical awareness emerges as a recurring concern across regions and disciplines. While many users recognize surface-level risks such as plagiarism or misinformation, the review finds lower awareness of deeper ethical issues, including environmental costs, labor exploitation in data labeling, and long-term societal impacts. This disconnect between expert frameworks and user perceptions underscores the need for literacy programs that go beyond tool proficiency to address broader consequences.

Why sustainable AI adoption depends on literacy

Generative AI literacy is inseparable from sustainability. Responsible AI adoption, the authors contend, requires more than technical deployment or regulatory compliance. It depends on whether users at all levels can engage with these systems in ways that align with social values, ethical norms, and long-term institutional resilience.

Here, the lack of validated evaluation tools becomes a critical bottleneck. Without measurement, organizations cannot track progress, identify gaps, or adapt training to evolving technologies. The authors argue that evaluation should be treated as infrastructure rather than an afterthought, supporting continuous learning as generative AI capabilities change.

The review also highlights the role of institutions in shaping literacy outcomes. Universities, libraries, and professional organizations are emerging as key intermediaries, translating abstract ethical principles into practical guidance. However, the authors warn that institutional adoption without adequate preparation can create structural fragility, exposing organizations to legal, reputational, and operational risks.

In professional settings, the pressure to use generative AI for efficiency gains can conflict with ethical caution. The study notes that when AI systems outperform humans on speed or output quality, workers may feel compelled to defer judgment to machines, even when risks are unclear. Literacy, in this context, becomes essential for maintaining human oversight and accountability.

The authors call for a shift from fragmented initiatives to integrated literacy ecosystems that connect ethics, adoption, evaluation, and education. Such ecosystems would recognize that literacy is not static, but must evolve alongside technology. They also emphasize the need for longitudinal research to understand how literacy develops over time, rather than relying on one-off assessments.

The study outlines several priorities for future research. These include developing culturally adapted evaluation instruments, expanding studies into underrepresented sectors, and examining how organizations translate ethical commitments into everyday practice. The authors also urge closer collaboration between researchers, educators, and policymakers to ensure that literacy frameworks inform real-world decision-making.

FIRST PUBLISHED IN: Devdiscourse