Generative AI could usher in an age of public ignorance
The rapid rise of generative artificial intelligence (genAI) is being compared to the printing press and the internet for its transformative impact on society. Yet unlike its predecessors, it may usher in an era not of enlightenment but of ignorance. A new peer-reviewed study, Generative Artificial Intelligence and the Future of Public Knowledge, published in Knowledge (2025), raises alarms about how large language models could distort, dilute, and dominate what societies come to accept as truth.
Authored by Dirk H. R. Spennemann of Charles Sturt University, the paper argues that tools such as ChatGPT, Google Gemini, and DeepSeek have already shifted public expectations about knowledge access, even as they threaten to erode critical thinking, deepen bias, and fuel the spread of misinformation. Using a strategic foresight methodology to explore how AI might restructure public knowledge, the study concludes that, without urgent intervention through education, humanity risks sliding toward a technologically enabled age of ignorance.
From printing press to generative AI: Shaping public knowledge
The paper places generative AI in the long trajectory of technological revolutions in knowledge transmission. The printing press broke the monopoly on knowledge held by clergy and guilds, allowing information to circulate independently of its traditional gatekeepers. The internet, and later social media, democratized publishing, enabling global immediacy but also fostering tribalized communities and alternative truths.
Generative AI represents the third seismic shift. Unlike the internet, which broadened exposure to multiple sources, AI collapses each query into a single, seemingly authoritative answer. While this efficiency appeals to societies accustomed to instant gratification, it strips away context and narrows users’ exposure to alternative perspectives. The study highlights that this shift changes the dynamics of knowledge itself: rather than exploring and triangulating sources, users are given one response to accept at face value.
This narrowing effect is compounded by the fact that generative AI does not possess knowledge in the human sense. Its outputs are statistical predictions based on training data, not grounded understanding. The risk lies in the human-like tone of responses, which can mislead users into believing the system genuinely comprehends. Over time, this undermines the practice of questioning, validating, and critically assessing information.
The dangers of bias, manipulation, and content saturation
The author outlines five trajectories that together signal the potential decline of public knowledge. First, AI is well suited to routine tasks such as drafting emails or summarizing texts, which will normalize its use in both professional and personal life. Second, the public’s preference for convenience and near-instant answers increases reliance on AI-generated responses. Third, transformative technologies that meet these demands typically displace traditional, labor-intensive approaches. Fourth, critical thinking and information literacy are already in decline, weakened by trust in social media influencers and the devaluation of experts. Fifth, sources once dismissed as unreliable, such as Wikipedia, have become widely accepted, setting a precedent for AI outputs to follow the same trajectory.
The paper stresses that these trajectories intersect with significant risks. Generative AI models are shaped by training data that inevitably reflect ideological and cultural biases. Studies already show left-leaning and progressive preferences in ChatGPT outputs, as well as stereotypes in text-to-image generation, perpetuating gender, racial, and age-based distortions. These biases, even if unintentional, risk reinforcing harmful norms and diminishing diversity of thought.
Manipulation is another pressing concern. The study warns that authoritarian regimes, corporations, or malicious actors could flood training datasets with biased or misleading content. With estimates suggesting that up to 40 percent of online text is already AI-generated, self-reinforcing feedback loops, in which AI consumes its own synthetic content, could steadily dilute the quality of knowledge. Disinformation campaigns, echo chambers, and the erosion of evidence-based authority would then become entrenched in what the public perceives as common knowledge.
The corporate concentration of AI also exacerbates the problem. Market dominance by OpenAI, Google, and Microsoft means that knowledge provision is increasingly in the hands of a few actors whose interests may align more with shareholder returns than with objectivity. Control over algorithms, ranking, and curation could quietly shape entire societies’ understanding of truth.
Education as the only safeguard against public ignorance
The study acknowledges that futures are not predetermined and explores potential off-ramps. The most critical lies in education. Without widespread efforts to build AI literacy and revive critical thinking, the risks of public ignorance will accelerate.
Spennemann stresses that teachers must be empowered to instill evidence-based reasoning at all levels of schooling, from primary to higher education. Information literacy, including the ability to assess sources and question AI outputs, must become a compulsory part of curricula. Equally important is fostering a cultural appreciation for the effort involved in research and validation. Reliance on convenient solutions, even when correct, diminishes intellectual resilience and the ability to challenge assumptions.
The paper also warns of political barriers. In many democracies, education itself has become politicized, with ideological forces seeking to shape curricula for partisan gain. If critical inquiry is not safeguarded, the public may be left unequipped to navigate the complexities of AI-driven knowledge systems. The consequence could be a society primed for manipulation, unable to discern between evidence-based truth and algorithmically reinforced bias.
First published in: Devdiscourse

