Generative AI sparks new wave of social and information crises
Artificial intelligence (AI) is increasingly framed not just as a technological breakthrough but as a layered crisis shaping public imagination, institutional responses, and future expectations, according to new research published in AI & Society.
The study, titled “The crisis layers of artificial intelligence,” examines how AI has become embedded in overlapping narratives of fear, uncertainty, and expectation, spanning speculative futures, present-day disruptions, and deeper structural anxieties about technological progress.
The analysis argues that AI is no longer perceived merely as a tool or innovation but as a phenomenon defined by crisis thinking. These crisis narratives are not isolated reactions; they are structured across multiple layers that shape how societies interpret both the risks and the promises of artificial intelligence.
Future catastrophes and the rise of AI apocalypse narratives
At the outermost level, the study identifies a dominant narrative centered on imagined future crises, particularly scenarios involving superintelligent systems that could surpass human control. These visions, once confined to science fiction, are now increasingly treated as plausible risks within scientific and policy discussions.
AI is often portrayed as a potential existential threat, comparable to climate change or nuclear war, but with a distinct characteristic. Unlike other global risks, the imagined AI crisis is total in scope, extending beyond environmental or civilizational damage to the possible end of humanity itself.
The research highlights that these scenarios are marked by radical uncertainty. While the mechanics of nuclear weapons or climate systems are relatively well understood, the behavior of a hypothetical superintelligence remains undefined. This lack of clarity intensifies the sense of crisis, as decision-makers and the public are forced to confront risks that cannot be fully modeled or anticipated.
Another defining feature is the inevitability embedded in many of these narratives. Once the development of advanced AI reaches a certain threshold, the transition toward uncontrollable outcomes is often depicted as unavoidable. This creates a perception that preventive action may be limited or ineffective, further amplifying concern.
At the same time, the study notes an unusual ambivalence within these crisis narratives. While many portray AI as a destructive force, others frame the rise of machine intelligence as a form of evolution, in which human cognition is succeeded by more capable systems. This duality sets AI apart from traditional crisis frameworks, in which outcomes are viewed as unambiguously negative.
Present-day disruptions and emerging social instability
The study identifies a second layer of crisis grounded in current technological developments, particularly the rapid expansion of generative AI systems.
These technologies are already reshaping information ecosystems, labor markets, and social interactions. The research points to a growing volume of low-quality, machine-generated content that undermines trust in knowledge production and weakens the reliability of digital information environments.
At the institutional level, generative AI is linked to concerns about misinformation, propaganda, and the erosion of shared epistemic standards. As automated systems produce and distribute content at scale, distinguishing credible information from manipulated or fabricated material becomes increasingly difficult.
The study also highlights the impact on professional roles and skills. Creative workers, analysts, and other knowledge-based professionals face the risk of deskilling as AI systems replicate or automate tasks that previously required years of training. While large-scale unemployment remains a debated outcome, the perception of job insecurity is already shaping attitudes toward AI adoption.
On a more personal level, the research points to subtle but significant shifts in human behavior. Individuals are beginning to rely on AI systems for advice, decision-making, and even emotional support, raising questions about trust, autonomy, and the nature of human relationships.
These developments form what the study describes as an intermediate crisis layer, where the effects of AI are neither fully realized nor purely speculative. Instead, they exist in a space of ongoing transformation, where early impacts are visible but long-term consequences remain uncertain.
A deeper crisis of expectations and technological disillusionment
The study identifies a third layer of crisis that goes beyond both future fears and present disruptions. This layer concerns the possibility that AI may ultimately fail to meet the extraordinary expectations placed upon it.
In recent years, AI has been positioned as a transformative force capable of reshaping economies, societies, and human existence itself. These expectations have created a narrative of inevitable change, where technological progress is assumed to lead to radical breakthroughs.
However, the research raises the possibility that AI may instead follow a more incremental path, embedding itself within existing systems without fundamentally altering them. In this scenario, the anticipated disruption does not occur, leading to a different kind of crisis rooted in unmet expectations.
This form of disillusionment reflects a broader shift in how societies engage with technology. As traditional political visions of the future lose influence, technological innovation has become the primary space for imagining alternative possibilities. AI, in particular, carries the weight of these projections.
If AI fails to deliver on its perceived potential, it could expose a deeper structural issue, where societies rely on technological narratives to sustain a sense of progress. The absence of transformative change would challenge these assumptions, forcing a reassessment of both technological optimism and long-term planning.
The study suggests that this expectation-driven crisis is already emerging. The idea that AI will inevitably disrupt normality has itself become normalized, creating a paradox where stability is perceived as a failure rather than an achievement.
In this context, crisis narratives are no longer tied to specific events or outcomes. Instead, they become a persistent framework through which AI is understood, shaping discourse regardless of whether the technology fulfills its promises or falls short.
First published in: Devdiscourse

