Generative AI may be splitting society into cognitive winners and losers
AI is quietly transforming how individuals construct meaning, evaluate information and exercise agency in modern societies, according to a new study published in Societies.
Titled "AI and the Rise of Societal Bifurcation: Cognitive Dependency, Inequality and Democratic Pressure," the study outlines a new framework suggesting that generative AI may be driving a structural split between those who use AI reflectively and those who become cognitively dependent on automated interpretation.
Cognitive offloading and the erosion of interpretative autonomy
The analysis begins with cognition. Earlier digital technologies primarily affected memory and information retrieval. Generative AI systems, by contrast, intervene at a deeper level. They generate explanations, arguments and narratives before individuals have formed their own interpretations. As AI outputs become default starting points for reasoning, users increasingly shift from constructing meaning to evaluating machine-generated interpretations.
The paper draws on contemporary empirical research showing that unstructured interaction with generative AI can reduce metacognitive monitoring while inflating confidence. Users often feel more certain about their conclusions even when the underlying reasoning quality remains unchanged. Neurocognitive evidence suggests reduced activation in brain regions associated with effortful reasoning when individuals rely heavily on AI tools for interpretative tasks. Survey data from knowledge workers indicate similar patterns: reduced cognitive effort, paired with heightened perceived mastery.
The author argues that these effects are not technologically predetermined. Experimental research shows that structured prompting and reflective AI practices can strengthen engagement and preserve critical reasoning. The divergence emerges not from AI’s presence alone, but from how individuals integrate it into their cognitive routines.
This distinction produces what the author describes as a split between a cognitively resilient minority and a cognitively dependent majority. The resilient group uses AI as a cognitive amplifier. They interrogate outputs, seek alternative explanations and embed generative tools within broader analytical strategies. The dependent group, shaped by convenience-driven use and time pressures, increasingly treats AI-generated narratives as authoritative starting points. Over time, this shift reduces interpretative autonomy, defined as the capacity to generate and revise meaning independently before accepting automated explanations.
Individuals who rely heavily on automated reasoning may become more susceptible to oversights, misinterpretations and confidence inflation. Their ability to adapt to unfamiliar tasks weakens as generative systems shoulder more of the cognitive burden. In contrast, those who retain reflective engagement strengthen their ability to navigate complexity and ambiguity.
This divergence, as the study stresses, is cumulative. As AI adoption deepens, differences in cognitive strategy compound across educational, professional and political domains, laying the groundwork for structural inequality.
Labour-market restructuring and adaptive inequality
The second pillar of the analysis focuses on labour markets. Generative AI is accelerating the automation of symbolic, administrative and mid-skill professional tasks. International projections cited in the study suggest that a substantial share of employment across advanced and emerging economies faces exposure to automation or significant transformation due to AI systems.
Unlike earlier waves of automation that primarily displaced routine manual labour, generative AI intervenes in cognitive workflows. It assumes tasks once reliant on human judgment, coordination and interpretation. This shift alters the competencies required for stable employment. Increasingly, roles demand abstraction, oversight, hybrid reasoning and the ability to critically evaluate AI outputs.
Here, the cognitive divergence described earlier becomes economically consequential. Individuals who integrate AI reflectively develop complementary skills such as prompt optimization, multi-source validation and synthesis across domains. These workers gain productivity advantages and greater mobility within evolving job markets.
By contrast, individuals who rely on AI as a substitute for reasoning may experience short-term efficiency gains but lose adaptability over time. As roles evolve toward higher levels of analytical oversight, cognitive dependency becomes a liability. Workers who cannot interrogate AI outputs or perform independent evaluations may struggle to transition into emerging occupations.
The author argues that this dynamic broadens inequality beyond traditional skill gaps. It institutionalizes a divergence in adaptive capacity. The labour market becomes a mechanism that translates cognitive differences into durable economic stratification. Those able to complement AI ascend. Those displaced by it face stagnation, insecurity or downward mobility.
The psychological consequences extend beyond employment. Work functions as a source of identity and social integration. Displacement or task erosion can generate anxiety and institutional distrust. When combined with cognitive dependency, such insecurity may heighten vulnerability to simplified narratives and political polarization.
Intergenerational effects add further complexity. Younger workers entering AI-saturated educational and professional environments may develop habits shaped by automated reasoning. Older workers face retraining challenges as familiar cognitive routines become obsolete. In both cases, adaptability hinges on the preservation of interpretative autonomy.
Democratic fragility in an AI-mediated public sphere
The third domain explored in the study is democratic governance. The author argues that generative AI transforms the informational ecosystem underpinning democratic deliberation. AI systems can now produce large-scale, personalized and contextually adaptive content that closely mimics authentic human communication.
This capacity enables unprecedented forms of political micro-targeting and synthetic persuasion. Automated systems can generate tailored narratives at speeds and volumes beyond human processing capacity, saturating digital spaces with emotionally resonant messages. As synthetic communication proliferates, distinguishing credible information from fabricated content becomes increasingly difficult.
Research cited in the paper suggests that individuals often attribute credibility to artificial agents unless their artificiality is disclosed. Disclosure can then trigger sharp trust collapses. This asymmetry destabilizes epistemic foundations. In environments of heightened uncertainty, citizens may gravitate toward simplified or emotionally charged narratives that reduce cognitive effort.
Cognitive dependency intensifies these risks. Individuals who outsource interpretative processes to AI may be less inclined to critically evaluate political claims or detect manipulation. Labour-market insecurity further compounds susceptibility to persuasive narratives offering simple causal explanations.
The author also places these dynamics within a geopolitical context. Autocratic regimes may deploy AI tools for surveillance, narrative control and sentiment analysis without the constraints faced by democratic states. Democracies, bound by legal and ethical safeguards, may struggle to regulate high-risk AI applications at comparable speed. This asymmetry increases democratic vulnerability in AI-mediated information environments.
The interaction of cognitive erosion, economic insecurity and synthetic persuasion forms a feedback loop. Reduced interpretative autonomy weakens resilience to misinformation. Economic stress heightens emotional receptivity. Political instability reinforces reliance on automated narratives. Together, these processes produce a self-reinforcing sociotechnical mechanism.
Societal bifurcation emerges from these circular dynamics. It is not a linear outcome of technological adoption but a product of interacting cognitive, economic and political pressures. A cognitively resilient minority may increasingly anchor standards of interpretation within professional and institutional spheres. A cognitively dependent majority may experience declining agency and influence.
FIRST PUBLISHED IN: Devdiscourse

