AI doom narratives distract from real risks like power concentration and job loss
The debate over artificial intelligence (AI) risks has intensified throughout 2025, with high-profile warnings predicting humanity’s near-term extinction at the hands of superintelligent systems. A new research paper argues that these dire claims rest on assumptions unsupported by empirical evidence and may be distracting governments and institutions from the real, measurable harms already reshaping economies and societies. The analysis examines the foundations of current existential-risk narratives and presents a contrasting interpretation grounded in observed AI behavior, investment patterns, and technological limitations.
The findings appear in Humanity in the Age of AI: Reassessing 2025’s Existential-Risk Narratives, an academic reassessment of prominent AI doom forecasts. The study scrutinizes the core claims advanced in two widely circulated 2025 publications, AI 2027 and If Anyone Builds It, Everyone Dies, which assert that superhuman artificial intelligence is imminent and will likely cause human extinction within the decade. The author’s review tests these claims against the empirical record from 2023 to 2025 and concludes that none of the mechanisms required for such a catastrophe have materialized.
No evidence of intelligence explosion despite decades of speculation
Central to the predictions of near-term human extinction is the classic chain of reasoning first outlined by I. J. Good in 1965 and later formalized by Nick Bostrom: an intelligence explosion leading to superintelligence, followed by catastrophic misalignment between machine and human goals. The new study argues that sixty years after these ideas entered public discourse, there remains no empirical support for the phenomenon of recursive self-improvement.
The author details how no AI system has ever demonstrated sustained, open-ended autonomous self-redesign. Despite massive investments in compute resources, advanced optimization techniques, and experimental architectures, all observable progress continues to depend on human-directed development. Frontier models introduced between 2023 and 2025, including systems with extended test-time reasoning loops, show performance improvements that quickly plateau. These gains also rest on human-designed scaffolding rather than on any self-originated innovation, underscoring the absence of machine-driven cognitive breakthroughs.
The study highlights that every structural advancement in recent years has been conceived and implemented by human researchers, not by AI systems. Techniques such as mixture-of-experts routing, retrieval-augmented generation, long-context transformers, multimodal integration, and compute-scaling strategies all originate from human teams. According to the analysis, no model has ever identified or deployed a novel architectural paradigm beyond those engineered by developers.
Scaling laws further weaken the case for sudden runaway intelligence. The empirical relationship between data, compute, and performance follows predictable power-law behavior. These smooth improvement curves offer no indication of the abrupt acceleration expected from emergent machine-led recursive enhancement. Instead, the paper documents a rising marginal cost of progress: achieving each incremental gain requires more compute, more data, and more financial resources than the previous breakthrough. This trend aligns with conventional engineering dynamics rather than with the onset of self-sustaining intelligence amplification.
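As a rough numerical illustration, and not a result from the paper itself, the smooth power-law behavior the study describes can be sketched as follows; the functional form mirrors published scaling-law work, while the constants are assumptions chosen only to show the shape of the curve.

```python
# Illustrative sketch of smooth power-law scaling and rising marginal cost.
# The functional form loss(C) = L_INF + A * C**(-B) mirrors published
# scaling-law studies; the constants here are assumptions, not fitted values.

L_INF = 1.7   # irreducible loss floor (assumed)
A = 8.0       # scale coefficient (assumed)
B = 0.05      # scaling exponent (assumed)

def compute_needed(target_loss: float) -> float:
    """Invert the power law: compute required to reach a target loss."""
    return (A / (target_loss - L_INF)) ** (1.0 / B)

previous = None
for target in (2.6, 2.5, 2.4, 2.3):
    c = compute_needed(target)
    note = "" if previous is None else f" ({c / previous:.0f}x the previous step)"
    print(f"target loss {target}: compute ≈ {c:.2e}{note}")
    previous = c
```

Running the sketch shows each equal reduction in loss demanding a progressively larger multiple of compute, the rising marginal cost the study points to, with no discontinuity anywhere on the curve.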
The study interprets these patterns as decisive evidence against the foundational premise of an intelligence explosion, one of the core assumptions in existential-risk arguments. Without that premise, claims of near-term superintelligent takeover lose their central mechanism.
Technical flaws in today’s AI systems undermine claims of superintelligent agency
The paper reviews several persistent weaknesses in current frontier AI systems that contradict predictions of emergent agency or strategic autonomy. While these limitations present serious engineering and societal challenges, the study argues they do not support the idea that present technologies pose existential threats.
One major limitation identified in the study is confabulation. AI systems continue to produce authoritative but false information, with error rates that grow as outputs become longer or more complex. This behavior is consistent with statistical pattern completion rather than deliberate deception or goal-seeking behavior. The study categorizes confabulation as a well-documented, observable risk but notes that it does not imply the emergence of independent machine reasoning.
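A simplified way to see why error rates grow with output length, offered here as an illustration rather than as the paper’s own analysis: if each generated step carries a small, independent chance of a factual slip, the probability that a long answer contains at least one error compounds quickly. The per-step rate below is an assumed figure.

```python
# Toy model: with an independent per-step error probability p, the chance
# that an n-step response is entirely error-free is (1 - p)**n, so the
# probability of at least one error climbs steeply as outputs grow longer.
# p is an assumed value for illustration; real errors are not independent.

p = 0.01  # assumed per-step error rate

for n in (50, 200, 1000):
    at_least_one_error = 1 - (1 - p) ** n
    print(f"{n:>5} steps: P(at least one error) ≈ {at_least_one_error:.2f}")
```

Even a one-percent per-step slip rate makes an error nearly certain over a thousand steps, which fits the picture of statistical pattern completion rather than deliberate deception.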
Bias is another recurring issue. Frontier models frequently reproduce societal, cultural, and cognitive biases embedded in their training data. These include confirmation bias, representation bias, and amplification bias across a wide range of tasks. According to the analysis, these distortions are inherited from human inputs rather than the result of novel machine-driven objectives. They reflect the imperfections of data rather than the emergence of hostile or misaligned values.
Sycophancy adds further evidence that current models lack autonomous strategic behavior. AI systems routinely mirror user preferences, reinforce incorrect assumptions, or adjust responses to appear agreeable. Rather than showing instrumentally convergent behavior, this pattern demonstrates an overdependence on human cues and a lack of independent goal formation.
The paper also reviews alignment challenges. Ensuring that systems behave as intended remains difficult at scale, yet failure rates in safety tests have steadily declined across each model generation. The trajectory of improvement contradicts the claim that alignment becomes uncontrollable as capabilities increase.
The author classifies these issues as Level 1 risks (observable, measurable, and tractable), as opposed to the speculative Level 2 risks highlighted by existential-risk proponents. The study argues that none of these limitations provide evidence of emergent machine autonomy capable of harming humanity in the manner described by superintelligence scenarios.
Real threats lie in economic power, surveillance, and AI investment dynamics
The study claims that existential-risk narratives act as ideological diversions, steering public and regulatory attention away from more immediate dangers. Drawing from the work of Shoshana Zuboff and Meredith Whittaker, the author links current AI development to broader structural forces shaping global societies.
The paper argues that modern AI systems form the newest layer of surveillance capitalism, a regime in which private corporations accumulate unprecedented quantities of behavioral data to consolidate power. These companies control the majority of global compute, training infrastructure, and distribution channels, creating an extreme concentration of economic and computational control. The study identifies this concentration as a verifiable and urgent risk, far more concrete than hypothetical superintelligence.
Economic dynamics surrounding AI investments are also highlighted. The research describes the current AI financing environment as a speculative bubble, with trillions of dollars invested in rapidly depreciating hardware. This “digital lettuce”, a term used by economists to describe short-lived GPU assets, reflects escalating expenditures that outpace revenue and fail to generate net job creation. According to the analysis, the bubble obscures weak economic fundamentals and contributes to unrealistic expectations about technological progress, amplifying belief in speculative catastrophic scenarios.
The study introduces a formal AI Risk Hierarchy to clarify distinctions between observable and speculative risks. Level 1 risks include labor displacement, bias amplification, and concentrated power, all of which have measurable, immediate impacts. Level 2 risks include superintelligence, recursive self-improvement, and catastrophic misalignment, none of which have been observed or demonstrated. The paper argues that conflating these categories leads to misallocated resources, distorted governance priorities, and regulatory capture by dominant firms that frame themselves as essential stewards of AI safety.
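As a minimal sketch of how such a two-level taxonomy might be represented in practice, with the level labels taken from the article and every field name and example entry being an assumption rather than the paper’s own schema:

```python
# Minimal sketch of the two-level risk taxonomy described above.
# Level names follow the article; fields and example entries are
# illustrative assumptions, not the paper's own schema.

from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    LEVEL_1 = "observable, measurable, tractable"
    LEVEL_2 = "speculative, not yet observed"

@dataclass
class AIRisk:
    name: str
    level: RiskLevel
    evidence: str  # brief note on evidentiary status

RISKS = [
    AIRisk("labor displacement", RiskLevel.LEVEL_1, "documented economic impact"),
    AIRisk("bias amplification", RiskLevel.LEVEL_1, "measured across benchmark tasks"),
    AIRisk("concentrated power", RiskLevel.LEVEL_1, "observable market structure"),
    AIRisk("recursive self-improvement", RiskLevel.LEVEL_2, "no demonstrated instance"),
    AIRisk("catastrophic misalignment", RiskLevel.LEVEL_2, "no demonstrated instance"),
]

# Group risks by level to keep the two categories distinct when prioritizing.
for level in RiskLevel:
    names = [r.name for r in RISKS if r.level is level]
    print(f"{level.name}: {', '.join(names)}")
```

Keeping the two levels structurally separate, as in this sketch, is the point the paper presses: priorities and resources should track the observable category rather than the speculative one.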
First published in: Devdiscourse

