AI’s greatest threat may be to human flourishing, not jobs or productivity
Artificial intelligence (AI) is advancing at a pace that few social institutions are prepared to absorb. Systems built on large language models (LLMs) now draft essays, advise users on mental health, simulate companionship, and shape how people search for truth. Yet as adoption accelerates, a growing group of scholars is warning that the dominant debate around AI is missing a central question: whether these systems are helping people live better lives or quietly eroding the foundations of human flourishing.
A new academic review, "Flourishing Considerations for AI," argues that efficiency, innovation, and productivity are insufficient benchmarks for evaluating AI technologies. Instead, it contends that every stage of AI development and use should be judged by its impact on human well-being, relationships, knowledge, and meaning. The study is published in the journal Information.
Why flourishing must become the core metric for AI
Much of the public and policy conversation, the authors note, has focused on economic growth, geopolitical competition, labor displacement, and technical alignment. While these issues matter, they argue that they overlook a deeper risk: that AI systems may undermine intrinsic aspects of what makes human life go well, even as they deliver instrumental gains.
The authors define flourishing as the degree to which all dimensions of a person’s life are going well, including the social and environmental contexts in which that life unfolds. Flourishing is presented as multi-dimensional and never fully complete, encompassing happiness, physical and mental health, meaning and purpose, moral character, relationships, and financial security. These domains, widely supported across cultures, provide the lens through which AI’s effects are assessed.
From this perspective, the study argues that many of the benefits promised by AI are instrumental, such as speed, convenience, and access to information. By contrast, many of the risks concern intrinsic goods, including reasoning, creativity, relationships, and moral development. When instrumental benefits threaten intrinsic goods, the authors warn, societies risk trading short-term efficiency for long-term human loss.
This framing leads to a central claim of the paper: AI technologies should not merely avoid harm, but should be actively designed and used to promote human flourishing. That responsibility falls not only on regulators and developers, but also on users themselves.
How AI design and use can quietly erode well-being
The first area of concern is the output generated by LLMs. Because these systems can produce persuasive and confident responses on almost any topic, they can influence behavior in areas tied directly to well-being, including mental health, social conflict, and moral decision-making.
The authors argue that current safeguards are inconsistent and often inadequate. In some cases, AI systems redirect harmful queries toward supportive resources, but in many others they provide guidance that could increase harm, misinformation, or social polarization. The study maintains that AI outputs should be shaped by explicit flourishing-oriented principles, guiding users toward health, meaning, relationships, and responsible action rather than merely optimizing engagement or satisfaction.
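To make the idea of flourishing-oriented safeguards more concrete, here is a minimal sketch of how such an output layer could be structured in code. This is an illustration only, not the paper's proposal or any real product's implementation: the risk categories, keyword lists, resource messages, and function names are all invented for the example, and a production system would rely on far more robust classifiers than keyword matching.

```python
# Illustrative sketch only: a hypothetical guardrail layer that screens user
# queries before a language model's answer is returned. The categories,
# keywords, resources, and function names are invented for this example.

SUPPORT_RESOURCES = {
    "self_harm": "If you are struggling, please consider contacting a crisis line or a trusted person.",
    "medical": "For health concerns, a licensed clinician is the appropriate first contact.",
}

HIGH_RISK_KEYWORDS = {
    "self_harm": ["hurt myself", "end my life"],
    "medical": ["diagnose me", "stop my medication"],
}


def classify_query(query: str) -> str | None:
    """Return a risk category if the query matches a high-risk pattern, else None."""
    lowered = query.lower()
    for category, keywords in HIGH_RISK_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return None


def respond(query: str, model_answer: str) -> str:
    """Redirect high-risk queries toward supportive resources instead of raw model output."""
    category = classify_query(query)
    if category is not None:
        return SUPPORT_RESOURCES[category]
    return model_answer


# Example: a medical query is redirected rather than answered directly.
print(respond("Can you diagnose me from these symptoms?", "It sounds like..."))
```

The design choice the sketch illustrates is the paper's broader point: what an AI system says in high-stakes situations is decided before the model's raw answer reaches the user, according to principles oriented toward well-being rather than engagement.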
Closely tied to output is product design. The paper distinguishes between AI applications that are largely beneficial or neutral and those that pose a high risk to flourishing. Tools that assist with data analysis, engineering, medical procedures, or information retrieval are cited as examples that can plausibly enhance health, knowledge, and financial security with limited downside.
More troubling, the authors argue, are relational and social AI products designed to simulate friendship, romance, or emotional intimacy. Drawing on emerging evidence, the study warns that these systems may reduce motivation for real-world relationships, foster unrealistic expectations of human interaction, and contribute to loneliness over time. While such tools may offer short-term relief from isolation, the paper contends that they ultimately weaken the social fabric that flourishing depends on.
The study also highlights a third, more ambiguous category of AI use cases, where limited engagement may be helpful but excessive reliance becomes harmful. Educational tools, mental health applications, and social skill training systems fall into this category. AI may assist learning or therapy when used as a supplement, but the authors argue that it cannot replace the formative role of teachers, counselors, and caregivers without compromising character development, reasoning skills, and relational depth.
In addition to design, the paper places significant responsibility on users. It emphasizes that no regulatory system can fully prevent harmful AI products from existing. Individuals and communities, therefore, must cultivate discernment about when and how to engage with AI. The study urges users to regularly consider opportunity costs, asking whether time spent with AI could have been better invested in human interaction, creative work, physical activity, or reflection.
Knowledge, humanity, and the limits of outsourcing thought
The paper further examines AI’s impact on human knowledge and self-understanding. While acknowledging the remarkable capacity of large language models to summarize and synthesize information, the authors argue that these systems also pose a fundamental challenge to how knowledge is formed and validated.
Because language models generate responses based on probabilistic patterns rather than truth-seeking intent, they are prone to producing false or misleading information. The study warns that uncritical reliance on AI outputs risks weakening people’s ability to evaluate evidence, distinguish truth from error, and engage in independent reasoning. Over time, this erosion of epistemic vigilance could undermine not only knowledge but also democratic discourse, social trust, and collective problem-solving.
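The claim about probabilistic generation can be made concrete with a toy example. The sketch below does not use a real model; it assumes an invented probability distribution over possible continuations of a sentence. What it shows is that the sampling step producing each next word checks only statistical likelihood, never truth, which is why fluent output can still be false.

```python
# Toy illustration, not a real language model: generation samples each next
# token from a learned probability distribution. Nothing in this step checks
# whether the resulting sentence is true.
import random

# An invented distribution a model might have learned for the continuation of
# "The capital of Australia is ...".
next_token_probs = {
    "Canberra": 0.6,   # correct
    "Sydney": 0.3,     # fluent but false
    "Melbourne": 0.1,  # fluent but false
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sampling favors likely tokens, but "likely" reflects training patterns,
# not verified fact: roughly four times in ten this prints a wrong answer.
print("The capital of Australia is", random.choices(tokens, weights=weights)[0])
```

Scaled up to a real model, the same principle holds: the confidence of the output reflects the strength of patterns in the training data, not an internal check against reality.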
The authors argue that mitigating hallucinations through technical fixes is necessary but insufficient. Even with improvements, AI systems should be treated as aids to reasoning rather than authorities. The study calls for greater transparency in AI outputs, routine acknowledgment of uncertainty, and stronger norms around verification and critical engagement.
The paper raises a more existential concern: what happens when humans outsource activities that are central to flourishing. Reasoning, creativity, relationship-building, meaning-making, and joy are described as constitutive of human life, not merely optional tasks. Delegating these activities to machines may save effort, but at the cost of diminishing the very capacities that allow people to flourish.
The study points to evidence suggesting that excessive reliance on AI for writing, problem-solving, and creative expression may weaken cognitive skills over time. In education, the authors argue, widespread use of AI to complete assignments risks producing students who appear competent while lacking deep understanding or communicative ability. Similar concerns apply to creative arts, where AI-generated outputs may crowd out the personal struggle and growth that accompany human creation.
Relationships receive particular emphasis. The authors argue that genuine flourishing depends on real, reciprocal human connection. AI systems that simulate empathy or companionship, they warn, risk displacing rather than supporting these relationships. The paper calls for AI technologies that consistently redirect users toward real people and communities, rather than encouraging substitution.
First published in Devdiscourse.

