AI not as harmless as it seems: cumulative effects raise new governance concerns
Artificial intelligence (AI) is not only transforming industries and economies but may also be quietly altering how humans think, feel, and make decisions, according to new research that reframes everyday AI exposure as a long-term cognitive and social risk. While most regulatory frameworks focus on high-risk systems, the study argues that seemingly harmless, low-risk AI applications could collectively reshape human agency in ways that remain largely invisible to policymakers and society.
The study, titled “The silent accumulation: AI as mental contaminant,” published in Frontiers in Artificial Intelligence, introduces a novel theoretical framework that treats AI as a form of “cognitive environmental contaminant,” drawing parallels with environmental health science to explain how repeated exposure to AI systems may produce cumulative psychological and social effects over time.
AI exposure accumulates across five dimensions, creating hidden cognitive risks
The study introduces a structured model to explain how AI exposure accumulates over time. It identifies five key dimensions: frequency of interaction, duration of use, intensity of influence, diversity of AI systems encountered, and the developmental stage at which exposure occurs. Together, these factors determine how deeply AI systems may affect individuals and populations.
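To make this exposure model concrete, the minimal Python sketch below records the five dimensions for a single person; the field names, scales, and example values are assumptions made for illustration and do not come from the study itself.

```python
from dataclasses import dataclass

@dataclass
class ExposureProfile:
    """Illustrative record of the five exposure dimensions named in the study.
    Field names, units, and scales are assumptions made for this sketch."""
    frequency_per_day: float   # how often AI systems are used each day
    duration_minutes: float    # typical length of each interaction
    intensity: float           # 0-1 proxy for how strongly a system steers behavior
    diversity: int             # number of distinct AI systems encountered
    developmental_stage: str   # e.g. "child", "adolescent", "adult"

# Hypothetical example: an adolescent with frequent, moderately intense exposure
example = ExposureProfile(
    frequency_per_day=40,
    duration_minutes=5,
    intensity=0.6,
    diversity=8,
    developmental_stage="adolescent",
)
print(example)
```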
This cumulative exposure framework borrows from environmental science, where low-level pollutants once considered harmless were later found to produce serious long-term health effects. Similarly, the study suggests that AI’s impact may not be immediately visible but could emerge gradually through repeated interactions across multiple systems.
The research highlights three possible patterns of accumulation. In additive models, the effects of AI systems simply build over time. In synergistic models, different systems interact to amplify their influence, creating effects greater than the sum of their parts. In threshold models, impacts remain minimal until a critical level of exposure is reached, after which rapid and potentially irreversible changes occur.
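The difference between these patterns can be shown with a short sketch. The functional forms below (a plain sum, a pairwise-interaction term, and a hard cutoff) are assumed for illustration; the study describes the patterns conceptually rather than prescribing formulas.

```python
def additive_effect(exposures):
    """Additive pattern: per-system effects simply sum over time."""
    return sum(exposures)

def synergistic_effect(exposures, interaction=0.1):
    """Synergistic pattern (assumed form): pairwise interactions between
    systems amplify the total beyond the sum of its parts."""
    base = sum(exposures)
    pairwise = sum(a * b for i, a in enumerate(exposures) for b in exposures[i + 1:])
    return base + interaction * pairwise

def threshold_effect(exposures, threshold=2.0, gain=5.0):
    """Threshold pattern (assumed form): little visible effect until
    cumulative exposure crosses a critical level, then rapid change."""
    total = sum(exposures)
    if total < threshold:
        return 0.1 * total
    return 0.1 * threshold + gain * (total - threshold)

exposures = [0.5, 0.8, 1.2]  # hypothetical per-system exposure scores
print(additive_effect(exposures))     # 2.5
print(synergistic_effect(exposures))  # ~2.70
print(threshold_effect(exposures))    # 2.7
```

Under these assumed forms, the same per-system exposures yield noticeably different aggregate outcomes, which is the study's point about why assessing systems one at a time can miss cumulative risk.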
The study emphasizes that these dynamics are not uniform across populations. Individual vulnerability depends on factors such as digital literacy, cognitive capacity, and social context. Younger users, whose cognitive and emotional systems are still developing, may be particularly susceptible to long-term effects.
This framework challenges the dominant approach to AI governance, which evaluates systems based on isolated risk categories. By focusing on cumulative exposure, the study reveals a critical blind spot in current regulatory models, where the collective influence of multiple low-risk systems remains largely unexamined.
Five pathways reveal how AI may reshape attention, emotion, and identity
To explain how cumulative exposure translates into real-world effects, the study identifies five interconnected pathways through which AI systems may influence human cognition and behavior: attention erosion, emotional dependency, social connection alteration, decision-making dependency, and identity transformation.
Attention erosion is linked to the growing prevalence of AI-driven content optimization. Personalized feeds and recommendation systems continuously adapt to user behavior, creating feedback loops that may narrow attention spans and reduce the ability to engage with complex information. Over time, this could shift population-level patterns of attention, with implications for education, productivity, and democratic participation.
Emotional dependency emerges as AI systems increasingly mediate emotional experiences. Algorithmically curated content and AI companions can influence mood and emotional regulation, potentially leading users to rely on digital systems for emotional support. This reliance may weaken internal regulation mechanisms, making individuals more reactive and less capable of managing emotions independently.
Social connection alteration reflects changes in how people interact with one another. AI-mediated communication often removes nonverbal cues and prioritizes engagement metrics over meaningful relationships. As a result, social interactions may become more transactional and less emotionally rich, potentially reducing empathy and altering patterns of social bonding.
Decision-making dependency represents one of the most empirically supported pathways. As AI systems take over tasks such as navigation, information retrieval, and recommendation, users may increasingly rely on automated decision-making. This cognitive offloading can create a feedback loop in which reduced practice of independent decision-making leads to greater dependence on AI systems.
Identity transformation is perhaps the most profound and least understood pathway. Personalization algorithms shape the information and experiences users encounter, influencing how they perceive themselves and their place in the world. Over time, this may limit opportunities for self-directed identity exploration, as algorithmic predictions reinforce existing patterns and preferences.
These pathways are not isolated. The study emphasizes that they interact and reinforce one another, creating a complex web of cumulative effects. For example, attention erosion may contribute to emotional dependency, while decision-making reliance may influence identity formation. This interconnectedness is central to the study’s argument that AI’s impact must be understood as a systemic phenomenon rather than a collection of individual risks.
Governance frameworks fail to capture cumulative AI impact
Most regulatory frameworks, including risk-based models, assess AI systems individually, focusing on immediate and measurable harms. This approach, the study argues, fails to capture the cumulative and long-term effects of multiple systems interacting over time.
The research draws a direct parallel with early environmental regulation, which initially evaluated pollutants in isolation before recognizing the importance of cumulative exposure. Just as multiple low-level toxins can combine to produce significant health risks, multiple AI systems may collectively reshape cognitive and social environments.
This gap in governance is compounded by the absence of appropriate measurement tools. While environmental science has developed metrics for pollution and exposure, there are no widely accepted indicators for cognitive and social impacts such as attention fragmentation, emotional regulation, or decision-making independence.
The study also highlights the concept of technological externalities. AI systems optimized for engagement, efficiency, or profit may generate unintended social costs, including reduced attention capacity, increased dependency, and weakened social cohesion. These costs are not reflected in market transactions or regulatory assessments, allowing them to accumulate unchecked.
Without mechanisms to account for these externalities, the study warns that market incentives may drive further expansion of AI systems in ways that amplify cumulative risks. This creates a structural imbalance in which technological development outpaces the ability of governance frameworks to manage its societal impact.
New governance models aim to protect cognitive and social ecosystems
To address these challenges, the study proposes a set of governance innovations inspired by environmental protection frameworks. Key to this approach is the concept of cumulative impact assessment, which extends existing algorithmic auditing practices to consider how multiple AI systems interact and influence users over time.
The study calls for the development of cognitive-social monitoring systems that track key indicators such as attention capacity, emotional regulation, social connection quality, decision-making independence, and identity coherence. These metrics could be integrated into existing data collection infrastructures, enabling long-term analysis of AI’s societal impact.
Another proposed innovation is the use of synthetic population modeling. By simulating how different levels of AI exposure affect populations over time, policymakers could test potential interventions and identify risks before they become widespread. This approach mirrors the use of climate models in environmental policy, offering a proactive rather than reactive strategy for governance.
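A toy version of such a simulation might look like the sketch below, which tracks an assumed "dependency" score for simulated individuals under two exposure levels; the update rule, parameters, and population size are illustrative assumptions rather than details from the study.

```python
import random

def simulate_population(n_people=1000, years=10, daily_exposure=1.0, seed=0):
    """Toy synthetic-population model (assumed dynamics): each simulated
    person's decision-making dependency drifts upward with exposure,
    partially recovers each year, and varies with individual susceptibility."""
    rng = random.Random(seed)
    dependency = [0.0] * n_people
    susceptibility = [rng.uniform(0.5, 1.5) for _ in range(n_people)]
    for _ in range(years):
        for i in range(n_people):
            drift = 0.05 * daily_exposure * susceptibility[i]
            recovery = 0.02 * dependency[i]
            dependency[i] = min(1.0, dependency[i] + drift - recovery)
    return sum(dependency) / n_people

# Compare average dependency after ten simulated years at low vs. high exposure
print(simulate_population(daily_exposure=0.5))
print(simulate_population(daily_exposure=2.0))
```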
The research also introduces the concept of cognitive-social ecosystem services, framing human cognitive and social capacities as public goods that require protection. Just as clean air and water are essential for physical health, capacities such as sustained attention, emotional authenticity, and independent decision-making are essential for psychological well-being and democratic functioning.
Recognizing these capacities as valuable resources could support the development of new regulatory frameworks that prioritize human flourishing alongside technological innovation. This includes designing AI systems that preserve attention, support emotional well-being, and enhance rather than replace human agency.
The study outlines a phased approach to implementation, beginning with pilot monitoring programs and progressing to broader integration of cumulative impact assessments into policy and industry practices. It also emphasizes the importance of interdisciplinary collaboration, bringing together expertise from technology, psychology, environmental science, and governance.
A turning point for AI governance and human agency
Just as environmental awareness led to the development of policies that reduced pollution and protected ecosystems, recognizing AI’s cumulative effects could enable more responsible and sustainable technological development.
The research suggests that society is at an early stage of understanding AI’s broader impact, comparable to the initial recognition of environmental pollution decades ago. At that time, the lack of awareness allowed harmful effects to accumulate before effective regulation was implemented.
This time, the study argues, there is an opportunity to act before similar patterns emerge in the cognitive and social domains. Developing governance frameworks that account for cumulative exposure could help preserve the capacities that underpin human agency, social cohesion, and democratic participation.
First published in: Devdiscourse