Beyond the hype: Measuring generative AI’s societal impact through agency
The rise of generative artificial intelligence (AI) has transformed the way we create and interact with digital content. From generating realistic images and videos to crafting functional code with minimal input, these tools are reshaping multiple aspects of society. However, along with these advancements comes a growing concern about their misuse, ethical implications, and unintended societal consequences.
Addressing these concerns requires a deeper theoretical framework. In his research paper "Agency in the Age of AI," Samarth Swarup from the Biocomplexity Institute at the University of Virginia argues that agency is the most appropriate lens for examining the benefits and harms of generative AI. His work, posted on arXiv, explores how AI affects human decision-making, autonomy, and control, advocating for a structured approach to mitigate potential risks while leveraging AI’s capabilities for good.
The changing landscape of agency in AI-driven societies
AI technologies are fundamentally altering the concept of human agency. As these systems generate information, decisions, and even creative outputs, they influence the choices people make in various domains - politics, commerce, education, and personal life. Swarup identifies multiple ways in which generative AI can disrupt agency. Malicious actors can exploit these tools to manipulate public perception, influence elections, and spread misinformation. The so-called “liar’s dividend” allows individuals to dismiss authentic evidence as AI-generated fabrications, complicating efforts to establish truth in public discourse.
Beyond direct misuse, the very presence of generative AI tools creates an uncertain information landscape. People may struggle to differentiate between human-generated and AI-generated content, leading to confusion and reduced trust in digital interactions. Furthermore, generative models themselves are susceptible to corruption. Biased or adversarial training data can degrade the integrity of these systems, institutionalizing misinformation and reinforcing harmful stereotypes over time. These factors collectively pose a significant challenge to maintaining human autonomy in an AI-saturated world.
The theoretical lens of agency and AI’s influence
To understand and address these concerns, Swarup advocates for analyzing generative AI through the Planning Theory of Agency, which views agency as the ability to formulate goals and execute plans effectively. This theory aligns closely with the Belief-Desire-Intention (BDI) model, a computational framework traditionally used in multi-agent systems that represents an agent’s beliefs about the world, the goals it desires, and the plans it commits to for achieving them.
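To make the BDI model concrete, the following is a minimal sketch of a BDI-style agent in Python. The class structure and the perceive/deliberate/act cycle are illustrative assumptions made for this article, not code from Swarup's paper:

```python
# A minimal, illustrative sketch of a Belief-Desire-Intention (BDI) loop.
# The class and method names here are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)     # the agent's model of the world
    desires: list = field(default_factory=list)     # candidate goals
    intentions: list = field(default_factory=list)  # goals the agent has committed to

    def perceive(self, observation: dict) -> None:
        """Update beliefs from new (possibly AI-generated) information."""
        self.beliefs.update(observation)

    def deliberate(self) -> None:
        """Commit only to desires that appear achievable under current beliefs."""
        self.intentions = [g for g in self.desires
                           if self.beliefs.get(f"{g}_feasible", False)]

    def act(self) -> list:
        """Execute one step of the plan for each committed intention."""
        return [f"pursue:{goal}" for goal in self.intentions]

agent = BDIAgent(desires=["vote", "verify_news"])
agent.perceive({"vote_feasible": True, "verify_news_feasible": True})
agent.deliberate()
print(agent.act())  # ['pursue:vote', 'pursue:verify_news']
```

The key design point is that intentions are filtered through beliefs: whatever shapes the agent's beliefs ultimately shapes what the agent even attempts.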
Through this lens, Swarup introduces a compelling thought experiment: an adversarial entity seeking to limit an individual’s agency. Such an entity could prevent people from successfully executing plans by distorting available information, controlling decision-making environments, or influencing goal selection. Many real-world AI-related risks align with these theoretical attacks on agency, such as:
- Manipulating online narratives to shape public opinion (influencing beliefs and goals)
- Spreading misleading information that discourages action (restricting goal formation)
- Designing AI tools that promote overreliance and automation bias (reducing autonomy in decision-making)
Thus, the framework of agency helps unify different AI-related threats into a structured set of concerns, providing a foundation for mitigating potential harms.
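As an illustration of how these three attack channels map onto a BDI-style representation, here is a hypothetical sketch; the adversary functions and parameter names are invented for illustration and do not appear in the paper:

```python
# Illustrative only: the three attacks above, expressed against a BDI-style
# state. None of these functions are taken from Swarup's paper.
import random

def distort_beliefs(beliefs: dict, falsehoods: dict) -> dict:
    """Attack on beliefs: manipulated narratives overwrite what the agent
    takes to be true about the world."""
    return {**beliefs, **falsehoods}

def restrict_goals(desires: list, beliefs: dict) -> list:
    """Attack on goal formation: goals that misleading information makes
    look infeasible are never adopted as intentions."""
    return [g for g in desires if beliefs.get(f"{g}_feasible", True)]

def automation_bias(own_choice: str, ai_suggestion: str, reliance: float) -> str:
    """Attack on decision autonomy: with probability `reliance`, the AI's
    suggestion displaces the agent's own choice."""
    return ai_suggestion if random.random() < reliance else own_choice

# A distorted belief state makes a feasible goal look infeasible...
beliefs = distort_beliefs({"vote_feasible": True}, {"vote_feasible": False})
# ...so the goal is never formed, even though nothing physically blocks it.
print(restrict_goals(["vote"], beliefs))  # []
```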
Using Agent-Based Models to simulate AI’s societal impact
Beyond theoretical analysis, Swarup emphasizes the importance of Agent-Based Modeling (ABM) as a tool for understanding how AI interacts with human decision-making at scale. ABMs simulate environments where AI agents, human-like entities, and autonomous systems coexist, allowing researchers to study interactions and predict outcomes under different conditions. These simulations could help policymakers and AI developers identify vulnerabilities in AI-integrated societies and experiment with intervention strategies before implementing real-world policies.
However, there are key challenges in designing such models. Capturing the complexity of generative AI’s influence on human agency requires sophisticated representations of belief formation, decision-making, and adversarial manipulation. Additionally, ensuring transparency and explainability in ABM-driven simulations is crucial, as AI-generated decisions often involve nuanced and unpredictable consequences. Swarup’s work highlights the need for quantitative metrics to measure agency loss or gain, allowing for more precise evaluations of AI’s long-term impact on human autonomy.
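As a flavor of what such a simulation and metric might look like, below is a toy agent-based model in Python. The message-mixing process and the "agency retained" metric are assumptions made here for illustration; Swarup's paper motivates agent-based modeling in general rather than prescribing this particular model:

```python
# A toy agent-based simulation of the kind of experiment the paper motivates.
# The misinformation model and the metric are illustrative assumptions.
import random

def run_simulation(n_agents=1000, misinfo_rate=0.3, seed=42):
    rng = random.Random(seed)
    ground_truth = True  # the fact agents are deciding about
    retained = 0
    for _ in range(n_agents):
        # Each agent samples 5 messages; a fraction are AI fabrications.
        messages = [ground_truth if rng.random() > misinfo_rate
                    else not ground_truth
                    for _ in range(5)]
        belief = sum(messages) > len(messages) / 2  # simple majority rule
        # An agent "retains agency" if its belief-driven choice matches the
        # choice it would make under fully truthful information.
        retained += (belief == ground_truth)
    return retained / n_agents

for rate in (0.0, 0.2, 0.4, 0.6):
    print(f"misinfo rate {rate:.1f}: "
          f"agency retained = {run_simulation(misinfo_rate=rate):.2f}")
```

Under these assumptions, the retained-agency fraction falls as the misinformation rate rises; tracking such a quantity across intervention scenarios is the kind of precise evaluation the call for quantitative agency metrics points toward.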
Towards responsible AI development
Swarup’s research underscores the urgency of developing a comprehensive theory of agency that accounts for AI’s evolving role in society. To navigate the challenges posed by generative AI, he suggests integrating multiple research domains, including sociology, cognitive science, and information theory, to refine our understanding of agency. Future work should focus on designing AI systems that enhance rather than diminish human decision-making power. This includes incorporating self-monitoring mechanisms within AI models to assess their influence on users and introducing regulatory frameworks that ensure AI deployment aligns with societal values.
The study also calls for a shift in how AI is perceived - not just as a tool for automation but as an entity capable of shaping societal structures and decision-making processes. If properly managed, AI can serve as a catalyst for positive change, augmenting human capabilities while maintaining ethical safeguards against misuse. The key challenge remains in striking the right balance between technological advancement and human autonomy, ensuring that AI systems empower rather than undermine the agency of individuals and communities.
Ultimately, Swarup’s work presents a call to action for researchers, policymakers, and AI developers to rethink the way AI interacts with society. By adopting an agency-centered approach, we can better anticipate and mitigate AI’s unintended consequences, fostering a future where AI systems operate transparently, ethically, and in service of humanity.
FIRST PUBLISHED IN: Devdiscourse

