Persuasive AI poses hidden dangers for truth, equity and governance

The study challenges the idea that people engaging with AI-driven arguments automatically become irrational. Instead, it suggests that the risk lies in a subtler erosion: people may form attitudes that are less rational than those developed through human interaction or independent reasoning.


CO-EDP, VisionRI | Updated: 24-09-2025 23:20 IST | Created: 24-09-2025 23:20 IST

Concerns about the influence of persuasive artificial intelligence are not limited to science fiction. A new paper by Robin McKenna of the University of Liverpool and the African Centre for Epistemology and Philosophy of Science at the University of Johannesburg highlights the dangers of AI systems designed to shape beliefs, attitudes, and behaviors.

The research, titled "Sophistry on steroids? The ethics, epistemology and politics of persuasive AI," was published in AI & Society in 2025. It investigates whether persuasive AI poses a unique threat to rationality, explores the ethical implications of large-scale persuasion, and examines the political tensions that surround regulation in this space.

Is persuasive AI making us less rational?

The first question the paper examines is whether persuasive AI undermines human rationality. McKenna challenges the idea that people who engage with AI-driven arguments automatically become irrational. Instead, the paper locates the risk in a subtler erosion: people may form attitudes that are less rational than those they would have developed through human interaction or independent reasoning.

For example, large language models can produce arguments that appear coherent and fluent but lack genuine depth. These outputs may mimic logical structure without real understanding, leading users to accept them uncritically. While this may not constitute outright irrationality, it changes the standard by which rational attitudes are formed, subtly weakening the quality of public discourse.

This concern is amplified by the sheer volume and speed at which AI can generate persuasive content. Unlike human debaters, AI does not tire or require commitment to a position. It can endlessly generate responses, nudging individuals toward conclusions through sheer repetition and exposure rather than careful reasoning.

Why persuasive AI resembles "sophistry on steroids"

The author argues that the threat of persuasive AI can be understood through the lens of sophistry. In classical times, sophists were criticized for producing arguments for effect rather than for truth. Persuasive AI takes this to a new level, functioning as “sophistry on steroids.”

The danger lies not in the unique persuasiveness of AI compared to human rhetoric, but in its capacity for scale and concentration of control. A handful of technology companies already possess the infrastructure to flood the information environment with AI-generated arguments. This dominance risks creating an uneven “marketplace of arguments,” where certain perspectives gain undue visibility simply because of the resources behind them.

In this context, AI-generated persuasion becomes less about free debate and more about controlling narratives. The result could be a distortion of democratic processes, where public reasoning is shaped by the volume of synthetic arguments rather than by the diversity and quality of human voices.

The ethical implications are profound. If attitudes are shaped by automated sophistry, questions of accountability arise. Who is responsible for the consequences of beliefs or actions formed through exposure to persuasive AI? And how can societies maintain meaningful standards of truth in a landscape dominated by artificial arguments?

How regulation of persuasive AI becomes a political battleground

The study also points out the political challenges of regulating persuasive AI. McKenna notes that attempts to establish ethical guidelines for AI persuasion will inevitably spark conflict, because persuasion is inherently political. Different groups have competing visions of what counts as legitimate influence, whose interests are served, and which goals are acceptable.

For instance, governments may turn to persuasive AI to promote public health campaigns or climate action. While these uses may appear beneficial, they also raise concerns about manipulation and the reinforcement of polarization. Citizens opposed to government agendas may view such campaigns as coercive, deepening mistrust rather than building consensus.

Private companies pose another challenge. Their use of persuasive AI in advertising, in shaping consumer behavior, and in political lobbying concentrates influence in the hands of a few actors, raising questions about power asymmetries and democratic accountability. The ability of corporations to dominate discursive spaces threatens to marginalize less-resourced voices, making regulation a matter not just of ethics but also of equity and justice.

The author further argues that governance frameworks must grapple with more than technical safeguards. They must address fundamental questions about who controls persuasive technologies, whose values guide their deployment, and how societies can prevent the amplification of inequality through AI-driven persuasion.
