AI systems exert sublime influence over the self, driving behavioral uniformity

CO-EDP, VisionRI | Updated: 03-04-2025 18:14 IST | Created: 03-04-2025 18:14 IST

A new study reveals a provocative and unsettling dimension of artificial intelligence recommendation systems, arguing that they exert a powerful, quasi-mystical influence over users’ sense of self. Conducted by researchers at the Hanken School of Economics and published in Organization, the study, titled "Mirror, mirror on the screen, ‘Wherein can I find me?’ – On the sublime qualities of AI recommendation systems, algorithm conformity, and the else," challenges conventional understandings of how algorithmic persuasion organizes human behavior.

While AI recommendation engines are often discussed in terms of utility, optimization, or control, the study advances a radical thesis: these systems can be experienced as sublime - a quality historically reserved for nature, art, or the divine - and this affective experience can covertly reshape a person’s identity. Unlike the systems of control or coordination seen in platforms like Uber or Kickstarter, recommendation systems such as those used in TikTok, Amazon, or predictive search tools engage the user through an ongoing stream of emotionally charged suggestions. These AI systems, the authors argue, don’t simply predict preferences; they reveal, judge, and prescribe how one ought to be.

How and why might AI recommendations alter individuals’ self-perception and induce behavioral conformity?

The authors argue that users often internalize these recommendations not because of rational utility but because they appear imbued with a kind of normative truth. The recommendations feel personal, even intimate, not merely because they are tailored but because they evoke a vision of what one could or should become.

Drawing on the concept of the sublime, traditionally linked to awe, terror, and transcendence, the researchers argue that AI systems simulate oracular power. Their data-driven pronouncements can elicit reverence, even submission. When users feel that these systems "know them" more deeply than they know themselves, they are more likely to conform to their behavioral cues. This phenomenon, termed “algorithm conformity,” echoes Erich Fromm’s concept of automaton conformity, where individuals surrender authentic identity in favor of externally imposed models, now computationally generated.

The authors press further: what gives AI recommendations their organizing power is not simply the manipulation of desires or the control of information but the affective judgment they appear to pass. A recommendation - what to watch, read, buy, believe - acts as both a mirror and a measuring stick. The gaze of the algorithm replaces the gaze of the other in constructing the self. The individual, bombarded with personalized suggestions derived from data patterns of “people-like-you,” is coaxed into becoming not more unique but more algorithmically average.

Why are some individuals more susceptible to this algorithmic identity shaping than others?

The answer, the study suggests, lies in a deeper socio-psychological landscape marked by anxiety, isolation, and the disintegration of traditional identity-forming institutions. Where community, faith, and labor once gave shape to the self, AI recommendations now fill the void, offering ready-made templates for how to act, look, feel, and think. The ease of following these prescriptions - no friction, no doubt - makes them alluring.

However, the study introduces an equally significant counter-force: the else. A term drawn from Cheney-Lippold’s theory of epistemic corruption, the else refers to the moment when an AI-generated profile gets something slightly wrong - a misfire, a misreading. This misalignment, experienced as uncanniness, introduces doubt. Is the AI mistaken? Or is the self deceived? This dissonance can act as a rupture, cracking open the normative authority of the algorithm and provoking reflexivity. The encounter with the else disrupts the sublime spell, allowing the individual to reclaim interpretive agency.

Can individual experiences of epistemic dissonance translate into collective resistance?

The researchers acknowledge this is a hard question, often sidestepped in critical AI literature that focuses primarily on corporate power. Yet they cite a concrete example: the German football club FC St. Pauli’s public withdrawal from the social media platform X, denouncing its transformation into a platform for hate amplification. The club’s action was not a top-down decree but a response to deliberation among its members - a rare instance where collective awareness of algorithmic influence translated into organizational decision-making.

The authors acknowledge the potential for new echo chambers or commodified dissent. Nevertheless, they insist that the affective disturbance of the else contains the seed of potential emancipation. The uncanny can trigger reflection, irony, or even sardonic laughter - responses that can dissipate the perceived moral authority of AI recommendations and reawaken individual autonomy.

Rather than focusing solely on transparency or accountability, the study advocates for a deeper inquiry into how technological systems shape human becoming.

  • FIRST PUBLISHED IN:
  • Devdiscourse