The illusion of choice: How algorithms shape your online experience

CO-EDP, VisionRI | Updated: 06-03-2025 17:06 IST | Created: 06-03-2025 17:06 IST

Social media platforms have become powerful mediators of human interaction, shaping what we see, engage with, and even believe. As algorithmic recommendation systems refine their ability to predict and influence behavior, they raise profound questions about power, knowledge, and autonomy. Unlike traditional views of power that focus on coercion or domination, social media platforms exert a subtler, more pervasive influence, embedding power within everyday interactions.

A recent study titled “Productive Power in Social Networks: Challenges for Post-Phenomenological Mediation Theory” by João Vidal, published in AI & Society (2025), explores how social media algorithms challenge existing ideas of technological mediation and productive power. Drawing from Michel Foucault’s theories on power and Byung-Chul Han’s concept of intelligent power, the study argues that social media platforms do more than just connect people - they shape subjectivity itself, subtly directing behavior and decision-making without users' full awareness.

How algorithmic mediation redefines power

Post-phenomenological mediation theory, introduced by Don Ihde and expanded by Peter-Paul Verbeek, argues that technology is not neutral - it co-shapes human experiences, identities, and actions. Social media algorithms, however, go beyond simple mediation. They continuously modify user behavior by structuring what information is seen and prioritized, reinforcing specific perspectives while filtering out others.

Social media platforms use algorithmic recommendation systems to personalize content based on user engagement patterns. For example, Facebook prioritizes “meaningful social interactions” by ranking content based on likes, shares, and comments. Similarly, Twitter (now X) curates posts based on past interactions, reinforcing existing beliefs and preferences. This dynamic suggests that while users think they are making choices, their decisions are heavily guided by invisible algorithmic processes.
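The engagement-based ranking described above can be sketched as a toy scoring function. The field names and weights below are purely illustrative assumptions, not any platform's actual formula; the point is only that a simple weighted sum over interaction counts determines what users see first.

```python
# Toy illustration of engagement-based feed ranking.
# Weights and field names are hypothetical, not a real platform's formula.
posts = [
    {"id": "a", "likes": 120, "shares": 4, "comments": 30},
    {"id": "b", "likes": 15, "shares": 40, "comments": 90},
    {"id": "c", "likes": 300, "shares": 1, "comments": 2},
]

def engagement_score(post):
    # Hypothetical weights: shares and comments count more than likes,
    # echoing the idea of prioritizing "meaningful social interactions".
    return post["likes"] * 1.0 + post["shares"] * 5.0 + post["comments"] * 3.0

ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # → ['b', 'c', 'a']
```

Note that post "c" has by far the most likes, yet ranks behind "b": the choice of weights, invisible to the user, decides the ordering.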

The study highlights that algorithmic mediation operates within a Foucauldian framework of productive power - not by imposing rules but by shaping the conditions in which behavior occurs. Unlike traditional top-down control, where power is exerted through restrictions, social media platforms exert power through participation, ensuring users willingly engage with curated content while feeling a sense of choice and agency.

Transparency, opacity, and the illusion of choice

One of the key challenges of algorithmic power is opacity - the “black box” nature of AI systems. While these systems collect vast amounts of user data to optimize engagement, they do so in ways that users cannot easily audit or understand. This creates an illusion of choice - users believe they are freely selecting content, unaware that the platform has already filtered and ranked information in ways that shape their perceptions.

The study discusses how Explainable AI (XAI) has emerged as a response to algorithmic opacity, aiming to make AI decision-making more transparent. However, even with advancements in interpretability, many AI-generated recommendations remain opaque, making it difficult for users to critically engage with the content they consume.

This lack of transparency challenges traditional ideas of power and resistance. In Foucault’s view, power always creates the possibility of resistance - where there is control, there is the potential to push back. However, when users are unaware that they are being influenced, resistance becomes difficult, if not impossible. This marks a shift from explicit forms of power (censorship, restriction) to subtler, more insidious forms of control, where users participate in their own subjugation without realizing it.

The rise of intelligent power in the digital age

Byung-Chul Han introduces the concept of intelligent power, a form of influence that is invisible, seductive, and embedded within systems designed to feel natural and helpful. Unlike traditional authoritarian power, which relies on force and restriction, intelligent power operates by guiding choices in ways that feel organic rather than imposed.

The study argues that social media algorithms exemplify intelligent power by providing users with recommendations that align with their existing preferences while subtly shaping their behavior over time. This is particularly evident in echo chambers and filter bubbles, where users are repeatedly exposed to information that reinforces their beliefs while alternative perspectives become less visible.
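The feedback loop behind echo chambers can be made concrete with a minimal simulation. This is a hypothetical toy model, not drawn from the study: a recommender slightly boosts any topic the user engages with, and over many rounds that topic crowds out the alternatives.

```python
import random

random.seed(0)  # deterministic run for reproducibility

# Minimal sketch of a filter-bubble feedback loop (hypothetical model):
# engagement with a topic increases its weight, narrowing future exposure.
topics = ["politics", "sports", "science", "music"]
weights = {t: 1.0 for t in topics}

def recommend():
    # Sample a topic proportionally to its current weight.
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

user_preference = "politics"
for _ in range(200):
    shown = recommend()
    if shown == user_preference:   # the user engages only with this topic
        weights[shown] *= 1.05     # the recommender reinforces it by 5%

share = weights["politics"] / sum(weights.values())
print(f"politics share of future exposure: {share:.2f}")
```

Even with a tiny 5% boost per interaction, the preferred topic's share of exposure grows well beyond its initial 25%, while the weights of the other topics never change - a compact picture of how alternative perspectives become less visible.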

This dynamic has profound implications for democracy, free thought, and digital autonomy. If individuals are increasingly shaped by algorithmic nudges rather than independent critical thinking, what happens to the possibility of true freedom? The study suggests that this subtle form of power is more effective than overt manipulation - users feel they are in control, even as their perspectives and actions are subtly guided by unseen forces.

Rethinking power, freedom, and algorithmic governance

The implications of this research extend beyond social media into broader discussions of digital ethics, AI regulation, and the future of online autonomy. The study calls for rethinking traditional ideas of power and resistance, recognizing that algorithmic mediation creates new forms of influence that are neither entirely coercive nor entirely free.

As platforms refine their recommendation systems, the challenge will be ensuring transparency, accountability, and user agency. This could involve:

  • Increased AI explainability, giving users clearer insights into how recommendations are made.
  • Algorithmic audits and regulatory oversight, ensuring that platforms prioritize fairness and diversity of information.
  • User empowerment tools, such as allowing individuals to adjust how algorithms influence their feeds.
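The third point, user empowerment tools, can be illustrated with a hypothetical control: a single parameter that lets the user blend engagement-based ranking with plain chronological order. All names, weights, and data here are assumptions for the sketch, not any platform's real API.

```python
from datetime import datetime, timezone

# Hypothetical user-facing control: algo_weight=1.0 gives pure engagement
# ranking, 0.0 gives pure chronological order. Illustrative only.
posts = [
    {"id": "a", "engagement": 230, "posted": datetime(2025, 3, 1, tzinfo=timezone.utc)},
    {"id": "b", "engagement": 485, "posted": datetime(2025, 3, 3, tzinfo=timezone.utc)},
    {"id": "c", "engagement": 311, "posted": datetime(2025, 3, 5, tzinfo=timezone.utc)},
]

def rank_feed(posts, algo_weight):
    max_eng = max(p["engagement"] for p in posts)
    newest = max(p["posted"] for p in posts)
    def score(p):
        eng = p["engagement"] / max_eng                      # normalized engagement
        age_days = (newest - p["posted"]).total_seconds() / 86400
        recency = 1.0 - age_days / 10                        # fresher → higher
        return algo_weight * eng + (1 - algo_weight) * recency
    return sorted(posts, key=score, reverse=True)

print([p["id"] for p in rank_feed(posts, 1.0)])  # engagement-first: ['b', 'c', 'a']
print([p["id"] for p in rank_feed(posts, 0.0)])  # newest-first:     ['c', 'b', 'a']
```

Exposing such a knob does not remove algorithmic mediation, but it makes the trade-off visible and adjustable - a modest step from opaque curation toward user agency.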

Ultimately, the study highlights that social media algorithms are not just tools for engagement - they are engines of power that shape human perception, identity, and action. Understanding and critically engaging with these systems is essential for navigating the increasingly algorithm-driven world we inhabit.

  • FIRST PUBLISHED IN: Devdiscourse