Do AI algorithms influence public opinion?
Do algorithmic recommendations actually shape what people believe others think and do? New research suggests the influence of algorithmic curation may be far more complex than widely assumed.
The study, titled “Recommended to You: An Experimental Study of Normative Influences from Algorithmic and Social Recommendations on Social Media,” published in the journal AI & Society, examines how social and algorithmic recommendation systems influence perceived social norms and people’s willingness to engage with controversial technological issues online.
Through a large-scale experimental survey involving more than a thousand participants, the researchers tested whether recommendation labels, whether attributing content to an algorithm or to a social contact, can alter how users interpret the social importance of emerging debates.
Do algorithmic and social recommendations shape perceived norms?
Social media platforms operate through a mixture of human and machine influence. On the one hand, users encounter posts shared or liked by friends, family members, or colleagues. On the other, artificial intelligence systems analyze behavioral data and curate content that appears most relevant to individual users. These recommendation systems often present posts with labels indicating that content was recommended by an algorithm, highlighted because it is widely read, or shared by someone within a user’s network.
Communication researchers have long argued that such cues might shape perceived social norms, meaning people’s beliefs about what others commonly do and what behaviors are socially approved. If users repeatedly see a post promoted as popular or recommended, they might assume that many others support or discuss the topic, potentially encouraging them to engage with it themselves.
To test this idea, the authors conducted a controlled experiment involving 1,021 adult social media users recruited through an online panel. Participants were exposed to simulated social media posts addressing a morally complex and emerging technological issue: digital immortality. The concept refers to technologies that enable interactions with digital replicas of deceased individuals through chatbots, avatars, or other AI-driven systems.
Digital immortality was chosen deliberately because it represents a novel and ethically ambiguous issue. When public attitudes toward a topic are not yet firmly established, external cues, such as algorithmic recommendations, might be more influential in shaping perceptions and engagement.
Participants in the experiment viewed posts about digital immortality under four different conditions. In one scenario, the post appeared as recommended by an online friend, representing a social recommendation. In another, the post was algorithmically recommended by the platform. A third scenario presented the content as algorithmically recommended because it was among the most read posts, representing popularity-based recommendation. A fourth condition served as a control, showing the post without any recommendation label.
The researchers also varied the tone of the posts themselves. Some versions emphasized the potential benefits of digital immortality, such as helping people cope with grief through virtual interactions with deceased loved ones. Other versions highlighted ethical concerns and potential risks associated with the technology. This design allowed the researchers to assess whether recommendations influenced normative perceptions regardless of whether the content supported or criticized the idea.
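Taken together, the setup is a between-subjects factorial design: four recommendation conditions crossed with two post tones. The sketch below is an illustrative reconstruction of that design under stated assumptions; the condition names are paraphrased from this article, not the authors' own labels.

```python
import itertools
import random

# Illustrative 4 x 2 between-subjects design; condition names are
# paraphrased from the article, not the authors' original labels.
RECOMMENDATION_CONDITIONS = [
    "social (recommended by an online friend)",
    "algorithmic (recommended by the platform)",
    "popularity (algorithmically recommended as most read)",
    "control (no recommendation label)",
]
POST_TONES = ["benefit-framed", "risk-framed"]

# The eight experimental cells
CELLS = list(itertools.product(RECOMMENDATION_CONDITIONS, POST_TONES))

def assign_condition(rng: random.Random) -> tuple:
    """Randomly assign a participant to one recommendation x tone cell."""
    return rng.choice(CELLS)

rng = random.Random(42)
participants = [assign_condition(rng) for _ in range(1021)]
print(len(CELLS))        # 8 cells
print(len(participants)) # 1021 simulated assignments
```

Random assignment across all eight cells is what lets the researchers attribute any difference in perceived norms to the recommendation label rather than to the content's tone.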
The results challenge common assumptions about algorithmic influence. The study found no significant difference in perceived social norms across the recommendation conditions. Whether the post was recommended by a friend, an algorithm, or labeled as widely read had little effect on participants’ beliefs about how common or socially accepted engagement with digital immortality was.
Participants exposed to algorithmically recommended content did not perceive stronger social norms compared to those who saw posts without recommendation labels. Likewise, social recommendations from friends did not significantly increase perceptions that others in the participant’s social environment were discussing or acting on the issue.
These findings suggest that recommendation labels alone may not be powerful enough to shape normative perceptions in a single exposure. In other words, simply labeling content as recommended does not automatically convince users that an issue is socially important or widely supported.
Why perceived social norms still matter for online engagement
Although the recommendation labels themselves showed little direct influence on normative perceptions, the study revealed a crucial insight about the role of social norms in shaping behavior.
Participants who believed that others around them were discussing or acting on the issue were significantly more likely to express intentions to engage with the topic themselves. Engagement in this context included discussing digital immortality with others, sharing information about the issue, or taking actions related to the topic.
The researchers measured two types of perceived social norms: descriptive norms, referring to perceptions about how common a behavior is, and injunctive norms, referring to perceptions about whether others approve of a behavior. Both types were examined in relation to two reference groups: participants’ immediate social environment and the broader community of social media users.
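Crossing the two norm types with the two reference groups yields four distinct perceived-norm measures. The following is a hypothetical representation of that measurement grid; the field names and group labels are illustrative, not the authors' survey items.

```python
from dataclasses import dataclass

# Hypothetical 2 x 2 grid of perceived-norm measures described in the
# article: norm type crossed with reference group. Names are illustrative.

@dataclass(frozen=True)
class NormMeasure:
    norm_type: str        # "descriptive" (what others do) or "injunctive" (what others approve)
    reference_group: str  # "close network" or "general users"

MEASURES = [
    NormMeasure(norm_type=t, reference_group=g)
    for t in ("descriptive", "injunctive")
    for g in ("close network", "general users")
]

print(len(MEASURES))  # 4 distinct norm measures
```

Separating the measures this way is what allows the analysis reported next: norms tied to the close-network reference group can be compared directly against norms tied to social media users in general.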
The analysis revealed that norms associated with a person’s immediate social environment were the strongest predictors of engagement. When participants believed that people in their own social circle found it important to discuss or act on the issue, they were significantly more likely to express similar intentions.
On the other hand, perceptions about the broader population of social media users had a weaker influence. Beliefs about what “social media users in general” were doing did not consistently predict engagement with the issue.
This distinction highlights an important feature of social influence in digital environments. Even though social media connects millions of users, people remain most influenced by the perceived behavior of those closest to them. Norms within personal networks appear more powerful than abstract signals from large online audiences.
The findings reinforce existing theories in social psychology that emphasize the role of proximal reference groups, meaning the individuals with whom people identify most strongly. Family members, friends, and colleagues often carry greater influence over attitudes and behavior than distant or anonymous groups.
Rethinking the influence of AI-driven content curation
While algorithms undeniably shape which information people encounter, their influence on social norms may not be as immediate or deterministic as sometimes assumed. Instead, the research suggests that the impact of algorithmic recommendations may depend on broader contextual factors.
One possible explanation lies in the temporal nature of social influence. Normative perceptions often develop gradually through repeated exposure rather than through a single encounter. On real social media platforms, users are exposed to continuous streams of posts, comments, likes, and shares that collectively signal which topics are socially relevant.
In such environments, algorithmic recommendations interact with multiple other signals, including social feedback, peer engagement, and platform design features. The single-exposure experiment used in the study may therefore represent a conservative test of algorithmic influence.
Another factor is individual differences in how users interpret algorithmic cues. Some people trust algorithms and rely heavily on them to navigate information online, while others remain skeptical of machine-generated recommendations. The researchers tested whether a user’s appreciation for algorithms might moderate the influence of recommendation labels.
However, the results showed that algorithmic appreciation did not significantly change the effects of recommendations. Even participants who generally valued algorithmic guidance were not more likely to perceive stronger social norms when exposed to algorithmically recommended posts.
The nature of the issue itself may also play a role. Digital immortality is a relatively unfamiliar topic for many people. When users lack prior knowledge or strong opinions about an issue, they may not respond strongly to recommendation cues because they lack the context needed to interpret them.
Additionally, recommendation labels such as “recommended” or “most read” may not automatically function as meaningful social signals. Users might interpret these labels as technical features rather than as indicators of collective approval or engagement.
Implications for public discourse in the age of AI
When people perceive that those around them care about an issue, they are more likely to participate in discussions and actions related to that topic. AI-powered recommendation systems may not directly shape social norms in the short term, but they can still influence the visibility of issues that later become subjects of social discussion.
Over time, repeated exposure to recommended content, combined with signals from friends and social networks, may gradually shift perceptions about what topics matter and how people should respond to them.
Understanding these dynamics is increasingly important as AI-driven content curation becomes central to online communication. Platforms rely on recommendation algorithms to manage massive volumes of information, determine which posts appear in users’ feeds, and guide public attention.
According to the research, the relationship between AI and social influence is more nuanced than simple cause-and-effect models suggest. Algorithms may provide cues about relevance or popularity, but the formation of social norms remains deeply embedded in human relationships and social contexts.
As the study suggests, future research may explore how repeated exposure, interactive platform features, and more familiar or controversial topics influence normative perceptions over time. Researchers may also investigate how algorithmic curation interacts with political polarization, misinformation, and emerging technologies.
FIRST PUBLISHED IN: Devdiscourse