Who do we trust more: Humans or AI?


CO-EDP, VisionRI | Updated: 02-03-2026 06:53 IST | Created: 02-03-2026 06:53 IST

Do people trust artificial intelligence (AI) more than they trust other human beings? A new cross-cultural study published in the journal Behavioral Sciences finds that AI is neither trusted more than humans nor treated as just another tool. Instead, it occupies a middle ground, and the reasons behind that position vary sharply across cultures.

The research, titled "Who Gets More Trust—AI or Humans, and Why? A Cross-Cultural Analysis of AI and Interpersonal Trust," maps where AI sits in the social trust hierarchy and dissects the psychological forces that shape trust in machines compared to people.

AI in the social trust hierarchy

The research team surveyed 577 adults, 289 in China and 288 in the United States, using established psychological scales and newly developed measures. Participants rated their trust in intimate groups such as family and close friends, intermediate groups such as acquaintances, and distant groups such as strangers. They also rated trust in two types of artificial intelligence: embodied AI, such as robots and physical smart devices, and disembodied AI, such as chatbots and software systems.

Across both countries, a consistent pattern was observed. People trusted their intimate circles the most. They trusted strangers the least. AI fell in between.

In both China and the United States, trust in embodied and disembodied AI was significantly lower than trust in close family members and friends. At the same time, AI was trusted more than distant social targets like strangers. Disembodied AI in the United States showed a level of trust similar to intermediate human groups, while embodied AI sat clearly between intermediate and intimate trust levels.

This cross-cultural consistency suggests that people do not treat AI as either fully human or entirely alien. Instead, AI appears to occupy a quasi-social position. It is seen as more reliable than unknown individuals but less dependable than close relationships built on emotional bonds and shared history.

The researchers argue that current AI systems lack key features of intimate human relationships, including emotional reciprocity and deep mutual understanding. That gap may explain why AI does not reach the level of trust reserved for family and close friends. At the same time, the perceived objectivity and competence of AI may elevate it above strangers in the trust ranking.

The findings also speak to a broader debate in psychology and human-computer interaction. One school of thought, often called the Computers Are Social Actors perspective, suggests that people naturally apply social rules and expectations to machines. Another view, known as the Unique Agent Hypothesis, argues that trust in AI follows a distinct logic from interpersonal trust. The new study finds support for both views, depending on context.

Trust levels may look similar on the surface, but the underlying psychological mechanisms differ in important ways.

Culture, deception and the psychology of risk

The researchers did not stop at measuring trust levels. They also examined how past experiences and personality traits shape trust in AI and in humans.

One key factor was deception experience, defined as how often individuals had been deceived and how strongly those experiences affected them. In China, deception experience was negatively associated with both interpersonal trust and trust in AI. Individuals who reported stronger impacts from past deception tended to trust less across the board.

In the United States, the pattern was more selective. Deception experience showed a negative link mainly with embodied AI trust and had little effect on trust in disembodied systems or on interpersonal trust overall.

The authors suggest that this difference may reflect cultural styles of thinking. Chinese participants, who are often described in psychological research as more attentive to relational and contextual cues, may generalize negative social experiences more broadly. Americans, who are often characterized as more analytic and target-focused in cognition, may restrict the impact of past deception to agents that appear more human-like.

Embodiment played a critical role in this dynamic. Embodied AI, such as robots or physical devices, may trigger social schemas more strongly than abstract software. In the U.S. sample, only these more human-like systems showed a measurable link to past deception.

The study also examined two personality traits: risk propensity and trust propensity. Risk propensity reflects a willingness to take risks in decision-making, including social and financial risks. Trust propensity reflects a general tendency to see oneself as trusting and to act accordingly.

In China, trust propensity emerged as a powerful mediator of trust in both embodied AI and interpersonal relationships. People who generally saw themselves as trusting were more likely to trust embodied AI. This suggests that, in China, embodied AI may be processed through intuitive, person-based trust pathways similar to those used in human relationships.
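For readers unfamiliar with the term, a mediator is a variable that carries part of the effect of one variable onto another. The minimal Python sketch below illustrates the standard product-of-coefficients check for mediation on simulated data; the variable names, effect sizes, and sample are hypothetical placeholders, not the study's dataset or analysis code.

```python
# Illustrative sketch only: a product-of-coefficients mediation check in the
# spirit of the model the study describes (deception experience ->
# trust propensity -> trust in embodied AI). All data here are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 289  # roughly the size of the study's Chinese subsample

# Assumed linear relations on standardized scores, for illustration only
deception = rng.normal(size=n)                       # X: deception experience
trust_prop = -0.4 * deception + rng.normal(size=n)   # M: trust propensity
ai_trust = 0.5 * trust_prop + 0.1 * deception + rng.normal(size=n)  # Y

# Path a: effect of X on the mediator M
a = sm.OLS(trust_prop, sm.add_constant(deception)).fit().params[1]

# Path b: effect of M on Y, controlling for X
Xb = sm.add_constant(np.column_stack([trust_prop, deception]))
b = sm.OLS(ai_trust, Xb).fit().params[1]

# A nonzero indirect effect (a * b) is the basic signature of mediation
print(f"indirect effect (a*b): {a * b:.3f}")
```

In the paper's terms, a meaningful indirect effect would mean that trust propensity transmits part of the influence of prior experience onto trust in embodied AI, rather than the two being linked directly.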

Risk propensity also played a role, but in a more nuanced way. Social risk was more strongly associated with trust in embodied AI, while financial risk was more strongly associated with trust in disembodied AI. This pattern implies that embodied systems may evoke concerns about social harmony and relational consequences, while disembodied systems are judged more on outcome-based or financial considerations.

In the United States, the picture was different. Financial risk was strongly associated with trust in both embodied and disembodied AI. Social risk showed little consistent influence. Trust propensity predicted interpersonal trust but did not significantly mediate AI trust.

This divergence reinforces the idea that Americans tend to evaluate AI primarily through a functional lens, focusing on performance and material consequences rather than relational factors. Chinese participants, by contrast, appear more likely to interpret embodied AI as a social presence.

Honesty norms and the future of human–AI trust

The researchers also explored the role of perceived honesty norms, meaning beliefs about how honest people in society generally are and whether dishonest behavior is punished.

In China, perceived honesty norms significantly shaped how personality traits translated into trust. When participants believed that honesty norms were strong, trust propensity became the main driver of embodied AI trust. When perceived norms were weaker, risk considerations carried more weight. For disembodied AI, honesty norms moderated the link between risk propensity and trust.
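Moderation, in contrast to mediation, means that the strength of a relationship changes depending on a third variable. Statistically, it is usually tested as a regression interaction term, as in the hypothetical Python sketch below; again, the names and simulated numbers are illustrative assumptions, not the authors' materials.

```python
# Illustrative sketch only: moderation tested as a regression interaction,
# loosely mirroring the honesty-norms analysis described in the article.
# All data are simulated; nothing here comes from the study itself.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 577  # matches the study's total sample size, for flavor
df = pd.DataFrame({
    "risk_prop": rng.normal(size=n),      # risk propensity
    "honesty_norms": rng.normal(size=n),  # perceived societal honesty norms
})

# Assumed data-generating process: the risk -> trust slope weakens
# as perceived honesty norms grow stronger
df["ai_trust"] = (0.4 * df["risk_prop"]
                  - 0.3 * df["risk_prop"] * df["honesty_norms"]
                  + rng.normal(size=n))

# A significant risk_prop:honesty_norms coefficient is the standard
# statistical signature of moderation
model = smf.ols("ai_trust ~ risk_prop * honesty_norms", data=df).fit()
print(model.summary().tables[1])
```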

In the United States, honesty norms played a more limited role. They moderated the link between trust propensity and AI trust but did not significantly influence interpersonal trust or risk-based pathways.

The authors interpret this difference through the lens of cultural tightness and looseness. China is often described as a tighter culture, with stronger social norms and lower tolerance for deviance. The United States is described as looser, with greater behavioral flexibility and weaker normative constraints.

In a tighter context, social norms may shape trust decisions more broadly, extending even to interactions with artificial agents. In a looser context, norms may matter primarily when evaluating new or uncertain technologies, rather than established human relationships.

The researchers also ran additional analyses using a broader multi-item AI trust scale and controlling for participants' frequency of AI use. While some specific associations shifted, the core pattern remained intact: AI sits between intimate and distant human trust, and the mechanisms underlying AI trust differ across cultures.

Put simply, the study concludes that trust in AI is neither a copy of interpersonal trust nor something entirely separate. It emerges from an interplay of past experiences, personality tendencies, perceived risks, embodiment, and cultural norms.

Efforts to calibrate trust in AI systems may need to be culturally tailored. In contexts like China, emphasizing social integration, moral alignment, and norm compliance may resonate more strongly. In contexts like the United States, transparency about performance, financial reliability, and risk management may be more effective.

The research also carries a broader warning. As AI systems become more interactive and emotionally responsive, they may move closer to the inner circles of social trust. The authors note that some scholars have raised concerns about individuals forming deep attachments to AI companions. If trust in machines begins to rival trust in close human relationships, the psychological and social consequences could be profound.

FIRST PUBLISHED IN: Devdiscourse