Trust and fear can coexist in AI use

CO-EDP, VisionRI | Updated: 13-01-2026 17:21 IST | Created: 13-01-2026 17:21 IST

Public confidence in artificial intelligence (AI) has long been treated as a problem of reassurance. Policymakers, technology firms, and regulators often assume that public trust grows when perceived risks fall, and that acceptance follows once fear is reduced. New research from China now challenges that assumption, showing that trust in AI does not necessarily calm public concern and may instead sharpen it.

The study, Beyond Risk Reduction: Vigilant Trust in Artificial Intelligence, published in the journal Behavioral Sciences, suggests that acceptance of artificial intelligence is driven not by blind optimism but by a simultaneous recognition of both promise and peril.

Trust that sharpens attention rather than dulling concern

The study discusses a concept called vigilant trust, which rejects the idea that trust and caution sit at opposite ends of a spectrum. Instead, the authors argue that trust can operate as an active psychological stance, one that encourages engagement while maintaining alertness to uncertainty.

To test this idea, the researchers broke trust into four distinct dimensions: trusting stance, competence, benevolence, and integrity. Trusting stance reflects a general openness toward engaging with AI systems, even when information is incomplete. Competence refers to beliefs about whether AI systems are capable and effective. Benevolence captures expectations that AI systems act in users’ interests. Integrity reflects beliefs that AI systems follow accepted norms such as fairness and accountability.
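
To make the four-dimension structure concrete, here is a minimal sketch of how constructs like these are typically scored from Likert-style survey items. The item wordings, the 1-to-5 scale, and the three-items-per-dimension grouping are illustrative assumptions, not the study's actual instrument.

```python
# Hypothetical scoring of the four trust dimensions from survey items.
# Item wordings and the 1-5 Likert scale are assumptions for illustration only.
from statistics import mean

responses = {                      # one respondent's 1-5 ratings
    "trusting_stance": [4, 5, 4],  # e.g., "I am willing to engage with AI even with incomplete information"
    "competence":      [4, 4, 3],  # e.g., "AI systems are capable and effective"
    "benevolence":     [3, 2, 3],  # e.g., "AI systems act in users' interests"
    "integrity":       [3, 3, 4],  # e.g., "AI systems follow norms such as fairness and accountability"
}

# Each dimension gets its own scale score and is analyzed separately,
# which is what allows the distinct effects described below to emerge.
trust_scores = {dim: mean(items) for dim, items in responses.items()}
print(trust_scores)  # per-dimension scale scores, e.g. trusting_stance ≈ 4.33
```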

Across all four dimensions, trust consistently increased perceptions of AI’s benefits. Respondents who trusted AI more strongly expected gains in efficiency, convenience, and societal value. These perceived benefits emerged as the single strongest predictor of AI acceptance, outweighing all other psychological factors examined in the study.

The effect of trust on perceived risk, however, was far from uniform. A trusting stance did not lower risk awareness. Instead, it was associated with higher perceptions of both benefits and risks. Individuals who were more open to engaging with AI were also more likely to notice potential downsides, including concerns about privacy, control, and unintended consequences. This pattern supports the idea that openness leads to deeper cognitive engagement, not complacency.

Benevolence played a different role. When respondents believed AI systems were designed to serve human interests, their perception of risk declined. Expectations of goodwill appeared to reduce fears of exploitation or malicious intent. Competence and integrity, by contrast, did not consistently reduce risk perceptions. In some contexts, seeing AI as highly capable even increased awareness of potential harm, likely because powerful systems are perceived as having broader and more unpredictable impacts.

Together, these findings undermine the notion that trust operates primarily by dampening concern. Instead, trust reorganizes how people process information about AI, amplifying attention to both positive and negative signals. The authors describe this as a form of epistemic vigilance, where acceptance is built through scrutiny rather than its absence.

Why perceived risk does not always deter acceptance

Conventional technology adoption models assume that higher perceived risk leads to lower acceptance. In this survey, that assumption did not consistently hold.

While perceived benefits strongly and reliably increased acceptance, perceived risks did not uniformly suppress it. In the main analysis, higher awareness of AI-related risks was actually associated with greater acceptance. In other contexts, risk perception had no significant effect at all. These results suggest that risk awareness and adoption willingness can move in the same direction, rather than canceling each other out.

The researchers offer several explanations for this counterintuitive pattern. First, higher risk perception may signal deeper engagement. Individuals who actively seek information about AI are more likely to recognize both its advantages and its dangers. Their acceptance reflects informed judgment rather than ignorance.
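
A small simulation makes this first explanation concrete. It uses invented coefficients, not the study's data: when a shared engagement trait raises both perceived risk and perceived benefit, risk and acceptance end up positively correlated in the raw data even though risk's direct effect on acceptance is negative.

```python
# Illustrative simulation (assumed coefficients, not the study's data):
# a common "engagement" driver can make risk awareness and acceptance
# move together despite risk pushing acceptance down directly.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

engagement = rng.normal(size=n)                             # hypothetical latent trait
risk = 0.7 * engagement + rng.normal(scale=0.7, size=n)     # engaged people notice more risks
benefit = 0.8 * engagement + rng.normal(scale=0.7, size=n)  # ...and more benefits
acceptance = benefit - 0.2 * risk + rng.normal(scale=0.8, size=n)

# Raw association: risk and acceptance are positively correlated (~ +0.3)...
print(round(float(np.corrcoef(risk, acceptance)[0, 1]), 2))

# ...yet with benefit held fixed, risk's own coefficient is negative (~ -0.2),
# recovered here by ordinary least squares.
X = np.column_stack([np.ones(n), benefit, risk])
coef, *_ = np.linalg.lstsq(X, acceptance, rcond=None)
print(coef.round(2))  # [intercept, benefit, risk] ≈ [0.0, 1.0, -0.2]
```

The sign flip between the raw correlation and the conditional coefficient is the common-cause pattern the explanation invokes: engaged respondents see more of everything, so risk awareness travels with acceptance rather than against it.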

Second, benefits appear to dominate decision-making when both benefits and risks are salient. Even when respondents acknowledged substantial risks, strong expectations of efficiency and usefulness outweighed those concerns. Acceptance, in this sense, reflects prioritization rather than denial.

Third, the broader social context matters. In environments where AI deployment is widespread and institutionally endorsed, risks may be seen as an unavoidable feature of technological progress rather than a reason to resist adoption. In such settings, awareness of risk does not translate into rejection, especially when governance and oversight are assumed to be in place.

These dynamics help explain why public attitudes toward AI often appear ambivalent rather than polarized. People can simultaneously worry about job displacement, data misuse, or bias while still supporting AI adoption across key sectors. The study shows that this ambivalence is not a sign of confusion, but a stable psychological response to complex technologies.

Implications for AI governance and public communication

The findings carry important consequences for how governments, regulators, and technology developers approach public trust. Efforts that focus narrowly on reassuring the public by minimizing perceived risk may miss the deeper drivers of acceptance.

According to the study, acceptance is primarily built through benefit recognition, supported by trust in system competence, integrity, and benevolence. Risk awareness does not necessarily undermine this process and may even accompany it when engagement is high. This suggests that effective governance should not aim to eliminate public concern, but to support informed and reflective engagement.

Transparency, ethical safeguards, and demonstrated competence remain critical, not because they erase fear, but because they allow trust and vigilance to coexist. Communication strategies that acknowledge uncertainty while clearly articulating benefits may be more credible than those that promise safety alone.

The research also challenges technology acceptance models that treat trust as a simple risk-reduction mechanism. By showing that different dimensions of trust shape perceptions in different ways, the study provides a more realistic framework for understanding public responses to AI. Trusting stance, benevolence, competence, and integrity each play distinct roles, and none operate as a universal antidote to concern.

First published in: Devdiscourse