Assistive AI gains trust, autonomous AI raises fear
With AI systems moving into classrooms, workplaces, hospitals, and public services, a new question is emerging: why do some human-like AI systems inspire trust while others trigger suspicion or fear? The answer, new research suggests, lies in how people interpret warmth and competence in machines.
The study, "Beyond the Machine: An Integrative Framework of Anthropomorphism in AI," published in Behavioral Sciences, presents a theoretical model explaining how human-like attributes assigned to AI influence perceptions of usefulness, control, opportunity, and threat across different levels of AI autonomy.
Warmth and competence: The two dimensions that shape AI acceptance
Drawing on the Stereotype Content Model from social psychology, the authors argue that people evaluate AI systems along the same two axes they use to judge other humans. Warmth reflects perceived friendliness, empathy, sincerity, and moral intent. Competence reflects perceived intelligence, expertise, authority, and problem-solving ability.
These two dimensions, the researchers contend, are not superficial impressions. They directly shape key predictors of technology adoption derived from the Theory of Planned Behavior, the Technology Acceptance Model, and the Threat Rigidity Model. Perceived usefulness, perceived behavioral control, subjective norms, and perceptions of opportunity or threat are all influenced by how warm and competent an AI system appears.
In assistive contexts such as tutoring systems, customer service chatbots, and recommendation engines, warmth signals alignment with user goals. Friendly language, polite greetings, human-like voices, and expressions of empathy increase perceived behavioral control and reduce anxiety. Users feel more capable of interacting effectively with AI when it appears cooperative and socially attuned.
Competence in these same assistive settings enhances perceived usefulness. When AI demonstrates intelligence, anticipates needs, or provides accurate recommendations, users are more likely to see it as a valuable tool that improves performance and decision-making.
However, the authors stress that these effects are neither linear nor universally positive. Warmth and competence operate differently when AI systems move from supportive roles to autonomous decision-making. In high-autonomy contexts such as algorithmic hiring, autonomous driving, or AI-powered diagnostics, anthropomorphic cues can produce unexpected consequences.
Warmth displayed by autonomous AI may be interpreted as pseudo-empathy. Instead of signaling cooperation, it can create suspicion if users believe the system is simulating care while retaining full decision authority. Likewise, extreme competence in autonomous AI can be perceived as cognitive superiority. When AI appears more capable than its human users, it may trigger status threat and fears of replacement.
The study proposes that warmth increases perceived behavioral control and reduces threat primarily in assistive AI systems. Competence increases perceived usefulness and perceived opportunity, again more strongly in assistive contexts. Under high autonomy, both warmth and competence can backfire, intensifying perceptions of manipulation, domination, or loss of agency.
Assistive versus autonomous AI: A critical boundary condition
The paper draws a clear line between assistive and autonomous AI. Assistive systems support human decision-making but do not replace it; autonomous systems act independently and may override or supplant human judgment.
This distinction, the authors argue, functions as a boundary condition that determines whether anthropomorphism facilitates acceptance or fuels resistance. In assistive settings, AI is perceived as collaborative. Warmth cues reinforce social presence and goal alignment. Competence cues reinforce reliability and performance enhancement. Together, high warmth and high competence produce strong perceived usefulness, high perceived control, favorable subjective norms, and high opportunity perceptions with minimal threat.
In autonomous contexts, by contrast, the same cues may be reinterpreted through a threat lens. Competence can be seen as domination. Warmth can appear insincere. High levels of both warmth and competence may create an unsettling impression of an AI system that is both powerful and socially skilled, a combination that may heighten fears of status displacement.
The framework also explores intermediate combinations. High warmth combined with low competence may generate social comfort but limited perceptions of usefulness. High competence combined with low warmth may increase perceived usefulness but also increase threat and reduce perceived behavioral control. Low warmth and low competence, predictably, produce low acceptance and high threat perceptions.
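To make the framework's structure concrete, the sketch below encodes these qualitative predictions as a simple lookup over warmth, competence, and autonomy. It is a minimal illustration based only on the combinations described above; the PerceptionProfile class, the PREDICTIONS table, and the predict helper are hypothetical names of our own, not constructs from the study.

```python
# Illustrative sketch (not the authors' model): the framework's qualitative
# predictions expressed as a lookup. All names here are our own simplification.
from dataclasses import dataclass

@dataclass(frozen=True)
class PerceptionProfile:
    usefulness: str   # perceived usefulness
    control: str      # perceived behavioral control
    threat: str       # perceived threat
    note: str

# Keys: (warmth, competence, autonomy), with "high"/"low" warmth and
# competence and "assistive"/"autonomous" autonomy levels.
PREDICTIONS = {
    ("high", "high", "assistive"): PerceptionProfile(
        "high", "high", "low", "collaborative; strong acceptance"),
    ("high", "low", "assistive"): PerceptionProfile(
        "low", "high", "low", "social comfort but limited usefulness"),
    ("low", "high", "assistive"): PerceptionProfile(
        "high", "low", "elevated", "useful but cold; reduced control"),
    ("low", "low", "assistive"): PerceptionProfile(
        "low", "low", "high", "low acceptance, high threat"),
    ("high", "high", "autonomous"): PerceptionProfile(
        "high", "low", "high", "powerful and socially skilled; status threat"),
}

def predict(warmth: str, competence: str, autonomy: str) -> PerceptionProfile:
    """Return the framework's qualitative prediction for a combination,
    if this partial sketch models it."""
    try:
        return PREDICTIONS[(warmth, competence, autonomy)]
    except KeyError:
        raise ValueError("combination not covered in this sketch")

if __name__ == "__main__":
    print(predict("high", "high", "assistive"))   # acceptance pathway
    print(predict("high", "high", "autonomous"))  # threat pathway
```

The same inputs flip from the opportunity pathway to the threat pathway purely on the autonomy key, which is the boundary-condition argument of the paper in miniature.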
The framework also helps reconcile mixed findings in earlier research. Some studies have shown that anthropomorphic design increases trust, while others report reduced liking or outright resistance. According to the authors, these discrepancies arise because anthropomorphism activates both opportunity and threat pathways, and the balance between them depends on the system's autonomy level and perceived goal alignment.
Social categorization cues and identity threat
The study examines how social categorization cues such as gender, age, race, voice tone, and language style influence AI perception. Drawing on social identity and categorization research, the authors argue that users apply familiar human stereotypes to AI systems.
Gendered voices, for example, often shape warmth perceptions. Female-presenting chatbots may be judged as more friendly and forgiving, particularly in service roles. Age cues influence both warmth and competence. Younger-sounding AI may signal agility and technical proficiency, while older-sounding voices may evoke wisdom and care.
Racial cues and perceived similarity between user and AI also shape trust. Users tend to respond more positively to AI agents that appear socially similar to them. Perceived similarity increases perceived behavioral control and reduces threat in assistive contexts.
However, the framework cautions that similarity can also intensify identity threat in autonomous settings. If AI appears highly competent, warm, and socially similar, users may engage in unfavorable social comparison. Rather than fostering connection, similarity may amplify concerns about replacement or status loss.
Language style further illustrates the dual nature of anthropomorphism. Informal, personable language increases warmth and social presence. Formal, literal language enhances credibility and competence. However, excessive human-like disclosure or overly emotional interaction can trigger socially desirable responding or distrust, especially if users question the authenticity of AI expressions.
The study also integrates the Threat Rigidity Model to explain defensive reactions. When AI is perceived as threatening professional identity, autonomy, or job security, users may narrow their information processing and resist adoption. Conversely, when AI is perceived as an opportunity for performance enhancement and collaboration, users show greater flexibility and openness.
Toward responsible anthropomorphism in AI design
The authors point out that designers and organizations must calibrate anthropomorphic cues carefully. Warmth may be strategically beneficial in assistive contexts where cooperation and social presence enhance user experience. Competence cues should signal expertise without crossing into perceived dominance.
In autonomous systems, designers may need to avoid exaggerated human-like cues that suggest emotional depth or moral agency beyond what the system can realistically provide. Transparency about system capabilities and limitations becomes essential to prevent over-trust or misplaced expectations.
The authors also call for more nuanced measurement tools. Rather than relying solely on positive adjectives, future research should assess both positive and negative poles of warmth and competence, including traits such as manipulative versus sincere or incompetent versus intelligent. They suggest that extreme levels of warmth or competence may produce non-linear effects, consistent with concepts such as the uncanny valley or technology threat avoidance.
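As a concrete illustration of that measurement point, the sketch below scores a single bipolar rating item so that both the negative and positive poles register. The bipolar_score function, the seven-point scale, and the example items are hypothetical; the paper recommends measuring both poles but does not prescribe this scoring scheme.

```python
# Illustrative sketch (our construction, not the authors' instrument):
# scoring a bipolar semantic-differential item so both poles of warmth
# or competence are captured, rather than only positive adjectives.

def bipolar_score(rating: int, scale_max: int = 7) -> float:
    """Map a 1..scale_max rating between two opposing poles
    (e.g., 1 = "manipulative", 7 = "sincere") to a signed score in [-1, 1]."""
    if not 1 <= rating <= scale_max:
        raise ValueError("rating outside scale")
    midpoint = (1 + scale_max) / 2
    return (rating - midpoint) / (scale_max - midpoint)

# Example items: warmth pair ("manipulative" vs "sincere") and
# competence pair ("incompetent" vs "intelligent").
warmth = bipolar_score(2)      # leans toward "manipulative" -> negative
competence = bipolar_score(6)  # leans toward "intelligent"  -> positive
print(warmth, competence)      # -0.667, 0.667 (approximately)
```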
FIRST PUBLISHED IN: Devdiscourse

