Gender and national context influence willingness to delegate to AI


CO-EDP, VisionRI | Updated: 16-02-2026 15:04 IST | Created: 16-02-2026 14:05 IST

Artificial intelligence is rapidly moving into spaces once considered uniquely human, from offering companionship to advising on mental health and even diagnosing illness. However, public willingness to let AI take on these socially important roles varies sharply across contexts and countries. A new large-scale international study finds that delegation to AI is far from uniform and is shaped by trust, psychology, gender, and national context.

The research, titled "Who Lets AI Take Over? Cross-National Variation in Willingness to Delegate Socially Important Roles to Artificial Intelligence" and published in the journal AI & Society, analyses how people evaluate AI as a potential substitute in high-stakes social roles.

People are most willing to accept AI as a companion, moderately willing to use AI for mental health advice, more cautious about AI teaching children, and least willing to trust AI as a medical doctor. Beneath this ranking lies a deeper pattern: trust in digital information ecosystems plays a far stronger role than anxiety, loneliness, or even optimism in determining who is prepared to let AI take over.

A global hierarchy of AI delegation

The study examines four socially significant roles: AI companion, AI mental health advisor, AI teacher for children, and AI medical doctor. Across countries, willingness to delegate follows a consistent gradient of perceived risk and responsibility.

Companionship ranks highest. A substantial majority of respondents across nations indicate openness to AI serving as a social companion. The relatively low perceived stakes of companionship, combined with the increasing normalization of conversational AI, appear to make this role more socially acceptable.

Mental health advisory roles receive moderate support. While still involving sensitive personal information and emotional vulnerability, mental health support from AI is seen as less physically risky than medical diagnosis. Acceptance rates decline compared to companionship but remain substantial.

Delegation to AI teachers generates more hesitation. Education involves long-term developmental impact and responsibility for children, raising concerns about judgment, empathy, and moral guidance. Willingness drops significantly compared to companionship and mental health roles.

AI as a medical doctor receives the lowest support. Even though AI diagnostic tools are increasingly deployed in healthcare systems, respondents across countries express strong reservations about delegating full medical authority to machines. The potential consequences of error, combined with the expectation of human judgment in life-and-death decisions, contribute to lower acceptance levels.

This structured ranking suggests that public evaluation of AI delegation is closely tied to perceived risk, moral weight, and the degree of irreversibility associated with each role.

Trust in digital information as the strongest predictor

While domain differences are clear, the study’s most striking finding concerns the drivers of willingness to delegate. The researchers move beyond traditional technology acceptance models and instead test a framework combining cognitive appraisals, affective dispositions, and contextual factors.

Cognitive appraisals include trust in online information and life optimism. Affective dispositions include generalized anxiety, loneliness, and life satisfaction. Contextual factors include gender and national background.

Among all predictors, trust in online information emerges as the most powerful and consistent driver across roles. Individuals who report higher trust in online information are significantly more likely to delegate socially important roles to AI. This effect remains strong across companionship, mental health, education, and medicine.

The magnitude of the trust effect is especially pronounced in high-stakes domains such as teaching and medicine. People who trust digital information ecosystems appear more comfortable extending that trust to AI systems embedded within those ecosystems.

Life optimism also predicts greater willingness to delegate, though its effects are smaller than those of digital trust. Optimistic individuals may perceive technological change as an opportunity rather than a threat, making them more open to AI substitution.

On the other hand, affective dispositions show weaker and more role-specific effects. Loneliness modestly increases willingness to adopt AI companions but does not significantly influence attitudes toward AI doctors or teachers. Anxiety predicts slightly higher acceptance of AI in mental health and teaching roles, possibly reflecting a desire for accessible support, but this effect diminishes when trust in online information is high. Life satisfaction exerts only minor positive influence in certain domains.

The findings suggest that delegation to AI is primarily a cognitive judgment about the reliability of digital systems rather than an emotional reaction driven by personal distress.

Gender gaps and national differences

Gender differences are consistent and substantial. Across all four roles, women are less willing than men to delegate socially important functions to AI. The gap is largest in education and medicine, domains traditionally associated with care, responsibility, and vulnerability.

Importantly, these gender differences persist even after controlling for trust, optimism, anxiety, loneliness, and life satisfaction. This indicates that the divergence is not merely a reflection of different psychological profiles but may reflect deeper differences in risk perception, institutional trust, or normative expectations about technology.

National variation is also significant. Even after accounting for individual-level predictors, countries differ markedly in baseline willingness to delegate. In some countries, acceptance of AI roles is broadly high across domains, while in others skepticism remains strong.

These differences suggest that delegation to AI is shaped not only by individual beliefs but also by institutional, cultural, and governance contexts. Public exposure to digital services, regulatory frameworks, and societal narratives about AI likely influence baseline attitudes.

The study confirms that willingness to delegate contains both a general disposition toward AI substitution and domain-specific nuances. Trust and gender exert broad cross-domain influence, while emotional factors operate selectively depending on context.

Delegation is not just technology acceptance

Delegation to AI is not simply about liking or disliking technology. It reflects judgments about authority, responsibility, and social trust.

Allowing AI to act as a companion carries limited institutional consequence. Allowing AI to teach children or diagnose illness raises questions about accountability, liability, and moral judgment. The study shows that people draw clear boundaries around how far AI authority should extend.

The strong role of digital trust suggests that willingness to delegate depends on confidence in the information ecosystem surrounding AI systems. If individuals perceive online information as reliable and well-regulated, they are more likely to extend that confidence to AI-mediated roles.

On the other hand, in environments marked by misinformation, low institutional trust, or polarized discourse, delegation may face stronger resistance.

The study also highlights that emotional vulnerability alone does not drive delegation. While loneliness and anxiety matter in specific contexts, they are not dominant predictors. Delegation appears more grounded in systemic trust and perceived competence than in personal distress.

Policy and design implications

First, expanding AI into socially important roles requires building public trust in digital information systems more broadly. Transparency, accountability, and regulatory clarity may shape delegation attitudes as much as technical performance.

Second, communication strategies must recognize domain sensitivity. Public acceptance of AI companionship does not automatically translate into acceptance of AI doctors or teachers. Risk perception varies by role, and deployment strategies must account for this gradient.

Third, gender differences warrant attention. Persistent gaps suggest that AI governance and design processes should incorporate inclusive engagement to address differential risk perceptions and concerns.

Finally, national context matters. International AI deployment cannot assume uniform attitudes. Cultural, regulatory, and institutional environments shape public boundaries around AI authority.

FIRST PUBLISHED IN: Devdiscourse