How technical capacity and risk perception shape AI trust
Artificial intelligence (AI) may promise efficiency and innovation, but public trust depends on more than performance metrics. As AI systems become embedded in daily life, citizens are weighing their technical competence against potential social harm. Moonkyoung Jang has conducted one of the most detailed empirical examinations of how these competing perceptions influence trust in AI.
Published in Information, the study "Building Trust in AI: The Role of Technical Capacity, Social Risk, and Corporate Institutional Accountability" explores how cognitive capacity, perceived risks, and institutional safeguards shape public confidence. The research shows that while technical competence builds trust, fears about societal disruption and weak accountability structures significantly erode it.
Technical capacity emerges as the foundation of trust
The research separates cognitive capacity from autonomous or emotional capacity to understand how each dimension influences public confidence.
Cognitive capacity refers to AI’s analytical strength, rational decision-making ability, and problem-solving competence. This dimension emerged as the most powerful and consistent predictor of trust. Individuals who believed AI systems are capable of logical reasoning and accurate analysis were significantly more likely to express both general trust in AI and confidence in its technical components, such as training data and algorithms.
Cognitive competence appears to function as the backbone of AI legitimacy. When users perceive AI systems as technically proficient and capable of delivering accurate outcomes, they are more willing to rely on them. This reinforces longstanding trust theory, which identifies perceived ability as a primary driver of confidence in institutions and systems.
Autonomous capacity, defined as AI’s perceived ability to act independently or display human-like qualities, produced a more nuanced pattern. While it did not significantly increase overall generalized trust in AI, it positively influenced trust in specific components and in institutional actors involved in AI governance. This suggests that when AI is seen as capable of independent judgment, it may indirectly strengthen perceptions of responsibility among developers and regulators, even if it does not directly elevate broad public confidence.
By differentiating these two capacity dimensions, the study advances trust research beyond simplistic notions of technological capability. Not all forms of perceived intelligence carry equal weight in shaping public attitudes.
Social risk undermines confidence across the board
While technical competence builds trust, perceived risk erodes it. The study distinguishes between personal risk and social risk to assess how different types of concern influence public opinion.
Personal risk refers to potential direct harms, such as privacy breaches, biased decisions, or system errors affecting individuals. Social risk encompasses broader societal concerns, including job displacement, inequality, erosion of democratic institutions, and long-term harm to social cohesion.
The findings show that social risk is the most damaging factor to trust. Individuals who perceive AI as posing a threat to society or future generations report significantly lower levels of overall trust. Social risk consistently undermines trust not only in AI systems themselves but also in the institutions that develop and regulate them.
Personal risk, while still influential, has a narrower impact. It primarily reduces trust in technical components rather than in institutional actors. This distinction highlights an important psychological dynamic: broader societal fears carry more weight in shaping general trust than isolated personal concerns.
The implications are clear. Addressing technical errors alone will not be sufficient to restore public confidence if societal-level anxieties remain unresolved. Public trust is closely linked to perceptions of AI’s collective impact, not merely its individual performance.
The study situates these findings within established trust theory, which emphasizes vulnerability as a key condition for trust formation. When citizens feel vulnerable to widespread societal disruption, their willingness to rely on AI systems diminishes.
Accountability outweighs moral sympathy in building trust
The research separates moral consideration from legal and institutional recognition to understand how each influences trust. Moral personhood reflects the belief that AI deserves ethical respect or inclusion within moral frameworks. Legal or institutional recognition refers to formal governance structures that assign responsibility, accountability, and regulatory oversight to AI systems and their creators.
The results demonstrate that legal and institutional accountability significantly increases trust at all levels. Individuals who support formal regulatory frameworks and corporate accountability mechanisms show higher overall trust in AI, stronger confidence in technical components, and greater trust in actors such as companies and policymakers.
On the other hand, moral consideration alone does not significantly influence trust. Viewing AI as morally deserving of respect does not translate into measurable increases in confidence.
This distinction underscores a central insight of the research: public trust in AI is grounded in governance rather than sentiment. Citizens are more reassured by clear accountability structures than by abstract ethical recognition of AI as a moral entity.
Trust in actors and institutions appears especially sensitive to perceptions of accountability. When AI systems operate within transparent and enforceable legal frameworks, public confidence rises. Without such structures, even technically advanced systems struggle to gain legitimacy.
The study’s regression analyses and supplementary structural equation modeling confirm the robustness of these patterns. Across multiple model specifications, cognitive capacity, social risk, and legal recognition consistently predict trust outcomes. Moral personhood remains statistically insignificant, reinforcing the primacy of institutional accountability.
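The kind of regression the study reports can be illustrated with a minimal sketch. The code below fits an ordinary least squares model on synthetic data, not the study's dataset; the variable names and effect sizes are illustrative assumptions chosen only to mirror the reported directions (cognitive capacity positive, social risk negative, legal recognition positive).

```python
import numpy as np

# Hypothetical sketch on SYNTHETIC survey data; names and coefficients
# are assumptions, not the study's actual measures or estimates.
rng = np.random.default_rng(42)
n = 500

# Simulated standardized survey scales.
cognitive_capacity = rng.normal(size=n)  # perceived analytical competence
social_risk = rng.normal(size=n)         # perceived societal-level risk
legal_recognition = rng.normal(size=n)   # support for accountability frameworks

# Assumed effect directions mirror the reported findings:
# cognitive capacity (+), social risk (-), legal recognition (+).
trust = (0.6 * cognitive_capacity
         - 0.5 * social_risk
         + 0.4 * legal_recognition
         + rng.normal(scale=0.5, size=n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), cognitive_capacity,
                     social_risk, legal_recognition])
coefs, *_ = np.linalg.lstsq(X, trust, rcond=None)
intercept, b_cog, b_risk, b_legal = coefs
print(f"cognitive: {b_cog:+.2f}, social risk: {b_risk:+.2f}, "
      f"legal: {b_legal:+.2f}")
```

On simulated data like this, the fitted coefficients recover the assumed signs, which is the pattern the study describes: competence and accountability raise trust while social risk lowers it.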
Demographic patterns and broader implications
The study identifies demographic patterns that further illuminate trust dynamics. Older individuals tend to express lower overall trust in AI compared to younger respondents. Men report slightly higher trust levels than women. Higher income and more frequent interaction with AI are associated with greater confidence.
Interestingly, lower levels of AI awareness are positively correlated with trust in some models, suggesting that familiarity and skepticism may coexist in complex ways. Greater exposure to AI systems may heighten awareness of risks and limitations, influencing trust judgments.
The study makes an important conceptual contribution by disaggregating trust into three levels: overall AI trust, trust in technical components, and trust in actors. This layered approach reveals that individuals may differentiate between confidence in algorithms and confidence in the corporations or governments that deploy them.
For example, personal risk perceptions primarily affect trust in technical systems, while social risk influences both system-level and institutional trust. Legal recognition strengthens trust across all layers, demonstrating the central role of governance.
Building public trust in AI requires more than technical innovation. Demonstrating cognitive competence is necessary but not sufficient. Policymakers and corporate leaders must address societal-level concerns, particularly around inequality, employment disruption, and democratic stability.
Clear regulatory frameworks and enforceable accountability mechanisms emerge as the most direct path to strengthening trust. As AI systems expand into critical sectors, transparent oversight structures may determine whether public confidence stabilizes or declines.
- FIRST PUBLISHED IN: Devdiscourse

