Privacy fears, trust gaps fuel resistance to generative AI adoption among students
A new study of college students finds that functional benefits alone do not win over users: even highly capable generative AI tools meet resistance when learners doubt the privacy, ethical, and governance safeguards behind them.

As generative AI technologies increasingly permeate education, healthcare, and professional sectors, understanding the factors influencing user engagement has never been more critical. A recent study titled "Understanding acceptance and resistance toward generative AI technologies: a multi-theoretical framework integrating functional, risk, and sociolegal factors," published in Frontiers in Artificial Intelligence, sheds new light on how college students form their perceptions of these transformative tools.
The research, conducted by Priyanka Shrivastava from Hult International Business School, integrates three major theoretical models - the Technology Acceptance Model (TAM), Protection Motivation Theory (PMT), and Social Exchange Theory (SET) - to provide a comprehensive explanation of what drives both acceptance of and resistance to generative AI technologies among students.
What functional, risk, and sociolegal factors drive acceptance and resistance?
According to the study, functional factors play a decisive role in encouraging acceptance. Variables such as perceived usefulness, ease of use, and reliability, central to the Technology Acceptance Model, showed a strong positive correlation with students' willingness to embrace generative AI tools. The study's structural equation modeling (SEM) analysis revealed a substantial positive path coefficient (β = 0.65, p < 0.01) from functional factors to acceptance. Conversely, functional perceptions inversely correlated with resistance behaviors, meaning students who found AI tools easy to use and useful exhibited significantly less skepticism and avoidance (β = −0.32, p < 0.05).
Risk factors, derived from Protection Motivation Theory, exhibited a profound negative impact on acceptance while simultaneously increasing resistance. Privacy concerns, data security fears, and ethical apprehensions contributed heavily to resistance behaviors, with a strong positive relationship to resistance (β = 0.49, p < 0.01) and a negative link to acceptance (β = −0.22, p < 0.05). The perception of high-risk environments, particularly regarding how personal data might be misused, consistently deterred students from adopting generative AI.
Sociolegal factors emerged as pivotal mediators between functional perceptions, risk concerns, and user outcomes. Trust in AI governance, satisfaction with regulatory protections, and perceptions of fairness significantly boosted acceptance (β = 0.48, p < 0.01) and curtailed resistance (β = −0.36, p < 0.01). Moreover, the study found that sociolegal factors could indirectly reduce risk-related resistance and enhance functionality-driven acceptance through mediation pathways.
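The paper does not publish its analysis code, but the kind of structural equation model it describes can be sketched in a few lines. The snippet below is an illustrative outline only, written for the open-source semopy library in Python; the construct names, survey indicators, and data file are assumptions made for this example, not the study's actual instruments or procedure.

```python
# Illustrative sketch only: a three-construct SEM of the kind described above,
# specified in lavaan-style syntax for the semopy library. Indicator names and
# the data file are assumptions, not materials from the study.
import pandas as pd
from semopy import Model

MODEL_DESC = """
# Measurement model (assumed survey indicators)
Functional =~ usefulness + ease_of_use + reliability
Risk       =~ privacy_concern + data_security + ethical_concern
Sociolegal =~ governance_trust + regulatory_satisfaction + fairness

# Structural model: paths to acceptance and resistance
Acceptance ~ Functional + Risk + Sociolegal
Resistance ~ Functional + Risk + Sociolegal
"""

df = pd.read_csv("survey_responses.csv")  # hypothetical survey data file

model = Model(MODEL_DESC)
model.fit(df)              # estimates the structural path coefficients
print(model.inspect())     # unstandardized estimates; the paper reports standardized betas
```

Fitted this way, the coefficients on the Acceptance and Resistance equations play the role of the β values quoted above, with sign and significance indicating whether each factor encourages or deters adoption.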
How do these factors influence overall attitudes toward generative AI?
The study’s integrative model, merging TAM, PMT, and SET, demonstrated that students' attitudes toward generative AI are shaped by a dynamic interplay of functional, risk-related, and governance-related perceptions. Functional factors alone were insufficient if students remained skeptical about privacy and ethical safeguards. Even highly functional AI systems could face resistance when sociolegal trust was absent.
On the other hand, when students trusted the regulatory environment and governance structures, their concerns about risks diminished significantly. Mediation analysis confirmed that sociolegal factors lowered resistance indirectly by mitigating perceived risks (β = −0.18, p < 0.05) and enhanced acceptance by reinforcing perceptions of AI functionality (β = 0.23, p < 0.05).
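For readers unfamiliar with mediation analysis, the indirect-effect logic behind this finding can be illustrated with a simple product-of-coefficients calculation. The sketch below uses statsmodels with a bootstrap confidence interval; the column names and data file are hypothetical, and the study's own SEM-based mediation procedure is likely more involved.

```python
# Minimal illustration of product-of-coefficients mediation: how much of the
# effect of sociolegal trust on resistance flows through perceived risk.
# Column names and data are assumed; this is not the paper's exact procedure.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def indirect_effect(df, x, mediator, y):
    """Return a*b: the effect of x on y transmitted through the mediator."""
    a = sm.OLS(df[mediator], sm.add_constant(df[x])).fit().params[x]
    b = sm.OLS(df[y], sm.add_constant(df[[x, mediator]])).fit().params[mediator]
    return a * b

df = pd.read_csv("survey_responses.csv")  # hypothetical composite scores

# Point estimate plus a simple percentile-bootstrap confidence interval
point = indirect_effect(df, "sociolegal_trust", "perceived_risk", "resistance")
boot = [indirect_effect(df.sample(frac=1.0, replace=True).reset_index(drop=True),
                        "sociolegal_trust", "perceived_risk", "resistance")
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A negative indirect effect whose confidence interval excludes zero would mirror the study's conclusion that stronger sociolegal trust lowers resistance partly by reducing perceived risk.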
The study also emphasized that attitudes toward AI technologies are not static. As AI tools evolve and as governance frameworks adapt to address emerging concerns, user perceptions are likely to shift. Hence, the paper calls for longitudinal research to capture how behavioral adaptation unfolds over time.
Additionally, ethical concerns, such as algorithmic bias, ownership of AI-generated content, data privacy, and labor market disruption, were highlighted as underlying influences on risk perception. The research urged policymakers and developers to take proactive steps in addressing these issues to foster a more balanced and inclusive AI ecosystem.
What strategies can mitigate resistance and enhance acceptance among students?
Actionable recommendations arising from the findings suggest a two-pronged strategy involving both technological enhancement and robust governance reforms. For developers and AI providers, the priority is clear: improve perceived functionality by enhancing usability, ensuring reliability, and demonstrating tangible usefulness. Proactive privacy protections, clear data policies, and user-centric design are critical to alleviating privacy and ethical concerns that fuel resistance.
Policymakers and institutions must focus on strengthening trust through transparent governance, comprehensive regulatory frameworks, and clear ethical standards. The study advocated for bias mitigation strategies, transparent AI auditing mechanisms, enhanced data ownership rights, and workforce reskilling initiatives to address AI's broader societal impact.
The study cited regulatory models such as the European Union's General Data Protection Regulation (GDPR) and the U.S. AI Bill of Rights as examples of how comprehensive governance frameworks can bolster trust in AI systems and facilitate responsible adoption.
At the institutional level, universities and educational policymakers are advised to implement AI literacy programs, ethical use guidelines, and platforms for open dialogue between developers, users, and regulators. These measures would not only inform students about the benefits and risks of AI but also foster a culture of ethical, responsible, and equitable technology use.
Despite the robust model and rich insights, the study acknowledged its limitations. The reliance on a student sample restricts the generalizability of findings across broader populations, such as working professionals or entrepreneurs, whose motivations and risk perceptions might differ. Future research should address these gaps by targeting diverse demographics and exploring longitudinal changes in AI attitudes over time.