Higher education is unprepared for the AI revolution: Here's why
Universities around the world are rapidly integrating artificial intelligence into teaching, assessment, research, and administration, but evidence suggests higher education is ill-prepared for the scale and speed of this transformation. With generative AI tools becoming embedded in classrooms and decision-making systems, institutional responses remain fragmented, underregulated, and unevenly distributed across regions, raising concerns about academic integrity, equity, and the future role of human judgment in education.
The study “TransformED Futures: Towards Human-Centred, Ethical and Inclusive Use and Governance of AI in Higher Education,” published in World Futures Review, examines in detail how generative and predictive AI are reshaping higher education and why current governance frameworks are failing to safeguard educational values, institutional autonomy, and social inclusion.
Rapid AI expansion exposes structural weaknesses in higher education
The study traces the growing role of artificial intelligence in higher education over the past three decades, noting that AI-assisted systems were initially introduced to support administration, data analytics, and adaptive learning. However, the release of accessible generative AI tools marked a turning point, dramatically lowering the barrier to use for students, educators, and institutions alike.
Universities increasingly rely on AI to automate admissions processes, personalize learning, assist grading, detect plagiarism, manage knowledge systems, and support leadership decision-making. These applications promise efficiency and scalability, particularly in systems facing staff shortages, rising enrollment, and financial constraints. Yet the research highlights a critical imbalance: technological adoption has advanced faster than institutional capacity to regulate, evaluate, and ethically integrate these tools.
Many universities lack adequate digital infrastructure, technical expertise, and policy coherence to manage AI responsibly. In the absence of clear governance, institutions have adopted contradictory approaches. Some imposed blanket bans on AI use, while others allowed unrestricted adoption without safeguards. Both responses, the study argues, fail to address the structural nature of the challenge.
The research also points to a lack of robust evidence supporting many claims about AI’s educational benefits. While studies suggest potential gains in engagement, personalization, and administrative efficiency, large-scale empirical research on long-term learning outcomes, equity, and cognitive development remains limited. This evidence gap has not slowed adoption, creating a policy vacuum in which decisions are driven by urgency rather than understanding.
Ethical risks, academic integrity, and global inequality
The study characterizes generative AI systems as opaque black boxes, producing outputs that cannot be fully traced, explained, or verified. This opacity poses particular dangers in high-stakes academic contexts such as assessment, grading, admissions, and research authorship.
The study documents growing concerns around plagiarism, fabricated content, misinformation, and authorship ambiguity. AI’s strong performance in standardized assessments has intensified pressure on traditional evaluation methods, forcing universities to reconsider how learning is measured and verified. At the same time, detection tools designed to identify AI-generated content have shown high error rates, raising the risk of false accusations and undermining trust between students and institutions.
Beyond academic integrity, the research identifies broader ethical challenges related to bias, data privacy, and fairness. AI systems trained on large datasets often reproduce linguistic, cultural, and racial biases, with disproportionate effects on students from marginalized backgrounds. Predictive AI used in admissions and analytics is flagged as particularly high-risk due to its potential to reinforce structural discrimination.
The study places strong emphasis on global inequality. Universities in the Global South face unique vulnerabilities, including limited infrastructure, funding constraints, and dependence on externally developed AI platforms. Without inclusive governance frameworks, AI adoption risks deepening the global digital divide and accelerating what the study describes as digital colonization, where educational priorities, languages, and epistemologies are shaped by external technological power.
English-language dominance in AI-generated educational content further threatens linguistic diversity and local knowledge systems. The study warns that unchecked AI integration may undermine institutional autonomy, marginalize non-Western perspectives, and entrench asymmetries within the global knowledge economy.
A human-centred roadmap for governing AI in universities
To address these challenges, the study proposes a four-step roadmap aimed at reshaping how universities govern artificial intelligence. At its core is a call for a paradigm shift: institutions must move from being passive consumers of AI tools to active stewards of their design, use, and regulation.
The first step involves rethinking how AI is integrated into teaching, learning, and assessment. Rather than treating AI as a shortcut to efficiency, universities are urged to align its use with educational values such as critical thinking, creativity, and human interaction. This includes redesigning assessment methods to emphasize interpretation, reflection, and collaboration: areas where human judgment remains central.
The second step focuses on strengthening the evidence base. The study argues that universities and policymakers must invest in large-scale, longitudinal research to evaluate AI’s real impact on learning outcomes, equity, mental well-being, and institutional governance. High-risk applications, including automated grading and admissions screening, require continuous monitoring and auditing to prevent unintended harm.
The third step highlights the importance of AI literacy. Educators, administrators, and students must be equipped with the skills to understand how AI systems work, what their limitations are, and how to use them responsibly. Developing digital citizenship and algorithmic literacy is presented as essential to preserving human agency in AI-mediated environments.
The final step focuses on international cooperation and inclusion. The study calls for greater involvement of universities from the Global South in shaping AI policies and standards. Ethical governance frameworks must reflect diverse contexts rather than imposing uniform solutions. Collaboration between academia, governments, civil society, and technology providers is identified as critical to building trust and accountability.
First published in: Devdiscourse

