Equity, not detection, will decide AI’s future in higher education
Higher education institutions are entering a decisive phase in their response to generative artificial intelligence, with a new multi-university study warning that approaches focused on surveillance and misconduct detection are rapidly collapsing under practical and pedagogical pressure. The research shows that faculty are now redirecting attention toward redesigning assessment, restructuring workload, and embedding equity into the use of AI, marking a fundamental shift in how universities attempt to align technology with Sustainable Development Goal 4 on inclusive and high-quality education.
The study, “From Policing to Design: A Qualitative Multisite Study of Generative Artificial Intelligence and SDG 4 in Higher Education”, is published in the journal Sustainability and draws on 28 semi-structured interviews, focus groups and document analysis across three public universities in India, involving 36 academics from diverse disciplines.
The research reveals a marked movement away from punitive measures and toward design-driven approaches that prioritise inclusion, critical thinking and transparent learning processes.
AI becomes a tool for inclusion when human oversight and institutional funding are guaranteed
The study identifies a major shift in how faculty view the relationship between AI and inclusion. Instead of seeing AI solely as a source of risk, many educators describe its potential to expand access and narrow structural inequalities. According to the researchers, faculty repeatedly point to the way generative AI can accelerate the creation of accessible learning materials, including the rapid production of transcripts, simplified explanations, multilingual content and alternative formats for students with reading difficulties.
However, the research makes clear that these benefits depend on two conditions. First, all AI-generated content must undergo human verification to prevent errors, bias and misalignment with course goals. Second, institutions must ensure universal access to approved AI tools to prevent the emergence of a two-tier learning system in which privileged students gain advantages through paid premium tools while others fall behind.
The findings show that faculty consider institutional funding a critical factor for equity. Without it, AI-driven inclusion efforts risk reinforcing the very inequalities SDG 4 aims to eliminate. The study highlights that inclusive design is not automatic but requires coordinated policy action, infrastructure investment and ongoing professional support for educators.
Assessment emerges as the epicentre of conflict in AI-driven classrooms
Across the study’s three universities, assessment consistently surfaces as the most difficult challenge associated with AI integration. Traditional academic integrity tools that rely on detection, authentication and surveillance are viewed as increasingly ineffective. Faculty report that detection mechanisms fail to keep pace with the rapid evolution of generative tools, making punitive strategies unstable and legally questionable.
Instead of escalating policing, the study shows a clear pivot toward assessment redesign. Educators emphasise tasks that require students to demonstrate reasoning, judgement, process documentation and context-specific decision-making. Examples include reflective rationales, step-by-step prompt histories, error logs, justification notes and tasks anchored in local or discipline-specific scenarios that cannot be outsourced to generic AI systems.
While some educators maintain invigilated, hand-written examinations for narrow gateway competencies, such as quantitative methods, technical skills or language proficiency, the majority focus their efforts on assessment modes that foreground cognitive process and provenance rather than polished output. This shift represents a structural transformation in how learning is measured, signalling a break from decades of assessment orthodoxy.
Workload does not decrease with AI; it shifts and intensifies
Many public narratives depict AI as a labour-saving tool for teachers. The research demonstrates the opposite. Faculty report that generative AI accelerates drafting and content generation but shifts effort into new forms of invisible labour that institutions do not track or compensate.
These new labour categories include verifying AI outputs, testing prompts, curating datasets, checking for bias, providing ethical guidance, managing student confusion, and evaluating the appropriateness of AI use in individual tasks. The teaching role expands from content delivery to verification, mentoring and complex judgement.
This shift raises structural questions for universities. Existing workload models do not reflect the new responsibilities created by AI integration. The study argues that without formal recognition of this expanded labour, institutions risk creating unrealistic expectations and overburdening staff.
The authors highlight the need for updated workload policies that incorporate verification time, guidance responsibilities, continuous professional development and the administrative effort required to maintain equitable AI-enabled learning environments.
Governance, equity and trust depend on clear red lines and transparent structures
The study finds that institutional governance is another major determinant of whether AI supports or harms educational quality. Faculty express a strong preference for policies that provide simple, principle-based guidelines rather than prescriptive lists of prohibited behaviours that become obsolete within months. Effective policies must articulate non-negotiables, such as maintaining academic judgement, protecting student privacy and ensuring access to non-AI alternatives, while leaving room for disciplinary discretion.
The researchers identify several governance practices that build trust:
- Institution-funded access to approved AI tools
- Privacy-by-design infrastructures that protect student data
- Non-AI alternatives for all tasks without penalties
- Routine bias testing and transparency reporting for approved tools
- Clear communication of the pedagogical purpose for any AI-enabled activity
Faculty describe these practices as essential for aligning AI adoption with SDG 4 targets, particularly those related to equity, quality and safe learning environments.
Trust emerges as a central theme. Without transparent rules and institutional support, educators hesitate to integrate AI meaningfully, and students experience uncertainty about what is allowed, leading to inconsistent practices across courses.
Professional identity is being rebuilt around judgement, design and care
The study documents a significant cultural shift in how faculty view their professional roles. Generative AI’s ability to produce content rapidly forces educators to rethink what constitutes expertise. As drafting becomes more automated, faculty increasingly identify their expertise with diagnostic judgement, curricular design, ethical reasoning and the relational aspects of teaching.
This shift is not merely symbolic. Faculty note that students often struggle with interpreting AI outputs, identifying errors and understanding when a generated answer is misleading or incomplete. Teachers describe a growing responsibility to guide students in understanding how knowledge is produced, validated and attributed in an AI-rich environment.
The study argues that this redefinition of identity aligns with broader goals within SDG 4, particularly the emphasis on high-quality teaching, inclusive learning environments and lifelong learning skills. Teachers are not replaced by AI; rather, their responsibilities evolve toward areas where human judgement is irreplaceable.
Conditions for advancing SDG 4 through AI: Four commitments and a 12-month framework
Generative AI will support SDG 4 only when universities restructure their strategies around equity, pedagogy and accountability. The research distils this into four institutional commitments:
- Equitable access to AI tools and low-bandwidth alternatives
- Assessment redesign that foregrounds thinking, judgement and provenance
- Workload reforms that recognise verification and mentoring labour
- Transparent governance with routine bias testing and clear red lines
The study also outlines a 12-month implementation plan for institutions, involving staged rollouts of tools, routine feedback cycles, evidence-gathering on workload effects and public transparency notes documenting accuracy, bias and usage patterns.
First published in: Devdiscourse

