Generative AI is changing curriculum development inside higher education institutions

CO-EDP, VisionRI | Updated: 23-01-2026 12:22 IST | Created: 23-01-2026 12:22 IST

Large language models (LLMs) are no longer limited to experimental teaching aids or peripheral classroom tools. New academic evidence suggests they are beginning to change the way university faculty design curricula, generate ideas, and pursue innovation.

A newly published study titled “Large Language Models and Innovative Work Behavior in Higher Education Curriculum Development”, appearing in the journal Administrative Sciences, provides an in-depth analysis of how tools such as ChatGPT influence innovation among university faculty members. The research moves beyond adoption rates and technical capabilities to examine how perceptions of AI usefulness and usability translate into concrete innovative behaviors in academic work.

How perceived value of AI reshapes faculty innovation

The study is based on the Technology Acceptance Model, a long-standing framework used to explain why individuals adopt new technologies. Rather than stopping at adoption, the researchers extend the model to examine innovative work behavior, a concept that captures how individuals identify opportunities, generate new ideas, promote those ideas within organizations, and reflect on their practices to improve outcomes.

Drawing on survey data from 493 faculty members across five universities in Saudi Arabia, the research finds that perceived usefulness of large language models is the strongest predictor of innovation-oriented behavior in curriculum development. Faculty who believe LLMs meaningfully improve teaching quality, decision-making, and curriculum planning are significantly more likely to engage in opportunity exploration, idea generation, idea promotion, and reflective improvement.

This finding has major implications for higher education leaders. The results suggest that innovation does not emerge simply because AI tools are available or easy to use. Instead, innovation accelerates when faculty clearly see how LLMs add academic value. When instructors perceive that AI can streamline complex tasks, support data-informed curriculum decisions, and expand pedagogical possibilities, they become more willing to experiment, propose new approaches, and advocate for AI-supported change within their institutions.

The study also demonstrates that perceived usefulness has a stronger influence on innovation than perceived ease of use across all measured behaviors. While usability matters, faculty innovation is driven more by outcomes than convenience. This pattern reflects the demanding nature of innovative academic work, which requires sustained cognitive effort, institutional engagement, and professional risk-taking. Faculty are more willing to invest that effort when the benefits of AI adoption are clear and substantial.

Ease of use lowers barriers but does not drive innovation alone

While perceived usefulness dominates, perceived ease of use still plays a significant role. The research shows that faculty members who view large language models as intuitive and low-effort tools are more likely to explore new applications, generate ideas, promote AI-supported initiatives, and reflect on their teaching practices. Ease of use reduces cognitive and procedural barriers, freeing instructors to focus on higher-level creative and analytical tasks.

The findings highlight an important distinction for universities deploying AI technologies. Ease of use facilitates engagement, experimentation, and sustained interaction with AI tools, but it does not automatically produce innovation. Instead, usability functions as an enabler that supports innovation once perceived value is established.

In practical terms, this means that institutions should not assume that user-friendly interfaces alone will lead to transformative outcomes. Training programs that focus only on how to operate AI tools may fall short if they do not also demonstrate how those tools enhance teaching quality, curriculum coherence, and academic decision-making. Faculty need to understand not just how to use LLMs, but why they matter.

The study’s results also suggest that ease of use supports reflective practices, an often-overlooked dimension of innovation. Faculty who find AI tools easy to integrate are more likely to assess their own teaching methods, evaluate outcomes, and adjust strategies over time. This reflective dimension is critical for sustainable innovation, as it links experimentation with continuous improvement rather than one-off adoption.

What the findings mean for universities and policymakers

The study also offers broader insights into how higher education systems can foster AI-driven innovation responsibly and effectively. By empirically linking technology acceptance to innovative work behavior, the research provides evidence that institutional strategies matter as much as technical capabilities.

For university leaders, the findings point to the need for policies that emphasize academic value creation. Demonstrating concrete use cases where large language models improve curriculum design, assessment practices, and learning outcomes is likely to be more effective than generic AI adoption mandates. Faculty innovation thrives when AI is framed as a tool that enhances professional judgment rather than replaces it.

The research also draws attention to institutional support structures. Innovation is more likely when faculty feel encouraged to share ideas, advocate for new approaches, and reflect openly on successes and failures. Creating environments that reward experimentation, cross-disciplinary collaboration, and reflective learning can amplify the positive effects of AI adoption.

From a policy perspective, the study provides timely evidence as governments and regulators debate the role of generative AI in education. Rather than focusing solely on risks or efficiency gains, the findings highlight the human and behavioral dimensions of AI integration. AI tools influence academic innovation not simply through automation, but by reshaping how educators think, explore opportunities, and engage with institutional change.

The study suggests that faculty agency remains central in AI-enabled education. Innovation does not emerge automatically from algorithmic power; it emerges when educators align AI tools with pedagogical goals, professional values, and institutional missions.

A data-driven view of AI’s role in curriculum transformation

Perceptions of usefulness and ease of use together explain a substantial share of variation in innovative work behavior. This predictive strength adds weight to the argument that AI adoption and innovation are closely linked, but not interchangeable.

Importantly, the research avoids framing large language models as autonomous drivers of change. Instead, it positions them as cognitive and decision-support tools whose impact depends on how faculty interpret and apply them. This perspective aligns with growing calls for responsible AI integration that prioritizes human judgment, transparency, and educational values.

The study also addresses a gap in existing literature. While previous research has examined AI adoption in education or innovation in academic work, few studies have empirically connected the two. By integrating the Technology Acceptance Model with innovative work behavior theory, the research provides a more complete picture of how AI tools influence not just whether faculty adopt technology, but how they innovate with it.

FIRST PUBLISHED IN: Devdiscourse