AI readiness in public sector driven more by people than technology

CO-EDP, VisionRI | Updated: 20-08-2025 18:29 IST | Created: 20-08-2025 18:29 IST

New research reveals that technology alone is not the decisive factor in determining artificial intelligence (AI) readiness in the public sector. Instead, human and organizational elements shape the ability of government institutions to integrate AI effectively.

A study published in Systems, “A Dual-Level Model of AI Readiness in the Public Sector: Merging Organizational and Individual Factors Using TOE and UTAUT,” introduces a decision-making framework that quantifies AI readiness across public-sector organizations. By merging the Technology–Organization–Environment (TOE) framework with the Unified Theory of Acceptance and Use of Technology (UTAUT), the authors present a dual-level model that prioritizes both structural and individual adoption factors.

What does the study reveal about AI readiness in public administration?

The researchers developed their model to address a gap in the way public-sector AI readiness is usually assessed. Traditional models have often emphasized infrastructure, data, and technical capabilities while overlooking how employees perceive and accept AI. To correct this imbalance, the team integrated the TOE framework, which accounts for technological, organizational, and environmental aspects, with UTAUT, which highlights personal attitudes toward technology adoption.

This combined framework was operationalized through the Analytic Hierarchy Process (AHP), in which multiple managers provide pairwise comparisons of criteria. These inputs were then aggregated into weighted priorities, yielding a readiness score from 0 to 100 percent that is classified into five levels, from Initial to Optimized.
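The AHP step works by deriving priority weights from a pairwise comparison matrix and checking that the judgments are internally consistent. The sketch below illustrates the standard eigenvector method with a hypothetical, perfectly consistent matrix for the four dimensions (technology, organization, environment, individual); the numbers are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale for four
# dimensions: technology, organization, environment, individual.
# Entry A[i][j] expresses how much more important criterion i is than j.
A = np.array([
    [1.0, 1/2, 1.0, 1/2],
    [2.0, 1.0, 2.0, 1.0],
    [1.0, 1/2, 1.0, 1/2],
    [2.0, 1.0, 2.0, 1.0],
])

def ahp_weights(A):
    """Priority weights: normalized principal eigenvector of A."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()

def consistency_ratio(A, w):
    """Saaty consistency ratio; CR < 0.10 is conventionally acceptable."""
    n = len(A)
    lam = (A @ w / w).mean()             # estimate of principal eigenvalue
    ci = (lam - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # random index for n criteria
    return ci / ri

w = ahp_weights(A)
# This matrix is perfectly consistent, so w ~ [0.167, 0.333, 0.167, 0.333]
# and the consistency ratio is ~0.
print(np.round(w, 3), round(consistency_ratio(A, w), 3))
```

In the study, each manager's comparison matrix is checked for consistency in this way before the individual weight vectors are aggregated into group priorities.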

The study was conducted in Slovenian municipalities through a two-round field survey involving 29 managers, mostly experienced professionals with strong ICT self-assessments. Their responses were analyzed for consistency, reliability, and sensitivity. The results showed a clear pattern: individual readiness, which includes voluntariness of use and behavioral intention, outweighed technological capacity as the most important determinant of readiness.

Which factors carry the most weight in determining AI readiness?

The findings demonstrate that public-sector AI readiness is not a simple matter of acquiring new systems or data platforms. Instead, the human dimension emerged as decisive. In the second round of testing, individual readiness accounted for 32.3 percent of the total weight, followed by organizational factors at 29.5 percent, environmental conditions at 20.1 percent, and technology at just 18 percent.

Within these categories, several criteria stood out. Voluntariness of use received the single highest weighting at 20 percent, followed by social influence at 12.5 percent and behavioral intention to use AI at 12.4 percent. This suggests that employees are more likely to embrace AI if they perceive adoption as voluntary and supported by their peers, rather than imposed top-down.
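In a dual-level AHP model, a criterion's global weight is its category weight multiplied by its local weight within that category. The sketch below back-calculates this for the two individual-level criteria; the category weights are the study's second-round figures, but the local weights are assumptions chosen to reproduce the reported global values, not numbers from the paper.

```python
# Second-round category weights reported by the study.
category_w = {
    "individual": 0.323,
    "organizational": 0.295,
    "environmental": 0.201,
    "technological": 0.181,
}

# Hypothetical local weights within the individual dimension, chosen so
# the resulting global weights approximate the reported 20.0% and 12.4%.
local_w = {
    "voluntariness_of_use": ("individual", 0.62),
    "behavioral_intention": ("individual", 0.38),
}

# Global weight = category weight x local weight.
global_w = {c: category_w[cat] * lw for c, (cat, lw) in local_w.items()}
print({c: round(v, 3) for c, v in global_w.items()})
```

This multiplicative structure explains how an individual-level criterion like voluntariness can dominate the global ranking even though the individual dimension itself holds only about a third of the total weight.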

On the organizational side, innovation and readiness for change emerged as the strongest factor, followed by staff expertise and leadership. Costs and resource concerns ranked lowest, indicating that cultural and strategic readiness play a greater role than budgetary issues in driving adoption.

Environmental influences also mattered, with social influence outranking direct pressures from citizens or state institutions. On the technology side, data quality and availability remained the top priority, ahead of system functionality and effort expectancy.

The global ranking across all criteria revealed that voluntariness, social influence, and behavioral intention consistently held the top three positions across both survey rounds, underscoring the dominance of human-related factors over purely technical dimensions.

How can public institutions use this model to guide AI adoption?

The dual-level model provides public managers with a transparent, systematic tool to diagnose their institution’s AI readiness and identify capability gaps. Unlike rigid maturity models, it adapts to specific contexts and highlights where investments and reforms are most urgently needed.

For example, if voluntariness of use ranks low in an organization, leaders may need to focus on creating more participatory adoption processes. If innovation and readiness for change are weak, organizational culture and leadership approaches may need restructuring. If data availability is the constraint, technical upgrades must be prioritized. The model’s design ensures that readiness assessments do not overemphasize technology at the expense of people and organizational culture.
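The diagnostic use described above can be sketched as a weighted aggregation followed by banding. The 20-point bands and the three intermediate level names below are assumptions; the study's classification names only the endpoints, Initial and Optimized.

```python
def readiness_score(weights, scores):
    """Aggregate per-criterion scores (0-100) with AHP weights summing to 1."""
    return sum(weights[c] * scores[c] for c in weights)

def classify(score):
    """Map a 0-100 readiness score to one of five maturity levels.
    Band boundaries and intermediate names are illustrative assumptions."""
    if not 0 <= score <= 100:
        raise ValueError("score must be in [0, 100]")
    for upper, name in [(20, "Initial"), (40, "Repeatable"),
                        (60, "Defined"), (80, "Managed"), (100, "Optimized")]:
        if score <= upper:
            return name

# Hypothetical diagnosis: low voluntariness drags overall readiness down
# even when data quality and leadership score reasonably well.
weights = {"voluntariness": 0.20, "data_quality": 0.10, "leadership": 0.70}
scores = {"voluntariness": 30, "data_quality": 80, "leadership": 60}
total = readiness_score(weights, scores)
print(total, classify(total))  # 56.0 Defined
```

Because the per-criterion scores are kept visible before aggregation, a manager can see not just the overall level but which criterion (here, voluntariness at 30) is the binding constraint.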

The framework’s application goes beyond diagnostics. It helps public institutions allocate resources, prioritize training, and design change management strategies. It also fosters a shared understanding among stakeholders by making the decision process transparent and based on structured expert judgment.

Broader implications and future directions

While the study was based on a relatively small sample, the results resonate with challenges seen across governments worldwide. Many administrations invest heavily in AI technologies without adequately addressing the softer dimensions of trust, leadership, and employee engagement.

The study also points to the need for stronger governance mechanisms. As AI becomes embedded in decision-making, issues of fairness, accountability, and citizen trust will gain prominence. The authors suggest extending the model in future research to include additional dimensions such as public acceptability, ethical considerations, and trust in AI systems.

Furthermore, the research recommends applying the framework across different countries and sectors to validate its robustness. Cross-national comparisons could reveal how cultural and institutional differences influence AI readiness. Longitudinal studies could also track how readiness evolves as institutions move from pilot projects to full-scale deployments.

  • FIRST PUBLISHED IN:
  • Devdiscourse