From productivity to privacy: What Saudis really think about generative AI


CO-EDP, VisionRI | Updated: 30-01-2026 11:28 IST | Created: 30-01-2026 11:28 IST
  • Country: Saudi Arabia

New national survey data show that generative AI is no longer a niche or experimental technology in Saudi Arabia. Instead, it is rapidly becoming part of daily personal, educational, and professional routines, while also raising clear concerns about skills, trust, privacy, and workforce impact.

The study, "Generative AI in Saudi Arabia: A National Survey of Adoption, Risks, and Public Perceptions," released as an arXiv preprint, assesses how generative AI is being adopted, understood, and judged by the Saudi public at a pivotal moment in the country’s digital transformation.

Rapid adoption driven by everyday utility, not deep technical understanding

The survey data show that generative AI adoption in Saudi Arabia accelerated sharply after 2022, aligning with the global release and mainstream visibility of large language models such as ChatGPT. More than nine in ten respondents reported using generative AI tools, with the majority engaging with them on a weekly or daily basis. Daily use alone accounted for nearly half of active users, underscoring how quickly these tools have moved from novelty to routine.

Usage is strongest among younger adults, particularly those aged 18 to 28, but frequent engagement is not limited to youth. Middle-aged and older respondents also reported regular use, indicating that generative AI is diffusing across the adult lifespan rather than remaining confined to digitally native cohorts. Despite this broad uptake, most users reported modest weekly time commitments, typically five hours or less, suggesting that generative AI is being used as a productivity aid rather than an all-consuming platform.

The primary driver of adoption is practical value. The most common uses involve text-based and knowledge-focused tasks, including research assistance, writing support, summarization, and brainstorming. Learning new concepts and organizing information also ranked highly. By contrast, more advanced or technical applications such as programming support, data analysis, and multimodal content generation remain secondary. This pattern reflects a public that is using generative AI first where it delivers immediate and visible efficiency gains.

Context matters. Personal use leads, followed closely by learning and academic support, with workplace use slightly behind. Many respondents reported using generative AI across multiple contexts, blending personal exploration with study and job-related tasks. This cross-context use suggests that generative AI skills are developing informally, outside structured training or formal institutional frameworks.

Despite widespread use, the study highlights a significant gap between adoption and understanding. While most participants reported average to high awareness of what generative AI can do, technical understanding lagged behind. Fewer respondents felt confident explaining how these systems work or understanding their underlying mechanisms. Awareness was strongest around capabilities and limitations, indicating that users have a practical sense of strengths and weaknesses even if they lack deeper technical knowledge.

The research found that awareness grows through use rather than formal education. Frequent users demonstrated significantly higher understanding than occasional users, regardless of academic background. Employment sector also played a role, suggesting that certain professional environments offer more exposure and informal learning opportunities. By contrast, factors such as education level, general trust, or concern about risks were not strong predictors of awareness once usage was accounted for.

Productivity gains offset by concerns over skills, trust, and misinformation

Perceptions of impact form the core of the study’s findings, revealing a public that largely views generative AI as beneficial while remaining cautious about its broader consequences. Respondents consistently reported positive effects on task speed and efficiency, making faster task completion the most widely recognized benefit. Improved understanding of complex information followed closely, reinforcing the role of generative AI as a learning and comprehension aid.

Creativity and skill development also scored positively, with many users reporting that generative AI helps generate ideas and improve output quality. These perceived gains align with international evidence showing that generative AI can reduce routine workload and support knowledge work across sectors.

However, enthusiasm is not uniform across all dimensions of work and cognition. Perceived benefits were notably weaker when respondents assessed critical thinking and decision confidence. Fewer than half reported strong positive effects in these areas, indicating unease about relying on AI for judgment-heavy tasks. This pattern points to a growing awareness that while generative AI can support thinking, it may not strengthen, and may even weaken, independent reasoning if used without care.

Trust remains measured rather than absolute. Most users reported reviewing AI-generated outputs rather than accepting them at face value. Concerns about content accuracy and hallucinated information were among the most widely shared anxieties, cutting across age, education, and employment categories. This makes misinformation and reliability the most broadly distributed concern in the Saudi sample.

Other risks show clearer demographic patterns. Younger users expressed stronger skepticism toward AI outputs, while mid-career adults showed heightened sensitivity to privacy and data protection. University-educated respondents, particularly those with bachelor’s and postgraduate degrees, were more concerned about overreliance on AI, decline in personal skills, and long-term job security. Sector-based differences also appeared, with perceptions of job displacement varying by industry.

Trust itself emerged as the single most consistent barrier to wider adoption. Lack of trust varied significantly by age and sector, alongside practical barriers such as unclear use cases, difficulty learning how to use tools effectively, and technical constraints. These findings suggest that adoption is not being limited by access alone, but by confidence, clarity, and perceived reliability.

Data-sharing behavior further illustrates this cautious engagement. While most respondents avoided sharing highly sensitive information, a notable minority disclosed personal data such as email addresses or dates of birth. Students and unemployed respondents were more willing to share personal data than employed users, reflecting differences in risk awareness and institutional norms. Employees showed the lowest disclosure rates, suggesting that workplace standards and professional accountability shape more conservative behavior.

Training demand, governance gaps, and the Vision 2030 alignment challenge

Beyond adoption and risk, the study provides insight into what Saudi users want next from generative AI. Demand for training is strong: more than half of respondents expressed definite interest in structured generative AI training, with another significant share indicating possible interest. The most desired training focus was not generic AI literacy, but applied use within specific professional or academic fields.

Privacy protection and data preservation ranked almost as highly, underscoring how closely adoption is tied to trust and governance. Foundational knowledge of generative AI and skills for evaluating output quality also attracted substantial interest, reflecting a public that wants both practical competence and critical oversight capabilities.

Training preferences clustered naturally. Respondents who wanted domain-specific training also tended to want foundational understanding, output evaluation skills, and ethical guidance. Privacy training frequently overlapped with interest in legal and ethical aspects of AI use. These patterns suggest that piecemeal instruction may be less effective than integrated training programs that combine application, theory, and responsibility.

The qualitative responses reinforce this picture. Participants largely expressed cautious optimism, emphasizing productivity and time-saving benefits while warning against overuse and blind reliance. Calls for moderation, verification, and balanced use were common. Many respondents stressed that generative AI should support, not replace, human skills.

Education emerged as a central priority, particularly in schools and universities. Respondents highlighted opportunities for generative AI to improve learning and teaching, provided that ethical boundaries and skill development are maintained. Healthcare, government services, and public administration were also identified as areas where generative AI could deliver social value if implemented responsibly.

Cultural and linguistic alignment featured prominently. Participants emphasized the need for high-quality Arabic language support and tools that reflect Saudi values and social norms. Several responses linked generative AI development directly to national goals, framing it as part of the Vision 2030 project rather than a purely commercial or technical endeavor.

At the policy level, the findings point to a clear governance challenge. While Saudi Arabia has already moved to establish national AI guidelines and infrastructure through bodies such as the Saudi Data and Artificial Intelligence Authority, public perceptions suggest that awareness, training, and trust-building must keep pace with technological rollout. The uneven distribution of concerns across age, education, and sector indicates that one-size-fits-all approaches may fall short.

Overall, Saudi Arabia stands at an early but decisive stage of generative AI integration. Adoption is already widespread and normalized, especially for text-based productivity and learning tasks. Yet awareness remains uneven, trust is cautious, and concerns about skills erosion, privacy, and misinformation are firmly present.

  • First published in: Devdiscourse