AI creates new digital divide between access, use and influence


Artificial intelligence is moving into the daily routines through which people seek, filter and judge news. While younger people report higher general use of AI tools, older adults are more likely to use AI specifically for news and current information, raising fresh questions about how generative AI is reshaping access to public knowledge, according to new research published in the journal AI.

The study, titled "From AI Access to AI Influence: Who Uses AI for News, Who Is Concerned About It, and What Are the Implications for the Multi-Level Digital Divide," examines who uses AI for news, who uses it to detect misinformation, who worries about AI's political and social influence, and whether these patterns fit the long-standing theory that digital tools tend to benefit younger, wealthier and more educated users first.

Age, gender and education remain significant predictors of AI-based news use when examined together, but the relationships are modest. Income, often treated as a central factor in digital inequality, does not show a consistent independent effect in the main regression models. The results suggest that the next phase of the digital divide may not be defined only by who has access to AI, but by who uses it to interpret news, simplify information and assess claims in a fragmented media environment.

AI news use cuts across expected age and education patterns

Younger participants reported higher overall AI use, which aligns with familiar patterns in digital adoption. But when the focus narrowed to news and current information, older respondents were more likely to report using AI tools. Age was also positively associated with using AI to identify fake news and with concern about AI's influence on political and social attitudes.

This gap matters because general AI use and news-related AI use are not the same behavior. Younger users may turn to AI more often for work, study, entertainment or daily tasks, but older users may find AI useful as a tool for making sense of news. The study suggests that AI's ability to summarize, explain and organize complex information could make it attractive to users who want support in navigating dense or confusing information environments.

The pattern complicates standard digital divide expectations. In earlier waves of internet and digital media adoption, older adults were often positioned as less likely to use advanced technologies. The author's analysis does not overturn that broader pattern, because younger people still report higher general AI use. But it shows that the picture changes when the question is not whether people use AI at all, but what they use it for.

Education produced another counterintuitive result. Respondents with higher formal education reported lower levels of AI-based news use and lower use of AI for detecting fake news. In the regression model predicting AI-based news use, education had a negative association, meaning that higher educational attainment was linked to less reported reliance on AI for this specific purpose. The same pattern appeared in the model predicting AI use for misinformation detection.

This finding does not mean that AI is closing educational gaps or producing better-informed users. It measures self-reported practices and perceptions, not actual knowledge, accuracy or news literacy outcomes. However, it does suggest that people with less formal education may be more inclined to use AI as a support tool for news comprehension, while more educated users may rely on other methods, have greater skepticism, or feel less need for AI assistance in news interpretation.

The effect sizes, however, are small. The models explain only a limited share of the variance in AI-based news behavior. That means demographic factors matter, but they do not fully explain who uses AI for news. The author argues that future research should examine other drivers, including AI literacy, trust, perceived usefulness, ease of use and digital skills. These factors may be more important than income or education alone in explaining why some people use AI to process news and others do not.
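The kind of multivariable regression described above can be sketched with synthetic data. Everything below is illustrative: the variable coding, effect sizes and noise level are assumptions chosen only to mirror the reported directions (older and female respondents higher, more educated lower, income flat), not the study's actual data or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic demographic predictors (illustrative only)
age = rng.uniform(18, 80, n)
female = rng.integers(0, 2, n)          # 0 = male, 1 = female
education = rng.integers(1, 6, n)       # ordinal: 1 = low .. 5 = high
income = rng.integers(1, 6, n)          # ordinal: 1 = low .. 5 = high

# Simulated outcome echoing the reported directions: age and female positive,
# education negative, income with no independent effect, plus large noise.
y = 0.02 * age + 0.3 * female - 0.2 * education + rng.normal(0.0, 1.0, n)

# Ordinary least squares via numpy (intercept in the first column)
X = np.column_stack([np.ones(n), age, female, education, income])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
r2 = 1 - resid.var() / y.var()

print("coefficients (intercept, age, female, education, income):", beta.round(3))
print("R^2:", round(r2, 3))
```

In a setup like this the demographic coefficients come out statistically detectable, yet the model still leaves most of the variance in the outcome unexplained: a low R-squared alongside significant predictors is exactly the "modest effects" pattern the study reports.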

Women report slightly higher use of AI for news and misinformation checks

Women report slightly higher use of AI tools for consuming news and for identifying fake news. In multivariable models, being female remained a significant predictor of AI-based news use and AI use for detecting fake news, even when age, education and income were considered at the same time.

That pattern differs from older assumptions that men tend to adopt advanced digital technologies more readily. In this case, the accessible and conversational format of AI tools may be changing the relationship between gender and technology use. AI systems that respond in plain language, summarize information and allow follow-up questions may reduce some barriers linked to confidence, technical skill or platform familiarity.

The study does not suggest a sharp divide between men and women. Instead, it points to specific contexts in which women reported slightly greater engagement with AI, especially for news consumption and misinformation detection. The finding matters because misinformation has become a central concern in digital public life, and AI tools are increasingly marketed or used as aids for checking claims, identifying false information and comparing sources.

The results also raise questions about trust. Using AI to identify fake news requires some level of confidence that the tool can help distinguish reliable from unreliable information. Yet AI systems themselves can produce errors, reflect bias or generate misleading answers. This creates a new literacy challenge: users must evaluate not only the original news item, but also the AI system's response to it.

The author frames this issue through the paired concepts of "AI-access" and "AI-influence." AI-access refers to the extent and form of AI use for news and information. AI-influence refers to how users perceive the effect of AI on their social and political attitudes. The difference is crucial because a person can use AI frequently without believing it shapes their views, or worry about AI influence without using it often.

The findings show that perceived AI influence is only weakly tied to demographic characteristics. In the regression model predicting concern about AI's influence on political and social attitudes, age was the only significant predictor. Older respondents reported slightly higher perceived influence, while gender and education were not significant predictors. Income showed only a marginal negative association, suggesting that higher-income respondents may report lower perceived influence, but the study does not treat that as a strong independent effect.

Digital divide debate shifts from access to influence

The study challenges a simple access-based view of digital inequality. Earlier digital divide debates often focused on who had internet access, devices and basic technical skills. Over time, researchers expanded the framework to include differences in usage patterns and outcomes. The author applies that multi-level framework to AI, arguing that AI creates a new stage in the divide: unequal ability to use AI not only to access information, but to interpret and evaluate it.

Traditional search engines and social media platforms already shaped public exposure to information through algorithms. But generative AI adds another layer because it can produce explanations, summaries and judgments. A user may ask an AI tool to explain a political issue, simplify a conflict, summarize a policy debate or assess whether a claim is misleading. In each case, AI is not just delivering information. It is shaping the form in which the information is understood.

The research does not claim that AI directly changes political attitudes. It measures perceived influence, not actual attitude change. This is an important limit. Users may overestimate or underestimate AI's effect on their views. Self-reported survey data cannot show whether AI caused a belief to shift, whether it improved understanding, or whether it made misinformation more persuasive. The author presents the findings as exploratory associations, not causal evidence.

The survey was conducted in Israel in October 2025 through a nationally diverse online panel, with the sample aligned to key demographic benchmarks, including sector, age, gender, region and education. Because the study relies on an online panel and focuses on one national context, the results should not be treated as globally generalizable. Patterns of AI news use may differ in countries with different media systems, political climates, levels of AI access, language availability and trust in institutions.

First published in: Devdiscourse