Higher education must close AI skills gap to support inclusive innovation
Artificial intelligence may not automatically make higher education more equal, despite its expanding role in learning, research and innovation. A new study of 1,020 Chinese university students finds that family cultural capital still shapes students' innovative capacity in the AI era, partly by influencing whether they develop the practical AI skills needed to turn ideas, information and academic opportunities into visible innovation outcomes.
The study, titled "Family Cultural Capital and University Students' Innovative Capacity in Higher Education: The Mediating Role of AI Literacy and Implications for Sustainable Development Goal 4," was published in Sustainability. The paper shows that students' family background remains linked to innovation-related achievement in AI-mediated higher education, while technical AI application skills serve as a key pathway through which social advantage is converted into academic and creative capacity.
AI is changing higher education, but family background still matters
AI is rapidly changing the way university students search for information, organize knowledge, write, translate, solve problems, conduct research and develop ideas. AI-assisted retrieval tools, automated feedback systems, writing support platforms, translation software and problem-solving applications are becoming part of everyday academic practice. Universities increasingly present these tools as engines of educational innovation, with the promise that AI can lower technical barriers, improve access to knowledge and help students participate more effectively in research and creative work.
The study challenges any assumption that wider access to AI automatically produces fairer education. The authors argue that technological expansion may create new opportunities, but it can also reproduce old inequalities in new forms. Students do not arrive at university with equal family resources, equal confidence, equal learning habits, equal exposure to cultural knowledge or equal ability to evaluate and use emerging technologies. As AI becomes more central to higher education, these differences may shape who benefits most from digital transformation.
The researchers examine this problem through the lens of family cultural capital, a concept rooted in the work of Pierre Bourdieu. Family cultural capital refers to the resources, habits, dispositions and forms of cultural familiarity that students acquire from their home environment. In the study, this includes both cultural resources and embodied cultural capital. Cultural resources include access to educational information, academic databases, cultural gifts, books, tools and family-based networks related to further education. Embodied cultural capital refers to habits, preferences and dispositions linked to cultural participation, such as interest in museums, concerts, art exhibitions, literature and classical music.
The key question is whether these family-based cultural advantages still influence innovative capacity when AI tools are widely available. The findings show that they do. Cultural resources had a significant direct effect on students' innovative capacity, suggesting that access to educational and cultural support from family environments still helps students produce research, academic work, practical solutions and innovation-related outcomes. Embodied cultural capital, however, worked differently. It did not have a significant direct effect on innovative capacity, but it had a significant total effect because it strongly predicted AI literacy, especially technical application skills.
This distinction suggests that family background is not only giving some students direct academic advantages. It is also helping them acquire the skills needed to use AI productively. In other words, cultural advantage is being converted into AI-related capability. Students from culturally richer family environments may be better positioned to understand how AI tools can support learning, how to use them for academic tasks and how to turn their use into research participation, problem-solving and creative output.
The study therefore adds a warning to debates about AI in education. Access to AI tools is not the same as equal ability to use them well. Students may all have access to similar platforms, but the capacity to apply them critically, strategically and effectively may still depend on deeper social and cultural conditions. This is the AI-era version of digital inequality: the gap is not only about who has devices or internet access, but about who has the cultural preparation to use technology for meaningful educational gain.
The researchers also link their findings to Sustainable Development Goal 4, which calls for inclusive, equitable and quality education. The study argues that if AI is to support SDG 4, universities must look beyond tool adoption and focus on fair capability development. Otherwise, AI may increase the efficiency and innovation output of already advantaged students while leaving deeper educational inequalities intact.
Technical AI skills turn cultural advantage into innovative capacity
The study defines AI literacy as students' ability to understand, evaluate and use AI critically and responsibly across different contexts. It separates AI literacy into two dimensions: technical application skills and awareness of the social impact of AI. Technical application skills include the ability to modify AI tool parameters, troubleshoot common technical problems and train custom AI models for discipline-related tasks. Awareness of social impact includes understanding how AI may affect social inequality, interpersonal relationships and human responsibility in AI development.
The results show that these two dimensions do not play the same role. Technical application skills significantly mediated the relationship between both dimensions of family cultural capital and innovative capacity. This means that students with stronger family cultural capital were more likely to develop practical AI skills, and those skills were linked to higher innovative capacity. The pathway was especially strong for embodied cultural capital, indicating that students' cultural habits and dispositions may become valuable when they help students use AI tools for academic and innovation-related work.
On the other hand, awareness of the social impact of AI did not significantly mediate the relationship between family cultural capital and innovative capacity. This does not mean ethical or social awareness is unimportant. The authors stress that reflective AI literacy remains central to responsible education. However, in the model tested, practical AI skills were more directly linked to innovation-related outcomes than broader awareness of AI's social effects.
This finding reflects how innovation is often measured in higher education. The study assessed innovative capacity through research participation, intellectual property output and innovative self-efficacy. These are practical and visible forms of achievement. Students who can use AI tools to gather information, develop ideas, draft academic material, solve technical problems or support discipline-specific tasks may be more likely to show measurable innovation. Students who understand AI's social risks may develop important ethical judgment, but that awareness may not immediately translate into research output, patents, prototypes, academic reports or self-reported innovation performance.
The study found that cultural resources directly predicted technical AI application skills but did not significantly predict awareness of AI's social impact. Embodied cultural capital, however, strongly predicted both technical application skills and social awareness. This suggests that different forms of family cultural capital shape AI literacy in different ways. Practical family resources may help students acquire usable AI skills, while deeper cultural dispositions may support both practical and reflective dimensions of AI understanding.
The researchers used structural equation modeling to test these pathways. The model showed that cultural resources had a significant direct effect on innovative capacity, while embodied cultural capital influenced innovation mainly through AI technical skills. Technical application skills were therefore the central bridge between family background and innovation in AI-mediated higher education.
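The mediation logic the researchers tested can be illustrated with a small simulation. The sketch below is not the authors' actual model or data: it generates synthetic standardized scores, hard-codes hypothetical path strengths, and estimates the classic mediation paths (a: family capital to AI skill; b: AI skill to innovation; c': the direct path) with ordinary least squares, so the indirect effect a×b plays the role of the "AI skills as bridge" pathway described above. A full structural equation model would estimate latent variables and all paths jointly; this is only a minimal illustration of the decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1020  # matches the study's sample size, but the data here are synthetic

# Hypothetical standardized variables (invented path strengths for illustration)
cultural_capital = rng.normal(size=n)                          # X: embodied cultural capital
ai_skill = 0.5 * cultural_capital + rng.normal(size=n)         # M: technical AI skills (true a = 0.5)
innovation = (0.4 * ai_skill + 0.1 * cultural_capital
              + rng.normal(size=n))                            # Y: innovative capacity (b = 0.4, c' = 0.1)

def ols(predictors, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Path a: cultural capital -> AI skill
a = ols([cultural_capital], ai_skill)[1]
# Paths b and c': regress innovation on both AI skill and cultural capital
_, b, c_prime = ols([ai_skill, cultural_capital], innovation)

indirect = a * b            # effect transmitted through AI skills
total = indirect + c_prime  # total effect of cultural capital on innovation
print(f"a={a:.2f}, b={b:.2f}, c'={c_prime:.2f}, "
      f"indirect={indirect:.2f}, total={total:.2f}")
```

In a pattern like the study's result for embodied cultural capital, the indirect effect (a×b) dominates while the direct path c' is small: family background matters mostly because it predicts the skill that in turn predicts innovation.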
This has major implications for universities. If practical AI competence becomes a new route to research productivity, academic confidence and innovation, then unequal AI literacy may become a new form of educational inequality. Students whose family environments have already cultivated independent learning, cultural familiarity, academic confidence and access to information networks may be quicker to turn AI into a tool for creative and academic advancement. Students with weaker cultural support may have access to AI but may use it more passively or narrowly.
The findings also challenge universities to rethink how they evaluate innovation. If institutions reward visible outputs such as research participation, patents, prototypes and academic products, students with stronger technical AI application skills may gain an immediate advantage. Yet responsible innovation also requires ethical reflection, critical judgment and awareness of social consequences. If universities fail to value these reflective dimensions, they may encourage tool-driven productivity without enough attention to fairness, bias and social responsibility.
The authors argue that AI literacy should not be treated as a single skill or as simple tool familiarity. It includes operational competence, critical judgment and ethical awareness. However, the study shows that different dimensions of AI literacy may produce different educational outcomes. Universities must therefore design AI education that develops both technical ability and social responsibility, rather than assuming one will automatically produce the other.
Universities must make AI literacy part of equitable innovation policy
The study offers a direct policy message: AI literacy should become a structured educational objective, not an informal skill that students are expected to acquire on their own. If universities leave AI learning to individual initiative, students with stronger family cultural capital are likely to benefit first and most. This could widen gaps in research participation, innovation confidence and academic opportunity.
The authors recommend that AI-related training move beyond short workshops on tool use. Universities should incorporate AI literacy into formal teaching, including instruction on prompt design, source verification, bias recognition, discipline-specific applications and responsible academic judgment. These forms of training are particularly important for students with weaker family-based cultural support, because unequal starting points can quickly become unequal outcomes in AI-supported learning environments.
The study also calls for AI literacy to be embedded into innovation education. In academic writing courses, students can be taught to compare AI-generated drafts with their own work, identify factual errors, strengthen weak arguments and explain how they revised content. In interdisciplinary project courses, students can use AI tools to investigate real-world sustainability and innovation challenges while documenting where human judgment remains necessary. These approaches would frame AI not as a replacement for thinking, but as a tool that must be checked, interpreted and responsibly used.
Faculty members are key to this shift. The study argues that teachers are not only users of educational technology but designers of learning environments. They must set clear boundaries for acceptable AI use, model critical evaluation of AI outputs and design assignments that reward reasoning, interpretation and verification rather than simple content generation. Without such guidance, students with stronger prior knowledge and support resources are more likely to use AI effectively, while others may become passive users of generated material.
Students also need clear responsibility. They should be trained to use AI for exploration, feedback, revision and problem-solving, while remaining accountable for the accuracy, ethics and final quality of their work. This is especially important as universities struggle with how to distinguish legitimate AI-supported learning from overreliance or academic misconduct.
At the policy level, the study argues that digital transformation in higher education must balance innovation with equity. Expanding platforms, software and infrastructure is not enough. Universities and policymakers must also address the unequal social conditions that affect students' ability to benefit from AI. This means designing inclusive curricula, offering targeted support to students from less-privileged backgrounds and recognizing AI-related capability as part of quality education.
The study is especially relevant to SDG 4 because it shows that AI-enabled education cannot be judged only by technological adoption. Inclusive and equitable education requires attention to how students develop capabilities, who receives meaningful support and whether innovation systems reduce or reproduce disadvantage. AI may help students access information and complete tasks more efficiently, but the benefits remain socially shaped.
Higher education institutions should treat AI literacy as part of future-oriented skill development. As labour markets and knowledge systems become more AI-mediated, students need more than basic digital access. They need the ability to evaluate AI outputs, use tools for complex problem-solving, understand limits and risks, and apply technology in creative and responsible ways.
The authors acknowledge limitations. The study relied on non-probability online snowball sampling and voluntary participation, meaning the sample may not fully represent all university students. The data were cross-sectional, so the results show associations rather than definitive causal pathways. The study also relied on self-reported questionnaire data, which may be affected by social desirability or common method bias.
Despite these limitations, the study makes a clear case that AI's role in higher education cannot be separated from social inequality. Family cultural capital remains relevant in the AI era, but its influence is changing. It works not only through direct educational advantage but also through students' ability to acquire and apply AI-related skills.
Universities must act before AI literacy becomes another hidden marker of privilege. If students are left to learn AI unevenly, those with stronger cultural and family support may continue to pull ahead. If institutions build structured, inclusive and responsible AI education, they can turn digital transformation into a tool for broader participation.
First published in: Devdiscourse