Global AI boom forces media scholars to confront new power shifts
A new analysis published in Global Media and China examines how the rise of generative artificial intelligence is transforming the foundations of media and communication studies, arguing that the discipline must confront an urgent set of structural, economic, and cultural disruptions as AI systems become embedded in the global digital infrastructure. The commentary outlines how rapid advances in generative models, cloud platforms, and national AI strategies are reshaping communication systems in ways that demand a fundamental re-evaluation of scholarly frameworks.
The study, “Three Challenges for Media and Communication Studies in the Age of AI,” identifies three interconnected challenges: the economic entanglement between GenAI and platform capitalism, the widening gap between AI hype and real-world practice, and the uncertain future of non-profit or public alternatives to Big AI. These challenges, the author argues, require scholars to reconsider long-standing assumptions about media power, labor, cultural production, data governance, and global inequalities.
With AI increasingly positioned as both an economic engine and a geopolitical tool, the study highlights the pressing need for critical research that explains how this transformation is unfolding and who stands to benefit or lose.
How platform capitalism became the backbone of generative AI
According to the author, the rapid expansion of generative AI is not occurring in isolation. Instead, it is layered directly on top of cloud infrastructures built by major U.S. and Chinese companies over the past two decades. These same corporations already dominate search, social media, advertising, retail logistics, and digital content distribution. Now they control the computational backbone required to train, scale, and embed GenAI systems across industries.
This convergence turns cloud computing into a strategic chokepoint. Any company wishing to develop or deploy large language models depends on the computational capacity owned by Amazon, Google, Microsoft, Alibaba, Baidu, and Tencent. These firms provide not only server capacity but also specialized AI hardware, software pipelines, monitoring tools, and vast engineering support. The result is a deepening of corporate concentration, where even fast-growing AI startups rely on deals with cloud giants, often trading equity, preferred access, or data interoperability for the ability to train models at scale.
The infrastructure powering GenAI is enormously expensive, creating a widening gap between firms that can participate in cutting-edge AI development and those that cannot. At the same time, this dependence redefines value generation. Training data, derived from the public internet and user-generated content, becomes a global resource extracted by corporations with little direct compensation to the individuals, communities, and labor systems that produced it. Even as cloud providers profit from selling computational power, much of the foundational content that fuels GenAI originates from unpaid or unrecognized digital labor.
Another issue identified in the study is the unresolved economic model underlying GenAI. Many firms are investing heavily in model training, server infrastructure, and application integration without clear paths to sustainable revenue. Advertising alone cannot support these costs, and subscription-based revenue remains unproven at scale. As a result, cloud providers and AI developers continue to burn capital at unprecedented levels. The long-term implications for markets, labor, and content production remain unclear, intensifying the need for scholarly scrutiny into the political economy of AI.
The author argues that the discipline must now track how these infrastructures shape global power relations, how new divisions of labor emerge around AI development, and how the economic logic of platform capitalism extends into new domains through the integration of GenAI in retail, manufacturing, healthcare, finance, and cultural industries. Understanding these entanglements, he asserts, is central to mapping how meaning, data, and value circulate in the age of AI.
Why AI hype complicates research and obscures global realities
The author stresses that media and communication scholars face the difficult task of analyzing AI systems within an environment dominated by oversized expectations, speculative narratives, and geopolitical ambitions.
Generative AI is surrounded by a sense of urgency and inevitability. Policymakers, corporations, and investors portray AI as a transformative force that will reshape national security, labor markets, cultural production, and scientific discovery. Narratives about artificial general intelligence, global AI races, and economic supremacy create a climate in which AI advancement is framed as essential to national competitiveness. These narratives influence regulations, strategic investments, and public sentiments even though many claims about AI capabilities lack grounding in empirical evidence.
The study highlights how this hype can drive policy decisions, accelerate capital flows, and reshape industry behavior long before technologies are fully understood. Governments in the U.S., China, France, and Germany have produced national strategies that position AI as central to economic resilience, geopolitical advantage, and technological progress. These strategies often assume linear, inevitable growth, sidelining debates about ethics, labor conditions, or structural inequalities.
At the same time, the author warns that much public debate remains centered on developments in the U.S. and China, ignoring how GenAI is adopted and adapted across diverse global contexts. AI technologies do not manifest uniformly across regions; they are shaped by local political economies, regulatory frameworks, cultural practices, and communication infrastructures. Scholars must therefore avoid applying Western frameworks universally, as this risks reproducing the same colonial knowledge hierarchies that postcolonial and decolonial scholars have critiqued for decades.
In many countries, AI is introduced into cultural industries, educational systems, public services, and creative labor markets in ways shaped by social relations of race, class, gender, and language. GenAI systems inevitably reflect the biases, values, and worldviews embedded in their training data. As a result, AI-assisted meaning-making carries cultural and political implications that vary by region. Understanding these dynamics requires situated research that examines who gets access to AI tools, how communities negotiate or resist AI integration, and how global power structures influence technical design.
This challenge, the study argues, demands methodological shifts, encouraging researchers to incorporate comparative, transnational, and collaborative approaches that genuinely reflect the global diversity of AI practices.
Can public and non-profit AI alternatives survive in a big tech–dominated landscape?
The third challenge the author raises concerns the future of public-interest and non-profit alternatives to Big AI. As large corporations push toward ever-bigger models, consuming more energy, data, and computational power, critical scholars have called for smaller, locally curated, and ethically governed AI systems. These smaller models could reduce environmental impact, mitigate biases, and align more closely with community needs.
The study explains why these proposals are compelling. Large language models often overrepresent dominant languages, cultures, and worldviews, sidelining the linguistic and cultural diversity that characterizes most of the world. Smaller, carefully curated models could be developed for specific languages or cultural contexts, designed with transparent data selection processes and closer engagement with affected communities. These models might better protect vulnerable groups, limit extractive data practices, and support digital sovereignty.
Yet the study stresses that these alternatives face structural barriers that mirror longstanding issues in digital platform ecosystems. Public or non-profit platforms often fail to scale due to inadequate funding, limited engineering support, and weak integration with mainstream consumer technology. Even when technically strong, they lack the resources to match the seamlessness, reliability, and user experience offered by commercial platforms backed by multibillion-dollar infrastructures.
As a result, many promising public-interest technologies remain niche or experimental. Without sustained investment, institutional support, and community adoption, smaller ethical AI models risk following the same trajectory. The challenge is not only building these tools but ensuring they can operate at scale, maintain performance, and attract real user engagement.
To sum up, building credible alternatives requires a coordinated effort connecting researchers, civil society, policymakers, and local communities. It also requires acknowledging that the current political economy of AI favors concentration, creating an environment in which public-interest AI may struggle to survive without systemic reforms.