How big tech is influencing the future of AI regulation worldwide

CO-EDP, VisionRI | Updated: 12-12-2025 13:03 IST | Created: 12-12-2025 13:03 IST

Corporate actors across China, Germany and the United States are exerting far-reaching influence over how artificial intelligence will be governed worldwide, strategically crafting narratives designed to protect their interests while shaping public expectations and regulatory frameworks, according to a new study published in Big Data & Society.

Titled “Strategising imaginaries: How corporate actors in China, Germany and the US shape AI governance,” the research traces how 30 major corporations and industry associations have worked since 2017 to steer the global conversation around AI ethics, responsibility and regulation.

The analysis, which examines 102 corporate documents spanning six years, concludes that companies are not simply responding to emerging AI regulations; they are actively defining the ideals, visions and governance concepts that policymakers, publics and international institutions adopt.

The study reveals a coordinated pattern in which global tech firms develop and deploy competing narratives, or “socio-technical imaginaries,” to emphasize their expertise, widen their autonomy, and ensure industry-led solutions dominate future AI governance structures. According to the authors, these imaginaries serve a strategic function: they help companies minimize compliance costs, navigate geopolitical differences and shape institutional frameworks long before formal regulation takes hold.

Competing AI futures: How corporate narratives diverge across regions

The study maps dominant AI imaginaries across the three global power centers. The researchers show that China, Germany and the United States each host influential companies promoting distinct visions for how AI should be integrated into society, governed and justified. Despite these regional differences, the visions share a common foundation: they depict AI as a powerful, transformative force and position corporations as moral and technical authorities capable of steering the technology responsibly.

In China, two imaginaries dominate corporate discourse. The first, often promoted by digital giants such as Tencent, Huawei, SenseTime and Baidu, frames AI as a driver of societal development, linked to national ambitions for an “intelligent society” and aligned with global sustainability goals. This narrative presents AI as a tool for economic modernization, public-service improvement and long-term social betterment.

The second imaginary, “Trustworthy AI,” is shaped in close collaboration with government agencies, universities and industry alliances. It places emphasis on safety, explainability, privacy and accountability, and is materialized through industry standards and evaluation frameworks. Both imaginaries reinforce China’s state-centric approach, but they also strategically elevate corporate actors as essential partners in building the country’s AI governance model.

German corporations adopt two different yet interconnected imaginaries. Traditional industry leaders such as Siemens, SAP, Bosch, Deutsche Telekom and Volkswagen project themselves as pioneers in applying and operationalizing European AI principles, especially those tied to the EU’s ethics guidelines and forthcoming AI Act. This imaginary stresses compliance, human-centered design and leadership in ethical technology development.

In contrast, Germany’s startup ecosystem and industrial associations promote an “AI made in Europe” narrative grounded in economic sovereignty and competitive advantage. This version of European AI stresses the need for differentiated regulation, reduced burdens on small and medium-sized enterprises, and investment in indigenous data infrastructure. Both imaginaries appeal to Europe’s identity as a regulatory superpower but diverge in their interpretation of how much constraint is appropriate.

In the United States, the study finds a striking degree of narrative consistency across companies such as Google, Microsoft, Amazon, Facebook/Meta, IBM, Intel, Palantir and OpenAI. The dominant imaginary, “Responsible AI,” champions innovation-led governance and positions American companies as global leaders capable of developing voluntary standards and internal ethics frameworks that reduce the need for government intervention. This discourse frames AI’s benefits as extending to all of humanity, with companies presenting themselves as custodians of the public interest while pushing back against regulatory mandates that might slow technological progress. The authors note that American corporate materials consistently emphasize technical solutions to risks, advocating fairness tools, transparency techniques and internal review systems as substitutes for binding regulation.
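To make concrete what such “fairness tools” typically measure, here is a minimal sketch in plain Python, a generic illustration rather than code from the study or any named company’s toolkit. It computes the demographic parity difference, one of the most common metrics these voluntary frameworks report.

```python
# Hypothetical illustration of the kind of metric a corporate
# "Responsible AI" fairness toolkit reports; not code from the study.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups.

    predictions: 0/1 model outputs
    groups:      group label ("a" or "b") for each prediction
    """
    rates = {}
    for g in ("a", "b"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)  # assumes both groups present
    return abs(rates["a"] - rates["b"])

# Toy example: the model approves 3 of 4 applicants in group "a"
# but only 1 of 4 in group "b".
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value near zero signals similar approval rates across groups. In the industry-led model the study describes, numbers like this feed internal review systems rather than binding external oversight.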

These regional imaginaries reveal striking differences in focus and intent, but the study finds that companies routinely shift between them, sometimes even within the same organization, to maintain strategic advantage in an environment marked by fast-changing political pressures.

Hedging strategies: How corporations navigate regulatory pressure and public trust

The study identifies what the authors call “hedging imaginaries.” This strategy allows companies to promote multiple, sometimes contradictory, visions of AI governance, enabling them to maintain flexibility while navigating uncertainty. By doing so, corporations present themselves as aligned with public expectations, government goals and global values, even as they work to minimize oversight.

In China, large firms simultaneously advance optimistic narratives about AI-driven prosperity while actively shaping national standards for trustworthy AI. They promote visions of AI that align with the state’s development goals while positioning themselves as indispensable architects of governance frameworks.

German companies hedge by embracing EU regulatory ambitions publicly while joining industry associations that lobby for reduced restrictions behind the scenes. This dual approach reinforces their leadership role in setting European standards while avoiding the burdens of overly rigid legislation.

In the United States, hedging takes a more symbolic form. Companies frequently reference terms such as “Trustworthy AI,” “AI for Good” and “Responsible AI” interchangeably, often without detailed definitions. This rhetorical flexibility allows them to signal alignment with global ethical debates while advancing their preferred model of voluntary, industry-led governance. According to the study, this strategy helps American corporations maintain narrative dominance across international contexts, subtly influencing how governments and civil society organizations interpret AI principles.

Across all three regions, hedging imaginaries operate as sophisticated tools of corporate strategy. They help companies manage stakeholder relationships, deflect criticism, reduce regulatory exposure, and strengthen their leadership roles in emerging governance ecosystems. Far from being mere marketing language, these imaginaries carry material consequences, shaping standards, technical tools, certification systems and governance practices that will define AI’s global future.

Building governance infrastructure: The hidden power of corporate design

Corporate imaginaries do not stop at narrative framing; they directly shape the technical and institutional infrastructure that will govern AI systems for decades. The researchers document how companies participate in the creation of standards, toolkits, evaluation methods and certification systems that, once widely adopted, become de facto governance mechanisms.

In China, governance infrastructure has become increasingly layered, integrating national laws while relying heavily on corporate-designed risk-assessment tools, data-governance practices and technical frameworks for trustworthy AI. German corporations contribute to EU policy formation not only through compliance but by building standardization networks and certification systems that institutionalize their preferred governance interpretations. In the United States, major firms establish open-source toolkits, fairness frameworks, explainability libraries and internal audit structures that shape global development practices and influence policymakers. These examples show that corporate actors are designing the future foundations of AI oversight long before governments finalize their regulatory responses.
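As a concrete, and again hypothetical, example of what an “explainability library” packages, the sketch below implements permutation feature importance, a common model-agnostic technique: each input feature is scored by how much the model’s accuracy drops when that feature’s values are randomly shuffled.

```python
# Hypothetical sketch of permutation feature importance, a technique
# of the kind explainability libraries offer; not code from the study
# or any specific corporate toolkit.
import random

def permutation_importance(model, X, y, n_features):
    """Score each feature by the accuracy lost when its column is shuffled."""
    def accuracy(rows):
        return sum(model(r) == label for r, label in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(n_features):
        column = [row[j] for row in X]
        random.shuffle(column)
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
        importances.append(baseline - accuracy(X_perm))
    return importances

# Toy model that only consults feature 0, so feature 1 should score ~0.
def model(row):
    return 1 if row[0] > 0.5 else 0

random.seed(0)  # reproducible shuffles
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, n_features=2))
```

Once a tool like this becomes the default way developers audit models, its definition of “explained” quietly becomes part of governance practice, which is precisely the dynamic the authors describe.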

Such influence, the authors argue, exceeds traditional lobbying. By defining what responsible AI looks like, and embedding these definitions into technical infrastructures, companies transform their imaginaries into durable governance mechanisms. The study warns that this dynamic risks sidelining alternative perspectives, constraining democratic debate and reinforcing corporate power in the global AI landscape.

  • FIRST PUBLISHED IN: Devdiscourse