Global companies struggle to fully disclose AI risks in ESG reports
New research suggests that while ESG reporting has become the primary channel for communicating artificial intelligence-related information, it is still far from adequate in capturing the full scope of AI’s influence on business and society.
The findings come from the study “Pathways to Green AI: Information Disclosure of Artificial Intelligence Within the ESG Framework of Commercial Entities,” published in Sustainability and authored by Junkai Chen of the University of Chinese Academy of Sciences. The research provides a comprehensive global analysis of how companies disclose AI-related information within ESG frameworks and identifies major structural gaps in transparency, governance, and accountability.
The findings reveal a fragmented landscape where disclosure practices vary widely depending on regulatory environments, market expectations, and corporate strategy.
ESG reporting emerges as primary channel for AI transparency
The research highlights that ESG reporting has become the dominant platform through which companies disclose information about artificial intelligence. As AI systems become integral to business operations, companies are increasingly using ESG frameworks to communicate their strategies, governance practices, and risk management approaches related to AI.
This shift reflects a broader transformation in corporate reporting. Investors and stakeholders are no longer focused solely on financial performance; they are also concerned with how companies manage environmental impact, social responsibility, and governance risks. AI sits at the intersection of all three dimensions, making ESG reporting a natural vehicle for disclosure.
The study finds that companies are using ESG reports to outline their AI strategies, governance structures, and risk management processes. These disclosures often include information on how AI is integrated into business operations, how risks are monitored, and how ethical considerations are addressed. In many cases, ESG reporting serves as the only publicly available source of information on corporate AI practices.
However, the research also shows that ESG frameworks were not originally designed to handle the complexities of AI governance. As a result, disclosures often lack depth, consistency, and standardization. Companies interpret ESG requirements differently, leading to significant variations in how AI-related information is presented.
This lack of standardization creates challenges for stakeholders attempting to assess corporate AI practices. Without consistent reporting frameworks, it becomes difficult to compare companies or evaluate their performance in managing AI risks.
Disclosure gaps expose environmental and social blind spots
The study finds an imbalance in how companies disclose AI-related information across ESG dimensions. While governance-related disclosures are relatively detailed, environmental and social aspects of AI remain underreported.
Companies tend to focus on governance issues such as AI strategy, compliance, and risk management. These areas are closely linked to investor interests and regulatory requirements, making them a priority in ESG reporting. As a result, governance disclosures are often more comprehensive and structured.
In contrast, environmental and social impacts of AI receive far less attention. The study highlights that many companies fail to adequately disclose the energy consumption associated with AI systems, particularly large-scale models that require substantial computational resources. This omission is significant given the growing concern over the carbon footprint of AI technologies.
AI’s environmental impact is complex and multifaceted. While AI can improve energy efficiency and support climate goals, it also consumes significant amounts of electricity during model training and deployment. Without transparent reporting, it is difficult to assess whether AI contributes to or detracts from sustainability objectives.
Social impacts are similarly underreported. The study identifies key issues such as job displacement, workforce transformation, algorithmic bias, and data privacy as critical areas requiring disclosure. However, many companies provide limited information on how they address these challenges.
The lack of transparency in social dimensions raises concerns about accountability. AI systems increasingly influence hiring decisions, customer interactions, and operational processes. Without clear disclosure, stakeholders cannot fully understand the risks associated with these technologies or evaluate how companies are managing them.
This imbalance in disclosure creates a distorted picture of corporate AI practices. By emphasizing governance while neglecting environmental and social factors, companies risk masking the broader implications of AI adoption.
Regional and sectoral divide shapes global AI disclosure landscape
The study also reveals significant regional differences in how companies approach AI disclosure within ESG frameworks. These differences are largely driven by variations in regulatory systems, market expectations, and governance models.
China emerges as a leader in AI-related ESG disclosure, particularly in governance dimensions. This is attributed to stronger regulatory oversight and policy-driven initiatives that require companies to report on AI-related risks and practices. Government policies promoting AI development and regulation have encouraged companies to treat AI governance as a core compliance requirement.
By contrast, the United States follows a more market-driven approach, where ESG disclosure is largely voluntary. Companies in this environment tend to provide less comprehensive AI-related information, focusing primarily on areas that align with investor interests. This results in lower overall disclosure density compared to more regulated markets.
Europe represents a third model, characterized by strict regulatory frameworks such as the Corporate Sustainability Reporting Directive. These regulations mandate detailed ESG disclosures, including aspects related to AI governance and sustainability. As a result, European companies tend to provide more standardized and comprehensive reporting.
The study highlights that these regional differences create a fragmented global landscape. Companies operating in multiple jurisdictions must navigate varying disclosure requirements, leading to inconsistencies in reporting practices.
Sectoral differences further complicate the picture. The research identifies a polarization effect in AI disclosure, where a small number of large technology companies provide highly detailed information, while the majority of firms offer minimal disclosure. This creates a gap between industry leaders and other companies, limiting the overall transparency of the corporate ecosystem.
Toward standardized and accountable AI disclosure
The study concludes that the current ESG reporting system is not fully equipped to handle the complexities of AI governance. To address this gap, it proposes a standardized framework for AI-related disclosure that integrates environmental, social, and governance dimensions more effectively.
A key recommendation is the adoption of the “double materiality” principle, which requires companies to disclose both how AI impacts their business and how their AI activities affect society and the environment. This approach ensures that disclosure captures the full range of AI-related risks and opportunities.
The research also emphasizes the need for mandatory disclosure requirements for listed companies. While voluntary reporting has driven initial progress, it has resulted in uneven transparency and significant information gaps. Mandatory standards could improve consistency and ensure that critical information is disclosed across industries.
To balance regulatory burden with operational flexibility, the study suggests a “comply or explain” approach. Under this model, companies can choose which aspects of AI disclosure to report, but must provide clear explanations for any omissions. This framework allows for adaptability while maintaining accountability.
The study further calls for the development of technical standards and auditing mechanisms to support AI disclosure. Traditional reporting methods are insufficient for capturing complex AI systems. Instead, companies should use measurable indicators such as energy consumption metrics, algorithmic performance benchmarks, and risk assessment protocols.
Improving disclosure also requires better infrastructure for information sharing. The study proposes centralized platforms for ESG reporting, which would enhance accessibility and enable stakeholders to evaluate corporate performance more effectively.
A critical turning point for AI governance
ESG reporting has the potential to serve as a bridge between technological development and societal expectations. By integrating AI into ESG frameworks, companies can provide stakeholders with a clearer understanding of how they are managing risks and contributing to sustainable development.
However, the study makes clear that current practices fall short of this goal. Without standardized frameworks, comprehensive disclosure, and robust oversight mechanisms, ESG reporting cannot fully capture the complexities of AI governance.
The challenge now lies in aligning regulatory frameworks, corporate practices, and stakeholder expectations to create a more transparent and accountable system. As governments, investors, and companies continue to navigate this evolving landscape, the ability to effectively disclose and manage AI-related risks will play a defining role in shaping the future of the global economy.
The research ultimately points to a broader transformation in corporate governance. In an era where technology increasingly shapes economic and social outcomes, transparency is no longer optional. It is a fundamental requirement for building trust, ensuring accountability, and achieving sustainable growth in the age of artificial intelligence.
FIRST PUBLISHED IN: Devdiscourse

