A smarter, safer future: Trustworthy AI as the foundation of sustainable development

CO-EDP, VisionRI | Updated: 01-02-2025 16:59 IST | Created: 01-02-2025 16:59 IST

Artificial Intelligence (AI) has become a transformative force across industries, reshaping sectors such as healthcare, finance, education, and national security. However, as AI systems become increasingly integrated into everyday life, concerns about algorithmic biases, deepfakes, system failures, and unintended consequences have surged. These AI-related incidents pose significant risks to individuals, organizations, and society, eroding public trust in AI technologies. Addressing these risks requires a systematic and standardized approach to AI incident reporting that ensures transparency, accountability, and continuous improvement in AI deployment.

A recent study titled "Advancing Trustworthy AI for Sustainable Development: Recommendations for Standardizing AI Incident Reporting" by Avinash Agarwal (Telecommunication Engineering Centre, Ministry of Communications, New Delhi, India) and Manisha Nene (Defence Institute of Advanced Technology, Ministry of Defence, Pune, India), published in 2024 ITU Kaleidoscope: Innovation and Digital Transformation for a Sustainable World (ITU K), highlights the urgent need for standardizing AI incident reporting.

The research identifies nine key gaps in existing AI incident reporting mechanisms and provides nine actionable recommendations to bridge these gaps. By ensuring transparent and systematic AI incident documentation, the study underscores how structured reporting can contribute to sustainable digital transformation and help achieve the United Nations Sustainable Development Goals (SDGs).

The urgent need for standardized AI incident reporting

With the increasing prevalence of AI-driven decisions in critical areas such as hiring, finance, law enforcement, and healthcare, the risks associated with AI failures and biases have also escalated. Algorithmic biases in hiring and loan approvals can reinforce societal inequalities, while AI errors in healthcare diagnostics can lead to life-altering consequences. Recognizing these risks, global institutions such as the Organisation for Economic Co-operation and Development (OECD) have called for the development of responsible AI frameworks that emphasize fairness, transparency, and accountability.

Despite these efforts, the lack of a centralized and standardized approach to AI incident reporting remains a major hurdle. Unlike the aviation and cybersecurity sectors, which have well-established incident reporting systems that guide safety improvements, AI currently lacks a comprehensive and universally accepted system for tracking and mitigating failures. The research identifies critical gaps in current AI incident reporting practices, explores existing databases, and provides recommendations to enhance standardization efforts, ultimately promoting trustworthy AI and sustainable development.

Assessing existing AI incident reporting systems

The study analyzed publicly available AI incident repositories such as the AI Incident Database (AIID), the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) Repository, the AI Vulnerability Database (AVID), and the AI Litigation Database (AILD). These databases compile records of AI-related failures, harms, and legal disputes, but they suffer from inconsistent reporting standards, a lack of legal mandates for incident disclosure, and limited geographical coverage.

To evaluate the effectiveness of these repositories, the researchers examined the policies, reporting procedures, and review mechanisms of AIID and AIAAIC. They submitted AI incident reports to assess the reporting process and the ease of information retrieval. Additionally, they analyzed data interoperability, sectoral representation, and the diversity of sources contributing to the databases. Based on these assessments, they identified key weaknesses in current AI incident reporting and formulated recommendations for improvement.

Gaps in AI incident reporting and standardization

The research identifies nine major gaps in the existing AI incident reporting framework. One of the key gaps is the lack of a standardized definition for AI incidents, leading to inconsistencies in how incidents are reported across different repositories. The study also highlights bias and misclassification in incident reporting, as the voluntary nature of reporting means that AI failures may be interpreted differently based on the expertise and perspective of the reporters.

Another significant issue is the incompatibility of data fields across different AI incident databases, making cross-repository analysis difficult. The absence of legal mandates or incentives for reporting incidents also contributes to severe underreporting, leaving many AI-related risks undocumented. Additionally, the study finds that most incidents are reported by a narrow group of contributors, often based in the U.S. and Europe, resulting in a lack of representation from developing regions. Furthermore, existing databases lack standardized data-sharing protocols, making it difficult for governments, industry leaders, and researchers to collaborate effectively.
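
To make the interoperability gap concrete, the sketch below shows the same incident represented with incompatible field names in two repositories, and a mapping layer that normalizes them. The record formats and field names are invented for illustration; they are not the actual AIID or AIAAIC schemas.

```python
# Hypothetical illustration: the same incident as it might appear in two
# repositories with incompatible field names (invented formats, not the
# actual AIID or AIAAIC schemas).
record_repo_a = {"incident_title": "Loan model denies qualified applicants",
                 "sector": "finance", "harm_level": "high"}
record_repo_b = {"name": "Loan model denies qualified applicants",
                 "industry": "Financial services", "severity": 3}

# Per-repository mappings from local field names to a shared vocabulary.
FIELD_MAPS = {
    "repo_a": {"incident_title": "title", "sector": "domain", "harm_level": "severity"},
    "repo_b": {"name": "title", "industry": "domain", "severity": "severity"},
}

def normalize(record: dict, source: str) -> dict:
    """Translate a repository-specific record into the shared field names."""
    mapping = FIELD_MAPS[source]
    return {mapping[key]: value for key, value in record.items() if key in mapping}

combined = [normalize(record_repo_a, "repo_a"), normalize(record_repo_b, "repo_b")]
print(combined)
```

Notice that even after the field names are aligned, the severity values ("high" versus 3) still disagree, which is why field mapping alone is insufficient and a shared taxonomy is also needed.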

Sectoral underrepresentation is another major challenge, as most reported incidents originate from consumer-focused industries like social media, search engines, and e-commerce, while critical infrastructure sectors such as telecommunications, energy, and healthcare receive little attention. The study also finds that AI incident databases disproportionately capture incidents from a few countries, particularly the United States, the United Kingdom, and China, while failing to document AI-related issues in developing countries. Finally, many stakeholders remain unaware of AI incident reporting systems, leading to low engagement and participation.

Recommendations for improving AI incident reporting

To address these gaps, the study proposes nine key recommendations aimed at enhancing the effectiveness, transparency, and accountability of AI incident reporting. One of the primary recommendations is the standardization of AI incident definitions and taxonomies, ensuring a universal framework for classifying incidents based on their domain, severity, and societal impact. This would enable consistent classification and improve data comparability across different repositories.
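
As a rough illustration of what such a standardized taxonomy could look like in practice, the sketch below defines a minimal incident record with controlled vocabularies for domain and severity. The fields and category values are illustrative assumptions; the study recommends a common taxonomy but does not prescribe a concrete schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

# Hypothetical controlled vocabularies; a real taxonomy would be defined by
# a standards body, not by this sketch.
class Domain(Enum):
    HEALTHCARE = "healthcare"
    FINANCE = "finance"
    TELECOM = "telecommunications"
    OTHER = "other"

class Severity(Enum):
    LOW = 1       # inconvenience, no lasting harm
    MODERATE = 2  # reversible harm to individuals
    HIGH = 3      # irreversible or large-scale societal harm

@dataclass
class AIIncident:
    """Minimal standardized incident record (illustrative fields only)."""
    title: str
    occurred_on: date
    domain: Domain
    severity: Severity
    description: str
    affected_groups: list[str] = field(default_factory=list)

incident = AIIncident(
    title="Diagnostic model underperforms for an underrepresented group",
    occurred_on=date(2024, 6, 1),
    domain=Domain.HEALTHCARE,
    severity=Severity.HIGH,
    description="Model sensitivity dropped sharply for one demographic subgroup.",
)
```

Because every record drawn from such a schema uses the same category values, records from different repositories could be merged and compared directly.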

Another important recommendation is the implementation of regular AI incident database quality audits. These audits would help verify data accuracy, ensure consistency in classification, and reduce bias in reporting. The study also calls for the standardization of AI incident database structures to facilitate interoperability and seamless data exchange across different platforms.
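
One way such an audit might be automated, assuming incident records are stored as simple key-value documents, is a validation pass that flags missing required fields and out-of-vocabulary classifications. The required fields and domain vocabulary below are illustrative assumptions, not the study's specification.

```python
# Hypothetical audit pass over incident records stored as plain dictionaries.
REQUIRED_FIELDS = {"title", "occurred_on", "domain", "severity", "description"}
VALID_DOMAINS = {"healthcare", "finance", "telecommunications", "other"}

def audit_record(record: dict) -> list[str]:
    """Return a list of quality issues found in a single incident record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    if record.get("domain") not in VALID_DOMAINS:
        issues.append(f"unrecognized domain: {record.get('domain')!r}")
    return issues

def audit_database(records: list[dict]) -> dict[int, list[str]]:
    """Map record index to its problems; the share of clean records is a
    simple quality metric an auditor could track over time."""
    return {i: problems for i, rec in enumerate(records)
            if (problems := audit_record(rec))}

# Example: one record misclassified into an unknown domain.
print(audit_database([{"title": "Chatbot leaks user data", "domain": "retail"}]))
```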

To encourage more comprehensive reporting, the study suggests the introduction of regulatory and policy frameworks that mandate AI companies to disclose significant incidents. This would ensure that AI-related failures are documented in a systematic manner, similar to cybersecurity breach disclosure laws. Additionally, the researchers propose the development of automated AI incident reporting mechanisms, allowing AI systems to flag potential failures in real time, thereby supplementing human-reported cases.
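
A minimal sketch of what automated flagging could look like appears below, assuming a deployed model whose prediction confidence and rolling error rate are monitored at runtime. The thresholds, field names, and the report_incident sink are all hypothetical; a production system would submit to an actual incident database rather than a log.

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_incident_monitor")

CONFIDENCE_FLOOR = 0.5    # hypothetical: below this, outputs are suspect
ERROR_RATE_CEILING = 0.1  # hypothetical: rolling error rate that triggers a flag

def report_incident(payload: dict) -> None:
    """Stand-in for submission to an incident database; here it only logs."""
    logger.warning("AI incident flagged: %s", payload)

def monitor_prediction(model_id: str, confidence: float,
                       rolling_error_rate: float) -> None:
    """Flag a potential incident when runtime signals cross thresholds."""
    if confidence < CONFIDENCE_FLOOR or rolling_error_rate > ERROR_RATE_CEILING:
        report_incident({
            "model_id": model_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "confidence": confidence,
            "rolling_error_rate": rolling_error_rate,
        })

# Example: a low-confidence prediction with a rising error rate gets flagged.
monitor_prediction("loan-scoring-v3", confidence=0.32, rolling_error_rate=0.14)
```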

Transparent data-sharing mechanisms should also be established to enable controlled access to AI incident data, facilitating collaboration among researchers, regulators, and industry stakeholders. The study further recommends the creation of sector-specific AI incident databases for industries such as healthcare, telecommunications, finance, and defense, ensuring targeted risk assessment and mitigation.
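
As a rough sketch of what controlled access could mean in practice, the example below applies a role-based, field-level policy before records are shared. The roles and the fields each may see are illustrative assumptions, not a design proposed by the study.

```python
# Hypothetical role-based, field-level sharing policy.
ACCESS_POLICY = {
    "public":     {"title", "domain", "severity"},
    "researcher": {"title", "domain", "severity", "occurred_on", "description"},
    "regulator":  {"title", "domain", "severity", "occurred_on", "description",
                   "affected_groups"},
}

def share_record(record: dict, role: str) -> dict:
    """Return only the fields the requesting role is permitted to see."""
    allowed = ACCESS_POLICY.get(role, set())
    return {key: value for key, value in record.items() if key in allowed}

incident = {"title": "Chatbot leaks user data", "domain": "other",
            "severity": "high", "description": "Internal details...",
            "affected_groups": ["users of service X"]}
print(share_record(incident, "public"))   # title, domain, severity only
```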

To promote inclusive global AI incident reporting, international cooperation should be encouraged, with efforts focused on increasing representation from underreported regions. Finally, the study advocates for raising awareness and engagement in AI incident reporting through educational campaigns, workshops, and incentives aimed at increasing participation from AI developers, policymakers, and the public.

FIRST PUBLISHED IN: Devdiscourse