AI supply-chain blind spots putting critical infrastructure at risk

CO-EDP, VisionRI | Updated: 29-11-2025 10:33 IST | Created: 29-11-2025 10:33 IST

Artificial intelligence (AI) is moving deeper into hospitals, food systems, transportation networks, utilities, and other high-stakes environments, but a new analysis warns that most organisations still do not understand the full supply chains behind the systems they deploy. The study finds that risks often attributed to bias, hallucination, or poor training data in fact originate in deeper, less visible dependencies that remain undocumented and ungoverned. Without clear visibility into these sources, the authors argue, even well-designed AI tools can fail in ways that institutions are unprepared to manage.

The research, titled “Identifying the Supply Chain of AI for Trustworthiness and Risk Management in Critical Applications”, examines how artificial intelligence relies on complex networks of data providers, model developers, software platforms, integrators, hosting services, and hardware infrastructure.

The paper shows that these interconnected actors form an AI supply chain that mirrors the complexity of global manufacturing or food systems, yet remains largely unexamined in standard AI risk frameworks. Without a clear mapping of this chain, organisations cannot accurately diagnose failures, assign responsibility, or apply risk-mitigation tools.

AI systems depend on invisible chains that traditional risk frameworks fail to capture

The study argues that the AI ecosystem is typically treated as a simple pipeline: data goes in, a model processes it, and an output emerges. In reality, each step involves a long line of actors whose contributions shape the model’s behaviour. A model might be trained on data aggregated by one organisation, hosted by another, fine-tuned by a third, integrated into a broader system by a fourth, and deployed by a fifth. Each layer introduces its own technical and organisational risks.
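
One way to picture this chain of custody is as an ordered record of actors and their handoffs. The sketch below is purely illustrative; the organisations and roles are hypothetical stand-ins for the kinds of actors described here, not examples taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    """One link in an AI system's chain of custody."""
    actor: str   # organisation responsible for this stage (hypothetical names)
    role: str    # what the actor contributes to the deployed system

# Hypothetical chain for a single deployed model: each entry is a distinct
# organisation whose choices shape the system's behaviour.
CHAIN = [
    Handoff("DataCo", "aggregates and labels the training data"),
    Handoff("CloudHost Inc.", "stores the data and serves the base model"),
    Handoff("TunerLab", "fine-tunes the base model for the target domain"),
    Handoff("Integrator Ltd.", "wraps the model in an application workflow"),
    Handoff("Hospital IT", "deploys and operates the system in production"),
]

if __name__ == "__main__":
    for i, link in enumerate(CHAIN, start=1):
        print(f"{i}. {link.actor}: {link.role}")
```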

Existing governance tools, including model cards, dataset documentation, and AI Bills of Materials, address pieces of this puzzle, but none of them provide a full map of actors across the entire chain. The study reviews frameworks such as the NIST AI Risk Management Framework and national AI safety guidelines and finds that they describe high-level risk categories but do not specify how to track the full genealogy of an AI system.

This lack of visibility creates blind spots. Failures that appear to stem from a model’s reasoning may instead originate from corrupt training data, outdated dependencies, silent updates from cloud providers, or integration mistakes made by downstream developers. In critical applications such as healthcare triage, industrial automation, energy management, or food safety, these failures can have cascading real-world effects.

The review stresses that tracing these risks requires a supply-chain lens rather than a model-centric one. The authors show that the AI supply chain resembles those of pharmaceuticals or food production, where physical inputs and organisational actors must be traceable to ensure safety. AI, they argue, now warrants similar treatment.

A four-part taxonomy offers a path to clearer risk management

To address these gaps, the authors propose a practical taxonomy that breaks AI systems into four major components: data, models, programs, and infrastructure. Each component includes multiple sub-roles, enabling organisations to identify where data originates, who modifies it, how models are built, who hosts them, and how they are integrated into applications.

This structure is designed to be lightweight so that both technical and non-technical stakeholders can use it. For data, the taxonomy distinguishes between creators, aggregators, and hosts. For models, it differentiates between primary developers, fine-tuners, vendors, and distributors. Program layers include application developers, workflow integrators, and service providers. Infrastructure includes hardware providers, cloud hosts, and security intermediaries.
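
A minimal way to encode this taxonomy in software would be four component categories, each carrying the sub-roles listed above. The sketch below is an illustrative Python rendering of that structure, not the paper's own schema, and the exact labels may differ from those used in the study.

```python
from enum import Enum

class Component(Enum):
    """The four major components of an AI system in the proposed taxonomy."""
    DATA = "data"
    MODEL = "model"
    PROGRAM = "program"
    INFRASTRUCTURE = "infrastructure"

# Sub-roles per component, following the breakdown described above.
SUB_ROLES = {
    Component.DATA: ["creator", "aggregator", "host"],
    Component.MODEL: ["primary developer", "fine-tuner", "vendor", "distributor"],
    Component.PROGRAM: ["application developer", "workflow integrator",
                        "service provider"],
    Component.INFRASTRUCTURE: ["hardware provider", "cloud host",
                               "security intermediary"],
}

if __name__ == "__main__":
    for component, roles in SUB_ROLES.items():
        print(f"{component.value}: {', '.join(roles)}")
```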

By using this taxonomy, organisations can document their AI systems the way manufacturers document machinery components. The study provides examples such as a meeting summariser or a hospital triage chatbot. In each case, the authors show how tracing the full chain reveals dependencies that would otherwise remain hidden, including third-party datasets, off-the-shelf models, background libraries, or cloud-hosted tools that influence outcomes without being visible to end users.
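
Extending that sketch, an organisation could record a deployed system as a flat inventory of (component, sub-role, supplier) entries. The example below is a hypothetical record for a triage chatbot of the kind mentioned above; every supplier listed is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SupplyChainEntry:
    """One documented dependency of a deployed AI system."""
    component: str   # data, model, program, or infrastructure
    sub_role: str    # the actor's role within that component
    supplier: str    # who provides it (hypothetical names throughout)

# Hypothetical inventory for a hospital triage chatbot, including the
# dependencies end users never see: third-party data, an off-the-shelf
# base model, background libraries, and cloud-hosted infrastructure.
TRIAGE_CHATBOT = [
    SupplyChainEntry("data", "aggregator", "Third-party symptom dataset vendor"),
    SupplyChainEntry("model", "primary developer", "Off-the-shelf LLM provider"),
    SupplyChainEntry("model", "fine-tuner", "Clinical AI integrator"),
    SupplyChainEntry("program", "application developer", "In-house hospital IT team"),
    SupplyChainEntry("program", "workflow integrator", "Open-source orchestration library"),
    SupplyChainEntry("infrastructure", "cloud host", "Public cloud provider"),
]

if __name__ == "__main__":
    for entry in TRIAGE_CHATBOT:
        print(f"[{entry.component}/{entry.sub_role}] {entry.supplier}")
```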

The taxonomy also helps identify where responsibility sits. If a model produces a harmful output, the source might be flawed training data, a silent model update from the vendor, a malfunctioning plugin written by another developer, or an infrastructure-level outage. The taxonomy helps organisations pinpoint which actor needs to be consulted to investigate or fix the issue.
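
In practice, that triage of responsibility can start as a simple lookup from a suspected failure source to the sub-roles worth contacting first. The mapping below is a hypothetical illustration of the idea, not a procedure drawn from the paper.

```python
# Hypothetical routing from a suspected failure source to the supply-chain
# sub-roles an organisation would consult first. The failure categories echo
# the examples in the text; the routing itself is illustrative.
FAILURE_ROUTES = {
    "flawed training data": ["data creator", "data aggregator"],
    "silent model update": ["model vendor", "model distributor"],
    "malfunctioning plugin": ["application developer", "workflow integrator"],
    "infrastructure outage": ["cloud host", "hardware provider"],
}

def actors_to_consult(failure_source: str) -> list[str]:
    """Return the sub-roles to contact for a given suspected failure source."""
    return FAILURE_ROUTES.get(failure_source, ["system owner"])

if __name__ == "__main__":
    print(actors_to_consult("silent model update"))
    # ['model vendor', 'model distributor']
```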

Critical applications face greater exposure without supply-chain visibility

AI supply-chain mapping is particularly urgent for sectors where errors have immediate real-world consequences. Healthcare, food safety, aviation, utilities, insurance, finance, and public services all rely on AI tools that influence decisions about safety, eligibility, access, and risk.

In hospitals, AI systems may assist with triage, diagnostics, or patient monitoring. When a model misclassifies a symptom, the root cause may lie not in the model itself but in a dataset updated without documentation or a plugin that interprets data incorrectly. Without a clear chain of responsibility, healthcare staff may not know whom to contact or how to correct the problem.

In food and agriculture, AI systems may detect contamination, optimise supply chains, or support early-warning systems. Models trained on flawed or incomplete data could misjudge safety levels, but unless agricultural organisations understand where that data originated, they cannot trace the error.

Energy and utilities rely on predictive models for resource allocation, grid stabilisation, and risk detection. These systems may depend on real-time sensor feeds, open-source libraries, or cloud-hosted algorithms. If any component updates unexpectedly, system reliability can deteriorate without warning.

In legal, insurance, and compliance contexts, AI-generated reasoning may be influenced by unseen components far upstream. This raises concerns not only about accuracy but also about transparency, accountability, and fairness.

The authors argue that as AI becomes more central to safety-critical functions, the absence of supply-chain visibility becomes a systemic vulnerability.

Supply-chain mapping enables better governance, accountability, and risk control

The study highlights multiple benefits of adopting a supply chain–based approach to AI oversight.

  • It strengthens accountability. Organisations can track which components they control directly and which ones rely on external vendors. This helps determine who is responsible when failures occur and which responsibilities must be documented in service agreements.
  • It enables more accurate risk assessments. Rather than evaluating AI performance solely at the output stage, organisations can identify where errors might emerge, from data quality through to deployment conditions. This helps reduce the likelihood of cascading failures.
  • It facilitates targeted use of governance tools. AI Bills of Materials, model cards, dataset sheets, and security standards become more meaningful when organisations know how components relate to each other. The taxonomy provides a structure for assembling these pieces.
  • It helps regulators identify where oversight is needed. Many countries are developing AI safety rules that focus on high-risk systems. A supply-chain lens reveals where transparency obligations should apply and which actors must be included in compliance processes.
  • It guides procurement decisions. If organisations understand an AI system’s supply chain, they can make informed choices about vendor reliability, infrastructure needs, and long-term maintenance obligations.

A call for real-world testing and validation

The authors acknowledge that their taxonomy is a first step. The next challenge is testing how well it works across different industries and system sizes. They argue that the AI ecosystem is moving too quickly for governance approaches that focus only on models. Without attention to operational dependencies, risk management will remain incomplete.

They also emphasise that AI supply chains change over time. Vendors update models, cloud providers modify services, data hosts change storage formats, and developers patch components. Mapping the supply chain must therefore be an ongoing process rather than a one-time exercise.
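
One lightweight way to keep such a map current is to record the version (or content hash) of each component at mapping time and periodically compare it against what is actually running. The sketch below illustrates that idea; the component names and version strings are hypothetical.

```python
# Hypothetical snapshot of component versions recorded at mapping time,
# compared against the versions observed in production. Any mismatch flags
# a component whose documentation needs to be revisited.
RECORDED = {
    "base-model": "v2.1",
    "symptom-dataset": "2025-06 release",
    "orchestration-lib": "1.4.3",
    "cloud-runtime": "2025.10",
}

OBSERVED = {
    "base-model": "v2.2",          # silent vendor update
    "symptom-dataset": "2025-06 release",
    "orchestration-lib": "1.4.3",
    "cloud-runtime": "2025.11",    # provider modified the service
}

def stale_components(recorded: dict, observed: dict) -> list[str]:
    """List components whose observed version differs from the recorded one."""
    return [name for name, version in observed.items()
            if recorded.get(name) != version]

if __name__ == "__main__":
    print(stale_components(RECORDED, OBSERVED))
    # ['base-model', 'cloud-runtime']
```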

The study encourages future research to apply the taxonomy to real deployments, examine how organisations maintain documentation, and study how supply-chain visibility influences response times during system failures.

FIRST PUBLISHED IN: Devdiscourse