Black-box AI threatens security as IoT expands into high-risk sectors
Researchers have warned that the rapid expansion of Internet of Things (IoT) systems across hospitals, factories, cities, and critical infrastructure will not be sustainable without major advances in explainable artificial intelligence.
The findings are presented in “Explainable AI in IoT: A Survey of Challenges, Advancements, and Pathways to Trustworthy Automation,” published in Electronics. The team argues that as billions of connected devices take on high-stakes decision-making roles, failure to understand how AI systems generate their outputs will threaten safety, security, and public trust.
Black-box AI puts safety-critical IoT systems at risk
According to the study, IoT ecosystems have transitioned from basic sensor networks into dense, autonomous systems responsible for detecting threats, managing medical equipment, controlling industrial machinery, and directing urban operations. As this transition accelerates, the authors warn that black-box AI models, which produce decisions without revealing their reasoning, pose a significant risk.
The review highlights that transparency is no longer optional in IoT environments. AI-enabled devices are now embedded in hospital monitoring networks, smart grids, precision agriculture, logistics chains, manufacturing floors, and autonomous industrial operations. These systems detect heart failure, flag cyber intrusions, coordinate predictive maintenance, optimize water usage, and track environmental stability. When the reasoning behind their outputs cannot be inspected, accountability breaks down.
The review stresses that explainable artificial intelligence (XAI) is essential for understanding which features drive predictions, who the explanations are meant for, and how reliable they are in real-time conditions. Whereas traditional AI pipelines prioritize accuracy above all, the study shows that transparency, interpretability, and reliability are now primary requirements for IoT systems that influence human safety.
The authors note that even when IoT AI models achieve high accuracy in laboratory settings, they often fail in the field due to noisy data, edge-device constraints, and unpredictable operational conditions. XAI techniques help operators understand failures by revealing root causes such as sensor malfunction, corrupted data, or adversarial manipulation.
The survey organizes XAI methods into several categories, including data explainability, model explainability, explanation assessment, and human-centric evaluation. Data-level methods reveal how sensor readings contribute to predictions, model-level methods expose internal logic, and human-centric methods ensure explanations are tailored to engineers, clinicians, or security analysts.
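As a rough illustration of a data-level method (the sensor channels, data, and model below are invented for this sketch and are not taken from the survey), permutation importance can rank how much each reading contributes to a model's predictions:

```python
# Minimal sketch of a data-level explainability check: rank hypothetical
# IoT sensor channels by how much shuffling each one degrades the model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = ["temperature", "vibration", "current_draw", "packet_rate"]  # illustrative
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)   # synthetic fault label

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance: drop in validation score when one feature is shuffled.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>12}: {score:.3f}")
```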
The study notes that explainability methods are becoming non-negotiable in domains where automated decisions carry legal, ethical, and safety implications.
IoT security, healthcare, and industrial automation drive demand for explainable AI
The review analyzes how XAI is being used to interpret machine learning models across cybersecurity, healthcare IoT, industrial automation, and other connected systems. The authors find that several IoT domains are experiencing rapid growth in both complexity and risk, forcing researchers and enterprises to rely on explainability tools for auditability and operational confidence.
In cybersecurity, the study examines how XAI supports intrusion detection systems, malware analysis, zero-day threat identification, botnet detection, and crypto-jacking prevention. Techniques such as SHAP, LIME, attention mechanisms, and feature-ranking methods are used to interpret which network patterns, protocol anomalies, or sensor behaviors indicate an attack. XAI helps analysts reduce alert fatigue by identifying the specific factors that triggered an alert, improving both speed and accuracy of incident response.
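A hedged sketch of how SHAP attributions might be attached to an intrusion detector is shown below; the flow features, synthetic data, and model are hypothetical stand-ins rather than the survey's pipeline:

```python
# Sketch: explain which (hypothetical) network-flow features drove an
# intrusion alert, using SHAP values from a tree-based detector.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
features = ["bytes_per_s", "failed_logins", "dst_port_entropy", "pkt_interval_var"]
X = rng.normal(size=(2000, len(features)))
y = (X[:, 1] > 1.0).astype(int)          # synthetic "attack" label

detector = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(detector)
sv = explainer.shap_values(X[:5])
# Older SHAP releases return one array per class; newer ones return a single
# array with a trailing class dimension. Keep contributions toward class 1.
if isinstance(sv, list):
    sv = sv[1]
elif sv.ndim == 3:
    sv = sv[..., 1]

for i, row in enumerate(sv):
    top = max(zip(features, row), key=lambda t: abs(t[1]))
    print(f"flow {i}: strongest contributor = {top[0]} ({top[1]:+.3f})")
```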
The authors warn that IoT devices are primary targets for attackers because of limited computing power, weak authentication, and high deployment scale. Explainability tools make defensive systems more transparent and provide insight into attack pathways that would otherwise remain hidden.
In medical IoT, or the Internet of Medical Things, XAI is enabling safe deployment of predictive analytics, anomaly detection, patient monitoring, and secure medical data management. The review highlights how XAI supports clinical decision pathways by connecting physiological signals to model outputs, helping clinicians understand why a device flagged an abnormal heart rhythm or an unsafe ventilator reading. Blockchain-supported XAI models add layers of traceability and security, making it possible to audit medical decisions in regulated environments.
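To illustrate the kind of instance-level explanation the review describes, the sketch below applies LIME to a hypothetical vitals classifier; the signal names, labels, and data are invented for the example and are not drawn from the paper:

```python
# Sketch: explain a single "abnormal rhythm" prediction from a hypothetical
# vitals classifier using LIME's tabular explainer.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
signals = ["heart_rate", "hrv", "spo2", "resp_rate"]     # illustrative channels
X = rng.normal(size=(1500, len(signals)))
y = ((X[:, 0] > 1.2) | (X[:, 1] < -1.2)).astype(int)     # synthetic "abnormal" label

clf = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=signals,
                                 class_names=["normal", "abnormal"],
                                 mode="classification")
# Explain one monitored record: which signals pushed the model toward
# the "abnormal" class, and by how much.
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
for rule, weight in exp.as_list():
    print(f"{rule}: {weight:+.3f}")
```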
Industrial IoT systems use XAI for predictive maintenance, sensor anomaly detection, energy optimization, fault diagnosis, and automated control of machinery. As manufacturing operations adopt neural networks for real-time control, explainability helps engineers trace decisions back to specific sensor failures or operational instabilities. These insights reduce downtime, improve safety, and build trust among operators who depend on automation to manage high-risk equipment.
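A simplified sketch of that idea, with invented sensor names and data, flags an anomalous machine state and then points the operator at the sensor that deviates most from its training baseline:

```python
# Sketch: flag an anomalous machine state, then attribute the flag to the
# sensor whose reading deviates most from healthy-operation statistics.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
sensors = ["bearing_temp", "spindle_vibration", "motor_current", "coolant_flow"]
X_train = rng.normal(size=(5000, len(sensors)))          # healthy operation (synthetic)

detector = IsolationForest(random_state=0).fit(X_train)
mean, std = X_train.mean(axis=0), X_train.std(axis=0)

reading = np.array([0.1, 4.8, 0.3, -0.2])                # one incoming sample
if detector.predict(reading.reshape(1, -1))[0] == -1:    # -1 means anomaly
    z = np.abs((reading - mean) / std)                   # per-sensor deviation
    culprit = sensors[int(np.argmax(z))]
    print(f"Anomaly flagged; largest deviation: {culprit} (|z| = {z.max():.1f})")
```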
Apart from healthcare and industry, XAI is being integrated into smart homes, irrigation systems, environmental monitoring frameworks, emergency response platforms, smart grid operations, and early-stage 6G IoT security solutions. In each domain, XAI exposes decision patterns that were previously hidden inside opaque models, strengthening human oversight.
Despite these advances, the authors outline persistent weaknesses. Many XAI techniques lack real-time capability, suffer from inconsistent explanations across similar inputs, or require computational resources that edge devices cannot support. Model explanations may be vulnerable to data poisoning or adversarial attacks, and there is often a poor match between explanation style and user needs.
The review highlights that many organizations treat XAI as a regulatory requirement or a patch applied late in the pipeline, rather than integrating it into system design from the start. This practice limits the usefulness of explainability and weakens trust in AI-enabled IoT.
Study calls for federated, edge-aware, and governance-ready XAI frameworks
The authors identify several critical challenges that must be addressed before IoT automation can expand safely.
One of the most urgent issues is the computational cost of modern XAI techniques, which are often incompatible with edge devices that operate with low memory, limited energy, and constrained processing. The authors argue that future XAI systems must be optimized for federated and edge-aware environments where data cannot be centralized due to privacy or bandwidth constraints.
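One widely used way to lighten that load, sketched here under purely illustrative assumptions, is to distill a heavier model into a small global surrogate tree whose rules can be stored, evaluated, and inspected on a constrained device; the survey discusses edge-aware XAI more broadly rather than prescribing this recipe:

```python
# Sketch: approximate a heavy model with a depth-limited surrogate tree so an
# edge device can serve and explain predictions with a handful of rules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
features = ["soil_moisture", "ambient_temp", "flow_rate"]   # illustrative
X = rng.normal(size=(3000, len(features)))
y = (X[:, 0] - 0.7 * X[:, 2] > 0).astype(int)

heavy = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Train the surrogate on the heavy model's own predictions, not the raw labels,
# so its rules describe what the deployed model actually does.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, heavy.predict(X))

print("surrogate fidelity:", (surrogate.predict(X) == heavy.predict(X)).mean())
print(export_text(surrogate, feature_names=features))
```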
Security remains a primary concern. Attackers may exploit explanation systems to reverse-engineer models, identify vulnerabilities, or craft adversarial perturbations that mislead detectors. The study calls for adversarially robust XAI pipelines that resist manipulation while still offering transparency.
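A basic robustness check in that spirit, sketched below on synthetic data, perturbs an input slightly and measures how much the attribution ranking shifts; rankings that reorder under imperceptible noise are easier to manipulate:

```python
# Sketch: test explanation stability by ranking feature attributions before
# and after a small input perturbation and comparing the two rankings.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

def class1_attributions(sample):
    # Per-instance SHAP contributions toward class 1, across SHAP versions.
    sv = explainer.shap_values(sample.reshape(1, -1))
    sv = sv[1] if isinstance(sv, list) else (sv[..., 1] if sv.ndim == 3 else sv)
    return sv.ravel()

x = X[0]
x_noisy = x + rng.normal(scale=0.01, size=x.shape)        # tiny perturbation
rho, _ = spearmanr(class1_attributions(x), class1_attributions(x_noisy))
print(f"attribution rank correlation under noise: {rho:.3f}")   # near 1.0 = stable
```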
The authors also recommend role-specific explanation interfaces designed for engineers, clinicians, operators, and regulators. Generic explanations are insufficient in safety-critical contexts; users need actionable, domain-aligned insights that support operational decisions rather than high-level summaries.
Standardized benchmarks for faithfulness, latency, and stability are needed to evaluate XAI systems fairly across different domains. The study emphasizes that many existing XAI evaluations lack consistency, making it difficult to compare methods or assess real-world reliability.
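As one way such metrics could be operationalized (an illustrative sketch, not the paper's benchmark protocol), the code below computes a deletion-style faithfulness score, the drop in model confidence when the top-attributed features are masked, together with per-explanation latency:

```python
# Sketch: two toy benchmark signals for an explanation method, faithfulness via
# a deletion test and wall-clock latency per explanation.
import time
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 8))
y = (X[:, 2] - X[:, 5] > 0).astype(int)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
baseline = X.mean(axis=0)                     # value used to "mask" a feature

x = X[0]
start = time.perf_counter()
sv = explainer.shap_values(x.reshape(1, -1))
latency_ms = (time.perf_counter() - start) * 1000
sv = sv[1] if isinstance(sv, list) else (sv[..., 1] if sv.ndim == 3 else sv)
sv = sv.ravel()

# Deletion test: mask the top-2 attributed features and watch class-1 confidence.
top2 = np.argsort(-np.abs(sv))[:2]
x_masked = x.copy()
x_masked[top2] = baseline[top2]
p_before = model.predict_proba(x.reshape(1, -1))[0, 1]
p_after = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
print(f"latency: {latency_ms:.1f} ms, confidence drop: {p_before - p_after:+.3f}")
```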
Ethical and governance issues are another major focal point. The authors highlight the need for bias auditing, traceability, model documentation, and participatory design frameworks to ensure that IoT-enabled AI systems remain accountable across their lifecycle. Without these measures, organizations risk deploying automation that is opaque, discriminatory, or misaligned with human values.
First published in: Devdiscourse

