Data deficits and ethical concerns top AI implementation challenges in healthcare

CO-EDP, VisionRI | Updated: 28-03-2025 20:20 IST | Created: 28-03-2025 20:20 IST
Representative Image. Credit: ChatGPT

A newly published study has identified the core challenges obstructing the implementation of artificial intelligence (AI) in healthcare systems, offering the first interpretive structural modeling of their interconnections. The findings, published in Algorithms by researchers from India and South Korea, emphasize that while AI holds immense potential to revolutionize diagnostics, treatment, and hospital operations, its adoption is impeded by a complex hierarchy of technological, ethical, operational, and social barriers.

Led by Q. Angelina and Sushanta Tripathy of KIIT University, with contributions from Inje University, the research used interpretive structural modeling (ISM) and MICMAC analysis to map and categorize 11 key challenges. The team collaborated with 12 medical professionals and AI practitioners to validate findings through expert consultation. Their analysis culminated in a five-tiered framework ranking the relative dependency and driving power of each challenge, from foundational issues like data availability to higher-order concerns like system integration.

The study "A Structural Analysis of AI Implementation Challenges in Healthcare" found that the most deeply embedded drivers, challenges with high influence over others, include insufficient data, limited data acquisition infrastructure, the risk of data misuse, and missing compassion in AI-driven care. These foundational issues shape nearly all other downstream implementation problems.

According to the reachability matrix constructed from expert insights, the single most dependent factor was the challenge of introducing innovative and new-generation tools. While often emphasized in policy discussions, this element sits atop the dependency hierarchy, unable to be effectively addressed without solving deeper systemic issues.

The second tier of challenges includes high costs and technological development. These were found to heavily depend on progress made in data availability and ethical integration. Researchers noted that rapid technological evolution complicates workforce adaptation and infrastructure compatibility, while cost remains a major barrier, especially in lower-resource health systems.

Clinical implementation of AI ranked in the third tier. Despite advances in AI capability, embedding AI tools in daily clinical workflows proved to be a considerable challenge. Healthcare workers remain wary of "black-box" algorithmic decisions and are reluctant to rely on non-transparent systems when lives are at stake.

The fourth tier grouped issues such as the black-box scenario, social issues, and data privacy and security. These concerns - while less foundational than data and cost issues - undermine trust, create regulatory friction, and introduce ethical dilemmas. They were labeled as "autonomous factors" in the MICMAC framework, indicating lower driving and dependency power but significant symbolic weight.

The most operationally dependent factor was the introduction of new-generation AI tools, which appeared at the top of the ISM hierarchy. Researchers noted that this challenge depends on resolving almost every other issue below it, from data infrastructure to social acceptance.

The study's MICMAC analysis distributed the challenges into four categories: autonomous (low driving power, low dependency), dependent (low driving power, high dependency), driver (high driving power, low dependency), and linkage (high in both). Notably, no challenge fell into the linkage quadrant, underscoring the largely linear structure of the barrier hierarchy in this domain. The most influential drivers included insufficient data, data misuse, missing compassion, and data acquisition.
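In practice, MICMAC classification reduces to counting: a factor's driving power is the number of factors it can reach (its row sum in the final reachability matrix), and its dependence power is the number of factors that can reach it (its column sum). The Python sketch below illustrates the quadrant assignment on a small, hypothetical four-factor matrix; the function name, labels, values, and half-of-n threshold are illustrative assumptions, not the study's data.

```python
# Minimal MICMAC classification sketch from a final reachability matrix.
# Hypothetical example, not the study's actual matrix or factor list.

def micmac(reach, labels):
    """Classify factors into MICMAC quadrants.

    reach[i][j] == 1 means factor i can reach (influence) factor j.
    Driving power = row sum, dependence power = column sum.
    """
    n = len(labels)
    threshold = n / 2  # common convention: midpoint of the possible score range
    quadrants = {"driver": [], "dependent": [], "autonomous": [], "linkage": []}
    for i, name in enumerate(labels):
        driving = sum(reach[i])                    # factors that i reaches
        dependence = sum(row[i] for row in reach)  # factors that reach i
        if driving > threshold and dependence > threshold:
            quadrants["linkage"].append(name)
        elif driving > threshold:
            quadrants["driver"].append(name)
        elif dependence > threshold:
            quadrants["dependent"].append(name)
        else:
            quadrants["autonomous"].append(name)
    return quadrants

# Hypothetical four-factor example (diagonal entries are 1 by convention).
labels = ["insufficient data", "high cost", "clinical implementation",
          "new-generation tools"]
reach = [
    [1, 1, 1, 1],   # insufficient data influences everything
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],   # new-generation tools influence nothing else
]

print(micmac(reach, labels))
# {'driver': ['insufficient data', 'high cost'],
#  'dependent': ['clinical implementation', 'new-generation tools'],
#  'autonomous': [], 'linkage': []}
```

In this toy matrix, as in the study's findings, no factor lands in the linkage quadrant: influence flows one way, from data-related drivers up to dependent implementation challenges.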

To produce the structural model, the researchers constructed a Structural Self-Interaction Matrix (SSIM) using expert evaluations. From this, they derived a reachability matrix and conducted level partitioning. This approach enabled a hierarchical layout, clarifying causal relationships between issues and identifying priority intervention points.
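Level partitioning itself follows a standard iterative rule: a factor whose reachability set (everything it can still influence) equals the intersection of its reachability and antecedent sets is assigned to the current level, removed, and the process repeats until no factors remain. The sketch below illustrates this on the same kind of small, hypothetical reachability matrix; the function name, labels, and values are assumptions for illustration, not the authors' code or data.

```python
# Minimal ISM level-partitioning sketch. Assumes the reachability matrix is
# already transitively closed, so each pass assigns at least one factor.
# Hypothetical example, not the study's actual matrix or factor list.

def partition_levels(reach, labels):
    """Assign factors to ISM levels from a final reachability matrix.

    reach[i][j] == 1 means factor i can reach (influence) factor j.
    A factor joins the current level when its reachability set equals
    the intersection of its reachability and antecedent sets.
    """
    remaining = set(range(len(labels)))
    levels = []
    while remaining:
        current = []
        for i in remaining:
            reach_set = {j for j in remaining if reach[i][j]}
            antecedents = {j for j in remaining if reach[j][i]}
            if reach_set == reach_set & antecedents:
                current.append(i)
        levels.append([labels[i] for i in current])
        remaining -= set(current)
    return levels

# Hypothetical four-factor example (diagonal entries are 1 by convention).
labels = ["insufficient data", "high cost", "clinical implementation",
          "new-generation tools"]
reach = [
    [1, 1, 1, 1],   # insufficient data influences everything
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],   # new-generation tools influence nothing else
]

for lvl, factors in enumerate(partition_levels(reach, labels), start=1):
    print(f"Level {lvl}: {factors}")
# Level 1 (top, most dependent): ['new-generation tools']
# Level 4 (bottom, most driving): ['insufficient data']
```

Stacking the levels from Level 1 at the top to the final level at the bottom yields the hierarchical digraph the authors describe, with the most dependent challenge at the apex and the strongest drivers at the base.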

The authors stressed that the successful implementation of AI in healthcare cannot be achieved through isolated technological upgrades. Instead, it requires systemic reform, data governance protocols, clinician training, and ethical frameworks. The study highlights that human factors, such as empathy, trust, and transparency, remain critical in high-stakes environments like medicine.

The black-box challenge, in particular, continues to pose a significant hurdle, as healthcare workers often need explainability in machine-driven diagnoses. Legal and regulatory frameworks also struggle to accommodate opaque AI systems that lack interpretability. In parallel, social resistance to automation and fears about job displacement further hinder acceptance.

At the base of the model, data-related issues dominate. Without comprehensive, interoperable, and ethically collected healthcare data, AI systems cannot achieve accurate predictions or individualized treatment planning. Concerns over data misuse, breaches, and surveillance deepen institutional resistance and public skepticism.

The study is particularly significant for emerging economies such as India, where the potential for AI to address medical workforce shortages is substantial but hindered by systemic limitations. The inclusion of expert voices from both public and private health sectors grounds the findings in practical realities.

FIRST PUBLISHED IN: Devdiscourse