AI-powered healthcare is becoming a prime cyber target
The rapid deployment of artificial intelligence (AI) in healthcare has outpaced efforts to secure it. While AI systems promise faster diagnoses and more efficient care, they also introduce new vulnerabilities that attackers can exploit, with potential consequences for patients and public trust.
These vulnerabilities are examined in a study titled "Medicine in the Age of Artificial Intelligence: Cybersecurity, Hybrid Threats and Resilience," published in Applied Sciences. The research warns that without resilience-by-design, AI-driven medicine could become a high-risk target in an increasingly hostile cyber environment.
The authors warn that healthcare systems are adopting AI faster than they are strengthening the institutional, technical, and regulatory safeguards needed to defend it. Consequently, the same technologies designed to improve care may also expose hospitals and patients to unprecedented forms of harm.
AI expands healthcare’s cyber attack surface
AI dramatically expands the cyber attack surface of healthcare systems. Traditional medical technologies were often isolated or analog, limiting the scale of potential damage from external interference. AI-driven medicine, by contrast, depends on continuous data flows, networked devices, cloud infrastructure, and automated decision pipelines. Each of these elements introduces new points of vulnerability.
The authors explain that AI systems rely heavily on large volumes of sensitive data, including medical images, genomic information, and electronic health records. If this data is compromised, altered, or poisoned, the consequences extend beyond privacy violations. Manipulated inputs can lead to incorrect diagnoses, flawed treatment recommendations, or delayed interventions. In AI-assisted medicine, data integrity becomes as critical as data confidentiality.
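As an illustration of treating integrity on par with confidentiality, the sketch below is a minimal, hypothetical example (the directory layout and file names are assumptions, not from the study): it builds a SHA-256 manifest for a dataset and re-verifies it before training or inference, flagging any record that has changed since it was catalogued.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir: str, manifest_path: str) -> None:
    """Record a digest for every file so later tampering is detectable."""
    manifest = {str(p): sha256_of(p) for p in Path(data_dir).rglob("*") if p.is_file()}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: str) -> list[str]:
    """Return the files whose contents no longer match the manifest."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [p for p, digest in manifest.items()
            if not Path(p).exists() or sha256_of(Path(p)) != digest]

if __name__ == "__main__":
    # Hypothetical layout: raw scans under ./imaging_data, manifest alongside.
    build_manifest("imaging_data", "imaging_manifest.json")
    tampered = verify_manifest("imaging_manifest.json")
    if tampered:
        print("Integrity check failed for:", tampered)  # block training/inference
    else:
        print("All records match the manifest.")
```

A check like this catches silent modification of stored data, though it says nothing about data that was poisoned before it was first catalogued, which is why the study pairs integrity controls with broader lifecycle safeguards.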
Medical imaging is highlighted as a particularly exposed domain. AI models trained to detect tumors, fractures, or organ abnormalities depend on standardized digital formats and automated workflows. Weaknesses in these systems can allow malicious actors to subtly alter images or metadata without immediate detection. Unlike overt system failures, these forms of interference may go unnoticed while quietly influencing clinical decisions.
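To make the metadata risk concrete, here is a minimal sketch (an illustration, not the paper's method) using the pydicom library to compare a few safety-relevant DICOM header fields against values recorded when the study was first ingested. The field list, file name, and `expected_at_ingestion` record are all illustrative assumptions.

```python
import pydicom  # third-party: pip install pydicom

# Header fields whose silent alteration could misdirect a diagnosis or a model.
CHECKED_FIELDS = ["PatientID", "StudyInstanceUID", "Modality", "BodyPartExamined"]

def metadata_drift(dicom_path: str, expected: dict) -> dict:
    """Return {field: (expected, found)} for any checked field that changed."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    drift = {}
    for field in CHECKED_FIELDS:
        found = str(getattr(ds, field, ""))
        if found != expected.get(field, ""):
            drift[field] = (expected.get(field, ""), found)
    return drift

# Illustrative usage with a hypothetical file and ingestion record.
expected_at_ingestion = {
    "PatientID": "P-001234",
    "StudyInstanceUID": "1.2.840.10008.1.1.99",
    "Modality": "CT",
    "BodyPartExamined": "CHEST",
}
changes = metadata_drift("study_0001.dcm", expected_at_ingestion)
if changes:
    print("Metadata changed since ingestion:", changes)  # route to human review
```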
The study also draws attention to ransomware and service disruption attacks targeting hospitals. As AI systems become embedded in scheduling, diagnostics, and resource allocation, disabling them can paralyze entire facilities. The authors note that healthcare institutions are especially attractive targets because downtime directly affects patient care, increasing pressure to pay ransoms or comply with attackers’ demands.
Notably, AI-related risks are not limited to external hackers. Insider threats, supply chain vulnerabilities, and poorly secured third-party software can all compromise AI-enabled healthcare systems. The complexity of modern medical AI ecosystems makes it difficult for institutions to maintain full visibility and control over their security posture.
Hybrid threats blur lines between cyber and clinical harm
The study also examines the rise of hybrid threats that combine technical attacks with strategic manipulation. In this context, healthcare becomes a potential target not only for financial gain but for political, economic, or societal disruption.
Hybrid threats may involve coordinated cyberattacks, disinformation campaigns, and exploitation of regulatory or organizational weaknesses. The authors argue that AI systems amplify the impact of such threats by accelerating decision-making and reducing human oversight. When clinicians rely on automated outputs, the margin for detecting subtle manipulation narrows.
The paper highlights scenarios in which AI-supported diagnostics could be intentionally distorted to undermine confidence in healthcare institutions or public health responses. During crises such as pandemics or natural disasters, compromised AI systems could spread uncertainty, delay care, or fuel mistrust. These outcomes extend beyond individual patients to affect national resilience and social stability.
Another concern raised is the manipulation of training data used to develop medical AI models. If datasets are biased, incomplete, or intentionally corrupted, the resulting systems may perform unevenly across populations. This creates not only clinical risks but ethical and legal challenges, particularly when AI-driven decisions disproportionately affect vulnerable groups.
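One way to surface the uneven performance the authors warn about is a routine per-subgroup audit. The sketch below uses synthetic data, and the group labels and tolerance threshold are assumptions rather than anything prescribed by the study; it computes accuracy separately for each demographic group and flags any group that falls well below the overall figure.

```python
from collections import defaultdict

def subgroup_accuracy(labels, predictions, groups):
    """Accuracy per demographic group, plus the overall accuracy."""
    correct, total = defaultdict(int), defaultdict(int)
    for y, y_hat, g in zip(labels, predictions, groups):
        total[g] += 1
        correct[g] += int(y == y_hat)
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    return per_group, overall

# Synthetic example: a model that happens to do worse for group "B".
labels      = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 0, 1, 1, 0, 0]
groups      = ["A", "A", "A", "B", "B", "A", "B", "B"]

per_group, overall = subgroup_accuracy(labels, predictions, groups)
GAP_THRESHOLD = 0.10  # assumed tolerance; a real audit would set this clinically
for g, acc in per_group.items():
    if overall - acc > GAP_THRESHOLD:
        print(f"Group {g}: accuracy {acc:.2f} vs overall {overall:.2f} -- investigate")
```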
The authors argue that hybrid threats exploit gaps between technical safeguards and institutional readiness. Many healthcare organizations focus narrowly on compliance with data protection rules while underestimating broader security and resilience challenges. This fragmented approach leaves systems exposed to complex, multi-layered attacks that do not fit neatly into existing regulatory categories.
Building resilience into AI-driven medicine
The study advocates for a resilience-by-design approach that integrates cybersecurity, governance, and clinical practice from the outset. The authors argue that resilience must be treated as a core requirement of AI-enabled healthcare, not an afterthought.
A key recommendation is end-to-end protection of the AI lifecycle. This includes securing data collection, storage, model training, deployment, and ongoing operation. Each stage presents distinct risks, and failures at any point can compromise the entire system. Continuous monitoring, validation, and auditing are presented as essential safeguards against both accidental errors and malicious interference.
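As one concrete form such continuous auditing could take, the sketch below (an illustration, not the study's method; the model and event names are hypothetical) appends each model decision to a hash-chained log, so that any later edit or deletion of an entry breaks the chain and becomes detectable.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only decision log; each entry commits to the one before it."""
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> None:
        entry = {"time": time.time(), "event": event, "prev": self._last_hash}
        serialized = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(serialized).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry invalidates it."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: entry[k] for k in ("time", "event", "prev")}
            recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != recomputed:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record({"model": "triage-v1", "input_id": "scan-42", "output": "urgent"})
log.record({"model": "triage-v1", "input_id": "scan-43", "output": "routine"})
print("Chain intact:", log.verify())           # True
log.entries[0]["event"]["output"] = "routine"  # simulated tampering
print("Chain intact:", log.verify())           # False
```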
Human factors also play a key role in resilience. The study emphasizes that clinicians, administrators, and technical staff must be trained to understand the limitations and risks of AI systems. Overreliance on automated outputs without critical evaluation increases vulnerability. Maintaining human oversight and clear accountability structures is essential, especially in high-stakes clinical contexts.
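A simple safeguard against that overreliance is a confidence gate that routes uncertain outputs to a clinician instead of acting on them automatically. The pattern below is a hypothetical sketch, with an assumed threshold and labels, rather than a prescription from the study.

```python
REVIEW_THRESHOLD = 0.90  # assumed cutoff; would be tuned per task and validated

def route_prediction(label: str, confidence: float) -> str:
    """Auto-accept only high-confidence outputs; everything else gets a human."""
    if confidence >= REVIEW_THRESHOLD:
        return f"AUTO: report '{label}' (confidence {confidence:.2f})"
    return f"REVIEW: queue '{label}' for clinician sign-off (confidence {confidence:.2f})"

# Illustrative model outputs (label, softmax-style confidence).
for label, conf in [("no finding", 0.97), ("possible nodule", 0.74)]:
    print(route_prediction(label, conf))
```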
The authors highlight governance alignment as another critical challenge, noting that healthcare institutions operate under multiple regulatory frameworks covering data protection, medical devices, cybersecurity, and AI governance. When these frameworks are implemented in isolation, gaps and contradictions can emerge. The study calls for integrated governance models that align technical standards with clinical responsibility and legal accountability.
Set against Europe’s evolving regulatory landscape, the study highlights the growing overlap between AI, cybersecurity, and healthcare governance. The authors argue that effective compliance requires more than meeting minimum legal requirements. Institutions must develop internal capacities to adapt to evolving threats and regulatory expectations over time.
AI-enabled healthcare systems are part of national critical infrastructure, and their failure can have cascading effects on public health, economic stability, and social trust. Protecting these systems requires coordination between healthcare providers, regulators, technology developers, and security agencies.