Patients welcome AI support in healthcare, but not without transparency
Virtual healthcare has expanded rapidly beyond its pandemic origins, changing the way patients with chronic conditions interact with clinicians. While remote consultations have improved access and efficiency, many users still question whether digital care can sustain the human connection that underpins long-term treatment and trust.
These issues are examined in a paper titled "User Perceptions of Virtual Consultations and Artificial Intelligence Assistance: A Mixed Methods Study," published in Future Internet. The study reports that both patients and healthcare professionals value virtual consultations for their practicality but view them as a complement to, rather than a replacement for, in-person care, especially when AI tools are involved.
Virtual consultations are usable and valued, but not a full replacement
The study finds that virtual consultations are broadly usable, accessible, and acceptable for chronic care, particularly when they reduce travel burdens and provide continuity for patients in rural or underserved regions. Survey results show high overall satisfaction with virtual care platforms, indicating that most users can navigate the technology and feel comfortable using it for routine interactions.
Patients report that virtual consultations offer practical benefits, including flexibility, reduced waiting times, and easier access to specialists. Healthcare professionals similarly acknowledge that remote consultations help manage workloads, streamline follow-ups, and maintain contact with patients outside hospital settings. These advantages are especially pronounced for individuals managing conditions such as chronic respiratory disease or rheumatological disorders, where frequent monitoring is required.
However, the research makes clear that satisfaction with usability does not equate to confidence in clinical quality. Both patients and clinicians consistently view virtual consultations as a supplement rather than a substitute for face-to-face care. Many participants emphasize that physical presence remains critical for comprehensive assessments, early detection of complications, and building therapeutic relationships.
The preference that emerges most strongly across all participant groups is for hybrid care models. These combine virtual consultations with periodic in-person visits, allowing flexibility while preserving clinical rigor and interpersonal trust. Patients who had reverted to in-person care still indicated that they would consider returning to virtual consultations if hybrid options were available, suggesting that rigid digital-only models may limit long-term adoption.
Healthcare professionals, in particular, express concern about the limits of remote assessment. Without physical examinations, visual cues, or spontaneous observation, clinicians report difficulty forming a complete picture of a patient’s condition. This gap contributes to a perception that virtual care, while efficient, can feel incomplete or less reliable in complex clinical situations.
Empathy and human connection remain the weakest link
Patients frequently report feeling less emotionally supported during remote interactions, especially when consultations are conducted by clinicians they have never met in person. The absence of physical presence and nonverbal cues appears to hinder rapport building and trust formation.
Healthcare professionals echo these concerns, noting that virtual consultations limit their ability to read subtle signs of distress, discomfort, or emotional vulnerability. This loss of relational depth is not merely a matter of preference but is closely tied to perceptions of care quality and patient reassurance.
The study highlights a paradox at the heart of virtual healthcare adoption. While users appreciate convenience and accessibility, they often feel more secure and confident when seen in person. Patients who returned to face-to-face care reported higher comfort levels and greater confidence in communication, even if virtual consultations were easier to schedule.
Importantly, the research suggests that empathy deficits are not inevitable but reflect gaps in training, platform design, and institutional support for virtual interaction. Clinicians receive extensive preparation for in-person care but far less guidance on how to convey empathy and build relationships through digital channels. Without deliberate intervention, the study warns, virtual care risks becoming transactional rather than therapeutic.
This challenge is particularly relevant for chronic care, where long-term engagement, trust, and emotional support are integral to treatment adherence and patient outcomes. The findings suggest that improving virtual care quality requires not only better technology but also new professional norms and training frameworks that prioritize human connection alongside efficiency.
AI assistance is welcomed cautiously amid literacy and trust gaps
The study presents AI as a potentially valuable but carefully scrutinized addition to virtual care. Both patients and healthcare professionals express interest in AI-assisted features that could enhance virtual consultations, such as automated summaries, administrative support, diagnostic prompts, and personalized health information.
Patients see value in AI tools that could help them better understand their condition, track symptoms, and recall information from consultations. Healthcare professionals, meanwhile, identify opportunities for AI to reduce documentation burdens, support decision-making, and flag relevant clinical considerations during remote interactions.
However, enthusiasm for AI is tempered by widespread uncertainty. A substantial proportion of participants report limited understanding of how AI works in healthcare contexts, including its capabilities, limitations, and risks. This lack of familiarity coexists with a willingness to try AI-assisted tools, creating what the authors describe as a gap between acceptance and informed judgment.
Concerns about data privacy, accuracy, and accountability surface repeatedly across both qualitative and quantitative findings. Participants emphasize that AI should not replace human clinicians or operate without oversight. Trust in AI-assisted healthcare is conditional on transparency, explainability, and the assurance that final decisions remain under human control.
The study finds no significant differences in attitudes toward AI between current and former users of virtual consultations, suggesting that perceptions of AI are shaped less by personal experience and more by broader social narratives and institutional trust. This underscores the importance of addressing AI literacy proactively rather than assuming familiarity will develop organically.
From a policy perspective, the findings highlight the risk of deploying AI in healthcare settings without adequate safeguards and user education. Participants’ cautious openness suggests that poorly implemented AI systems could erode trust in virtual care rather than strengthen it. Conversely, well-designed, transparent, and participatory AI tools could address some of the very limitations that drive patients back to in-person care.
Implications for digital health policy and practice
Future digital health strategies must move beyond emergency adoption toward deliberate, evidence-based design, the authors argue. Hybrid care models emerge as the most viable path forward, balancing flexibility with clinical depth and relational continuity. Training healthcare professionals to communicate effectively and empathetically in virtual settings is identified as a critical priority, alongside investment in infrastructure and technical support.
Artificial intelligence, meanwhile, should be positioned as a supportive layer rather than a disruptive force. The study emphasizes the need for regulatory frameworks, ethical oversight, and participatory design processes that involve both patients and clinicians in shaping AI tools. Without such measures, AI risks amplifying existing shortcomings rather than resolving them.
FIRST PUBLISHED IN: Devdiscourse

