Trust crisis in digital health: Patients least likely to open up to medical chatbots
A new study challenges one of the most widely held assumptions in digital health: that artificial intelligence-powered medical chatbots encourage patients to share more sensitive information. Instead, the research finds that patients are significantly less willing to disclose personal health details to chatbots than to human providers, exposing a critical trust gap at the heart of AI-driven healthcare systems.
Published in Healthcare, the study titled "Can Medical Chatbots Trigger Disinhibition and Encourage Health Information Disclosure?" presents experimental evidence showing that AI-mediated interactions do not increase openness in high-stakes medical contexts.
Chatbots fail to trigger expected psychological disinhibition
The adoption of medical chatbots has largely rested on the concept of disinhibition: the tendency for individuals to feel less judged, and therefore more willing to share personal information, when interacting through digital or non-human channels. Building on this concept, the study introduces machine-mediated disinhibition, a psychological state in which users may feel less constrained because they are communicating with a machine rather than a person.
To test this, researchers conducted a controlled experiment involving 373 participants, comparing three types of interactions: face-to-face consultation, human consultation through a computer interface, and chatbot-based consultation. Each participant was exposed to the same mental health scenario, ensuring that only the type of interaction varied.
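The article does not reproduce the study's data or statistical procedure. Purely as an illustration of how a between-subjects design of this kind is commonly evaluated, the sketch below runs a one-way ANOVA and follow-up pairwise tests on hypothetical willingness-to-disclose scores; every number, scale, and group size in it is an assumption, not a result from the paper.

```python
# Illustrative sketch only: the study's raw data and exact analysis are not
# reproduced here. Hypothetical scores on an assumed 1-7 willingness-to-disclose
# scale stand in for the three experimental groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ratings for three independent groups (373 participants in total).
face_to_face = rng.normal(5.4, 1.0, 125)   # in-person consultation
human_via_pc = rng.normal(5.0, 1.0, 124)   # human provider through a computer
chatbot      = rng.normal(4.3, 1.0, 124)   # fully automated chatbot

# One-way ANOVA: does mean willingness to disclose differ across interaction types?
f_stat, p_value = stats.f_oneway(face_to_face, human_via_pc, chatbot)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Follow-up pairwise comparisons (Welch's t-tests) to locate where the gap lies.
for label, group in [("human via computer", human_via_pc), ("chatbot", chatbot)]:
    t, p = stats.ttest_ind(face_to_face, group, equal_var=False)
    print(f"face-to-face vs {label}: t = {t:.2f}, p = {p:.4f}")
```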
The results show that the expected disinhibition effect does not materialize in chatbot interactions. Participants did not feel significantly less constrained, less judged, or more comfortable sharing information when interacting with a chatbot compared to human providers.
This finding directly challenges earlier assumptions drawn from broader human-computer interaction research, where users often reported feeling more open when engaging with digital systems. In healthcare, however, the study suggests that the stakes are fundamentally different. The sensitivity of health information, combined with the perceived risks of sharing it, alters user behavior in ways that standard disinhibition theories fail to capture.
The study also highlights that disinhibition may not be a universal psychological response but rather a context-dependent one. While users may feel freer to express themselves in low-risk environments such as social media or casual chatbot interactions, the same effect does not extend to situations involving mental health, medical diagnoses, or personal vulnerability.
Patients disclose least to chatbots, favor human interaction
While the absence of increased disinhibition is notable, the most striking finding of the study lies in actual disclosure behavior. Participants were asked to indicate their willingness to share sensitive mental health information across the three interaction types.
The results reveal a clear hierarchy. Face-to-face consultations generated the highest levels of disclosure, followed by human-mediated computer interactions. Chatbot interactions consistently produced the lowest willingness to disclose sensitive information.
The findings show a steady decline in disclosure from in-person consultations to chatbot interactions, with a noticeable gap between human and AI-based communication modes.
This finding is particularly significant because it contradicts a widely promoted advantage of AI chatbots in healthcare. Many developers and healthcare providers have assumed that removing human judgment from the interaction would encourage patients to be more honest, especially about stigmatized conditions such as mental health issues or sexual health concerns.
Instead, the study suggests that the absence of a human counterpart may reduce, rather than increase, a patient's willingness to share. Even when the chatbot presents the same questions and information as a human provider, the perceived identity of the interaction partner plays a decisive role.
The study also finds that the difference between face-to-face and human-through-computer interactions is not statistically significant, suggesting that the presence of a human agent, even when mediated by a screen, is enough to sustain higher levels of disclosure than a fully automated system.
Trust, privacy fears, and authenticity shape user resistance
To explain why chatbots underperform in eliciting disclosure, the study identifies three key factors: trust deficits, privacy concerns, and perceived lack of authenticity.
Trust emerges as a major barrier. In healthcare, trust is not only about competence but also about relational accountability and emotional connection. Patients may believe that a human doctor has ethical responsibility, empathy, and the ability to understand their condition in a meaningful way. In contrast, AI systems are often perceived as lacking moral agency and accountability, making users less comfortable sharing sensitive information.
Privacy concerns further amplify this hesitation. Unlike human doctors, who are bound by professional confidentiality norms, chatbots are associated with data collection, storage, and potential misuse. The perception that personal health information could be stored, analyzed, or shared by unknown systems creates a strong deterrent to disclosure.
The study notes that this risk perception is heightened in AI systems due to their opacity. Users often do not fully understand how their data is processed or who has access to it, leading to a sense of loss of control. In high-stakes contexts, such as mental health discussions, this uncertainty becomes a critical barrier.
Authenticity is the third factor shaping user behavior. Healthcare interactions are not purely transactional; they involve emotional validation, empathy, and human connection. The study finds that users perceive chatbot responses as simulated rather than genuinely empathetic, reducing the depth of interaction.
This lack of perceived authenticity limits users' willingness to engage in meaningful disclosure. Even if the chatbot provides accurate information, the absence of emotional resonance undermines trust and openness.
Together, these factors create what the study describes as a paradox. While chatbots may theoretically reduce fear of social judgment, they simultaneously introduce new forms of uncertainty and discomfort that suppress disclosure.
Implications for AI healthcare design and policy
The study suggests that chatbots should not be viewed as direct replacements for human providers in contexts requiring sensitive information disclosure. Instead, hybrid models may be more effective, where chatbots handle initial interactions or routine tasks but escalate to human professionals when deeper engagement is required.
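As a hypothetical sketch of such an escalation rule (the topics, thresholds, and function names below are illustrative assumptions, not part of the study or any specific product), a chatbot front end might hand the conversation to a clinician as soon as sensitive themes appear or the user repeatedly declines to answer:

```python
# Hypothetical escalation rule for a hybrid chatbot/human workflow.
# Keywords, thresholds, and data structures are illustrative assumptions only.
from dataclasses import dataclass, field

SENSITIVE_TOPICS = {"self-harm", "suicide", "abuse", "sexual health", "addiction"}

@dataclass
class Session:
    messages: list[str] = field(default_factory=list)
    declined_questions: int = 0  # times the user refused to answer a question

def should_escalate(session: Session, max_declines: int = 2) -> bool:
    """Route the conversation to a human professional when sensitive topics
    appear or the user repeatedly avoids disclosure."""
    text = " ".join(session.messages).lower()
    if any(topic in text for topic in SENSITIVE_TOPICS):
        return True
    return session.declined_questions >= max_declines

# Example: a user raises a stigmatized topic, so the bot defers to a human.
session = Session(messages=["I haven't been sleeping", "I've thought about self-harm"])
print(should_escalate(session))  # True -> hand off to a clinician
```

The rule is deliberately conservative, erring on the side of escalation, which reflects the study's point that fully automated systems struggle precisely in the situations where disclosure matters most.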
Trust-building must become a key design priority. Chatbots should clearly communicate their purpose, data handling practices, and limitations at the beginning of interactions. Transparency may help reduce uncertainty and build user confidence.
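A minimal sketch of what that up-front transparency could look like is shown below; the wording, retention period, and field names are entirely invented for illustration and do not describe any real system.

```python
# Illustrative only: one way to surface purpose, data handling, and limitations
# at the start of a chatbot session, as the study recommends.
INTRO_DISCLOSURE = {
    "purpose": "This assistant helps you prepare for an appointment; it does not diagnose.",
    "data_handling": "Your answers are stored encrypted for 30 days and are never sold or shared for marketing.",
    "limitations": "It may misunderstand you, and it cannot replace a clinician's judgment.",
    "human_handoff": "Type 'talk to a person' at any time to reach a human professional.",
}

def opening_message(disclosure: dict[str, str]) -> str:
    """Render the transparency notice shown before any health questions are asked."""
    return "\n".join(f"- {text}" for text in disclosure.values())

print(opening_message(INTRO_DISCLOSURE))
```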
Improving the conversational quality of chatbots could help address authenticity concerns. More natural language patterns, adaptive responses, and context-aware communication may enhance user experience, although the study suggests that these improvements alone may not fully overcome the trust gap.
Further, policymakers and healthcare providers must address data governance and privacy issues. Clear regulations on how patient data is collected, stored, and used by AI systems will be essential to building trust and encouraging adoption.
Additionally, the study calls for further research into how cultural, demographic, and contextual factors influence user behavior in AI-mediated healthcare interactions. The current findings are based on a controlled experimental setup, and real-world behavior may vary depending on individual experiences and system design.
First published in: Devdiscourse