Explainable AI helps build consumer trust in digital finance

Amid the growing popularity of artificial intelligence-driven financial platforms, a new review warns that transparency alone may not be enough to help users make informed financial decisions. The review found that while explainable AI, disclosure mechanisms, and advisory transparency often increase trust in and adoption of AI-powered financial tools, they rarely help users critically evaluate whether those systems deserve that trust in the first place.

The study, titled "Transparency by Design: A Narrative Synthesis of AI Disclosure, Explainability, and Trust in Consumer-Facing FinTech," was published in FinTech by Stefanos Balaskas of the University of Patras in Greece. It examines how AI disclosure, explainability tools, and transparency features influence trust in consumer-facing fintech platforms such as robo-advisors, AI investment systems, automated credit scoring services, and crowdfunding recommendation platforms.

Robo-advisors dominate AI transparency research while major FinTech sectors remain understudied

The review found that most academic work on AI transparency in finance focuses heavily on robo-advisory platforms and automated investment guidance systems. Six of the nine formally reviewed studies examined robo-advisors or related AI-based investment services, while only isolated studies investigated automated credit decisions, LLM-based investment advice, and crowdfunding recommendation systems.

The author noted that major consumer-facing FinTech sectors such as mobile banking apps, digital wallets, and financial chatbots remain largely absent from the literature despite their growing use in real-world financial ecosystems.

The studies reviewed came from countries including the United States, Germany, South Korea, Malaysia, Vietnam, China, India, and the United Kingdom, showing that concerns around AI transparency are emerging across both developed and emerging financial markets. However, the review found the research landscape fragmented rather than cumulative, with different studies focusing on separate transparency mechanisms and different trust outcomes.

The review grouped transparency mechanisms into several categories, including:

  • AI disclosure, where users are informed that AI is involved in decision-making
  • explainable AI, which attempts to clarify how decisions are generated
  • advisory or platform transparency regarding costs and processes
  • interpretability and comprehensibility of algorithms
  • user control and override options
  • information quality
  • responsibility attribution

Among these categories, explainable AI and broad advisory transparency dominated the literature. On the other hand, user control, responsibility attribution, and direct AI disclosure were only lightly studied.
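
To make the explainable-AI category concrete, the following minimal Python sketch, which is not drawn from the study itself, shows the kind of per-feature attribution a credit or investment interface could display next to a decision. The model, feature names, weights, and threshold are hypothetical; for a linear model, weight-times-value contributions are an exact attribution.

```python
import math

# Hypothetical linear credit-scoring model; the weights, bias, and
# feature names are invented for illustration, not from the study.
WEIGHTS = {"income_k": 0.04, "debt_ratio": -3.0, "late_payments": -0.8}
BIAS = 0.5
THRESHOLD = 0.5  # approve when predicted repayment probability >= 0.5


def predict_proba(applicant):
    """Logistic-regression probability that the applicant repays."""
    score = BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))


def explain(applicant):
    """Per-feature contribution (weight * value) to the linear score,
    sorted by absolute impact. For a linear model this attribution is
    exact, so the displayed explanation matches the decision logic."""
    contribs = [(f, WEIGHTS[f] * applicant[f]) for f in WEIGHTS]
    return sorted(contribs, key=lambda kv: abs(kv[1]), reverse=True)


applicant = {"income_k": 42, "debt_ratio": 0.55, "late_payments": 3}
p = predict_proba(applicant)
print(f"approval probability: {p:.2f} -> "
      f"{'approve' if p >= THRESHOLD else 'deny'}")
for feature, contribution in explain(applicant):
    print(f"  {feature:>14}: {contribution:+.2f}")
```

For nonlinear models, faithful attributions require approximation methods such as SHAP-style techniques, which is one reason the review treats explanation fidelity as a concern distinct from explanation presence.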

One study highlighted in the review examined whether clients reacted differently when financial advisors disclosed the use of AI systems in their recommendations. The findings showed that disclosure did not directly increase reliance on AI recommendations. Instead, it indirectly increased reliance by reducing users' sense of personal responsibility for investment decisions.

The review described this as one of the clearest examples of how transparency can alter accountability dynamics rather than simply boosting trust.

Another major finding involved explainable AI in automated credit scoring systems. In lending contexts, explanation mechanisms were closely linked to fairness, procedural legitimacy, and users' ability to challenge unfavorable outcomes. The review argued that this differs significantly from robo-advisory contexts, where transparency often functions more as a reassurance tool designed to reduce uncertainty and encourage adoption.
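
As a concrete illustration of contestability, lending interfaces sometimes pair a denial with a counterfactual: the smallest change to the application that would flip the outcome. The sketch below is hypothetical (the scoring rule and numbers are invented, not taken from the review) and computes single-feature counterfactuals for a toy linear score.

```python
# Hypothetical contestability aid for a denied application: for each
# feature, find the value that would flip the decision, holding the
# others fixed. Scoring rule and numbers are invented for illustration.
WEIGHTS = {"income_k": 0.04, "debt_ratio": -3.0, "late_payments": -0.8}
BIAS = 0.5  # decision rule: approve when the linear score >= 0


def score(applicant):
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)


def single_feature_counterfactuals(applicant):
    """Solve score = 0 for each feature in turn:
    x_f' = x_f - score / w_f, holding all other features fixed."""
    s = score(applicant)
    return {f: applicant[f] - s / w for f, w in WEIGHTS.items()}


applicant = {"income_k": 42, "debt_ratio": 0.55, "late_payments": 3}
s = score(applicant)
print(f"score: {s:+.2f} -> {'approve' if s >= 0 else 'deny'}")
for feature, needed in single_feature_counterfactuals(applicant).items():
    print(f"  approve if {feature} were {needed:.2f} "
          f"(currently {applicant[feature]})")
```

In practice, counterfactual generators must also respect feasibility constraints; this naive sketch can produce impossible suggestions, such as a negative debt ratio, which is precisely why the review ties contestability to explanation quality rather than mere presence.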

The paper also found that many studies relied heavily on surveys measuring user attitudes, willingness to adopt AI tools, or general trust ratings rather than examining actual financial behavior. Consequently, the evidence base remains weak in assessing whether transparency genuinely improves decision quality or simply makes consumers feel more comfortable using automated financial systems.

Transparency often increases adoption without improving informed judgment

Increased trust in AI systems does not necessarily mean well-placed trust. The study repeatedly distinguishes between "mere acceptance" and "trust calibration." Mere acceptance refers to higher comfort levels, increased trust ratings, and stronger adoption intentions. Trust calibration, by contrast, refers to users developing a realistic understanding of what AI systems can and cannot do, including recognizing uncertainty, limitations, and the need for human oversight.
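
The distinction can be made measurable. In human-AI interaction research, calibration is often operationalized as appropriate reliance: following the system when it is right and overriding it when it is wrong. The toy Python sketch below, with entirely invented data, shows how acceptance and calibration can diverge.

```python
# Toy illustration of "mere acceptance" vs. "trust calibration".
# Each trial records (ai_was_correct, user_followed_ai); the data
# below is invented for illustration.
trials = [
    (True, True), (True, True), (True, True),
    (False, True), (False, True), (True, True),
    (False, True), (True, True), (False, False), (True, True),
]

followed_when_right = sum(c and f for c, f in trials)
overrode_when_wrong = sum(not c and not f for c, f in trials)

# Acceptance: how often the user went along with the AI at all.
acceptance = sum(f for _, f in trials) / len(trials)
# Calibration: how often reliance was appropriate -- following a
# correct recommendation or overriding an incorrect one.
calibration = (followed_when_right + overrode_when_wrong) / len(trials)

print(f"acceptance (followed the AI): {acceptance:.0%}")
print(f"appropriate reliance:         {calibration:.0%}")
```

Here the user follows the AI 90% of the time but relies appropriately in only 70% of trials: an interface can raise the first number without moving the second, which is the review's core warning.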

According to the review, most transparency designs in consumer-facing FinTech currently support acceptance rather than calibration. Several studies found that transparency features increased perceived usefulness and willingness to adopt robo-advisory systems. Malaysian research involving low-income users showed that advisory transparency improved adoption intentions by clarifying costs, information, and investment processes. German research similarly found that transparent robo-advisory interfaces improved trust and investment willingness.

However, these studies generally did not test whether users became more discerning or capable of identifying flawed recommendations.

Vulnerable users, including low-income or inexperienced investors, may especially value transparency because it reduces uncertainty and hesitation. Yet this reassurance effect can create risks if transparency cues are mistaken for proof that AI systems are inherently reliable.

The paper warns that explanations themselves can also be misleading. Users may interpret the mere presence of an explanation as evidence that an AI system is trustworthy, even when they do not truly understand the underlying reasoning or when the explanation lacks technical fidelity.
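
This fidelity problem can be illustrated directly. The toy Python sketch below, with a model and data invented for illustration, compares a deliberately simplified additive explanation against the nonlinear model it claims to describe; the explanation reads plausibly yet diverges from the model's actual behavior.

```python
import random

# Toy fidelity check: does an additive explanation actually track the
# model it claims to describe? Model and data invented for illustration.


def model(x1, x2):
    # Nonlinear "black box" with an interaction term.
    return 0.5 * x1 + 0.5 * x2 + 2.0 * x1 * x2


def additive_explanation(x1, x2):
    # A simplified explanation that ignores the interaction entirely;
    # it looks reasonable feature-by-feature but is unfaithful.
    return 0.5 * x1 + 0.5 * x2


random.seed(0)
gaps = []
for _ in range(10_000):
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    gaps.append(abs(model(x1, x2) - additive_explanation(x1, x2)))

print(f"mean gap between explanation and model: {sum(gaps) / len(gaps):.2f}")
# On this toy model the gap averages about 0.5 -- large relative to the
# outputs -- even though the explanation cites real model coefficients.
```

Checks of this kind, comparing an explanation's implied predictions against the model's behavior under perturbation, are one way studies could test explanation accuracy rather than assuming it.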

The author therefore argued that transparency mechanisms need to be evaluated not simply on whether they improve trust scores, but on whether they improve actual judgment and decision quality.

The review also identified strong contextual differences between financial applications. In robo-advisory systems, transparency usually centers on portfolio logic, fees, and service clarity. In credit scoring systems, transparency is more closely tied to fairness and contestability because users may need to challenge denied loan applications or disputed outcomes.

Crowdfunding platforms represented another distinct environment where transparency interacted with broader governance mechanisms such as platform control, user engagement, update frequency, and recommendation visibility. The review argues that transparency cannot be treated as a universal design principle operating the same way across all financial technologies.

Researchers call for stronger behavioral testing and fairness-focused AI design

The study identifies major gaps in the current research landscape and outlines an extensive future research agenda focused on improving AI accountability in finance. One major weakness is the lack of behavioral realism in existing studies. Most experiments rely on hypothetical scenarios, trust ratings, or adoption surveys rather than real financial choices involving meaningful risk or consequences.

Future research should examine how people respond to AI systems during adverse or high-stakes situations such as loan denials, fraud alerts, disputed investment recommendations, and financial losses.

The review also calls for more direct comparisons between different transparency designs. Current studies often examine only one transparency mechanism at a time, making it difficult to determine whether disclosure, explanation, platform transparency, or user control are equally effective or fundamentally different.

Other key recommendations include:

  • Studying explanation quality rather than mere explanation presence: future experiments should test whether explanations are accurate, understandable, contestable, and genuinely useful instead of assuming any explanation automatically improves trust.
  • Conducting longitudinal studies of how trust changes over time: a transparency cue that initially boosts confidence may not sustain appropriate reliance after repeated exposure, inconsistent performance, or system failure.
  • Examining human-AI hybrid systems: several reviewed studies found that users still preferred some degree of human involvement even when AI systems were perceived as transparent and trustworthy, suggesting that transparency alone may not eliminate demand for human accountability in financial decision-making.

First published in: Devdiscourse