Trust and transparency will decide future of AI in mobile banking

CO-EDP, VisionRI | Updated: 24-09-2025 23:21 IST | Created: 24-09-2025 23:21 IST

Artificial intelligence is reshaping the future of mobile financial services in emerging markets, but a new study warns that its success depends on more than technological efficiency. The research finds that while AI-driven systems are widely seen as more effective in detecting fraud, deep concerns over privacy, trust, and transparency threaten adoption.

The paper, titled “AI-Driven Cybersecurity in Mobile Financial Services: Enhancing Fraud Detection and Privacy in Emerging Markets” and published in the Journal of Cybersecurity and Privacy, provides one of the first user-centered assessments of AI in mobile finance, combining statistical analysis with qualitative insights. The study is based on survey responses from over 150 participants in Kenya, Nigeria, and Bangladesh.

How effective is AI in fraud detection compared to traditional systems?

The study explores whether AI tools genuinely outperform rule-based fraud detection in mobile financial services. According to participants, the answer is clear: AI systems were overwhelmingly considered more effective. More than nine out of ten respondents reported that AI offered stronger protection against fraudulent activity than conventional methods. Users noted AI’s ability to identify suspicious behavior in real time and adapt quickly to new fraud tactics.
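
To make that contrast concrete, the sketch below compares a fixed amount-threshold rule with an unsupervised anomaly detector that scores each transaction against past behaviour across several features at once. It is purely illustrative: the features, thresholds, and synthetic transactions are hypothetical and are not drawn from the study's data.

```python
# Illustrative only: contrasts a static rule with a learned anomaly detector.
# Features, thresholds, and data are hypothetical, not from the study.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical transaction features: [amount, hour_of_day, txns_last_24h]
normal = np.column_stack([
    rng.lognormal(3.0, 0.5, 1000),   # typical amounts
    rng.integers(6, 22, 1000),       # daytime activity
    rng.poisson(2, 1000),            # low transaction frequency
])
fraud = np.column_stack([
    rng.lognormal(5.5, 0.3, 20),     # unusually large amounts
    rng.integers(0, 5, 20),          # late-night activity
    rng.poisson(15, 20),             # bursts of transactions
])

# Rule-based check: flags only when a fixed amount limit is exceeded.
def rule_based_flag(txn, amount_limit=300.0):
    return bool(txn[0] > amount_limit)

# Model-based check: an unsupervised detector scores how unusual a
# transaction looks relative to historical behaviour, across all features.
model = IsolationForest(contamination=0.02, random_state=0).fit(normal)

for txn in fraud[:3]:
    model_flag = model.predict(txn.reshape(1, -1))[0] == -1
    print(f"amount={txn[0]:8.2f}  rule_flag={rule_based_flag(txn)}  model_flag={model_flag}")
```

A rule like the one above catches only what it was written for, whereas a detector of this kind can be retrained as fraud patterns shift, which is the adaptability respondents pointed to.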

However, effectiveness alone does not guarantee acceptance. The findings show that users remain wary when systems operate as “black boxes.” False positives, which occurred in more than a quarter of cases, created confusion and frustration. Without clear explanations, customers struggled to understand why transactions were flagged, eroding confidence in the technology.

This underscores the double-edged nature of AI in financial systems: while it enhances operational security, its lack of transparency can undermine trust, the very foundation of user adoption.

Why do trust and transparency matter more than usability?

The research highlights that trust and transparency are stronger predictors of adoption than ease of use. Statistical modelling revealed that perceived usefulness and transparency directly shaped trust, which in turn was the single most important driver of willingness to adopt AI-driven fraud detection. By contrast, interface simplicity or user-friendliness had little effect once trust and risk concerns were factored in.
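
The paper's exact modelling is not reproduced here, but the pattern it describes can be sketched with an ordinary regression on synthetic survey-style responses: adoption intention regressed on trust, perceived risk, and usability, with trust itself shaped by usefulness and transparency. The coefficients and data below are invented for illustration and do not come from the study.

```python
# Synthetic illustration of the reported pattern: trust drives adoption,
# usability adds little. Data and coefficients are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 150  # roughly the study's reported sample size

transparency = rng.normal(0, 1, n)
usefulness = rng.normal(0, 1, n)
usability = rng.normal(0, 1, n)
risk = rng.normal(0, 1, n)

# Trust shaped by perceived usefulness and transparency, as the paper reports.
trust = 0.5 * usefulness + 0.4 * transparency + rng.normal(0, 0.5, n)

# Adoption intention driven mainly by trust and risk, barely by usability.
adoption = 0.7 * trust - 0.3 * risk + 0.05 * usability + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([trust, risk, usability]))
fit = sm.OLS(adoption, X).fit()
print(fit.summary(xname=["const", "trust", "risk", "usability"]))
# Expected pattern: a large, significant coefficient on trust, a negative
# one on risk, and a near-zero coefficient on usability.
```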

This dynamic is particularly pronounced in emerging markets, where experiences with digital platforms vary across urban and rural communities. Respondents from regions with stronger enforcement of data protection laws expressed higher levels of trust in mobile financial systems. In contrast, those in jurisdictions with weak oversight viewed AI tools with skepticism, fearing data misuse by companies or third parties.

The findings also revealed socio-cultural dimensions. Digitally literate urban users were more open to AI adoption, while rural participants, especially those with lower levels of education, expressed heightened concern about consent and silent data collection. These patterns suggest that successful deployment of AI tools requires tailoring to local regulatory and cultural contexts, rather than a one-size-fits-all approach.

How can emerging markets balance fraud prevention with privacy?

A key issue identified by the study is the trade-off between enhanced fraud detection and heightened privacy concerns. While AI systems offer superior protection, more than 95 percent of respondents feared risks such as unauthorized data harvesting, opaque third-party sharing, and the erosion of user consent. These anxieties point to a growing demand for technical safeguards that protect privacy without weakening security.

The authors recommend two strategies to strike this balance. First, the integration of explainable AI (XAI) would make fraud detection systems more transparent, helping users understand why particular transactions are flagged and reducing mistrust caused by false alarms. Second, the adoption of federated learning (FL) would enable institutions to collaborate on fraud detection without centralizing sensitive data, ensuring privacy is preserved across borders.
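
As a rough illustration of the first recommendation, the sketch below shows the kind of per-feature explanation an XAI layer might surface when a transaction is flagged, using a simple linear score as a stand-in for a production model. The feature names, weights, threshold, and example transaction are hypothetical and are not taken from the paper.

```python
# Hypothetical explanation for a flagged transaction: each feature's
# contribution to a simple linear fraud score is shown to the customer.
import numpy as np

FEATURES = ["amount_vs_typical", "new_device", "night_time", "txns_last_hour"]
WEIGHTS = np.array([1.8, 1.2, 0.6, 0.9])   # hypothetical trained coefficients
THRESHOLD = 2.5                             # hypothetical flagging threshold

def explain_flag(values):
    """Return the fraud score and the per-feature reasons behind it."""
    contributions = WEIGHTS * values
    score = contributions.sum()
    reasons = sorted(zip(FEATURES, contributions), key=lambda kv: kv[1], reverse=True)
    return score, reasons

# Example: 3x the user's typical amount, on a new device, during the day,
# with two transactions in the past hour.
score, reasons = explain_flag(np.array([3.0, 1.0, 0.0, 2.0]))

print(f"fraud score = {score:.1f} (flagged: {score > THRESHOLD})")
for feature, contribution in reasons:
    if contribution > 0:
        print(f"  - {feature}: contributed +{contribution:.1f}")
```

Surfacing the dominant contributions in plain language is the kind of feedback that could soften the frustration false positives caused among respondents, while a federated-learning setup would address the complementary privacy concern by exchanging only model updates rather than raw transaction records.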

The study also points to the need for non-technical measures. Localized awareness campaigns tailored to cultural contexts could help users build confidence in AI tools, while stronger regulatory frameworks would reinforce accountability for both providers and third parties. Without such measures, the adoption of AI-driven cybersecurity risks being undermined by public resistance.

First published in: Devdiscourse