AI alone can’t fight fraud: Bankers demand transparency and ethical design
As financial institutions around the world adopt AI-based tools to combat fraud, a new study reveals that technology alone is not enough to ensure success. In the Gulf banking sector, trust, fairness, and transparency emerge as pivotal determinants of adoption. The study, titled “Adoption of Artificial Intelligence-Driven Fraud Detection in Banking: The Role of Trust, Transparency, and Fairness Perception in Financial Institutions in the United Arab Emirates and Qatar,” was published in April 2025 in the Journal of Risk and Financial Management. Conducted by Hadeel Yaseen and Asma’a Al-Amarneh, the research maps how ethical and perceptual variables condition the uptake of AI tools in highly regulated financial environments.
Based on a survey of 409 banking professionals, including auditors, compliance officers, and risk managers in the UAE and Qatar, the study applies structural equation modeling to measure how transparency, fairness perception, and regulatory compliance shape trust, and how trust in turn drives the adoption of AI-based fraud detection systems. The findings challenge the assumption that technical performance alone drives AI integration. Instead, the path to AI adoption in Gulf banking institutions is mediated by whether users perceive the systems as understandable, just, and institutionally credible.
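To make the method concrete, the sketch below shows how a path model of this kind can be specified in Python with the semopy package, assuming a pandas DataFrame of per-respondent composite scores. The package, file name, and column names are illustrative assumptions, not details taken from the study.

```python
import pandas as pd
from semopy import Model

# Hypothetical survey data: one row per respondent, composite Likert-scale scores.
# The file and column names are assumptions for illustration only.
df = pd.read_csv("survey_responses.csv")

# Path model mirroring the study's logic: transparency, fairness perception, and
# regulatory compliance predict trust; trust (and transparency) predict adoption.
desc = """
trust ~ transparency + fairness + compliance
adoption ~ trust + transparency
"""

model = Model(desc)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, and p-values
```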
How does transparency shape trust and drive AI adoption?
The study finds that transparency in AI decision-making is a powerful enabler of trust. In financial systems governed by strict compliance protocols, the opacity of “black box” AI models often deters use. Explainability tools such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), which aim to make complex AI outputs interpretable for auditors and regulators, are increasingly important. In both the UAE and Qatar, the ability to explain and justify why a transaction was flagged as fraudulent is not just desirable - it’s legally necessary.
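As a rough illustration of how such tools surface per-transaction explanations, the sketch below trains a toy classifier on synthetic data and asks SHAP why a single transaction received its score. The features, labels, and model choice are illustrative assumptions, not elements of the study.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy transaction data; feature names are illustrative, not from the study.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "amount": rng.lognormal(mean=3.0, sigma=1.0, size=1000),
    "hour_of_day": rng.integers(0, 24, size=1000),
    "foreign_merchant": rng.integers(0, 2, size=1000),
})
# Synthetic fraud label so the example is self-contained.
y = ((X["amount"] > 60) & (X["foreign_merchant"] == 1)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributes each alert to the features that drove it, giving auditors a
# per-transaction answer to "why was this flagged?"
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])
print(X.columns.tolist())
print(contributions)  # per-feature contributions to the fraud score (shape varies by shap version)
```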
Yaseen and Al-Amarneh’s research confirms that professionals who perceive AI systems as interpretable are significantly more likely to trust and adopt them. Transparency was found to directly influence both trust and the intention to use AI tools. This effect is stronger in institutions where AI is already part of day-to-day operations, suggesting that experiential familiarity reinforces the benefits of explainable outputs.
Moreover, the study reveals that trust itself is the most powerful predictor of AI adoption. The model shows that trust explains nearly half (48%) of the variance in adoption behavior, even more than regulatory compliance or technical exposure. This finding is consistent across internal and external auditors, though internal auditors, who interact more frequently with AI systems, demonstrated higher levels of trust overall.
To what extent does fairness perception mitigate algorithmic bias?
The research also sheds light on the critical role of fairness in AI adoption. While algorithmic bias is a known challenge in machine learning systems, Yaseen and Al-Amarneh emphasize that it is not the presence of bias alone, but rather how fairly users perceive the system to operate, that determines trust and eventual use. The study finds that fairness perception fully mediates the negative relationship between bias and adoption: when professionals believe a system is fair, they are more likely to use it, even if they recognize that some bias exists.
The authors tested whether fairness perception acts as both a mediator and a moderator. Results confirmed that perceived fairness not only mediates the effect of algorithmic bias on adoption but also amplifies the positive effect of trust. In practice, this means that when a fraud detection system is seen as equitable, it strengthens the link between trust and willingness to use it. This was especially pronounced in UAE-based institutions, where fairness perception had a stronger overall effect than in Qatar.
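A minimal sketch of how mediation and moderation of this kind are typically tested, assuming a DataFrame of composite survey scores and using the pingouin and statsmodels packages; the column names and package choices are assumptions, since the paper does not disclose its analysis code.

```python
import pandas as pd
import pingouin as pg
import statsmodels.formula.api as smf

# Hypothetical composite scores per respondent; column names are assumptions.
df = pd.read_csv("survey_scores.csv")  # columns: bias, fairness, trust, adoption

# Mediation: does fairness perception carry the effect of perceived bias on adoption?
mediation = pg.mediation_analysis(data=df, x="bias", m="fairness", y="adoption",
                                  n_boot=5000, seed=42)
print(mediation)  # reports direct, indirect, and total effects

# Moderation: does fairness perception amplify the trust -> adoption link?
moderation = smf.ols("adoption ~ trust * fairness", data=df).fit()
print(moderation.summary().tables[1])  # a significant trust:fairness term indicates moderation
```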
Subgroup analysis further reveals that trust and fairness are more influential among professionals with high exposure to AI. Those with more than 11 years of experience and prior interaction with AI tools reported higher fairness perception scores and greater readiness to adopt AI solutions. On the other hand, professionals with limited AI exposure demonstrated lower trust levels and more skepticism about fairness, highlighting the importance of training and AI literacy.
How do institutional and regional factors influence AI adoption?
While the underlying variables of trust, transparency, and fairness were consistent across the sample, institutional and regional differences shaped their influence. In the UAE, trust and fairness perception had a stronger predictive effect on AI adoption compared to Qatar. This suggests that local governance cultures, particularly regulatory expectations around explainability and ethical compliance, amplify the importance of these factors.
Internal versus external roles also shaped attitudes. Internal auditors reported significantly higher trust in AI systems than external auditors, likely due to their closer engagement with AI during operational audits. External auditors, constrained by professional distance and their legal mandates, were less likely to adopt AI unless its decisions were highly transparent and consistent with existing regulatory frameworks.
The findings also point to the importance of aligning AI adoption strategies with global regulatory norms. The UAE and Qatar are integrating elements from the EU AI Act and the U.S. NIST AI Risk Management Framework into local data protection laws. These frameworks prioritize explainability, bias mitigation, and human oversight, values that align closely with the ethical expectations observed in the study.
Most importantly, the research debunks the idea that speed and detection accuracy alone drive AI adoption. While black-box models often deliver higher performance, they fall short in explainability, making them less compatible with the Gulf region’s regulatory demands. Hybrid models, combining rule-based and machine learning methods, are emerging as a preferred solution. They offer a middle ground between operational effectiveness and interpretability, satisfying both technical and institutional requirements.
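A minimal sketch of what such a hybrid arrangement can look like in code: deterministic compliance rules give an immediately auditable reason for a flag, and a machine-learning score handles the cases the rules do not cover. All names, thresholds, and the stand-in model score are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transaction:
    amount: float
    country: str
    hour: int

def rule_score(tx: Transaction) -> tuple[bool, str]:
    """Hard compliance rules: transparent and easy to justify to a regulator."""
    if tx.amount > 100_000:
        return True, "amount exceeds reporting threshold"
    if tx.country in {"XX"} and tx.hour < 5:  # placeholder watchlist entry
        return True, "high-risk corridor outside business hours"
    return False, ""

def hybrid_flag(tx: Transaction,
                ml_score: Callable[[Transaction], float],
                threshold: float = 0.9) -> dict:
    """Combine rules with an ML score; always return a human-readable reason."""
    hit, reason = rule_score(tx)
    if hit:
        return {"flag": True, "source": "rule", "reason": reason}
    score = ml_score(tx)
    if score >= threshold:
        return {"flag": True, "source": "model", "reason": f"model risk score {score:.2f}"}
    return {"flag": False, "source": "none", "reason": ""}

# Example with a stand-in model score:
tx = Transaction(amount=12_500.0, country="XX", hour=3)
print(hybrid_flag(tx, ml_score=lambda t: 0.4))
```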
- FIRST PUBLISHED IN: Devdiscourse

