Banks embrace GenAI, but security threats and bias risks loom
The study highlights that GenAI is already revolutionizing five key areas of financial services: customer engagement, regulatory compliance, investment management, developer productivity, and strategic decision-making.
Global financial institutions are embracing generative artificial intelligence (GenAI) with unprecedented speed, seeking to transform everything from customer service to regulatory compliance. Yet as the technology penetrates deeper into high-stakes financial operations, cybersecurity, ethical, and governance vulnerabilities are surfacing at an alarming rate.
A comprehensive study titled “Generative AI in Financial Institutions: A Global Survey of Opportunities, Threats, and Regulation”, published by researchers from the Indian Institute of Technology Kanpur, maps this fast-evolving landscape and calls for urgent safeguards to ensure responsible AI integration.
This global survey outlines both the disruptive promise and the profound perils posed by GenAI adoption across banking, insurance, asset management, and fintech ecosystems. Drawing from a wide array of use cases, regulatory insights, threat intelligence, and operational benchmarks, the study provides a sweeping diagnosis of how financial firms can balance innovation with risk management.
How are financial institutions deploying generative AI?
The study highlights that GenAI is already revolutionizing five key areas of financial services: customer engagement, regulatory compliance, investment management, developer productivity, and strategic decision-making.
In customer-facing operations, banks such as SBI, Axis Bank, and HDFC Bank are using LLM-powered chatbots and virtual assistants to respond to queries in multiple languages, deliver personalized financial advice, and dynamically generate content for marketing campaigns. These tools are built on architectures like Retrieval-Augmented Generation (RAG), enabling high-context and individualized recommendations.
Compliance teams are leveraging GenAI to digest lengthy regulatory documents and automate report generation using long-context summarization. Fraud detection has also been upgraded with synthetic data training that helps models detect anomalies unseen by traditional systems. Companies like Citigroup and startups like AdvaRisk are implementing such tools to streamline operational oversight and bolster defenses.
In investment management, JPMorgan’s IndexGPT exemplifies how GenAI enables thematic portfolio creation via natural language input, while virtual financial advisors simulate human interaction through dialogue tuning and sentiment tracking. Internal operations have seen developers at institutions like Goldman Sachs benefit from AI-driven code generation and bug detection, reducing development timelines by up to 30%.
Finally, in strategic planning, financial firms now use GenAI for unstructured data analysis, market scenario simulation, and economic stress testing. These AI-powered foresight tools are already helping firms react faster to macroeconomic shocks and evolving regulations.
Across all these domains, the study notes a phased deployment pattern: starting with internal pilots and gradually expanding to production-scale applications. Importantly, GenAI is mostly deployed in an augmented intelligence model, serving as a decision-making co-pilot, not a replacement for human oversight.
What are the cyber and ethical risks emerging from GenAI?
The study identifies two major categories of threats: GenAI-enabled attacks and attacks targeting GenAI systems themselves.
The first includes AI-generated phishing, deepfake-enabled fraud, and disinformation campaigns. The 2024 spike in AI-driven fraud saw phishing rates surge by 118%, with attackers now crafting flawless messages impersonating CEOs or vendors, often augmented by deepfake voice calls. These campaigns are increasingly used to authorize unauthorized transfers or manipulate stock prices. Tools like “WormGPT” and “FraudGPT”—malicious counterparts of ChatGPT—are spreading through the dark web, lowering barriers for cybercriminals to create malware, phishing sites, and exploits at scale.
The second category targets the integrity of financial AI systems. Prompt injection attacks, model inversion, data poisoning, and supply chain tampering can compromise LLMs. For instance, attackers can use hidden prompts to hijack systems or extract confidential client data. Research efforts like MITRE ATLAS are now cataloging such adversarial techniques and advocating protective measures like model signing, red teaming, and secure model provenance.
Financial institutions face a compounded risk when GenAI models are embedded in transactional workflows or decision pipelines. If compromised, these systems could misapprove loans, leak client data, or even cause systemic disruptions.
Ethically, the deployment of GenAI brings challenges tied to bias, opacity, privacy, and accountability. AI systems trained on historical data may perpetuate discriminatory lending or credit scoring patterns. Financial institutions are adopting fairness audits, adversarial debiasing, and explainability techniques such as SHAP and LIME to ensure that decisions are transparent and justifiable.
Privacy is another pressing concern. As GenAI tools process personal and financial data, institutions must comply with strict regulations like the EU’s GDPR or India’s DPDP Act. The study urges the use of federated learning, anonymization, and explicit consent to ensure compliance. Human-in-the-loop oversight, ethical use disclosures, and customer awareness initiatives are also being integrated to prevent misuse and build trust.
Are regulatory and governance frameworks keeping up?
The study maps a fragmented but converging global regulatory landscape. India’s RBI has formed the FREE-AI committee to draft ethical AI usage standards in finance. Singapore’s MAS is piloting governance models through Project MindForge. The European Union leads with the AI Act, categorizing high-risk systems, like credit scoring and robo-advisors, for rigorous audits and transparency requirements. DORA, the EU’s Digital Operational Resilience Act, complements this by tightening cybersecurity standards for AI systems deployed in financial contexts.
Meanwhile, U.S. agencies like the SEC and CFPB are pushing for disclosure of AI-induced conflicts of interest and establishing guardrails for robo-advisors. In the absence of unified global laws, multinational banks are aligning operations with the most stringent frameworks, typically the EU’s, to ensure compliance and reduce risk.
Regulatory sandboxes have emerged as a favored mechanism for supervised AI experimentation. These controlled environments allow financial institutions to innovate with GenAI under direct regulatory observation. SupTech, supervisory technologies used by regulators, is also on the rise, with AI being deployed to detect insider trading and monitor market anomalies.
The authors recommend a secure AI lifecycle approach to development and deployment, encompassing seven core phases: secure data sourcing, adversarial training, red teaming, access control, model versioning, incident response, and continuous governance. Institutions are encouraged to maintain AI Bills of Materials (BOMs), enforce model signing, and set up ethics committees to oversee AI adoption from pilot to production.
- FIRST PUBLISHED IN:
- Devdiscourse

