AI reshapes India’s cybercrime landscape, raising urgent legal, ethical, and forensic challenges

CO-EDP, VisionRI | Updated: 23-12-2025 16:10 IST | Created: 23-12-2025 16:10 IST

India is accelerating its digital transformation across governance, commerce, and public services. Amidst this shift, AI-driven technologies are increasingly shaping how cybercrimes are committed, detected, investigated, and prosecuted. While these systems promise faster threat detection and improved forensic efficiency, they also introduce complex legal, ethical, and institutional challenges that existing frameworks are struggling to address.

A research paper, “Cybercrime and Computer Forensics in the Epoch of Artificial Intelligence in India,” published as a legal research study on arXiv, presents a comprehensive analysis of how AI is simultaneously strengthening cybersecurity and empowering cybercriminals. It argues that India’s legal and forensic systems must urgently adapt to avoid falling behind the evolving threat landscape.

How AI is expanding the scale and sophistication of cybercrime

The study finds that artificial intelligence has fundamentally altered the nature of cybercrime by increasing its scale, speed, and adaptability. Traditional cyber offenses such as phishing, identity theft, ransomware, and malware distribution have been amplified through AI-driven automation. Cybercriminals now deploy machine learning algorithms to generate convincing phishing messages, customize social engineering attacks, and bypass detection systems with greater precision than ever before.
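
To make the defensive side of this arms race concrete, the sketch below shows the kind of text classifier that the detection systems mentioned above are built on: a toy Naive Bayes phishing filter. The training messages and labels are invented for illustration; production filters rely on far larger corpora and richer features.

```python
# Hedged sketch of a phishing-text classifier, the detection side that
# AI-generated phishing tries to evade. Training data is toy and invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked, verify your password at this link immediately",
    "Urgent: confirm your bank details to avoid suspension",
    "Lunch at noon tomorrow?",
    "Here are the minutes from today's project meeting",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(messages, labels)
test = "Please verify your password to unlock your account"
print("phishing" if clf.predict([test])[0] == 1 else "legitimate")
```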

Deepfakes, synthetic media, and automated impersonation tools are increasingly used to manipulate individuals, organizations, and even state institutions. These techniques blur the line between authentic and fabricated digital evidence, complicating both prevention and prosecution. The study emphasizes that AI allows cybercrime to move beyond financial fraud into areas such as political manipulation, cyber espionage, and psychological harm.

AI has also lowered the barrier to entry for cybercriminal activity. Sophisticated attack tools that once required advanced technical expertise can now be deployed through automated systems, enabling a wider range of actors to engage in cybercrime. This democratization of cyber offense capabilities, the study warns, increases the frequency and unpredictability of attacks while overwhelming traditional security defenses.

At the same time, the research highlights that AI has become a target as well as a tool of cybercrime. Attacks aimed at corrupting training data, manipulating algorithms, or exploiting model vulnerabilities introduce new categories of digital offense. These threats challenge conventional legal definitions of cybercrime and raise questions about liability, attribution, and intent when harm is mediated through autonomous or semi-autonomous systems.
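
As a rough illustration of one such attack on AI itself, the following sketch (not drawn from the paper) flips a fraction of training labels in a synthetic dataset and measures how a simple classifier's accuracy degrades. This is the basic mechanism behind data-poisoning offenses.

```python
# Illustrative sketch: label-flipping data poisoning on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels, then measure clean test accuracy."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # adversarial label flips
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} labels flipped -> test accuracy {accuracy_after_poisoning(frac):.3f}")
```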

The study argues that these developments require lawmakers and enforcement agencies to rethink how cybercrime is classified and addressed. Existing statutes, many of which were drafted before the rise of advanced AI, struggle to capture the complexity of algorithm-driven offenses and their cascading social impacts.

AI’s growing role in cybersecurity and computer forensics

While AI has empowered cybercriminals, the study stresses that it has also become indispensable in defending digital infrastructure and advancing computer forensics. Machine learning systems are now widely used to detect anomalies in network traffic, identify emerging malware strains, and automate threat response. These capabilities allow organizations to respond to cyberattacks with greater speed and accuracy than manual systems permit.
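
A minimal sketch of this kind of anomaly detection appears below: an isolation forest trained on synthetic network-flow features flags exfiltration-like outliers. The feature set and values are assumptions made for demonstration, not a real deployment.

```python
# Minimal sketch of AI-assisted anomaly detection on network-flow features.
# Features [bytes_sent, packets, duration_s] are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic "normal" flows
normal = rng.normal(loc=[5_000, 40, 2.0], scale=[1_000, 8, 0.5], size=(500, 3))
# Exfiltration-like outliers: huge transfers over long sessions
suspicious = np.array([[900_000, 1200, 60.0], [750_000, 900, 45.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flows = np.vstack([normal[:3], suspicious])
for flow, label in zip(flows, model.predict(flows)):  # -1 = anomaly
    print(flow, "ANOMALY" if label == -1 else "normal")
```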

In the field of computer forensics, AI has transformed how digital evidence is collected, processed, and analyzed. Techniques such as natural language processing, image recognition, and pattern analysis enable forensic experts to sift through vast volumes of data, extract relevant signals, and reconstruct events with improved efficiency. AI-driven tools can correlate disparate data sources, identify suspicious behavior, and support investigative decision-making.
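
The hypothetical sketch below illustrates one such triage technique: TF-IDF similarity scoring that ranks seized documents against an investigator's case description, so the most relevant files are reviewed first. The query and documents are invented.

```python
# Hedged sketch: TF-IDF similarity to triage seized documents against a
# case description, so examiners review the most relevant files first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

case_query = "wire transfer to offshore account, invoice fraud"
documents = [
    "Meeting notes about the quarterly marketing budget.",
    "Please route the wire transfer through the offshore account tonight.",
    "Invoice #4411 looks fabricated; amounts do not match the ledger.",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([case_query] + documents)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Rank documents by relevance to the investigator's query
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```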

The study highlights that these advancements are particularly important in an era of big data, where the volume and diversity of digital evidence exceed human analytical capacity. AI allows forensic teams to manage this complexity while maintaining investigative rigor. However, the paper cautions that reliance on AI also introduces new vulnerabilities.

One major concern is explainability. Complex AI models often operate as opaque systems, making it difficult for investigators, lawyers, and judges to understand how conclusions are reached. This lack of transparency raises serious challenges for evidentiary standards and due process. If forensic conclusions cannot be clearly explained or independently verified, their admissibility and credibility may be undermined.
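
One practical safeguard, sketched below on synthetic data, is permutation importance: by measuring how much accuracy drops when each input feature is shuffled, examiners get an auditable account of what a model's conclusions actually depend on. This is an illustrative technique choice, not one prescribed by the paper.

```python
# Sketch of one transparency safeguard: permutation importance reports which
# input features a classifier's decisions actually rely on. Synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

model = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=1)

for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature_{i}: accuracy drops by {mean_drop:.3f} when shuffled")
```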

The research also identifies risks related to bias, error propagation, and adversarial manipulation. AI systems trained on biased or incomplete data can reinforce systemic inequalities, while adversarial attacks can distort outputs without obvious signs of tampering. These risks are particularly acute in criminal justice contexts, where errors can have severe consequences for individuals’ rights and freedoms.
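
A simple check of the kind such contexts demand is sketched below: comparing false-positive rates across two hypothetical demographic groups before trusting a classifier's output. The data, groups, and deliberately biased predictions are all synthetic.

```python
# Illustrative fairness check: compare false-positive rates across groups
# before deploying a model in a justice context. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(7)
group = rng.choice(["A", "B"], size=1000)  # hypothetical demographic split
y_true = rng.integers(0, 2, size=1000)     # ground truth
# A deliberately biased model: extra false alarms for group B
y_pred = y_true.copy()
flip = (group == "B") & (y_true == 0) & (rng.random(1000) < 0.20)
y_pred[flip] = 1

for g in ("A", "B"):
    mask = (group == g) & (y_true == 0)
    fpr = y_pred[mask].mean()              # fraction of true negatives flagged
    print(f"group {g}: false-positive rate = {fpr:.2%}")
```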

The authors argue that AI should be treated as an assistive rather than determinative tool in forensic practice. Human oversight, professional judgment, and procedural safeguards remain essential to ensure accuracy, fairness, and accountability. Without these checks, the efficiency gains offered by AI may come at the expense of justice.

Privacy, ethics, and the need for stronger governance frameworks

The study also sheds light on the tension between AI-driven security measures and the protection of privacy and human rights. AI systems often rely on large-scale data collection and analysis, raising concerns about surveillance, profiling, and unauthorized inference. The paper situates these concerns within India’s evolving legal landscape, particularly following the enactment of the Digital Personal Data Protection Act, 2023.

The study provides a detailed examination of how this legislation reshapes data governance in India. It outlines the roles and responsibilities of data fiduciaries, data processors, and data principals, emphasizing consent, purpose limitation, and accountability. While the law represents a significant step toward regulating digital personal data, the authors argue that effective enforcement and institutional capacity will determine its real-world impact.
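
As an illustration of how consent and purpose limitation can be enforced in code, the sketch below models a consent record that a data fiduciary checks before processing. The ConsentRecord type, roles, and purposes are hypothetical simplifications, not a statement of what the Act requires.

```python
# Minimal sketch of purpose limitation, loosely inspired by the DPDP Act's
# consent model. ConsentRecord and the purpose names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    principal_id: str                      # the data principal who consented
    purposes: set[str] = field(default_factory=set)
    withdrawn: bool = False

def may_process(consent: ConsentRecord, purpose: str) -> bool:
    """A fiduciary may process data only for consented, non-withdrawn purposes."""
    return not consent.withdrawn and purpose in consent.purposes

consent = ConsentRecord("user-42", purposes={"fraud_detection"})
print(may_process(consent, "fraud_detection"))  # True
print(may_process(consent, "ad_targeting"))     # False: outside stated purpose
consent.withdrawn = True
print(may_process(consent, "fraud_detection"))  # False: consent withdrawn
```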

From an ethical perspective, the paper stresses that privacy is not merely a regulatory requirement but a foundational value linked to dignity, autonomy, and democratic participation. AI-driven surveillance and data analytics risk normalizing intrusive practices if not carefully constrained. The study warns that security-driven justifications can erode privacy protections unless balanced by transparency and oversight.

The authors also call for collaboration in addressing AI-enabled cybercrime. Governments, private sector organizations, academia, civil society, and technical experts must work together to share knowledge, develop standards, and respond to emerging threats. Cybercrime is inherently transnational, and fragmented approaches are unlikely to succeed against AI-powered adversaries.

Education and capacity building emerge as critical priorities. The study calls for specialized training for law enforcement, legal professionals, forensic experts, and judges to equip them with the skills needed to handle AI-related cases. Without such investment, the gap between technological capability and institutional understanding will continue to widen.

First published in: Devdiscourse