AI-powered red teaming raises global cybersecurity threat level

CO-EDP, VisionRI | Updated: 27-03-2025 18:23 IST | Created: 27-03-2025 18:23 IST
Representative Image. Credit: ChatGPT

Artificial intelligence is transforming the cybersecurity threat landscape, accelerating the automation and sophistication of cyberattacks, according to a new review by researchers at Jamk University of Applied Sciences. The study, based on a systematic analysis of academic sources from 2015 to 2023, warns that AI-powered red teaming has become a double-edged sword, offering cybercriminals scalable capabilities to breach sensitive systems and data.

Red teaming, originally developed in the military as a method to simulate adversarial behavior, has long been a standard in cybersecurity exercises. But the integration of AI has reshaped its role, allowing attackers to generate intelligent, adaptive, and large-scale cyber offensives.

The report titled "Red Teaming with Artificial Intelligence-Driven Cyberattacks: A Scoping Review" identifies AI methods such as generative adversarial networks (GANs), long short-term memory (LSTM) models, and support vector machines (SVMs) as core components in contemporary cyberattack toolkits. These models are used to automate phishing, crack passwords, evade detection, and mimic human behavior, posing a growing threat to individuals, corporations, and governments.

The scoping review, led by Tuomo Sipola and colleagues Mays Al-Azzawi, Dung Doan, Jari Hautamäki, and Tero Kokkonen, screened 471 publications and selected 11 peer-reviewed studies and books that directly addressed AI methodologies and their implementation in cyberattacks. The findings show that attackers are no longer limited to traditional methods. Instead, they increasingly rely on AI tools capable of identifying system vulnerabilities, generating malicious content, and executing precision-targeted social engineering campaigns.

Among the most commonly used attack techniques are classification algorithms, particularly LSTM, which appeared in five of the eleven reviewed studies; GANs and SVMs each appeared in four. These tools, originally developed for benign applications such as natural language processing and pattern recognition, are now used to fabricate phishing emails, spoof URLs, and mimic login interfaces. Deep learning methods, such as convolutional neural networks (CNNs) and deep neural networks (DNNs), are used to extract sensitive data and deceive user authentication systems.
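
To ground the terminology, the following is a minimal sketch, not code from the review, of the kind of LSTM text classifier the studies refer to, shown here in its defensive role of flagging a suspicious message. The sample texts, labels, and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only, not code from the review: the same LSTM architecture
# the studies cite can classify short texts, shown here in a defensive role
# (flagging a message as phishing-like). Texts, labels, and hyperparameters
# are assumptions made up for this example.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

texts = np.array([
    "Your account is locked, confirm your password here",  # phishing-style lure
    "Meeting moved to 3 pm, agenda attached",               # benign message
])
labels = np.array([1, 0])  # 1 = suspicious, 0 = legitimate

# Turn raw strings into fixed-length integer sequences.
vectorizer = layers.TextVectorization(max_tokens=10000, output_sequence_length=50)
vectorizer.adapt(texts)
x = vectorizer(texts)

model = tf.keras.Sequential([
    layers.Embedding(input_dim=10000, output_dim=64),
    layers.LSTM(32),                        # sequence model highlighted in the review
    layers.Dense(1, activation="sigmoid"),  # probability that a text is malicious
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=3, verbose=0)
print(model.predict(x, verbose=0))
```

The dual-use point is visible in the sketch itself: nothing in the architecture is attack-specific, and the same model family can label traffic for a defender or profile victims for an attacker.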

The report identifies five primary targets of AI-driven cyberattacks: personal and sensitive data, URLs, social media profiles, passwords, and system configurations. Four of the reviewed studies reported attacks on general data, including health, financial, and government records. In three cases, malicious AI was used to manipulate URLs, either to redirect users to fake websites or to distribute malware. Two studies reported attacks on social media profiles, and another two described password cracking using brute-force methods powered by neural networks. System-level attacks, including intrusion into network infrastructure and manipulation of configuration data, were also documented.

The authors point to recent cases where machine learning has been used to bypass CAPTCHA systems, generate persuasive phishing messages in multiple languages, and execute zero-day attacks. “The mass customization of phishing attacks and the ability to dynamically adapt messaging based on user profiles raise significant concerns,” the study states.

In one cited study, AI-driven attacks were most frequently executed during the access and penetration phases of cyber intrusions, accounting for 56% of recorded incidents. Techniques such as PassGAN and DeepPhish were used to guess passwords and simulate realistic attack vectors. In the exploitation phase, attackers used AI to escalate privileges, exfiltrate data, and manipulate trust signals.

Defensive use of AI, while growing, is not keeping pace with the offensive capabilities. The report stresses that while anomaly detection systems and predictive models can identify some AI-based threats, the arms race between attackers and defenders is intensifying. "AI-based defenses must be equally adaptive, transparent, and capable of real-time response," the researchers note.
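
As a concrete illustration of what such an anomaly detection system can look like in practice, the sketch below, which is not drawn from the review, trains an unsupervised detector on synthetic network-flow features and flags an unusual burst of login attempts. The feature choices and values are assumptions made for this example.

```python
# Illustrative sketch only, not the systems evaluated in the review: a simple
# unsupervised anomaly detector over synthetic "network flow" features, the
# kind of anomaly-detection approach the researchers contrast with offensive AI.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: bytes sent, bytes received, login attempts per hour (synthetic data).
normal_traffic = rng.normal(loc=[500.0, 800.0, 2.0],
                            scale=[50.0, 80.0, 1.0],
                            size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A burst of login attempts with almost no payload should look anomalous.
suspicious = np.array([[20.0, 10.0, 120.0]])
print(detector.predict(suspicious))  # -1 means flagged as an anomaly, 1 means normal
```

The limitation the researchers point to is also visible here: a static detector trained once on "normal" traffic cannot, by itself, keep up with attacks that adapt their behavior in real time.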

The review also highlights regulatory and ethical concerns, particularly around the use of generative models such as large language models. The lack of transparency in their decision-making processes and potential bias in training data could result in unintended vulnerabilities within AI security systems. Techniques such as explainable AI (XAI) are recommended to address these challenges by providing interpretability and accountability.
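
One simple way to see what interpretability buys a defender, offered here as an illustration rather than a technique from the review, is permutation importance: it measures how much a detector's accuracy drops when each input feature is shuffled, revealing which signals actually drive its decisions. The data and feature names below are synthetic assumptions.

```python
# A minimal illustration of the interpretability idea behind XAI, not a method
# from the review: permutation importance reveals which input features drive a
# detector's decisions, making the model easier to audit.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))   # pretend features: URL length, dot count, digit count
y = (X[:, 0] > 0).astype(int)   # the label depends almost entirely on the first feature

clf = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

for name, score in zip(["url_length", "dot_count", "digit_count"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # the first feature should dominate, explaining the verdict
```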

While the academic community has begun to explore AI’s dual-use potential in cybersecurity, the study warns that more rigorous frameworks are needed to track the proliferation of AI attack methods. In particular, the authors advocate for the development of taxonomies to classify AI-based threats, and for expanded collaboration between governments, researchers, and industry actors to share threat intelligence and mitigation strategies.
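
To make the taxonomy suggestion concrete, the sketch below shows one possible shape for such a catalog. It is an illustration built from the techniques, phases, and targets named in this article, not a scheme proposed by the authors.

```python
# A toy sketch of what such a taxonomy could look like, our illustration rather
# than a scheme from the review: each entry ties an AI technique to an attack
# phase and the target categories the studies report.
from dataclasses import dataclass, field

@dataclass
class AIThreatEntry:
    technique: str              # e.g. "GAN", "LSTM", "SVM"
    attack_phase: str           # e.g. "access", "penetration", "exploitation"
    targets: list[str] = field(default_factory=list)

catalog = [
    AIThreatEntry("GAN", "access", ["passwords"]),
    AIThreatEntry("LSTM", "penetration", ["phishing emails", "URLs"]),
    AIThreatEntry("CNN", "exploitation", ["sensitive data"]),
]

for entry in catalog:
    print(f"{entry.technique}: phase={entry.attack_phase}, targets={entry.targets}")
```

A shared structure of this kind would give governments, researchers, and industry a common vocabulary for exchanging threat intelligence about AI-driven attacks.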

First published in: Devdiscourse