AI could make cyber warfare faster, riskier and harder to control
Artificial intelligence is rapidly becoming a transformative force in cybersecurity, not just in defending digital networks but in reshaping the entire character of cyber conflict. A new study from Georgetown University's Center for Security and Emerging Technology, titled “The Impact of AI on the Cyber Offense-Defense Balance and the Character of Cyber Conflict” and published on arXiv, offers the most comprehensive evaluation yet of how AI could redefine offensive and defensive dynamics across the digital battlefield.
Based on an extensive review of the literature, the study catalogues 44 ways in which AI is expected to influence both the strategic posture of cyber actors and the nature of cyber engagements themselves. It evaluates how AI may amplify or diminish the effectiveness of existing arguments for offensive or defensive advantage. The result is not a binary answer, but a nuanced exploration: AI strengthens both sides, but not evenly, not everywhere, and not all at once. This fragmented and conditional impact is likely to produce more unpredictable, faster, and more crowded cyber confrontations in the years ahead.
Will AI give hackers the edge, or will it reinforce defense?
One of the study’s key questions is whether AI will tilt the cyber offense-defense balance in favor of attackers or defenders. The answer is layered. Offensive actors, especially cybercriminals and state-sponsored groups, benefit significantly from AI's ability to automate attack chains, discover vulnerabilities, and operate at machine speed without human fatigue. AI agents can also disguise themselves, act autonomously, and deliver more precise attacks with less trial and error. This makes AI an ideal tool for opportunistic actors seeking a single breakthrough, since attackers often need just one success to breach a system.
Yet defense is not standing still. AI enables defenders to reconfigure digital terrain in real time, scale monitoring tools across massive networks, and simulate thousands of threat scenarios. For smaller organizations and critical infrastructure operators with limited security teams, AI acts as a force multiplier. By embedding AI into endpoint protection, intrusion detection, and software patching, defenders can build a resilient perimeter capable of adapting to evolving threats faster than before.
The report identifies specific factors that give defenders a unique edge with AI: they control the environment, can harden systems preemptively, and can pool intelligence across vast networks. AI can help defenders interpret logs, detect anomalies, and flag early signs of infiltration, tasks that have overwhelmed human analysts for years. Moreover, with AI-powered red teaming and vulnerability scanning, defenders can replicate the capabilities of elite attackers and shore up weaknesses in advance.
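To make the log-triage idea concrete, here is a minimal sketch of unsupervised anomaly flagging over session telemetry. It uses scikit-learn's IsolationForest as a stand-in detector; the feature set, synthetic data, and thresholds are illustrative assumptions, not details from the study.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features: [requests/min, failed logins, MB uploaded]
baseline = rng.normal(loc=[60.0, 1.0, 5.0], scale=[15.0, 1.0, 2.0], size=(1000, 3))

# Two suspect sessions: a brute-force burst, and a quiet but exfil-sized upload
suspect = np.array([[400.0, 25.0, 120.0],
                    [55.0, 0.0, 300.0]])

# Fit on normal traffic only; contamination sets the expected outlier rate
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Negative decision scores fall below the learned normality threshold
for row, score in zip(suspect, model.decision_function(suspect)):
    label = "flag for analyst" if score < 0 else "looks normal"
    print(f"session {row} -> score {score:+.3f} ({label})")
```

Real deployments would feed live telemetry and tune the contamination rate, but the pattern, learning a baseline and scoring deviations from it, is the core of the anomaly-detection role the report describes.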
How does AI change the character of cyber conflict?
The study doesn’t stop at the tactical level. It asks broader questions about how AI alters the fundamental nature or “character” of cyber conflict. Here, the researchers integrate 48 propositions drawn from leading cybersecurity scholars to evaluate changes in speed, attribution, scale, escalation risk, and system complexity.
AI accelerates conflict at the tactical level. It can reverse engineer malware, write exploits, and develop evasive attack techniques in hours instead of weeks. However, strategic and operational levels, where bureaucratic approvals, geopolitical concerns, and human judgment still reign, may not move as quickly. That mismatch between AI’s tactical speed and human-led strategic inertia could create pressure for nations to delegate more decisions to AI, raising risks of miscalculation and escalation.
The digital ecosystem is also becoming harder to map and understand. AI contributes to this complexity by generating code at scale, increasing software interdependencies, and embedding logic that may be opaque to human reviewers. Ironically, AI is both a tool for understanding cyberspace and a source of its growing complexity.
Notably, the study highlights that cyberspace is evolving from a scale-free network into a hub-and-spoke configuration, where major AI providers act as central nodes. This centralization could concentrate risk but also allow for more robust control, provided those hubs remain secure. Conversely, the proliferation of lightweight AI models may enable more distributed cyber activity, including by non-state actors and hobbyist hackers. This could widen the playing field and make attribution harder.
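The topological claim can be illustrated with a toy comparison. The sketch below uses networkx to contrast a scale-free graph with a pure hub-and-spoke one by measuring how much connectivity the best-connected nodes hold; both graphs and the metric are assumptions made for illustration, not data from the study.

```python
import networkx as nx

n = 1000
scale_free = nx.barabasi_albert_graph(n, m=2, seed=0)  # preferential attachment
hub_spoke = nx.star_graph(n - 1)                       # one central "provider" node

def top_share(graph, k=5):
    """Fraction of all edge endpoints attached to the k highest-degree nodes."""
    degrees = sorted((d for _, d in graph.degree()), reverse=True)
    return sum(degrees[:k]) / sum(degrees)

print(f"scale-free:    top-5 nodes hold {top_share(scale_free):.1%} of links")
print(f"hub-and-spoke: top-5 nodes hold {top_share(hub_spoke):.1%} of links")
```

The more connectivity concentrates in a handful of hubs, the more a single compromise matters, and the more leverage securing those hubs provides, which is precisely the trade-off the study flags.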
Can AI prevent escalation, or will it make cyber conflicts more volatile?
A pivotal concern raised in the study is whether AI will increase the chances of cyber conflict spiraling into broader confrontation. There’s reason to worry. Delegating decisions to autonomous agents introduces uncertainty: AI systems may misjudge adversary intent, fail to anticipate cultural responses, or act too aggressively based on narrow objectives. This is especially risky in environments where cyber operations are already difficult to attribute and where intent is often ambiguous.
At the same time, AI could serve as a stabilizing force. It can provide defenders with early warnings, simulate adversary behavior for better preparedness, and assess attack consequences before execution. Transparent, explainable AI might even clarify intent during conflict, for instance, by revealing whether a malware implant is designed for espionage or sabotage. However, that depends on adversaries allowing such transparency, which may not happen in competitive or clandestine scenarios.
The study points out that escalation risk is magnified when attacks are frequent, poorly understood, or poorly controlled. AI lowers the technical barrier for launching attacks, making it easier for terrorists, thrill-seekers, or rogue insiders to execute harmful campaigns. Yet it also raises defensive potential, particularly for actors who previously lacked access to expert-level security tools. The balance will depend on how widely advanced AI models are proliferated and how effectively defensive systems are scaled.
The authors argue that the offense-defense balance is not a single equation. It must be evaluated across multiple axes: scale, speed, reliability, interpretability, and accessibility. AI shifts each of these axes differently, sometimes in favor of attackers, other times in favor of defenders.
First published in: Devdiscourse

