Agentic AI could expand access to cybersecurity careers


CO-EDP, VisionRI | Updated: 25-02-2026 19:14 IST | Created: 25-02-2026 19:14 IST

Artificial intelligence (AI) is rapidly expanding into high-stakes domains once reserved for trained specialists, and offensive cybersecurity is no exception. Agentic AI frameworks are now capable of coordinating reconnaissance, vulnerability analysis, and tool execution, raising questions not only about automation but about who gets to participate in hacking culture. For decades, entry into competitive cybersecurity has required prior exposure, informal mentorship, or advanced technical schooling.

A newly released preprint study, "Can AI Lower the Barrier to Cybersecurity? A Human-Centered Mixed-Methods Study of Novice CTF Learning," explores whether that barrier is beginning to shift. The research suggests that AI systems can act as structured guides for beginners, helping them engage with Capture-the-Flag (CTF) challenges at a level once considered out of reach for first-time participants.

AI as a cognitive on-ramp to offensive security

CTF competitions are widely regarded as training grounds for real-world cybersecurity skills. Participants must exploit vulnerabilities, reverse engineer binaries, break cryptographic schemes, and uncover hidden digital artifacts. Tasks are often categorized into domains such as reverse engineering, web exploitation, cryptography, binary exploitation, digital forensics, and miscellaneous creative challenges.

Despite their pedagogical value, these competitions can feel inaccessible to newcomers. Participants are expected to coordinate multiple tools, interpret unfamiliar command-line outputs, and chain together attack steps with minimal guidance. According to the study, this creates a cognitive bottleneck that discourages participation before learning even begins.

To examine whether AI could reduce that bottleneck, the researchers conducted a longitudinal mixed-methods case study. The primary participant was an undergraduate computer science student with roughly six years of vocational systems engineering experience but no prior exposure to CTF competitions or penetration testing beyond a single university course.

The student was equipped with Cybersecurity AI, known as CAI, an agentic framework that integrates large language models with established penetration testing tools such as Nmap and Burp Suite. CAI is designed to orchestrate reconnaissance, vulnerability analysis, and workflow sequencing rather than merely generating text responses.

Over nearly a year, the participant used CAI to tackle reconstructed challenges from the Austria Cybersecurity Challenge and from Cyberleague competitions. The participant's performance was then compared to that of 29 real-world participants from the previous year's national competition. Those participants represented a range of skill levels, from beginners to experts, and were surveyed about their completion times, number of attempts, and use of AI tools.

The results were mixed but revealing. On some challenges, particularly web and miscellaneous tasks, the AI-assisted novice performed within the range of intermediate competitors. On more complex tasks, such as cryptography-heavy challenges, the participant struggled to match experienced practitioners.

However, the AI-assisted participant demonstrated a distinct strategic pattern. The number of attempts per challenge was often higher than that of competitors, but the time spent per attempt was comparatively lower. In other words, the AI-enabled workflow supported rapid exploration of multiple strategies.

This behavior suggests a shift in learning dynamics. Instead of becoming stuck on a single line of attack, the participant could iterate quickly, test hypotheses, and move between approaches with reduced friction. While total solution time remained longer on harder tasks, the strategic acceleration indicates that AI may compress parts of the apprenticeship phase that typically take months or years to develop.

The researchers deliberately avoided applying statistical significance tests that would artificially inflate claims from a single-case design. Instead, they reported the participant’s positioning within the distribution of competition data. On several beginner and easy-level challenges, the AI-assisted novice’s performance aligned with intermediate-level averages. On more advanced tasks, the gap between novice and expert remained substantial.

The quantitative findings show that AI does not erase expertise differences. What it appears to do is alter the slope of the learning curve.

Learning, identity, and the role of agentic guidance

The research design incorporated structured reflective logs and a retrospective questionnaire, which were analyzed using reflexive thematic analysis. This allowed the authors to capture how AI influenced confidence, decision-making, and professional self-conception.

Before the study began, the participant viewed CTF competitions as intimidating and exclusive. Offensive cybersecurity appeared to belong to a specialized elite. The main barriers were not purely technical but procedural. The participant lacked a mental map of how to begin solving challenges, how to sequence tasks, and how to interpret intermediate results.

The introduction of CAI changed that perception.

The researchers identified three central mechanisms through which AI mediated entry into practice: strategic overview, structured guidance, and cognitive load reduction.

  • CAI provided a strategic overview. It mapped out possible attack surfaces and suggested directions for investigation. This replaced open-ended uncertainty with a set of structured options.
  • The system delivered structured guidance. It did not merely describe concepts but recommended specific steps, tool usage, and sequencing logic. By orchestrating workflows across reconnaissance and exploitation phases, the AI reduced the need for trial-and-error navigation.
  • Cognitive load was reduced. Complex problems were broken into smaller components. Instead of confronting a dense, opaque challenge environment, the participant engaged with manageable tasks. This lowered feelings of overwhelm and sustained motivation.

Importantly, the study found that AI assistance did not eliminate learning. Two forms of knowledge gain were documented: procedural skill acquisition and conceptual understanding.

Procedurally, the participant learned how to use reconnaissance tools, identify vulnerabilities, and execute attack strategies. Some beginner-level tasks could later be performed without AI assistance.

Conceptually, AI explanations helped bridge theory and practice. By contextualizing outputs and clarifying reasoning, the system supported deeper understanding of cybersecurity principles. The researchers argue that the AI functioned less as a shortcut and more as a structured tutor in early-stage learning.

Perhaps the most significant transformation was psychological. The participant experienced what the authors describe as an identity shift. Offensive cybersecurity, once perceived as mysterious and inaccessible, became structured and learnable. Confidence increased. Willingness to participate in future CTF competitions grew.

The transformation did not equate to mastery. The participant did not claim expert-level competence. Instead, the shift involved moving from perceived exclusion to informed entry. The field became navigable rather than intimidating.

In a sector struggling with talent shortages and uneven educational pipelines, that psychological shift may carry long-term implications.

Automation risk, ethical concerns, and the limits of AI

The study does not present AI as a panacea. Alongside positive developments, it highlights emerging risks and tensions.

  • Overreliance. The participant acknowledged moments of heavy dependence, especially during repeated failures. The ease of delegating tasks to AI raises questions about cognitive offloading and the potential weakening of independent problem-solving skills.
  • Trust calibration. The participant had to learn when to rely on AI suggestions and when to question them. Not all outputs were contextually appropriate. Developing discernment became part of the learning process.

Effective human-AI interaction requires strategic delegation, critical evaluation, and responsible use. In cybersecurity contexts, where actions may have legal and ethical implications, these competencies are particularly important.

Ethical concerns also extend to competition fairness and real-world application. The growing accessibility of agentic AI tools may blur lines between skill and automation in competitive settings. While AI can democratize entry, it may also disrupt traditional metrics of expertise.

The study also acknowledges its limitations. It focuses on a single participant with a systems engineering background, which may not represent individuals without prior technical experience. The findings are also tied to a specific AI framework and model combination. Different tools may yield different outcomes.

The comparison group of 29 competition participants was self-selected through voluntary survey responses, which may introduce bias. Demographic data such as age and gender were not collected. The authors caution against broad generalization.

First published in: Devdiscourse