Predictions 2026: Surge in Agentic AI for Attacks and Defenses
Over the years, cybersecurity predictions have tended to sound the same: ransomware attacks will continue, supply-chain incidents will increase, and phishing will remain a problem.
However, the tail end of 2025 presented the cybersecurity industry with a new concern heading into 2026: Agentic AI.
Agentic AI's capabilities far exceed the basic AI concerns reported in the past. Its ability to make decisions and take actions on its own, without needing a human to approve every step, is a game-changer. When that autonomy is paired with the ability to pursue a complex, overarching objective (such as a cyberattack) rather than just completing a single, prompted task, it becomes a potentially huge headache for security teams.
“We are seeing the first big splash of autonomous AI agents (systems that can independently plan, execute, and adapt cyberattacks or defensive measures), and they will become more mainstream,” said Scott Swanson, Practice Leader, Security Advisory, for Stroz Friedberg, A LevelBlue Company. These agents, he said, will accelerate the speed and scale of potential threats.
In 2026, Ed Williams, Vice President, SpiderLabs, CPS, expects the march of AI to continue, embedding itself even deeper into every layer of the stack, from autonomous agents that manage cloud resources to real-time phishing engines crafting perfect emails.
“This new paradigm will turn AI into the next attack surface multiplier, where a single compromised model or poisoned dataset could trigger cascading breaches far beyond what traditional malware can achieve today,” he noted.
APTs and nation-state agencies will take note and use AI for the same purposes, and their arsenal of potential cyber-warfare tools and zero-days will grow, noted Ziv Mador, VP, Security Research at LevelBlue SpiderLabs.
However, just as threat actors can weaponize autonomous agentic AI, security teams can adapt the technology as a defensive measure, though this usage model comes with its own risks.
Bill Rucker, Vice President of LevelBlue Public Sector, sees a shift in managed detection and response (MDR) and security operations center-as-a-service (SOCaaS) toward Autonomous and Agentic AI.
Rucker expects that government agencies will increasingly adopt Agentic AI for threat detection and response, moving beyond traditional SIEM and SOAR platforms. This will present a challenge for MDR and SOCaaS security providers, he noted, as they must integrate AI-driven behavioral analytics, autonomous containment workflows, and real-time telemetry correlation to remain competitive.
To meet emerging compliance standards around this new technology, providers should position their SOCaaS offerings as AI-augmented, with transparent governance frameworks.
In addition, Rucker expects AI-augmented red teaming and penetration testing to evolve rapidly, since these services will need to simulate advanced, AI-enabled adversaries.
These new simulations will include sophisticated tactics like deepfake-based social engineering and aggressive "living off the land" techniques, he said. Consequently, traditional penetration tests will become insufficient, leading agencies to seek advanced, scenario-based red teaming that can accurately mimic nation-state-level threats. The market opportunity here is to develop specialized pen-testing services that incorporate AI adversarial simulations, thoroughly test cloud-native attack paths, and focus on threat modeling for critical infrastructure.
Grant Hutchons, LevelBlue’s APAC Director - Security Solution Engineering and Architecture, said that on the defensive side, the industry is already seeing agent-style co-workers inside security operations platforms that can assemble context, draft response actions, and even simulate likely attacker next moves. The fastest improvements will appear in extended detection and response suites, security operations automation, email and collaboration security, and in identity threat detection, where sequence analysis matters more than signature matching.
Swanson noted that while security ops centers can shift toward agentic models that can handle routine tasks, this must be done carefully. Using this technology can also introduce new vulnerabilities, such as rogue agents or insufficient guardrails, that can lead to more breaches.
CISOs also risk being blindsided by systemic failures in these AI workflows if they don’t implement formal frameworks for identity management, data provenance, and AI ecosystem mapping, he added.
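The guardrail concern Swanson raises can be made concrete with a small sketch. The following Python gate is purely illustrative (all action names and allowlists are invented, not taken from any real product): it shows one common pattern for keeping an agent's proposed actions inside an allowlist, routing high-risk actions to a human approver, and blocking anything unrecognized as a potential rogue action.

```python
# Illustrative guardrail layer for an agentic SOC workflow.
# Action names and lists below are hypothetical examples only.
from dataclasses import dataclass

# Low-risk, reversible actions the agent may take autonomously.
AUTONOMOUS = {"enrich_alert", "tag_incident"}
# High-risk, disruptive actions that require human sign-off.
NEEDS_APPROVAL = {"isolate_host", "disable_account"}

@dataclass
class ProposedAction:
    name: str    # what the agent wants to do
    target: str  # the asset it wants to act on

def gate(action: ProposedAction, human_approved: bool = False) -> str:
    """Decide whether a proposed agent action may execute."""
    if action.name in AUTONOMOUS:
        return "execute"
    if action.name in NEEDS_APPROVAL:
        return "execute" if human_approved else "queue_for_approval"
    # Anything outside both allowlists is treated as a potential
    # rogue action and blocked outright.
    return "block"

if __name__ == "__main__":
    print(gate(ProposedAction("enrich_alert", "alert-1042")))
    print(gate(ProposedAction("isolate_host", "ws-17")))
    print(gate(ProposedAction("isolate_host", "ws-17"), human_approved=True))
    print(gate(ProposedAction("exfiltrate_logs", "srv-3")))
```

The design choice here is deny-by-default: the agent's autonomy is bounded by an explicit allowlist rather than by trusting the model's own judgment, which is one way to address the "insufficient guardrails" failure mode.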
Hutchons added that threat actors are using models to generate convincing lures in any language, to mutate payloads for each target, and to mine stolen datasets at a scale that manual tradecraft could never match. The net result is that speed and context become the only sustainable advantages, and organizations that do not embed artificial intelligence into their security workflows will find themselves permanently a step behind.
In the end, from what LevelBlue’s experts can divine, Agentic AI is the defining 2026 security battleground. This autonomous technology amplifies both the speed and scale of cyberattacks, demanding immediate defense modernization and transparent governance to harness its power safely.
LevelBlue is a globally recognized cybersecurity leader that reduces cyber risk and fortifies organizations against disruptive and damaging cyber threats. Our comprehensive offensive and defensive cybersecurity portfolio detects what others cannot, responds with greater speed and effectiveness, optimizes client investment, and improves security resilience. Learn more about us.