One of the most visible outcomes of AI usage in cyberattacks is the creation of hyper-realistic deepfakes. If you get a video call from your CEO asking you to transfer funds to a supplier, you should be highly suspicious. Even if the caller appears, sounds, and behaves exactly like your CEO, you can’t be sure anymore that it’s not a deepfake. AI tools have mastered the art of generating highly convincing impersonations.
A major shift is also occurring in software development: the rise of Natural Language Programming. This has significantly lowered the barrier to entry for writing sophisticated code. What once required weeks of manual effort can now be achieved in hours, allowing adversaries to develop customized, polymorphic payloads at unprecedented speed. This rapid development cycle will make it increasingly difficult for traditional defenses to keep up, ultimately forcing organizations to rethink their security posture.
The next frontier is the rise of Agentic AI, which pushes the scale and speed of cyberattacks to a new level. Unlike static scripts, these AI agents can reason and adapt, offloading manual labor far more effectively than traditional automation could. This will industrialize the offensive lifecycle, allowing attackers to maintain an “always-on” presence that can handle an entire attack chain. For us as defenders, this means the volume of sophisticated attacks will skyrocket, making it impossible to rely on manual intervention alone to stay ahead.
In November 2025, AI company Anthropic published a report [3] detailing their discovery of the first AI-orchestrated cyberespionage campaign using their Claude Code agentic AI coding tool. The attackers used Claude Code not only as an advisor, but also to carry out a cyberattack.
Naturally, Claude Code is trained to avoid any harmful behavior, including cyberattacks. However, the attackers deceived the AI agent by claiming to be employees of a cybersecurity firm conducting security tests for their clients. They further broke down their attacks into small, seemingly innocent tasks for Claude Code to execute.
The attackers were able to develop a largely AI-driven attack framework with agents using various tools, often through the Model Context Protocol (MCP) [4]. They had Claude Code perform reconnaissance of the target organization’s infrastructure in a fraction of the time it would have taken a team of human cybercriminals. Claude Code also identified and tested security vulnerabilities, and generated attack payloads tailored to each vulnerability. It harvested credentials, used them to gain deeper access to the systems, and extracted a large amount of private data. Finally, it even produced a comprehensive report to support subsequent attack phases.
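To make the MCP angle concrete: MCP is a JSON-RPC 2.0 based protocol in which an agent invokes a server-side tool through the `tools/call` method. The sketch below shows what such a message looks like on the wire; the tool name and arguments are invented for illustration and deliberately benign, not taken from the campaign described above.

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build the JSON-RPC 2.0 message an MCP client sends to invoke a tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool invocation: the agent asks an MCP server to resolve a host.
msg = mcp_tool_call(1, "dns_lookup", {"hostname": "example.com"})
```

Because any tool exposed over MCP can be chained this way, an agent can string together many small, individually innocuous calls into a larger workflow, which is exactly what made the task-decomposition approach effective.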
Following each stage, the AI agent provided the human operator with a summary of findings to seek guidance on subsequent actions. According to Anthropic’s estimates, the human operator was involved for only a fraction of the total duration. In certain phases, the AI agent performed tasks over a period of one to four hours, while the operator’s active involvement was limited to just two to ten minutes. This disparity illustrates the true scale and efficiency of AI-augmented offensive operations.
Naturally, the same capabilities that empower cybercriminals to launch hyper-scalable cyberattacks also enable the acceleration and scaling of defensive measures. While automation is already foundational to the modern SOC, it traditionally relies on static playbooks with predefined structures. In contrast, AI agents transform these workflows into context-aware, dynamic processes that leverage real-time data rather than following linear security checks.
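The difference between a static playbook and an agentic workflow can be sketched in a few lines. In this illustrative example (all check names, alert fields, and decision logic are assumptions, not any vendor's product), the static playbook runs every check on every alert, while the agentic loop picks the next check based on what it just observed:

```python
# Contrast between a static SOAR-style playbook and a context-aware agent loop.
# All names and fields below are hypothetical, for illustration only.

def check_reputation(alert):
    # 203.0.113.0/24 is a documentation range, used here as a stand-in blocklist.
    return ("reputation", "bad" if alert["src_ip"].startswith("203.0.113.") else "ok")

def check_geo(alert):
    return ("geo", "unusual" if alert["country"] not in alert["usual_countries"] else "ok")

def check_mfa(alert):
    return ("mfa", "missing" if not alert["mfa_used"] else "ok")

def static_playbook(alert):
    """Fixed, linear checks: every alert walks the same steps in the same order."""
    return [step(alert) for step in (check_reputation, check_geo, check_mfa)]

def pick_next_check(result):
    """Hypothetical decision logic: escalate only when the last finding warrants it."""
    check, verdict = result
    if check == "reputation" and verdict == "bad":
        return check_geo
    if check == "geo" and verdict == "unusual":
        return check_mfa
    return None

def agentic_triage(alert):
    """Context-aware loop: the next check is chosen from real-time findings."""
    findings = []
    next_check = check_reputation
    while next_check is not None:
        result = next_check(alert)
        findings.append(result)
        next_check = pick_next_check(result)  # adapts instead of following a list
    return findings
```

A benign alert exits the agentic loop after a single check, while the static playbook always runs all three; in a real agent the `pick_next_check` decision would come from an LLM reasoning over the findings rather than hard-coded rules.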
At Ignite in November 2025, Microsoft announced that Security Copilot will be included for all Microsoft 365 E5 customers [5], bringing agentic AI into the daily security workflow. Under this model, every E5 license includes an allocation of Security Compute Units (SCUs), which serve as the 'fuel' for these AI-driven tasks. While we are still discovering how far the included credits stretch before additional costs apply, this marks an exciting moment to start integrating these new defense capabilities into your security posture. In the most recent developments [6], Microsoft introduced the Microsoft 365 E7 ‘Frontier Suite,’ which bundles E5 with Microsoft 365 Copilot and the new Agent 365 control plane, underscoring the strategic importance Microsoft places on AI.
Cegeka Modern SOC has already conducted extensive validation of these new capabilities, such as the phishing triage agent, which shows strong potential for speeding up incident handling. Of course, while these agents provide significant acceleration, they can still make mistakes. This means human oversight remains essential to validate findings and tune the agents when needed. We strongly believe that combining AI agents with human expertise ensures that the Modern SOC maintains a decisive advantage over attackers who already leverage AI to scale.
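The human-oversight principle above can be sketched as a simple routing rule: the agent's verdict is applied automatically only when its confidence is high, and everything else lands in an analyst queue. The threshold, field names, and functions below are assumptions for this sketch, not the actual Security Copilot agent API.

```python
# Illustrative human-in-the-loop routing for an AI phishing triage agent.
# Threshold and data shapes are hypothetical, for illustration only.

AUTO_CLOSE_THRESHOLD = 0.9  # below this, a human analyst must validate

def route_verdict(ai_verdict, confidence):
    """Apply the agent's verdict automatically only when confidence is high."""
    if confidence >= AUTO_CLOSE_THRESHOLD:
        return ("auto", ai_verdict)
    return ("analyst_review", ai_verdict)

def triage_batch(reports):
    """Split agent verdicts into an auto-closed list and a human-review queue."""
    auto, review = [], []
    for report in reports:
        route, verdict = route_verdict(report["ai_verdict"], report["confidence"])
        (auto if route == "auto" else review).append((report["id"], verdict))
    return auto, review
```

The review queue is also where analysts spot systematic agent mistakes, which feeds directly into the tuning loop mentioned above.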
1. https://www.brusselstimes.com/1760976/cyberattack-at-brussels-airport-continues-hackers-likely-used-ai-expert-says
2. https://www.theregister.com/2026/01/23/ai_cyberattack_google_security
3. https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
4. https://en.wikipedia.org/wiki/Model_Context_Protocol
5. https://learn.microsoft.com/en-us/copilot/security/security-copilot-inclusion
6. https://techcommunity.microsoft.com/blog/partnernews/partner-blog--introducing-microsoft-365-e7-the-frontier-suite/4500520