There’s no question that AI is already impacting the SOC – augmenting, assisting, and filling the gaps left by staff and skills shortages. We surveyed over 1,500 cybersecurity professionals from around the world to uncover their attitudes toward AI cybersecurity in 2025. Our findings revealed striking trends in how AI is changing the way security leaders think about hiring and SOC transformation. Download the full report, available now, for the big picture.
The AI-human conundrum
Let’s start with some context. As the cybersecurity sector has rapidly evolved to integrate AI into all elements of cyber defense, the pace of technological advancement is outstripping the development of the necessary skills. Given the persistent problems in security operations, such as employee burnout, high turnover rates, and talent shortages, recruiting enough personnel to bridge these skills gaps remains an immense challenge.
But here, our main findings on this topic seem to contradict each other.
There’s no question about the impact of AI-powered threats – nearly three-quarters of respondents (74%) agree that these threats now pose a significant challenge for their organization.
When we look at how security leaders are defending against AI-powered threats, more than three in five (62%) see insufficient personnel to manage tools and alerts as the biggest barrier.
Yet at the same time, increasing cybersecurity staff is at the bottom of the priority list for survey participants, with only 11% planning to increase cybersecurity staff in 2025 – down from 2024. What 64% of stakeholders are committed to, however, is adding new AI-powered tools to their existing security stacks.

With burnout pervasive, the talent deficit reaching a new peak, and growing numbers of companies unable to fill cybersecurity positions, it may be that stakeholders realize they simply cannot hire enough personnel to solve this problem, no matter how much they may want to. As a result, leaders are looking for methods beyond increasing staff to overcome security obstacles.
Meanwhile, the results show that defensive AI is becoming integral to the SOC as a means of augmenting understaffed teams.
How is AI plugging skills shortages in the SOC?
As explored in our recent white paper, the CISO’s Guide to Navigating the Cybersecurity Skills Shortage, 71% of organizations report unfilled cybersecurity positions, with the result that fewer than 10% of alerts are estimated to be thoroughly vetted. In this environment, AI has become an essential force multiplier, relieving the burden on security teams.
95% of respondents agree that AI-powered solutions can significantly improve the speed and efficiency of their defenses. But how?

The area where security leaders expect defensive AI to have the biggest impact is improving threat detection, followed by autonomous response to threats and identifying exploitable vulnerabilities.
Interestingly, the areas participants ranked less highly (reducing alert fatigue and running phishing simulations) are tasks that AI already does well and can therefore already be used to relieve the SOC of manual, repetitive work.
Different perspectives from different sides of the SOC
CISOs and SecOps teams aren’t necessarily aligned on the AI defense question – while CISOs tend to see it as a strategic game-changer, SecOps teams on the front lines may be more skeptical, wary of its real-world reliability and integration into workflows.
From the data, we see that while less than a quarter of executives doubt that AI-powered solutions will block and automatically respond to AI threats, about half of SecOps practitioners aren’t convinced. And while only 17% of CISOs lack confidence in their teams’ ability to implement and use AI-powered solutions, over 40% of those on the teams doubt their own ability to do so.
This gap feeds into executives’ enthusiasm for adding AI-driven tools to the stack, while day-to-day users of those tools are more interested in improving security awareness training and cybersecurity tool integration.
Levels of AI understanding in the SOC
AI is only as powerful as the people who use it, and levels of AI expertise in the SOC can make or break its real-world impact. If security leaders want to unlock AI’s full potential, they must bridge the knowledge gap, ensuring teams understand not just the different types of AI, but where it can be applied for maximum value.
Only 42% of security professionals are confident that they fully understand all the types of AI in their organization’s security stack.
This varies by job role – executives report higher levels of understanding (60% say they know exactly which types of AI are being used) than participants in other roles. Despite working with the tools day-to-day, SecOps practitioners were more likely to report only a “reasonable understanding” of the types of AI in use in their organization (42%).
Whether this reflects general executive confidence rather than technical proficiency is hard to say, but it speaks to the importance of AI-human collaboration – introducing AI tools for cybersecurity to plug the gaps in human teams will only be effective if security professionals are supported with the right education and training.
Download the full report to explore these findings in depth
Darktrace’s full State of AI Cybersecurity report is out now. Download the paper to dig deeper into these trends and see how results differ by industry, region, organization size, and job title.
