September 23, 2024

How AI can help CISOs navigate the global cyber talent shortage

The global cybersecurity skills gap is widening, leaving many organizations vulnerable to increasing cyber threats. This blog explores how CISOs can implement AI strategies to make the most of their existing workforce through automation, consolidation and education.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
The Darktrace Community

The global picture

4 million cybersecurity professionals are needed worldwide to protect and defend the digital world – twice the number currently in the workforce [1].

Innovative technologies are transforming business operations, enabling access to new markets, personalized customer experiences, and increased efficiency. However, this digital transformation also challenges Security Operations Centers (SOCs) with managing and protecting a complex digital environment without additional resources or advanced skills.

At the same time, the cybersecurity industry is suffering a severe global skills shortage, leaving many SOCs understaffed and under-skilled. With a 72% increase in data breaches from 2021 to 2023 [2], SOCs are dealing with overwhelming alert volumes from diverse security tools. Nearly 60% of cybersecurity professionals report burnout [3], leading to high turnover rates. Consequently, only a fraction of alerts are thoroughly investigated, increasing the risk of undetected breaches. More than half of organizations that experienced breaches in 2024 admitted to having short-staffed SOCs [4].

How AI can help organizations do more with less

Cyber defense needs to evolve at the same pace as cyber-attacks, but the global skills shortage is making that difficult. As threat actors increasingly abuse AI for malicious purposes, using defensive AI to enable innovation and optimization at scale is reshaping how organizations approach cybersecurity.

The value of AI isn’t in replacing humans, but in augmenting their efforts and enabling them to scale their defense capabilities and their value to the organization. With AI, cybersecurity professionals can operate at digital speed, analyzing vast data sets, identifying more vulnerabilities with higher accuracy, responding and triaging faster, reducing risks, and implementing proactive measures—all without additional staff.

Research indicates that organizations leveraging AI and automation extensively in security functions—such as prevention, detection, investigation, or response—reduced their average mean time to identify (MTTI) and mean time to contain (MTTC) data breaches by 33% and 43%, respectively. These organizations also managed to contain breaches nearly 100 days faster on average compared to those not using AI and automation [5].

First, you've got to apply the right AI to the right security challenge. We dig into how different AI technologies can bridge specific skills gaps in the CISO’s Guide to Navigating the Cybersecurity Skills Shortage.

Cases in point: AI as a human force multiplier

Let’s take a look at just some of the cybersecurity challenges to which AI can be applied to scale defense efforts and relieve the burden on the SOC. We go further into real-life examples in our white paper.

Automated threat detection and response

AI enables 24/7 autonomous response, eliminating the need for after-hours SOC shifts and providing security leaders with peace of mind. AI can scale response efforts by analyzing vast amounts of data in real time, identifying anomalies, and initiating precise autonomous actions to contain incidents, which buys teams time for investigation and remediation.  
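
To make that idea concrete, here is a minimal sketch (not Darktrace's implementation) of how a per-device behavioral baseline could drive an autonomous containment action. The device names, thresholds, and the `contain_device` hook are all hypothetical.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical sketch only: learn a per-device baseline of outbound
# connections per minute and trigger an illustrative containment hook when a
# count deviates sharply from that baseline.

baselines = defaultdict(list)  # device -> recent per-minute connection counts


def update_baseline(device: str, count: int, window: int = 60) -> None:
    """Record the latest per-minute connection count for a device."""
    history = baselines[device]
    history.append(count)
    if len(history) > window:
        history.pop(0)


def is_anomalous(device: str, count: int, z_threshold: float = 4.0) -> bool:
    """Flag counts that sit far outside the device's own learned behavior."""
    history = baselines[device]
    if len(history) < 10:  # not enough history to judge yet
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return count > 3 * mu
    return (count - mu) / sigma > z_threshold


def contain_device(device: str) -> None:
    """Placeholder for an autonomous response, e.g. enforcing normal behavior."""
    print(f"[response] containing {device} while analysts investigate")


# Example: a device that normally makes ~20 connections/minute suddenly makes 400.
for minute_count in [18, 22, 19, 21, 20, 23, 17, 22, 19, 21, 20, 400]:
    if is_anomalous("workstation-42", minute_count):
        contain_device("workstation-42")
    update_baseline("workstation-42", minute_count)
```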

Triage and investigation

AI enhances the triage process by automatically categorizing and prioritizing security alerts, allowing cybersecurity professionals to focus on the most critical threats. It creates a comprehensive picture of an attack, helps identify its root cause, and generates detailed reports with key findings and recommended actions.  

Automation also significantly reduces overwhelming alert volumes and high false positive rates, enabling analysts to concentrate on high-priority threats and engage in more proactive and strategic initiatives.
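
As an illustration of alert triage by scoring, the short sketch below ranks alerts by a weighted blend of severity, asset criticality, and detection confidence. The fields, weights, and sample alerts are assumptions for demonstration, not any product's actual scoring model.

```python
from dataclasses import dataclass

# Illustrative triage sketch: score and rank alerts so analysts see the
# highest-risk items first. Fields, weights, and sample alerts are invented.


@dataclass
class Alert:
    name: str
    severity: float           # 0.0-1.0, from the detection model
    asset_criticality: float  # 0.0-1.0, e.g. domain controllers score high
    confidence: float         # 0.0-1.0, likelihood of a true positive


def triage_score(alert: Alert) -> float:
    """Weighted blend of severity, asset value, and detection confidence."""
    return 0.5 * alert.severity + 0.3 * alert.asset_criticality + 0.2 * alert.confidence


alerts = [
    Alert("Unusual outbound data volume", severity=0.7, asset_criticality=0.9, confidence=0.8),
    Alert("New admin credential in use", severity=0.9, asset_criticality=0.95, confidence=0.6),
    Alert("Single failed login", severity=0.2, asset_criticality=0.3, confidence=0.4),
]

# Highest-scoring alerts are investigated first; the rest can wait or be suppressed.
for alert in sorted(alerts, key=triage_score, reverse=True):
    print(f"{triage_score(alert):.2f}  {alert.name}")
```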

Eliminating silos and improving visibility across the enterprise

Security and IT teams are overwhelmed by the technological complexity of operating multiple tools, resulting in manual work and excessive alerts. AI can correlate threats across the entire organization, enhancing visibility and eliminating silos, thereby saving resources and reducing complexity.
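
A rough sketch of the correlation idea: group alerts from different tools by the entity they reference, so related signals surface as one incident rather than separate queue items in separate consoles. The sample alerts and field names below are hypothetical.

```python
from collections import defaultdict

# Hypothetical correlation sketch: cluster alerts from multiple tools around
# the user or device they concern. Sample alerts and field names are invented.

alerts = [
    {"tool": "email",    "entity": "amy@corp.example", "summary": "Suspicious inbound link"},
    {"tool": "endpoint", "entity": "workstation-07",   "summary": "New remote management tool installed"},
    {"tool": "network",  "entity": "workstation-07",   "summary": "Internal SMB scanning"},
    {"tool": "identity", "entity": "amy@corp.example", "summary": "Login from a rare location"},
]

incidents = defaultdict(list)
for alert in alerts:
    incidents[alert["entity"]].append(alert)

for entity, related in incidents.items():
    summaries = "; ".join(a["summary"] for a in related)
    print(f"{entity}: {len(related)} related alerts -> {summaries}")
```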

With 88% of organizations favoring a platform approach over standalone solutions, many are consolidating their tech stacks in this direction. This consolidation provides native visibility across clouds, devices, communications, locations, applications, people, and third-party security tools and intelligence.

Upskilling your existing talent in AI

As revealed in the State of AI Cybersecurity Survey 2024, only 26% of cybersecurity professionals say they have a full understanding of the different types of AI in use within security products [6].

Understanding AI can upskill your existing staff, enhancing their expertise and optimizing business outcomes. Human expertise is crucial for the effective and ethical integration of AI. To enable true AI-human collaboration, cybersecurity professionals need specific training on using, understanding, and managing AI systems. To make this easier, the Darktrace ActiveAI Security Platform is designed to enable collaboration and reduce the learning curve – lowering the barrier to entry for junior or less skilled analysts.  

However, to bridge the immediate expertise gap in managing AI tools, organizations can consider expert managed services that take the day-to-day management out of the SOC’s hands, allowing them to focus on training and proactive initiatives.

Conclusion

Experts predict the cybersecurity skills gap will continue to grow, increasing operational and financial risks for organizations. AI for cybersecurity is crucial for CISOs seeking to augment their teams and scale defense capabilities with speed and predictive insight, while human expertise remains vital for providing the intuition and problem-solving needed for responsible and efficient AI integration.

If you’re thinking about implementing AI to solve your own cyber skills gap, consider the following:

  • Select an AI cybersecurity solution tailored to your specific business needs
  • Review and streamline existing workflows and tools – consider a platform-based approach to eliminate inefficiencies
  • Make use of managed services to outsource AI expertise
  • Upskill and reskill existing talent through training and education
  • Foster a knowledge-sharing culture with access to knowledge bases and collaboration tools

Interested in how AI could augment your SOC to increase efficiency and save resources? Read our longer CISO’s Guide to Navigating the Cybersecurity Skills Shortage.

And to better understand cybersecurity practitioners' attitudes towards AI, check out Darktrace’s State of AI Cybersecurity 2024 report.

References

  1. https://www.isc2.org/research  
  2. https://www.forbes.com/advisor/education/it-and-tech/cybersecurity-statistics/  
  3. https://www.informationweek.com/cyber-resilience/the-psychology-of-cybersecurity-burnout  
  4. https://www.ibm.com/downloads/cas/1KZ3XE9D  
  5. https://www.ibm.com/downloads/cas/1KZ3XE9D  
  6. https://darktrace.com/resources/state-of-ai-cyber-security-2024



April 16, 2025

Why Data Classification Isn’t Enough to Prevent Data Loss


Why today’s data is fundamentally difficult to protect

Data isn’t what it used to be. It’s no longer confined to neat rows in a database, or tucked away in a secure on-prem server. Today, sensitive information moves freely between cloud platforms, SaaS applications, endpoints, and a globally distributed workforce – often in real time. The sheer volume and diversity of modern data make it inherently harder to monitor, classify, and secure. And the numbers reflect this challenge – 63% of breaches stem from malicious insiders or human error.

This complexity is compounded by an outdated reliance on manual data management. While data classification remains critical – particularly to ensure compliance with regulations like GDPR or HIPAA – the burden of managing this data often falls on overstretched security teams. Security teams are expected to identify, label, and track data across sprawling ecosystems, which can be time-consuming and error-prone. Even with automation, rigid policies that depend on pre-defined data classification miss the mark.

From a data protection perspective, if manual or basic automated classification is the sole methodology for preventing data loss, critical data will likely slip through the cracks. Security teams are left scrambling to fill the gaps, facing compliance risks and increasing operational overhead. Over time, the hidden costs of these inefficiencies pile up, draining resources and reducing the effectiveness of your entire security posture.

What traditional data classification can’t cover

Data classification plays an important role in data loss prevention, but it's only half the puzzle. It’s designed to spot known patterns and apply labels, yet the most common causes of data breaches don’t follow rules. They stem from something far harder to define: human behavior.

When Darktrace began developing its data loss detection capabilities, the question wasn’t what data to protect — it was how to understand the people using it. The numbers pointed clearly to where AI could make the biggest difference: 22% of email data breaches stem directly from user error, while malicious insider threats remain the most expensive, costing organizations an average of $4.99 million per incident.

Data classification is blind to nuance – it can’t grasp intent, context, or the subtle red flags that often precede a breach. And no amount of labeling, policy, or training can fully account for the reality that humans make mistakes. These problems require a system that sees beyond the data itself — one that understands how it’s being used, by whom, and in what context. That’s why Darktrace leans into its core strength: detecting the subtle symptoms of data loss by interpreting human behavior, not just file labels.

Achieving autonomous data protection with behavioral AI

Rather than relying on manual processes to understand what’s important, Darktrace uses its industry-leading AI to learn how your organization uses data — and spot when something looks wrong.

Its understanding of business operations allows it to detect subtle anomalies around data movement for your use cases, whether that’s a misdirected email, an insecure cloud storage link, or suspicious activity from an insider. Crucially, this detection is entirely autonomous, with no need for predefined rules or static labels.
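
For illustration only, the sketch below shows one simple behavioral signal of the kind described above: flagging outbound mail to a recipient domain the sender has never contacted before, and escalating when it carries a large attachment. The thresholds, actions, and addresses are assumptions, not Darktrace's model.

```python
from collections import defaultdict

# Hypothetical sketch of a behavior-based outbound check: flag mail to a
# recipient domain the sender has never contacted before, and escalate when it
# carries a large attachment. Thresholds, actions, and addresses are invented.

known_recipient_domains = defaultdict(set)  # sender -> domains previously emailed


def assess_outbound(sender: str, recipient: str, attachment_bytes: int) -> str:
    domain = recipient.split("@")[-1].lower()
    first_contact = domain not in known_recipient_domains[sender]
    known_recipient_domains[sender].add(domain)

    if first_contact and attachment_bytes > 5_000_000:
        return "hold"   # unusual recipient plus a large attachment: hold for review
    if first_contact:
        return "warn"   # possible misdirected email: prompt the sender to confirm
    return "allow"


print(assess_outbound("amy@corp.example", "alice@known-vendor.example", 0))           # warn: first contact
print(assess_outbound("amy@corp.example", "alice@known-vendor.example", 0))           # allow: seen before
print(assess_outbound("amy@corp.example", "bob@personal-mail.example", 20_000_000))   # hold: new domain + large file
```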

Fig 1: Darktrace uses its contextual understanding of each user to stop all types of sensitive or misdirected data from leaving the organization

Darktrace / EMAIL’s DLP add-on continuously learns in real time, enabling:

  • Automatic detection: Identifies risky data behavior to catch threats that traditional approaches miss – from human error to sophisticated insider threats.
  • A dynamic range of actions: Darktrace always aims to avoid business disruption in its blocking actions, but this can be adjusted according to the unique risk appetite of each customer – taking the most appropriate response for that business from a full range of possibilities.
  • Enhanced context: While Darktrace doesn’t require sensitivity data labeling, it integrates with Microsoft Purview to ingest sensitivity labels and enrich its understanding of the data – for even more accurate decision-making.

Beyond preventing data loss, Darktrace uses DLP activity to enhance its contextual understanding of the user itself. In other words, outbound activity can be a useful symptom in identifying a potential account compromise, or can be used to give context to that user’s inbound activity. Because Darktrace sees the whole picture of a user across their inbound, outbound, and lateral mail, as well as messaging (and into collaboration tools with Darktrace / IDENTITY), every interaction informs its continuous learning of normal.

With Darktrace, you can achieve dynamic data loss prevention for the most challenging human-related use cases – from accidental misdirected recipients to malicious insiders – that evade detection by manual classification. So don’t stand still on data protection – make the switch to autonomous, adaptive DLP that understands your business, data, and people.


About the author
Carlos Gray
Senior Product Marketing Manager, Email


April 14, 2025

Email bombing exposed: Darktrace’s email defense in action


What is email bombing?

An email bomb attack, also known as a "spam bomb," is a cyberattack in which a large volume of emails – ranging from as few as 100 to several thousand – is sent to victims within a short period.

How does email bombing work?

Email bombing is a tactic that typically aims to disrupt operations and conceal malicious emails, potentially setting the stage for further social engineering attacks. Parallels can be drawn to the use of Domain Generation Algorithm (DGA) endpoints in Command-and-Control (C2) communications, where an attacker generates new and seemingly random domains in order to mask their malicious connections and evade detection.

In an email bomb attack, threat actors typically sign up their targeted recipients to a large number of email subscription services, flooding their inboxes with indirectly subscribed content [1].

Multiple threat actors have been observed utilizing this tactic, including the Ransomware-as-a-Service (RaaS) group Black Basta, also known as Storm-1811 [1] [2].

Darktrace detection of email bombing attack

In early 2025, Darktrace detected an email bomb attack where malicious actors flooded a customer's inbox while also employing social engineering techniques, specifically voice phishing (vishing). The end goal appeared to be infiltrating the customer's network by exploiting legitimate administrative tools for malicious purposes.

The emails in these attacks often bypass traditional email security tools because they are not technically classified as spam, due to the assumption that the recipient has subscribed to the service. Darktrace / EMAIL's behavioral analysis identified the mass of unusual, albeit not inherently malicious, emails that were sent to this user as part of this email bombing attack.

Email bombing attack overview

In February 2025, Darktrace observed an email bombing attack in which a user received over 150 emails from 107 unique domains in under five minutes. Each of these emails bypassed a widely used and reputable Secure Email Gateway (SEG) but was detected by Darktrace / EMAIL.

Figure 1: Graph showing the unusual spike in unusual emails observed by Darktrace / EMAIL.

The emails varied in senders, topics, and even languages, with several identified as being in German and Spanish. The most common theme in the subject line of these emails was account registration, indicating that the attacker used the victim’s address to sign up to various newsletters and subscriptions, prompting confirmation emails. Such confirmation emails are generally considered both important and low risk by email filters, meaning most traditional security tools would allow them without hesitation.

Additionally, many of the emails were sent using reputable marketing tools, such as Mailchimp’s Mandrill platform, which was used to send almost half of the observed emails, further adding to their legitimacy.

Figure 2: Darktrace / EMAIL’s detection of an email being sent using the Mandrill platform.
Figure 3: Darktrace / EMAIL’s detection of a large number of unusual emails sent during a short period of time.

While the individual emails detected were typically benign, such as the newsletter from a legitimate UK airport shown in Figure 3, the harmful aspect was the swarm effect caused by receiving many emails within a short period of time.

Traditional security tools, which analyze emails individually, often struggle to identify email bombing incidents. However, Darktrace / EMAIL recognized the unusual volume of new domain communication as suspicious. Had Darktrace / EMAIL been enabled in Autonomous Response mode, it would have automatically held any suspicious emails, preventing them from landing in the recipient’s inbox.

Figure 4: Example of Darktrace / EMAIL’s response to an email bombing attack taken from another customer environment.
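
To show why judging the swarm rather than the individual email matters, here is a minimal sketch that counts distinct sender domains reaching one mailbox inside a short window. The five-minute window and domain threshold are illustrative choices echoing the incident above, not Darktrace's detection logic.

```python
from collections import defaultdict, deque

# Illustrative sketch: rather than judging each email alone, count how many
# distinct sender domains reach one mailbox inside a short window. The window
# and threshold below are invented values echoing the incident described above.

WINDOW_SECONDS = 300       # five minutes
DOMAIN_THRESHOLD = 50      # distinct sender domains per recipient per window

recent = defaultdict(deque)  # recipient -> deque of (timestamp, sender_domain)


def record_email(recipient: str, sender_domain: str, timestamp: float) -> bool:
    """Return True when the recipient appears to be under an email bomb."""
    events = recent[recipient]
    events.append((timestamp, sender_domain))
    while events and timestamp - events[0][0] > WINDOW_SECONDS:
        events.popleft()
    unique_domains = {domain for _, domain in events}
    return len(unique_domains) >= DOMAIN_THRESHOLD


# Example: 120 confirmation emails from 120 different domains within four minutes.
suspected = False
for i in range(120):
    if record_email("victim@corp.example", f"newsletter{i}.example", timestamp=i * 2.0):
        suspected = True
print("email bombing suspected:", suspected)
```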

Following the initial email bombing, the malicious actor made multiple attempts to engage the recipient in a call using Microsoft Teams, spoofing the organization’s IT department in order to establish a sense of trust and urgency. In the wake of the spike in unusual emails, the user accepted the Teams call. It was later confirmed by the customer that the attacker had also targeted over 10 additional internal users with email bombing attacks and fake IT calls.

The customer also confirmed that the malicious actor successfully convinced the user to share their credentials via the Microsoft Quick Assist remote management tool. While such remote management tools are typically used for legitimate administrative purposes, malicious actors can exploit them to move laterally between systems or maintain access on target networks. When these tools have been previously observed in the network, attackers may use them to pursue their goals while evading detection, a technique commonly known as Living-off-the-Land (LOTL).

Subsequent investigation by Darktrace’s Security Operations Centre (SOC) revealed that the recipient's device began scanning and performing reconnaissance activities shortly following the Teams call, suggesting that the user inadvertently exposed their credentials, leading to the device's compromise.

Darktrace’s Cyber AI Analyst was able to identify these activities and group them together into one incident, while also highlighting the most important stages of the attack.

Figure 5: Cyber AI Analyst investigation showing the initiation of the reconnaissance/scanning activities.

The first network-level activity observed on this device was unusual LDAP reconnaissance of the wider network environment, with the device seemingly attempting to bind to the local directory services. Following successful authentication, the device began querying the LDAP directory for information about user and root entries. Darktrace then observed the attacker performing network reconnaissance, initiating a scan of the customer’s environment and attempting to connect to other internal devices. Finally, the malicious actor attempted several SMB sessions and NTLM authentications to internal devices, all of which failed.
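
As a simplified illustration of how this kind of scanning stands out at the network level, the sketch below flags a device that fails to reach an unusually large number of distinct internal hosts over SMB (port 445). The threshold and flow-record format are assumptions for demonstration, not Darktrace's detection logic.

```python
from collections import defaultdict

# Minimal sketch, assuming flow records of (source, destination, port, success):
# a device that fails to reach many distinct internal hosts over SMB (port 445)
# is a rough proxy for the scanning described above. The threshold is invented.

SCAN_THRESHOLD = 25  # distinct internal destinations failed on port 445

failed_smb = defaultdict(set)  # source device -> destinations it failed to reach


def observe_flow(source: str, destination: str, port: int, succeeded: bool) -> bool:
    """Return True when a source looks like it is sweeping internal SMB services."""
    if port == 445 and not succeeded:
        failed_smb[source].add(destination)
    return len(failed_smb[source]) >= SCAN_THRESHOLD


# Example: a compromised workstation probing 40 internal addresses over SMB.
flagged = any(
    observe_flow("workstation-07", f"10.0.0.{host}", 445, succeeded=False)
    for host in range(1, 41)
)
print("suspicious SMB scanning:", flagged)
```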

Figure 6: Device event log in Darktrace / NETWORK, showing the large volume of connection attempts over port 445.
Figure 7: Darktrace / NETWORK’s detection of the number of login attempts via SMB/NTLM.

While Darktrace’s Autonomous Response capability suggested actions to shut down this suspicious internal connectivity, the deployment was configured in Human Confirmation Mode. This meant any actions required human approval, allowing the activities to continue until the customer’s security team intervened. If Darktrace had been set to respond autonomously, it would have blocked connections to port 445 and enforced a “pattern of life” to prevent the device from deviating from expected activities, thus shutting down the suspicious scanning.

Conclusion

Email bombing attacks can pose a serious threat to individuals and organizations by overwhelming inboxes with emails in an attempt to obfuscate potentially malicious activities, like account takeovers or credential theft. While many traditional gateways struggle to keep pace with the volume of these attacks—analyzing individual emails rather than connecting them and often failing to distinguish between legitimate and malicious activity—Darktrace is able to identify and stop these sophisticated attacks without latency.

Thanks to its Self-Learning AI and Autonomous Response capabilities, Darktrace ensures that even seemingly benign email activity is not lost in the noise.

Credit to Maria Geronikolou (Cyber Analyst and SOC Shift Supervisor), Cameron Boyd (Cyber Security Analyst), Steven Haworth (Senior Director of Threat Modeling), and Ryan Traill (Analyst Content Lead).

Appendices

[1] https://www.microsoft.com/en-us/security/blog/2024/05/15/threat-actors-misusing-quick-assist-in-social-engineering-attacks-leading-to-ransomware/

[2] https://thehackernews.com/2024/12/black-basta-ransomware-evolves-with.html

Darktrace Model Alerts

Internal Reconnaissance

  • Device / Suspicious SMB Scanning Activity
  • Device / Anonymous NTLM Logins
  • Device / Network Scan
  • Device / Network Range Scan
  • Device / Suspicious Network Scan Activity
  • Device / ICMP Address Scan
  • Anomalous Connection / Large Volume of LDAP Download
  • Device / Suspicious LDAP Search Operation
  • Device / Large Number of Model Alerts

About the author
Maria Geronikolou
Cyber Analyst