July 24, 2024

The State of AI in Cybersecurity: Understanding AI Technologies

Part 4: This blog explores the findings from Darktrace’s State of AI Cybersecurity Report on security professionals' understanding of the different types of AI used in security programs. Get the latest insights into the evolving challenges, growing demand for skilled professionals, and the need for integrated security solutions by downloading the full report.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
The Darktrace Community

About the State of AI Cybersecurity Report

Darktrace surveyed 1,800 CISOs, security leaders, administrators, and practitioners from industries around the globe. Our research was conducted to understand how organizations are managing the adoption of new AI-powered offensive and defensive cybersecurity technologies.

This blog continues the conversation from “The State of AI in Cybersecurity: Unveiling Global Insights from 1,800 Security Practitioners,” focusing on security professionals’ understanding of the AI technologies used in cybersecurity tools.

To download the full report, click here.

How familiar are security professionals with supervised machine learning?

Just 31% of security professionals report that they are “very familiar” with supervised machine learning.

Many participants admitted unfamiliarity with various AI types. Less than one-third felt "very familiar" with the technologies surveyed: only 31% with supervised machine learning and 28% with natural language processing (NLP).

Most participants were "somewhat" familiar, ranging from 46% for supervised machine learning to 36% for generative adversarial networks (GANs). Executives and those in larger organizations reported the highest familiarity.

Combining "very" and "somewhat" familiar responses, 77% had some familiarity with supervised machine learning, 74% with generative AI, and 73% with NLP. With generative AI receiving so much media attention, and NLP being the broader area of AI that encompasses generative AI, these results may indicate that stakeholders’ understanding is based on buzz rather than hands-on work with the technologies.

If defenders hope to get ahead of attackers, they will need to go beyond supervised learning algorithms trained on known attack patterns and generative AI. Instead, they’ll need to adopt a comprehensive toolkit comprising multiple, varied AI approaches—including unsupervised algorithms that continuously learn from an organization’s specific data rather than relying on big-data generalizations.

Different types of AI

Different types of AI have different strengths and use cases in cybersecurity, so it’s important to choose the right technique for what you’re trying to achieve. (A brief illustrative sketch contrasting two of these approaches follows the list below.)

Supervised machine learning: Applied more often than any other type of AI in cybersecurity. Trained on known attack patterns and historical threat intelligence.

Large language models (LLMs): Apply deep learning models trained on extremely large data sets to understand, summarize, and generate new content. Used in generative AI tools.

Natural language processing (NLP): Applies computational techniques to process and understand human language.  

Unsupervised machine learning: Continuously learns from raw, unstructured data to identify deviations that represent true anomalies.  
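To make the contrast concrete, here is a minimal, illustrative Python sketch (using scikit-learn and synthetic data; not any vendor’s implementation). A supervised classifier can only recognize traffic that resembles the labeled attack patterns it was trained on, while an unsupervised anomaly detector learns a baseline from unlabeled data and flags deviations from it. The feature names, labels, and thresholds below are invented for illustration.

```python
# Illustrative sketch only: contrasting supervised and unsupervised learning on
# toy network-traffic features. Feature meanings and labels are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(0)

# Supervised: requires labeled examples of known-bad and known-good traffic,
# so it can only flag activity resembling what it was trained on.
X_labeled = rng.normal(size=(1000, 4))            # e.g. bytes, duration, port entropy, fan-out
y_labeled = (X_labeled[:, 3] > 1.5).astype(int)   # toy "known attack" label
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_labeled, y_labeled)

# Unsupervised: learns the shape of "normal" from unlabeled data and surfaces
# deviations, so it can catch activity that matches no known signature.
X_unlabeled = rng.normal(size=(1000, 4))
iso = IsolationForest(contamination=0.01, random_state=0).fit(X_unlabeled)

new_traffic = rng.normal(size=(5, 4))
print("supervised verdicts:  ", clf.predict(new_traffic))   # 1 = resembles known attacks
print("unsupervised verdicts:", iso.predict(new_traffic))   # -1 = anomalous vs. learned normal
```

The practical difference is the data each approach needs: the supervised model is only as good as its labeled history, while the unsupervised model adapts to whatever "normal" looks like in the environment it observes.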

What impact will generative AI have on the cybersecurity field?

More than half of security professionals (57%) believe that generative AI will have a bigger impact on their field over the next few years than other types of AI.

Figure 1: Chart from Darktrace’s State of AI in Cybersecurity Report, showing the types of AI expected to impact security the most.

Security stakeholders are highly aware of generative AI and LLMs, viewing them as pivotal to the field's future. Generative AI excels at abstracting information, automating tasks, and facilitating human-computer interaction. However, LLMs can "hallucinate" due to training data errors and are vulnerable to prompt injection attacks. Despite improvements in securing LLMs, the best cyber defenses use a mix of AI types for enhanced accuracy and capability.

AI education is crucial as industry expectations for generative AI grow. Leaders and practitioners need to understand where and how to use AI while managing risks. As they learn more, there will be a shift from generative AI to broader AI applications.

Do security professionals fully understand the different types of AI in security products?

Only 26% of security professionals report a full understanding of the different types of AI in use within security products.

Confusion is prevalent in today’s marketplace. Our survey found that only 26% of respondents fully understand the AI types in their security stack, while 31% are unsure or confused by vendor claims. Nearly 65% believe generative AI is the main type of AI used in cybersecurity, even though its practical security use cases remain narrow, such as identifying phishing emails. This highlights a gap between user expectations and vendor delivery, with too much focus on generative AI.

Key findings include:

  • Executives and managers report higher understanding than practitioners.
  • Larger organizations have better understanding due to greater specialization.

As AI evolves, vendors are rapidly introducing new solutions faster than practitioners can learn to use them. There's a strong need for greater vendor transparency and more education for users to maximize the technology's value.

To help ease confusion around AI technologies in cybersecurity, Darktrace has released the CISO’s Guide to Cyber AI, a comprehensive white paper that categorizes the different applications of AI in cybersecurity. Download the white paper here.

Do security professionals believe generative AI alone is enough to stop zero-day threats?

No! 86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.

This consensus spans all geographies, organization sizes, and roles, though executives are slightly less likely to agree. Asia-Pacific participants agree more, while U.S. participants agree less.

Despite expecting generative AI to have the most impact, respondents recognize its limited security use cases and its need to work alongside other AI types. This highlights the necessity for vendor transparency and varied AI approaches for effective security across threat prevention, detection, and response.

Stakeholders must understand how AI solutions work to ensure they offer advanced, rather than outdated, threat detection methods. The survey shows awareness that old methods are insufficient.

To access the full report, click here.


April 10, 2025

Email bombing exposed: Darktrace’s email defense in action


What is email bombing?

An email bomb attack, also known as a "spam bomb," is a cyberattack in which a large volume of emails—ranging from as few as 100 to as many as several thousand—is sent to a victim within a short period.

How does email bombing work?

Email bombing is a tactic that typically aims to disrupt operations and conceal malicious emails, potentially setting the stage for further social engineering attacks. Parallels can be drawn to the use of Domain Generation Algorithm (DGA) endpoints in Command-and-Control (C2) communications, where an attacker generates new and seemingly random domains in order to mask their malicious connections and evade detection.

In an email bomb attack, threat actors typically sign their targeted recipients up to a large number of email subscription services, flooding their inboxes with subscription and confirmation content the victims never requested [1].

Multiple threat actors have been observed utilizing this tactic, including the Ransomware-as-a-Service (RaaS) group Black Basta, also known as Storm-1811 [1] [2].

Darktrace detection of email bombing attack

In early 2025, Darktrace detected an email bomb attack where malicious actors flooded a customer's inbox while also employing social engineering techniques, specifically voice phishing (vishing). The end goal appeared to be infiltrating the customer's network by exploiting legitimate administrative tools for malicious purposes.

The emails in these attacks often bypass traditional email security tools because they are not technically classified as spam, due to the assumption that the recipient has subscribed to the service. Darktrace / EMAIL's behavioral analysis identified the mass of unusual, albeit not inherently malicious, emails that were sent to this user as part of this email bombing attack.

Email bombing attack overview

In February 2025, Darktrace observed an email bombing attack in which a user received over 150 emails from 107 unique domains in under five minutes. All of these emails bypassed a widely used and reputable Secure Email Gateway (SEG) but were detected by Darktrace / EMAIL.

Figure 1: Graph showing the unusual spike in unusual emails observed by Darktrace / EMAIL.

The emails varied in senders, topics, and even languages, with several identified as being in German and Spanish. The most common theme in the subject lines was account registration, indicating that the attacker had used the victim’s address to sign up for various newsletters and subscriptions, prompting confirmation emails. Such confirmation emails are generally considered both important and low risk by email filters, meaning most traditional security tools would allow them without hesitation.

Additionally, many of the emails were sent using reputable marketing tools, such as Mailchimp’s Mandrill platform, which was used to send almost half of the observed emails, further adding to their legitimacy.

Figure 2: Darktrace / EMAIL’s detection of an email being sent using the Mandrill platform.
Figure 3: Darktrace / EMAIL’s detection of a large number of unusual emails sent during a short period of time.

While the individual emails detected were typically benign, such as the newsletter from a legitimate UK airport shown in Figure 3, the harmful aspect was the swarm effect caused by receiving many emails within a short period of time.

Traditional security tools, which analyze emails individually, often struggle to identify email bombing incidents. However, Darktrace / EMAIL recognized the unusual volume of new domain communication as suspicious. Had Darktrace / EMAIL been enabled in Autonomous Response mode, it would have automatically held any suspicious emails, preventing them from landing in the recipient’s inbox.

Figure 4: Example of Darktrace / EMAIL’s response to an email bombing attack, taken from another customer environment.
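To illustrate the kind of volume-and-novelty signal described above, the sketch below applies a simple sliding-window heuristic: an inbox suddenly receiving a burst of emails from many previously unseen sender domains is flagged, even though each individual email looks benign. This is a simplified illustration, not Darktrace / EMAIL’s detection logic; the thresholds, event format, and toy data are assumptions.

```python
# Illustrative sliding-window heuristic for email bombing (not Darktrace's logic).
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
MAX_EMAILS = 50        # assumed threshold: emails per recipient per window
MAX_NEW_DOMAINS = 30   # assumed threshold: previously unseen sender domains per window

def detect_email_bomb(events, known_domains):
    """events: time-ordered iterable of (timestamp, recipient, sender_domain)."""
    window, alerts = deque(), []
    for ts, recipient, domain in events:
        window.append((ts, recipient, domain))
        # Discard events that have fallen outside the sliding window
        while window and ts - window[0][0] > WINDOW:
            window.popleft()
        in_window = [e for e in window if e[1] == recipient]
        new_domains = {d for _, _, d in in_window if d not in known_domains}
        if len(in_window) > MAX_EMAILS and len(new_domains) > MAX_NEW_DOMAINS:
            alerts.append((ts, recipient, len(in_window), len(new_domains)))
    return alerts

# Toy usage: ~150 confirmation emails from unique domains hitting one inbox in ~4 minutes
start = datetime(2025, 2, 1, 9, 0, 0)
events = [(start + timedelta(seconds=1.5 * i), "user@example.com", f"newsletter{i}.example")
          for i in range(150)]
print(detect_email_bomb(events, known_domains=set())[0])
```

No single email in the toy data would trip a per-message filter; only the aggregate behavior does, which is why per-email analysis struggles with this attack pattern.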

Following the initial email bombing, the malicious actor made multiple attempts to engage the recipient in a call on Microsoft Teams while spoofing the organization’s IT department in order to establish a sense of trust and urgency. Following the spike in unusual emails, the user accepted the Teams call. It was later confirmed by the customer that the attacker had also targeted more than 10 additional internal users with email bombing attacks and fake IT calls.

The customer also confirmed that the malicious actor successfully convinced the user to divulge their credentials using the Microsoft Quick Assist remote management tool. While such remote management tools are typically used for legitimate administrative purposes, malicious actors can exploit them to move laterally between systems or maintain access on target networks. When these tools are already present in the network, attackers may use them to pursue their goals while evading detection, a practice commonly known as Living-off-the-Land (LOTL).

Subsequent investigation by Darktrace’s Security Operations Centre (SOC) revealed that the recipient's device began scanning and performing reconnaissance activities shortly following the Teams call, suggesting that the user inadvertently exposed their credentials, leading to the device's compromise.

Darktrace’s Cyber AI Analyst was able to identify these activities and group them together into one incident, while also highlighting the most important stages of the attack.

Figure 5: Cyber AI Analyst investigation showing the initiation of the reconnaissance/scanning activities.

The first network-level activity observed on this device was unusual LDAP reconnaissance of the wider network environment, with the device seemingly attempting to bind to the local directory services. Following successful authentication, the device began querying the LDAP directory for information about user and root entries. Darktrace then observed the attacker performing network reconnaissance, initiating a scan of the customer’s environment and attempting to connect to other internal devices. Finally, the malicious actor initiated several SMB sessions and NTLM authentication attempts to internal devices, all of which failed.

Figure 6: Device event log in Darktrace / NETWORK, showing the large volume of connection attempts over port 445.
Figure 7: Darktrace / NETWORK’s detection of the number of login attempts via SMB/NTLM.
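As a rough illustration of why this activity stands out at the network level, the sketch below flags any host that attempts SMB connections (port 445) to an unusually large number of distinct internal peers with a high failure rate, which is the classic signature of the scanning seen here. The log format and thresholds are assumptions for illustration and do not reflect Darktrace’s models.

```python
# Illustrative SMB-scanning heuristic over generic connection records (assumed format).
from collections import defaultdict

FAN_OUT_THRESHOLD = 20   # assumed: distinct internal peers contacted on port 445
FAILURE_RATIO = 0.8      # assumed: fraction of those attempts that fail

def flag_smb_scanners(conn_records):
    """conn_records: iterable of dicts with 'src', 'dst', 'dst_port', 'success' keys."""
    peers, failures, totals = defaultdict(set), defaultdict(int), defaultdict(int)
    for rec in conn_records:
        if rec["dst_port"] != 445:
            continue
        peers[rec["src"]].add(rec["dst"])
        totals[rec["src"]] += 1
        if not rec["success"]:
            failures[rec["src"]] += 1
    return [src for src in peers
            if len(peers[src]) >= FAN_OUT_THRESHOLD
            and failures[src] / totals[src] >= FAILURE_RATIO]

# Toy usage: one workstation sweeping 50 internal hosts over SMB, every attempt failing
records = [{"src": "10.0.5.23", "dst": f"10.0.5.{i}", "dst_port": 445, "success": False}
           for i in range(1, 51)]
print(flag_smb_scanners(records))   # ['10.0.5.23']
```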

While Darktrace’s Autonomous Response capability suggested actions to shut down this suspicious internal connectivity, the deployment was configured in Human Confirmation Mode. This meant any actions required human approval, allowing the activities to continue until the customer’s security team intervened. If Darktrace had been set to respond autonomously, it would have blocked connections to port 445 and enforced a “pattern of life” to prevent the device from deviating from expected activities, thus shutting down the suspicious scanning.
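For illustration only, the sketch below shows how a scanning detection might be translated into the kind of time-bounded containment described above, such as blocking outbound connections to port 445 from the affected device until it returns to expected behavior. The action format and the example iptables rule are generic assumptions, not Darktrace’s Autonomous Response implementation, which acts through its own enforcement integrations.

```python
# Generic, illustrative containment action for a device flagged as scanning.
# Not Darktrace's implementation; formats and the firewall rule are assumptions.
from datetime import datetime, timedelta, timezone

def build_containment_action(device_ip, blocked_port=445, duration_hours=2):
    expires = datetime.now(timezone.utc) + timedelta(hours=duration_hours)
    return {
        "action": "block_outbound",
        "src_ip": device_ip,
        "protocol": "tcp",
        "dst_port": blocked_port,
        "expires_at": expires.isoformat(),
        # Equivalent firewall rule, for illustration only
        "example_rule": f"iptables -I FORWARD -s {device_ip} -p tcp --dport {blocked_port} -j DROP",
    }

print(build_containment_action("10.0.5.23"))
```

The time bound matters: the goal is to contain the suspicious behavior without permanently quarantining a device that may otherwise be operating normally.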

Conclusion

Email bombing attacks can pose a serious threat to individuals and organizations by overwhelming inboxes with emails in an attempt to obfuscate potentially malicious activities, like account takeovers or credential theft. While many traditional gateways struggle to keep pace with the volume of these attacks—analyzing individual emails rather than connecting them and often failing to distinguish between legitimate and malicious activity—Darktrace is able to identify and stop these sophisticated attacks without latency.

Thanks to its Self-Learning AI and Autonomous Response capabilities, Darktrace ensures that even seemingly benign email activity is not lost in the noise.

Credit to Maria Geronikolou (Cyber Analyst and SOC Shift Supervisor), Cameron Boyd (Cyber Security Analyst), Steven Haworth (Senior Director of Threat Modeling), and Ryan Traill (Analyst Content Lead).

Appendices

[1] https://www.microsoft.com/en-us/security/blog/2024/05/15/threat-actors-misusing-quick-assist-in-social-engineering-attacks-leading-to-ransomware/

[2] https://thehackernews.com/2024/12/black-basta-ransomware-evolves-with.html

Darktrace Model Alerts

Internal Reconnaissance

• Device / Suspicious SMB Scanning Activity
• Device / Anonymous NTLM Logins
• Device / Network Scan
• Device / Network Range Scan
• Device / Suspicious Network Scan Activity
• Device / ICMP Address Scan
• Anomalous Connection / Large Volume of LDAP Download
• Device / Suspicious LDAP Search Operation
• Device / Large Number of Model Alerts

About the author
Maria Geronikolou
Cyber Analyst


April 11, 2025

FedRAMP High-compliant email security protects federal agencies from nation-state attacks


What is FedRAMP High Authority to Operate (ATO)?

The Federal Risk and Authorization Management Program (FedRAMP®) is a government-wide program that promotes the adoption of secure cloud services across the federal government by providing a standardized approach to security and risk assessment for cloud technologies used by federal agencies, ensuring the protection of federal information. FedRAMP High is the program’s most stringent baseline, and a High Authority to Operate (ATO) authorizes a cloud service to handle the government’s most sensitive unclassified data.

Cybersecurity is paramount in the Defense Industrial Base (DIB), where protecting sensitive information and ensuring operational resilience against the most sophisticated adversaries have national security implications. Organizations within the DIB must comply with strict security standards to work with the U.S. federal government, and FedRAMP High is one of those standards.

Darktrace achieves FedRAMP High ATO across IT, OT, and email

Last week, Darktrace Federal shared that we achieved FedRAMP® High ATO, a significant milestone that recognizes our ability to serve federal customers across IT, OT, and email via secure cloud-native deployments.  

Achieving FedRAMP High ATO indicates that Darktrace Federal meets the highest standard for cloud security controls and can handle the U.S. federal government’s most sensitive, unclassified data in cloud environments.

Azure Government email security with FedRAMP High ATO

Darktrace has now released Darktrace Commercial Government Cloud High/Email (DCGC High/Email). This extends our email coverage to systems hosted in Microsoft’s Azure Government, which adheres to NIST SP 800-53 controls and other federal standards. DCGC High/Email meets and exceeds the compliance requirements of the Department of Defense’s Cybersecurity Maturity Model Certification (CMMC), providing organizations with a much-needed email security solution that delivers unparalleled, AI-driven protection against sophisticated cyber threats.

In these ways, DCGC High/Email enhances compliance, security, and operational resilience for government and federally affiliated customers. Notably, it is crucial for securing contractors and suppliers within the DIB, helping those organizations implement the cybersecurity practices necessary to protect Controlled Unclassified Information (CUI) and Federal Contract Information (FCI).

Adopting DCGC High/Email ensures organizations within the DIB can work with the government without needing to invest extensive time and money into meeting the strict compliance standards.

Building DCGC High/Email to ease DIB work with the government

DCGC High/Email was built to achieve FedRAMP High standards and meet the most rigorous security standards required of our customers. This level of compliance not only allows more organizations than ever to leverage our AI-driven technology, but also ensures that customer data is protected by the highest security measures available.

“The DIB has never been more critical to national security, which means its members are under constant threat from nation-states and cybercriminals. We built DCGC High/Email to FedRAMP High controls to ensure sensitive company and federal government communications are secured at the highest level possible.” – Marcus Fowler, CEO of Darktrace Federal

Evolving threats now necessitate DCGC High/Email

According to Darktrace’s 2025 State of AI Cybersecurity report, more than half (54%) of global government cybersecurity professionals report seeing a significant impact from AI-powered cyber threats.  

These aren’t the only types of sophisticated threats. Advanced Persistent Threats (APTs) are launched by nation-states or cyber-criminal groups with the resources to coordinate and achieve long-term objectives.  

These attacks are carefully tailored to specific targets, using techniques like social engineering and spear phishing to gain initial access via the inbox. Once inside, attackers move laterally through networks, often remaining undetected for months or even years, silently gathering intelligence or preparing for a decisive strike.  

However, the barrier to entry for these threat actors has been lowered immensely, which is likely reflected in the observed impact of AI-powered cyber threats. Securing email environments is therefore more important than ever.

Darktrace’s 2025 State of AI Cybersecurity report also found that 89% of government cybersecurity professionals believe AI can help significantly improve their defensive capabilities.  

Darktrace's AI-powered defensive tools are uniquely capable of detecting and neutralizing APTs and other sophisticated threats, including ones that enter via the inbox. Our Self-Learning AI continuously adapts to evolving threats, providing real-time protection.

Darktrace builds to secure the DIB to the highest degree

In summary, Darktrace Federal's achievement of FedRAMP High ATO and the introduction of DCGC High/Email mark significant advancements in our ability to protect defense contractors and federal customers against sophisticated threats that other solutions miss.

For a technical review of Darktrace Federal’s Cyber AI Mission Defense™ solution, download an independent evaluation from the Technology Advancement Center here.


About the author
Marcus Fowler
CEO of Darktrace Federal and SVP of Strategic Engagements and Threats