October 9, 2024

How Darktrace won an email security trial by learning the business, not the breach

Discover how Darktrace identified sophisticated business email compromise (BEC) attacks during a prospective customer trial run alongside two other email security vendors – and won the deal. This case demonstrates the clear differentiator of true unsupervised machine learning applied to the right use cases, compared with generic vendor hype around AI.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Carlos Gray
Senior Product Marketing Manager, Email
Written by
Django Beek
Solutions Engineer

Recently, Darktrace ran a customer trial of our email security product for a leading European infrastructure operator looking to upgrade its email protection.

During this prospective customer trial, Darktrace encountered several security incidents that penetrated existing security layers. Two of these incidents were Business Email Compromise (BEC) attacks, which we’re going to take a closer look at here.  

Darktrace was deployed for a trial at the same time as two other email security vendors, which were also being evaluated by the prospective customer. Darktrace’s superior detection of threats in this trial laid the groundwork for the prospective customer to choose our product.

Let’s dig into some of the elements of this Darktrace tech win and how they came to light during this trial.

Why truly intelligent AI starts learning from scratch

Darktrace’s detection capabilities are powered by true unsupervised machine learning, which detects anomalous activity from its ever-evolving understanding of normal for every unique environment. Consequently, it learns every business from the beginning, training on an organization’s data to understand normal for its users, devices, assets and the millions of connections between them.  

This learning period takes around a week, during which the AI hones its understanding of the business to a precise degree. At this stage, the system may produce some noise or lack precision – an expected trade-off of genuinely unsupervised machine learning. Unlike solutions that promise faster results by relying on preset assumptions, our AI takes the necessary time to learn from scratch, ensuring a deeper understanding and increasingly accurate detection over time.

Real threats detected by Darktrace

Attack 1: Supply chain attack

BEC and supply chain attacks are notoriously difficult to detect, as they take advantage of established, trusted senders.  

This attack came from a legitimate server via a known supplier with which the prospective customer had active and ongoing communication. Using the compromised account, the attacker didn’t just send out randomized spam: they crafted four sophisticated social engineering emails aimed at soliciting users to click on a link, tapping directly into existing conversations. Darktrace / EMAIL was configured in passive mode during this trial; it would otherwise have held the emails before they arrived in the inbox. Luckily, in this instance one user reported the email to the CISO before any other users clicked the link. Upon investigation, the link was found to deliver ransomware set for timed detonation.

Darktrace was the only vendor that caught any of these four emails. Our unique behavioral AI approach enables Darktrace / EMAIL to protect customers from even the most sophisticated attacks that abuse prior trust and relationships.

How did Darktrace catch this attack that other vendors missed?

With traditional email security, security teams have often been obliged to allowlist entire organizations in order to eliminate false positives – on the premise that it is easier to make a broad decision based on a known domain and accept the residual risk of a supply chain attack.

By contrast, Darktrace adopts a zero trust mentality, analyzing every email to understand whether communication that has previously been safe remains safe. That’s why Darktrace is uniquely positioned to detect BEC, based on its deep learning of internal and external users. Because it creates individual profiles – composed of multiple signals – for every account, group and business, it can detect deviations in their communication patterns based on the context and content of each message. We think of this as the ‘self-learning’ vs ‘learning the breach’ differentiator.
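To make that idea concrete, here is a minimal sketch – purely illustrative, not Darktrace’s actual model – of what a per-sender behavioral profile and deviation score could look like. The field names (send_hours, usual_recipients, link_domains) and the scoring scheme are assumptions for the example.

```python
# Illustrative sketch only: a toy per-sender profile that learns a baseline
# from observed mail and scores new emails by how far they deviate from it.
# Field names and the scoring scheme are assumptions, not Darktrace's model.
from dataclasses import dataclass, field

@dataclass
class SenderProfile:
    send_hours: set = field(default_factory=set)        # hours of day this sender normally mails
    usual_recipients: set = field(default_factory=set)  # internal users they normally contact
    link_domains: set = field(default_factory=set)      # domains they normally link to

    def learn(self, email: dict) -> None:
        """Fold an observed benign email into the sender's baseline."""
        self.send_hours.add(email["hour"])
        self.usual_recipients.update(email["recipients"])
        self.link_domains.update(email["link_domains"])

    def anomaly_score(self, email: dict) -> int:
        """Return 0-100: how far this email deviates from the sender's own history."""
        signals = [
            email["hour"] not in self.send_hours,
            not set(email["recipients"]) <= self.usual_recipients,
            any(d not in self.link_domains for d in email["link_domains"]),
        ]
        return int(100 * sum(signals) / len(signals))
```

Under a profile like this, a message from a long-trusted supplier can still score highly if it links to a domain that supplier has never shared before – which is the intuition behind judging ‘known good’ senders email by email rather than by domain.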

Fig 1: Darktrace’s analysis of one of the four malicious emails sent from the trusted supplier’s account. The email received an anomaly score of 100, despite coming from a known correspondent with a known domain relationship and moderate mailing history.

If set in autonomous mode, where it can apply actions, Darktrace / EMAIL would have quarantined all four emails. Using machine learning indicators such as ‘Inducement Shift’ and ‘General Behavioral Anomaly’, it deemed the four emails ‘Out of Character’. It also identified the link as highly likely to be phishing, based purely on its context. These indicators are critical because the link itself belonged to a widely used legitimate domain, leveraging that domain’s established internet reputation to appear safe.
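As a hypothetical illustration of how several such indicators might roll up into a single ‘Out of Character’ verdict – the indicator names mirror those above, but the weights and threshold below are invented for the example:

```python
# Hypothetical illustration only: combining per-email anomaly indicators into a
# hold/deliver decision. Weights and the threshold are invented for the example.
INDICATOR_WEIGHTS = {
    "inducement_shift": 0.40,            # sudden shift toward solicitation or urgency
    "general_behavioral_anomaly": 0.35,  # deviation from the sender's learned behavior
    "contextual_link_suspicion": 0.25,   # link judged phishy from context, not reputation
}

def out_of_character(indicator_scores: dict, threshold: float = 0.6) -> bool:
    """Return True if the weighted anomaly score crosses the hold threshold."""
    total = sum(weight * indicator_scores.get(name, 0.0)
                for name, weight in INDICATOR_WEIGHTS.items())
    return total >= threshold

# Example: an email scoring high on all three indicators would be held.
print(out_of_character({"inducement_shift": 0.9,
                        "general_behavioral_anomaly": 0.85,
                        "contextual_link_suspicion": 0.95}))  # True
```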

Around an hour later, the supplier regained control of the account and sent a legitimate email alerting a wide distribution list to the phishing emails. Darktrace was able to distinguish the earlier malicious emails from these legitimate ones and allowed them through. Compared to other vendors, whose static understanding of what is malicious needs to be manually updated (in cases like this, once a supplier is de-compromised), Darktrace’s deep understanding of external entities enables greater nuance and precision in telling good from bad.

Fig 2: Darktrace let through four emails (subject line: Virus E-Mail) from the supplier once they had regained control of the compromised account, assigning them a limited anomaly score despite having flagged the previous emails from this sender as malicious. If any actions had been taken, a red icon would show on the right-hand side – in this instance Darktrace did not take action and let the emails through.

Attack 2: Microsoft 365 account takeover

As part of building behavioral profiles of every email user, Darktrace analyzes their wider account activity. Account activity, such as unusual login patterns and administrative activity, is a key variable to detect account compromise before malicious activity occurs, but it also feeds into Darktrace’s understanding of which emails should belong in every user’s inbox.  

When the customer experienced an account compromise on day two of the trial, Darktrace began an investigation and was able to provide the full breakdown and scope of the incident.

The account was compromised via an email, which Darktrace would have blocked if it had been deployed autonomously at the time. Once the account had been compromised, detection details included:

  • Unusual Login and Account Update
  • Multiple Unusual External Sources for SaaS Credential
  • Unusual Activity Block
  • Login From Rare Endpoint While User is Active
Fig 3: Darktrace flagged these indicators of compromise, which deviated from normal behavior for the user in question, signaling an account takeover

With Darktrace / EMAIL, every user is analyzed for behavioral signals, including authentication and configuration activity. Here, the unusual login, credential input and rare endpoint were all clear signals of a compromised account, contextualized against what is normal for that employee. That’s because Darktrace isn’t looking at email security merely from the perspective of the inbox: it constantly reevaluates the identity of each individual, group and organization (as defined by their behavioral signals) to determine precisely what belongs in the inbox and what doesn’t.
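A minimal sketch of that contextualization – using hypothetical field names, and mirroring the model names listed above – might look like this; it is an illustration of the idea, not Darktrace’s implementation:

```python
# Minimal sketch, not Darktrace's implementation: contextualizing a SaaS login
# against what is normal for this specific user. Field names are assumptions.
def login_anomaly_signals(login: dict, baseline: dict) -> list[str]:
    """Return the ways in which a login deviates from the user's own history."""
    signals = []
    if login["source_asn"] not in baseline["known_asns"]:
        signals.append("Multiple Unusual External Sources for SaaS Credential")
    if login["endpoint"] not in baseline["known_endpoints"] and baseline["user_currently_active"]:
        signals.append("Login From Rare Endpoint While User is Active")
    if login.get("account_update") and login["hour"] not in baseline["usual_hours"]:
        signals.append("Unusual Login and Account Update")
    return signals
```

The point is that none of these checks depends on the login being malicious in the abstract; each is only anomalous relative to that one user’s history.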

In this instance, Darktrace / EMAIL would have blocked the incident had it not been deployed in passive mode. In the initial intrusion it would have blocked the compromising email. And once the account was compromised, it would have taken direct blocking actions on the account based on the anomalous activity it detected, providing an extra layer of defense beyond the inbox.

Account takeover protection is always part of Darktrace / EMAIL, which can be extended to fully cover Microsoft 365 SaaS with Darktrace / IDENTITY. By bringing SaaS activity into scope, security teams also benefit from an extended set of use cases including compliance and resource management.

Why this customer committed to Darktrace / EMAIL

“Darktrace was the only AI vendor that showed learning,” – CISO, Trial Customer

Throughout this trial, Darktrace evolved its understanding of the trial customer’s business and its email users. It identified attacks that other vendors did not, while allowing safe emails through. Furthermore, the CISO explicitly cited Darktrace as the only technology that demonstrated autonomous learning. Beyond catching threats that other vendors missed, the CISO also valued mature capabilities such as how Darktrace dealt with non-productive mail and business-as-usual emails without any user input. Because of the nature of unsupervised ML, Darktrace’s learning of good and bad will never be static or complete – it will continue to revise its understanding and adapt to the changing business and communications landscape.

This case study highlights a key tenet of Darktrace’s philosophy – that a rules and tuning-based approach will always be one step behind. Delivering benign emails while holding back malicious emails from the same domain demonstrates that safety is not defined in a straight line, or by historical precedent. Only by analyzing every email in-depth for its content and context can you guarantee that it belongs.  

While other solutions are making efforts to improve a static approach with AI, Darktrace’s AI remains truly unsupervised so it is dynamic enough to catch the most agile and evolving threats. This is what allows us to protect our customers by plugging a vital gap in their security stack that ensures they can meet the challenges of tomorrow's email attacks.

Interested in learning more about Darktrace / EMAIL? Check out our product hub.

Download: Darktrace / EMAIL Solution Brief

Discover the most advanced cloud-native AI email security solution to protect your domain and brand while preventing phishing, novel social engineering, business email compromise, account takeover, and data loss.

  • Gain up to 13 days of earlier threat detection and maximize ROI on your current email security
  • Experience 20-25% more threat blocking power with Darktrace / EMAIL
  • Stop the 58% of threats bypassing traditional email security


May 1, 2026

How email-delivered prompt injection attacks can target enterprise AI – and why it matters


What are email-delivered prompt injection attacks?

As organizations rapidly adopt AI assistants to improve productivity, a new class of cyber risk is emerging alongside them: email-delivered AI prompt injection. Unlike traditional attacks that target software vulnerabilities or rely on social engineering, prompt injection embeds malicious or manipulative instructions into content that an AI system will process as part of its normal workflow. Because modern AI tools are designed to ingest and reason over large volumes of data, including emails, documents, and chat histories, they can unintentionally treat hidden attacker-controlled text as legitimate input.

At Darktrace, our analysis has shown a 90% increase in the number of customer deployments exhibiting signals associated with potential prompt injection attempts since we began monitoring for this type of activity in late 2025. While it is not always possible to definitively attribute each instance, internal scoring systems designed to identify characteristics consistent with prompt injection have recorded a growing number of high-confidence matches. The upward trend suggests that attackers are actively experimenting with these techniques.

Recent examples of prompt injection attacks

Two early examples of this evolving threat are HashJack and ShadowLeak, which illustrate prompt injection in practice.

HashJack is a novel prompt injection technique discovered in November 2025 that exploits AI-powered web browsers and agentic AI browser assistants. By hiding malicious instructions within the URL fragment (after the # symbol) of a legitimate, trusted website, attackers can trick AI web assistants into performing malicious actions – potentially inserting phishing links, fake contact details, or misleading guidance directly into what appears to be a trusted AI-generated output.

ShadowLeak, identified in September 2025, is a prompt injection method for exfiltrating PII. It exploited a flaw in ChatGPT (now patched by OpenAI) via an agent connected to email: if attackers sent the target an email containing a hidden prompt, the agent could be tricked into leaking sensitive information to the attacker with no user action or visible UI.

What’s the risk of email-delivered prompt injection attacks?

Enterprise AI assistants often have complete visibility across emails, documents, and internal platforms. This means an attacker does not need to compromise credentials or move laterally through an environment. If successful, they can influence the AI to retrieve relevant information seamlessly, without the labor of compromise and privilege escalation.

The first risk is data exfiltration. In a prompt injection scenario, malicious instructions may be embedded within an ordinary email. As in the ShadowLeak attack, when AI processes that content as part of a legitimate task, it may interpret the hidden text as an instruction. This could result in the AI disclosing sensitive data, summarizing confidential communications, or exposing internal context that would otherwise require significant effort to obtain.

The second risk is agentic workflow poisoning. As AI systems take on more active roles, prompt injection can influence how they behave over time. An attacker could embed instructions that persist across interactions, such as causing the AI to include malicious links in responses or redirect users to untrusted resources. In this way, the attacker inserts themselves into the workflow, effectively acting as a man-in-the-middle within the AI system.

Why can’t other solutions catch email-delivered prompt injection attacks?

AI prompt injection challenges many of the assumptions that traditional email security is built on. It does not fit the usual patterns of phishing, where the goal is to trick a user into clicking a link or opening an attachment.  

Most security solutions are designed to detect signals associated with user engagement: suspicious links, unusual attachments, or social engineering cues. Prompt injection avoids these indicators entirely, meaning there are fewer obvious red flags.

In this case, the intention is actually the opposite of user solicitation. The objective is simply for the email to be delivered and remain in the inbox, appearing benign and unremarkable. The malicious element is not something the recipient is expected to engage with, or even notice.

Detection is further complicated by the nature of the prompts themselves. Unlike known malware signatures or consistent phishing patterns, injected prompts can vary widely in structure and wording. This makes simple pattern-matching approaches, such as regex, unreliable. A broad rule set risks generating large numbers of false positives, while a narrow one is unlikely to capture the diversity of possible injections.

How does Darktrace catch these types of attacks?

Darktrace’s broader approach to email security – looking beyond individual indicators to assess context – applies here as well.

For example, our prompt density score identifies clusters of prompt-like language within an email rather than just single occurrences. Instead of treating the presence of a phrase as a blocking signal, the focus is on whether there is an unusual concentration of these patterns in a way that suggests injection. Additional weighting can be applied where there are signs of obfuscation. For example, text that is hidden from the user – such as white font or font size zero – but still readable by AI systems can indicate an attempt to conceal malicious prompts.
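As a deliberately simplified sketch of that idea – a fixed cue list and regexes stand in for what would in practice be richer, model-driven analysis, and the phrases, style checks and weights here are assumptions rather than Darktrace’s detection logic:

```python
# Deliberately simplified sketch of a "prompt density" check over an email's
# HTML body, with extra weight for text a human reader would not see (white
# font, font-size zero). Cue phrases, patterns and weights are assumptions.
import re

PROMPT_CUES = [
    r"\bignore (all )?previous instructions\b",
    r"\byou are an? (ai|assistant|agent)\b",
    r"\bdo not (mention|reveal|alert)\b",
    r"\b(send|forward) .{0,40} to\b",
]
HIDDEN_SPAN = re.compile(  # crude match for text styled to be invisible
    r'<[^>]+style="[^"]*(?:color:\s*(?:#fff(?:fff)?|white)|font-size:\s*0)[^"]*"[^>]*>(.*?)<',
    re.IGNORECASE | re.DOTALL,
)

def prompt_density(html_body: str) -> float:
    """Prompt-like cues per 100 words, counting hidden-text matches double."""
    text = re.sub(r"<[^>]+>", " ", html_body)           # all text, visible or not
    hidden = " ".join(HIDDEN_SPAN.findall(html_body))   # text hidden from the reader
    words = max(len(text.split()), 1)
    score = 0.0
    for cue in PROMPT_CUES:
        score += len(re.findall(cue, text, re.IGNORECASE))
        score += len(re.findall(cue, hidden, re.IGNORECASE))  # extra weight if obfuscated
    return 100.0 * score / words
```

A real detector would combine a score like this with the broader behavioral context described below, rather than acting on phrase matches alone.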

This is combined with broader behavioral signals. The same communication context used to detect other threats remains relevant, such as whether the content is unusual for the recipient or deviates from normal patterns.

Ask your email security provider about email-delivered AI prompt injection

Prompt injection targets not just employees, but the AI systems they rely on, so security approaches need to account for both.

Though there are clear indications of emerging activity, it remains to be seen how popular prompt injection will be with attackers going forward. Still, considering the potential impact of this attack type, it’s worth checking if this risk has been considered by your email security provider.

Questions to ask your email security provider

  • What safeguards are in place to prevent emails from influencing AI‑driven workflows over time?
  • How do you assess email content that’s benign for a human reader, but may carry hidden instructions intended for AI systems?
  • If an email contains no links, no attachments, and no social engineering cues, what signals would your platform use to identify malicious intent?

Visit the Darktrace / EMAIL product hub to discover how we detect and respond to advanced communication threats.  

Learn more about securing AI in your enterprise.

About the author
Kiri Addison
Senior Director of Product

April 30, 2026

Mythos vs Ethos: Defending in an Era of AI‑Accelerated Vulnerability Discovery


Anthropic’s Mythos and what it means for security teams

Recent attention on systems such as Anthropic’s Mythos highlights a notable problem for defenders: disclosure’s role in coordinating defensive action is eroding.

As AI systems gain stronger reasoning and coding capability, their usefulness in analyzing complex software environments and identifying weaknesses naturally increases. What has changed is not attacker motivation, but the conditions under which defenders learn about and organize around risk. Vulnerability discovery and exploitation increasingly unfold in ways that turn disclosure into a retrospective signal rather than a reliable starting point for defense.

Faster discovery was inevitable and is already visible

The acceleration of vulnerability discovery was already observable across the ecosystem. Publicly disclosed vulnerabilities (CVEs) have grown at double-digit rates for the past two years, including a 32% increase in 2024 according to NIST, driven in part by AI even prior to Anthropic’s Mythos model. Most notably, XBOW topped the HackerOne US bug bounty leaderboard, marking the first time an autonomous penetration tester had done so.

The technical frontier for AI capabilities has been described elsewhere as jagged, and the implication is that Mythos is exceptional but not unique in this capability. While Mythos appears to make significant progress in complex vulnerability analysis, many other models are already able to find and exploit weaknesses to varying degrees.  

What matters here is not which model performs best, but the fact that vulnerability discovery is no longer a scarce or tightly bounded capability.

The consequence of this shift is not simply earlier discovery. It is a change in the defender-attacker race condition. Disclosure once acted as a rough synchronization point. While attackers sometimes had earlier knowledge, disclosure generally marked the moment when risk became visible and defensive action could be broadly coordinated. Increasingly, that coordination will no longer exist. Exploitation may be underway well before a CVE is published, if it is published at all.

Why patch velocity alone is not the answer

The instinctive response to this shift is to focus on patching faster, but treating patch velocity as the primary solution misunderstands the problem. Most organizations are already constrained in how quickly they can remediate vulnerabilities. Asset sprawl, operational risk, testing requirements, uptime commitments, and unclear ownership all limit response speed, even when vulnerabilities are well understood.

If discovery and exploitation now routinely precede disclosure, then patching cannot be the first line of defense. It becomes one necessary control applied within a timeline that has already shifted. This does not imply that organizations should patch less. It means that patching cannot serve as the organizing principle for defense.

Defense needs a more stable anchor

If disclosure no longer defines when defense begins, then defense needs a reference point that does not depend on knowing the vulnerability in advance.  

Every digital environment has a behavioral character. Systems authenticate, communicate, execute processes, and access resources in relatively consistent ways over time. These patterns are not static rules or signatures. They are learned behaviors that reflect how an organization operates.

When exploitation occurs, even via previously unknown vulnerabilities, those behavioral patterns change.

Attackers may use novel techniques, but they still need to gain access, create processes and move laterally – and they will ultimately interact with systems in ways that diverge from what is expected. That deviation is observable regardless of whether the underlying weakness has been formally named.

In an environment where disclosure can no longer be relied on for timing or coordination, behavioral understanding is no longer an optional enhancement; it becomes the only consistently available defensive signal.
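A minimal sketch of that signal, under assumed event field names, could be as simple as tracking what each system has been observed doing and flagging anything outside it; a production system would of course model far more dimensions and weight them probabilistically:

```python
# Minimal sketch, with assumed field names: learn a host's behavioral baseline,
# then flag activity that falls outside it - no CVE or signature required.
from collections import defaultdict

DIMENSIONS = ("process", "dest_port", "dest_asn", "auth_target")

class HostBaseline:
    def __init__(self):
        self.seen = defaultdict(set)   # dimension -> values observed during learning

    def learn(self, event: dict) -> None:
        """Fold an observed event into what is 'normal' for this host."""
        for dim in DIMENSIONS:
            if dim in event:
                self.seen[dim].add(event[dim])

    def deviations(self, event: dict) -> list[str]:
        """Name every way this event departs from the host's own history."""
        return [f"new {dim}: {event[dim]}"
                for dim in DIMENSIONS
                if dim in event and event[dim] not in self.seen[dim]]
```

A web server that has only ever run its application process and spoken to a handful of networks will, post-exploitation, start producing events like “new process” and “new dest_asn” – regardless of which vulnerability let the attacker in.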

Detecting risk before disclosure

Darktrace’s threat research has consistently shown that malicious activity often becomes visible before public disclosure.

In multiple cases, including exploitation of Ivanti, SAP NetWeaver, and Trimble Cityworks, Darktrace detected anomalous behavior days or weeks ahead of CVE publication. These detections did not rely on signatures, threat intelligence feeds, or awareness of the vulnerability itself. They emerged because systems began behaving in ways that did not align with their established patterns.

This reflects a defensive approach grounded in ‘Ethos’, in contrast to the unbounded exploration represented by ‘Mythos’. Here, Mythos describes continuous vulnerability discovery at speed and scale. Ethos reflects an understanding of what is normal and expected within a specific environment, grounded in observed behavior.

Revisiting assume breach

These conditions reinforce a principle long embedded in Zero Trust thinking: assume breach.

If exploitation can occur before disclosure, patching vulnerabilities can no longer act as the organizing principle for defense. Instead, effective defense must focus on monitoring for misuse and constraining attacker activity once access is achieved. Behavioral monitoring allows organizations to identify early‑stage compromise and respond while uncertainty remains, rather than waiting for formal verification.

AI plays a critical role here, not by predicting every exploit, but by continuously learning what normal looks like within a specific environment and identifying meaningful deviation at machine speed. Identifying that deviation enables defenders to respond by constraining activity back towards normal patterns of behavior.

Not an arms race, but an asymmetry

AI is often framed as fueling an arms race between attackers and defenders. In practice, the more important dynamic is asymmetry.

Attackers operate broadly, scanning many environments for opportunities. Defenders operate deeply within their own systems, and it’s this business context which is so significant. Behavioral understanding gives defenders a durable advantage. Attackers may automate discovery, but they cannot easily reproduce what belonging looks like inside a particular organization.

A changed defensive model

AI‑accelerated vulnerability discovery does not mean defenders have lost. It does mean that disclosure‑driven, patch‑centric models no longer provide a sufficient foundation for resilience.

As vulnerability volumes grow and exploitation timelines compress, effective defense increasingly depends on continuous behavioral understanding, detection that does not rely on prior disclosure, and rapid containment to limit impact. In this model, CVEs confirm risk rather than define when defense begins.

The industry has already seen this approach work in practice. As AI continues to reshape both offense and defense, behavioral detection will move from being complementary to being essential.

About the author
Andrew Hollister
Principal Solutions Engineer, Cyber Technician