November 25, 2024

Why Artificial Intelligence is the Future of Cybersecurity

This blog explores the impact of AI on the threat landscape, the benefits of AI in cybersecurity, and the role it plays in enhancing security practices and tools.
Written by
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface

Introduction: AI & Cybersecurity

As artificial intelligence (AI) becomes more commonplace, it is no surprise that threat actors are adopting AI in their attacks at an accelerated pace. AI augments complex tasks such as spear-phishing, deepfakes, polymorphic malware generation, and advanced persistent threat (APT) campaigns, significantly enhancing the sophistication and scale of these operations. This has left security professionals in a reactive state, struggling to keep pace with the proliferation of threats.

As AI reshapes the future of cyber threats, defenders are also looking to integrate AI technologies into their security stack. Adopting AI-powered solutions enables security teams to detect and respond to these advanced threats more quickly and accurately, and to automate traditionally manual and routine tasks. According to Darktrace's 2024 State of AI Cybersecurity Report, improving threat detection, identifying exploitable vulnerabilities, and automating low-level security tasks were the top three ways practitioners saw AI enhancing their security team's capabilities [1], underscoring the wide-ranging applications of AI in cyber defense.

In this blog, we will discuss how AI has impacted the threat landscape, the rise of generative AI and AI adoption in security tools, and the importance of using multiple types of AI in cybersecurity solutions for a holistic and proactive approach to keeping your organization safe.  

The impact of AI on the threat landscape

The integration of AI and cybersecurity has brought about significant advancements across industries. However, it also introduces new security risks that challenge traditional defenses. Three major concerns with adversarial misuse of AI are: (1) an increase in novel social engineering attacks that are harder to detect and able to bypass traditional security tools, (2) easier access for less experienced threat actors to deliver advanced attacks at speed and scale, and (3) attacks on AI itself, including machine learning models, data corpora, and APIs or interfaces.

In the context of social engineering, AI can be used to create more convincing phishing emails, conduct advanced reconnaissance, and simulate human-like interactions to deceive victims more effectively. Generative AI tools, such as ChatGPT, are already being used by adversaries to craft sophisticated phishing emails that more aptly mimic human semantics, free of spelling and grammatical errors, and include personal information pulled from internet sources such as social media profiles. All of this can be done at machine speed and scale. In fact, Darktrace researchers observed a 135% rise in 'novel social engineering attacks' across Darktrace / EMAIL customers in 2023, corresponding to the widespread adoption and use of ChatGPT [2].

Furthermore, these sophisticated social engineering attacks are now able to circumvent traditional security tools. Between December 21, 2023 and July 5, 2024, Darktrace / EMAIL detected 17.8 million phishing emails across its customer fleet, with 62% of these emails successfully bypassing Domain-based Message Authentication, Reporting, and Conformance (DMARC) verification checks [2].

While AI-fueled novel attacks continue to proliferate, AI also lowers the barrier to entry for threat actors. Publicly available AI tools make it easy for adversaries to automate complex tasks that previously required advanced technical skills. Additionally, AI-driven platforms and phishing kits available on the dark web provide ready-made solutions, enabling even novice attackers to execute effective cyber campaigns with minimal effort.

The impact of adversarial use of AI on the ever-evolving threat landscape is important for organizations to understand as it fundamentally changes the way we must approach cybersecurity. However, while the intersection of cybersecurity and AI can have potentially negative implications, it is important to recognize that AI can also be used to help protect us.

A generation of generative AI in cybersecurity

When the topic of AI in cybersecurity comes up, it’s typically in reference to generative AI, which became popularized in 2023. While it does not solely encapsulate what AI cybersecurity is or what AI can do in this space, it’s important to understand what generative AI is and how it can be implemented to help organizations get ahead of today’s threats.  

Generative AI (e.g., ChatGPT or Microsoft Copilot) is a type of AI that creates new or original content. It has the capability to generate images, videos, or text based on information it learns from large datasets. These systems use advanced algorithms and deep learning techniques to understand patterns and structures within the data they are trained on, enabling them to generate outputs that are coherent, contextually relevant, and often indistinguishable from human-created content.

For security professionals, generative AI offers valuable applications. Primarily, it is used to transform complex security data into clear, concise summaries. By analyzing vast amounts of security logs, alerts, and technical data, it can contextualize critical information quickly and present findings in natural, comprehensible language. This helps security teams grasp what matters faster and improves communication with non-technical stakeholders. Generative AI can also automate the creation of realistic simulations for training purposes, helping security teams prepare for various cyberattack scenarios and improve their response strategies.
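To make this concrete, here is a minimal sketch of alert summarization with a general-purpose LLM. It assumes the OpenAI Python client and an API key in the environment, and the alert records are invented for illustration; production tools apply far more context and guardrails than this.

```python
# Minimal sketch: summarizing raw security alerts with a general-purpose LLM.
# Assumes the OpenAI Python client (`pip install openai`) and an OPENAI_API_KEY
# environment variable; the alert format below is hypothetical.
from openai import OpenAI

client = OpenAI()

alerts = [
    {"device": "laptop-042", "event": "Beaconing to rare external endpoint", "count": 37},
    {"device": "srv-db-01", "event": "Unusual outbound data volume (2.3 GB)", "count": 1},
]

prompt = (
    "Summarize the following security alerts for a non-technical executive, "
    "in three sentences or fewer:\n"
    + "\n".join(f"- {a['device']}: {a['event']} (x{a['count']})" for a in alerts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works for this sketch
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point of the sketch is that natural-language summarization sits on top of, rather than replaces, the underlying detection data.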

Despite its advantages, generative AI also has limitations that organizations must consider. One challenge is the potential for generating false positives, where benign activities are mistakenly flagged as threats, which can overwhelm security teams with unnecessary alerts. Moreover, implementing generative AI requires significant computational resources and expertise, which may be a barrier for some organizations. It is also susceptible to prompt injection attacks, and there are risks of intellectual property or sensitive data being leaked when using publicly available generative AI tools. In fact, according to the MIT AI Risk Repository, there are potentially over 700 risks that need to be mitigated with the use of generative AI.


For more information on generative AI's impact on the cyber threat landscape download the Darktrace Data Sheet

Beyond the Generative AI Glass Ceiling

Generative AI has a place in cybersecurity, but security professionals are starting to recognize that it’s not the only AI organizations should be using in their security tool kit. In fact, according to Darktrace’s State of AI Cybersecurity Report, “86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.” As we look toward the future of AI in cybersecurity, it’s critical to understand that different types of AI have different strengths and use cases and choosing the technologies based on your organization’s specific needs is paramount.

There are a few types of AI used in cybersecurity that serve different functions. These include:

Supervised Machine Learning: Widely used in cybersecurity due to its ability to learn from labeled datasets. These datasets include historical threat intelligence and known attack patterns, allowing the model to recognize and predict similar threats in the future. For example, supervised machine learning can be applied to email filtering systems to identify and block phishing attempts by learning from past phishing emails. This is human-led training facilitating automation based on known information.  

Large Language Models (LLMs): Deep learning models trained on extensive datasets to understand and generate human-like text. LLMs can analyze vast amounts of text data, such as security logs, incident reports, and threat intelligence feeds, to identify patterns and anomalies that may indicate a cyber threat. They can also generate detailed and coherent reports on security incidents, summarizing complex data into understandable formats.

Natural Language Processing (NLP): Involves the application of computational techniques to process and understand human language. In cybersecurity, NLP can be used to analyze and interpret text-based data, such as emails, chat logs, and social media posts, to identify potential threats. For instance, NLP can help detect phishing attempts by analyzing the language used in emails for signs of deception.

Unsupervised Machine Learning: Continuously learns from raw, unstructured data without predefined labels. It is particularly useful in identifying new and unknown threats by detecting anomalies that deviate from normal behavior. In cybersecurity, unsupervised learning can be applied to network traffic analysis to identify unusual patterns that may indicate a cyberattack. It can also be used in endpoint detection and response (EDR) systems to uncover previously unknown malware by recognizing deviations from typical system behavior. A brief code sketch contrasting the supervised and unsupervised approaches follows this list.
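The sketch below contrasts the supervised and unsupervised approaches using scikit-learn on synthetic, toy features; real deployments operate on far richer data and models.

```python
# Minimal sketch: supervised classification of known phishing vs. unsupervised
# anomaly detection on network features. Uses scikit-learn; data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import IsolationForest

# --- Supervised: learn from labeled examples (known phishing vs. benign) ---
# Toy features: [num_links, has_urgent_language, sender_domain_age_days]
X_email = np.array([[12, 1, 3], [1, 0, 2400], [9, 1, 10], [2, 0, 1800]])
y_email = np.array([1, 0, 1, 0])          # 1 = phishing, 0 = benign
clf = LogisticRegression().fit(X_email, y_email)
print(clf.predict([[8, 1, 5]]))           # flags a new, similar-looking email

# --- Unsupervised: learn "normal" and flag deviations, no labels required ---
# Toy features: [bytes_out_per_hour, distinct_destinations]
rng = np.random.default_rng(0)
X_net = rng.normal(loc=[5_000, 12], scale=[500, 2], size=(200, 2))
iso = IsolationForest(random_state=0).fit(X_net)
print(iso.predict([[250_000, 900]]))      # -1 = anomaly (e.g. possible exfiltration)
```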

Figure 1: Types of AI in cybersecurity

Employing multiple types of AI in cybersecurity is essential for creating a layered and adaptive defense strategy. Each type of AI, from supervised and unsupervised machine learning to large language models (LLMs) and natural language processing (NLP), brings distinct capabilities that address different aspects of cyber threats. Supervised learning excels at recognizing known threats, while unsupervised learning uncovers new anomalies. LLMs and NLP enhance the analysis of textual data for threat detection and response and aid in understanding and mitigating social engineering attacks. By integrating these diverse AI technologies, organizations can achieve a more holistic and resilient cybersecurity framework, capable of adapting to the ever-evolving threat landscape.

A Multi-Layered AI Approach with Darktrace

AI-powered security solutions are emerging as a crucial line of defense against an AI-powered threat landscape. In fact, most security stakeholders (71%) are confident that AI-powered security solutions will be better able to block AI-powered threats than traditional tools, and 96% agree that AI-powered solutions will level up their organization's defenses. As organizations look to adopt these tools, it is imperative to understand how to evaluate AI vendors to find the right products and how to build trust in these AI-powered solutions.

Darktrace, a leader in AI cybersecurity since 2013, emphasizes interpretability, explainability, and user control, ensuring that its AI is understandable, customizable, and transparent. Darktrace's approach to cyber defense is rooted in the belief that the right type of AI must be applied to the right use cases. Central to this approach is Self-Learning AI, which is crucial for identifying novel cyber threats that most other tools miss. This is complemented by various AI methods, including LLMs, generative AI, and supervised machine learning, which support the Self-Learning AI.

Darktrace focuses on where AI can best augment the people in a security team and where it can be used responsibly to have the most positive impact on their work. With a combination of these AI techniques, applied to the right use cases, Darktrace enables organizations to tailor their AI defenses to unique risks, providing extended visibility across their entire digital estates with the Darktrace ActiveAI Security Platform™.

Credit to Ed Metcalf, Senior Director of Product Marketing, AI & Innovations, and Nicole Carignan, VP of Strategic Cyber AI, for their contributions to this blog.


To learn more about Darktrace and AI in cybersecurity download the CISO’s Guide to Cyber AI here.

Download the white paper to learn how buyers should approach purchasing AI-based solutions. It includes:

  • Key steps for selecting AI cybersecurity tools
  • Questions to ask vendors and the responses to expect
  • An overview of available tools and how to find the right fit
  • Guidance on aligning AI investments with security goals and needs

September 15, 2025

SEO Poisoning and Fake PuTTY sites: Darktrace’s Investigation into the Oyster backdoor


What is SEO poisoning?

Search Engine Optimization (SEO) is the legitimate marketing technique of improving the visibility of websites in organic search engine results. Businesses, publishers, and organizations use SEO to ensure their content is easily discoverable by users. Techniques may include optimizing keywords, creating backlinks, or even ensuring mobile compatibility.

SEO poisoning occurs when attackers use these same techniques for malicious purposes. Instead of improving the visibility of legitimate content, threat actors use SEO to push harmful or deceptive websites to the top of search results. This method exploits the common assumption that top-ranking results are trustworthy, leading users to click on URLs without carefully inspecting them.

As part of SEO poisoning, the attacker first registers typosquatted domains: slightly misspelled or otherwise deceptive versions of real software sites, such as putty[.]run or puttyy[.]org. These sites are optimized for SEO to increase their visibility when users search for download links. To achieve this, threat actors may embed pages with strategically chosen, high-value keywords or replicate content from reputable sources to elevate the domain's perceived authority in search engine algorithms [4]. In more advanced operations, these tactics are reinforced with paid promotion, such as Google ads, enabling malicious domains to appear above organic search results as sponsored links. This placement not only accelerates visibility but also imparts an unwarranted sense of legitimacy to unsuspecting users.
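As a simple illustration of how defenders might surface such lookalike domains, the sketch below compares the leading label of a domain against a small watchlist of software brands using edit distance. The watchlist, threshold, and matching rules are illustrative assumptions, not a description of any vendor's logic.

```python
# Minimal sketch: flag candidate typosquats of well-known software domains by
# edit distance. The watchlist and threshold are illustrative, not exhaustive.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]

KNOWN_BRANDS = ["putty", "winscp", "keepass", "notepad++"]

def looks_typosquatted(domain: str, max_distance: int = 2) -> bool:
    # Compare the leading label (e.g. "puttyy" in "puttyy.org") to each brand,
    # either by small edit distance or by the brand appearing inside the label.
    label = domain.lower().split(".")[0]
    return any(
        0 < edit_distance(label, brand) <= max_distance or brand in label
        for brand in KNOWN_BRANDS
    )

for d in ["putty.run", "puttyy.org", "putty-app.naymin.com", "example.com"]:
    print(d, looks_typosquatted(d))
```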

Once a user lands on one of these fake pages, they are presented with what looks like a legitimate software download option. Upon clicking the download link, the user is redirected to a separate domain that actually hosts the payload. This hosting domain is usually unrelated to the software being impersonated. These third-party sites can be recently registered domains, but may also be legitimate websites that have recently been compromised. By spreading malware hosting across a variety of infrastructure, attackers keep their distribution channels available for longer before the malicious files are taken down.

What is the Oyster backdoor?

Oyster, also known as Broomstick or CleanUpLoader, is a C++ based backdoor malware first identified in July 2023. It enables remote access to infected systems, offering features such as command-line interaction and file transfers.

Oyster has been widely adopted by various threat actors, often as an entry point for ransomware attacks. Notable examples include Vanilla Tempest and Rhysida ransomware groups, both of which have been observed leveraging the Oyster backdoor to enhance their attack capabilities. Vanilla Tempest is known for using Oyster’s stealth persistence to maintain long-term access within targeted networks, often aligning their operations with ransomware deployment [5]. Rhysida has taken this further by deploying Oyster as an initial access tool in ransomware campaigns, using it to conduct reconnaissance and move laterally before executing encryption activities [6].

Once installed, the backdoor gathers basic system information before communicating with a command-and-control (C2) server. The malware largely relies on a ‘cmd.exe’ instance to execute commands and launch other files [1].

In previous SEO poisoning cases, the file downloaded from the fake pages is not just PuTTY, but a trojanized version that includes the stealthy Oyster backdoor. PuTTY is a free and open-source terminal emulator for Windows that allows users to connect to remote servers and devices using protocols like SSH and Telnet. In the recent campaign, once a user visits the fake software download site, ranked highly through SEO poisoning, the malicious payload is downloaded through direct user interaction and installed on the local device, initiating the compromise. The installer then performs two actions simultaneously: it installs a fully functional version of PuTTY to avoid user suspicion, while silently deploying the Oyster backdoor. Given PuTTY's purpose, it is predominantly used by IT administrators with highly privileged accounts rather than standard business users, possibly narrowing the scope of the intended targets.

Oyster’s persistence mechanism involves creating a Windows Scheduled Task that runs every few minutes. Notably, the infection uses Dynamic Link Library (DLL) side loading, where a malicious DLL, often named ‘twain_96.dll’, is executed via the legitimate Windows utility ‘rundll32.exe’, which is commonly used to run DLLs [2]. This technique is frequently used by malicious actors to blend their activity with normal system operations.
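For defenders, the combination of a recurring scheduled task and rundll32.exe loading 'twain_96.dll' is a concrete hunting lead. The sketch below shows one way to sweep exported process-creation events (for example, Sysmon Event ID 1) for that pattern; the CSV path and column names ("image", "command_line", "parent_image") are hypothetical and would need to match your own log pipeline.

```python
# Minimal sketch: hunt exported process-creation events (e.g. Sysmon Event ID 1)
# for rundll32.exe loading the DLL name reported in this campaign.
# The CSV path and column names below are hypothetical assumptions.
import csv

SUSPICIOUS_DLL = "twain_96.dll"

def hunt(path: str):
    hits = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            image = row.get("image", "").lower()
            cmdline = row.get("command_line", "").lower()
            if image.endswith("rundll32.exe") and SUSPICIOUS_DLL in cmdline:
                hits.append(row)
    return hits

if __name__ == "__main__":
    for hit in hunt("process_creation_events.csv"):
        print(hit.get("parent_image"), hit.get("command_line"))
```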

Darktrace’s Coverage of the Oyster Backdoor

In June 2025, security analysts at Darktrace identified a campaign leveraging search engine manipulation to deliver malware masquerading as the popular SSH client, PuTTY. Darktrace / NETWORK's anomaly-based detection identified signs of malicious activity and, when properly configured, its Autonomous Response capability swiftly shut down the threat before it could escalate into a more disruptive attack. Subsequent analysis by Darktrace's Threat Research team revealed that the payload was a variant of the Oyster backdoor.

The first indicators of an emerging Oyster SEO campaign typically appeared when user devices navigated to a typosquatted domain, such as putty[.]run or putty-app[.]naymin[.]com, via a TLS/SSL connection.

Figure 1: Darktrace’s detection of a device connecting to the typosquatted domain putty[.]run.

The device would then initiate a connection to a secondary domain that hosts the malicious installer, likely triggered by user interaction with redirect elements on the landing page. This secondary site may not have any immediate connection to PuTTY itself but is instead a hijacked blog, a file-sharing service, or a legitimate-looking content delivery subdomain.

Figure 2: Darktrace’s detection of the device making subsequent connections to the payload domain.

Following installation, multiple affected devices were observed attempting outbound connectivity to rare external IP addresses, specifically requesting the ‘/secure’ endpoint as noted within the declared URIs. After the initial callback, the malware continued communicating with additional infrastructure, maintaining its foothold and likely waiting for tasking instructions. Communication patterns included:

·       Endpoints with URIs /api/kcehc and /api/jgfnsfnuefcnegfnehjbfncejfh

·       Endpoints with URI /reg and user agent “WordPressAgent”, “FingerPrint” or “FingerPrintpersistent”

This tactic has been consistently linked to the Oyster backdoor, which has shown similar URI patterns across multiple campaigns [3].
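These URI and user-agent strings make useful sweep criteria in their own right. Below is a minimal sketch that scans HTTP proxy or network logs, assumed here to be JSON lines with hypothetical field names, for the indicators listed above; it is a starting point for retro-hunting rather than a complete detection.

```python
# Minimal sketch: sweep HTTP logs (JSON lines, hypothetical field names) for
# the C2 URI and user-agent patterns observed in this campaign.
import json
import re

SUSPICIOUS_URIS = {"/api/kcehc", "/api/jgfnsfnuefcnegfnehjbfncejfh", "/reg", "/secure"}
SUSPICIOUS_AGENTS = re.compile(r"^(WordPressAgent|FingerPrint(persistent)?)$")

def sweep(log_path: str):
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            uri = rec.get("uri", "")
            agent = rec.get("user_agent", "")
            if uri in SUSPICIOUS_URIS or SUSPICIOUS_AGENTS.match(agent):
                yield rec

if __name__ == "__main__":
    for rec in sweep("proxy_http.jsonl"):
        print(rec.get("src_ip"), rec.get("dest_ip"), rec.get("uri"), rec.get("user_agent"))
```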

Darktrace analysts also noted the sophisticated use of spoofed user agent strings across multiple investigated customer networks. These headers, which are typically used to identify the application making an HTTP request, are carefully crafted to appear benign or mimic legitimate software. One common example seen in the campaign is the user agent string “WordPressAgent”. While this string references a legitimate web application or plugin, it does not appear to correspond to any known WordPress services or APIs. Its inclusion is most likely designed to mimic background web traffic commonly associated with WordPress-based content management systems.

Figure 3: Cyber AI Analyst investigation linking the HTTP C2 activity.

Case-Specific Observations

While the previous section focused on tactics and techniques common across observed Oyster infections, a closer examination reveals notable variations and unique elements in specific cases. These distinct features, from unusual user agent strings to atypical network behavior, offer valuable insights into the diverse operational approaches employed by the threat actors. Crucially, the divergence in post-exploitation activity reflects a broader trend in the use of widely available malware families like Oyster as flexible entry points rather than fixed tools with a single purpose. This modular use of the backdoor reflects the growing Malware-as-a-Service (MaaS) ecosystem, where a single initial infection can be repurposed depending on the operator's goals.

From Infection to Data Egress

In one incident, Darktrace observed an infected device downloading a ZIP file named 'host[.]zip' via curl from the URI path /333/host[.]zip, following the standard payload delivery chain. This file likely contained additional tools or payloads intended to expand the attacker's capabilities within the compromised environment. Shortly afterwards, the device exhibited indicators of probable data exfiltration, with outbound HTTP POST requests featuring the URI pattern: /upload?dir=NAME_FOLDER/KEY_KEY_KEY/redacted/c/users/public.

This format suggests the malware was actively engaged in local host data staging and attempting to transmit files from the target machine. The affected device, identified as a laptop, aligns with the expected target profile in SEO poisoning scenarios, where unsuspecting end users download and execute trojanized software.
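The same kind of log sweep can be extended to this staging and upload pattern. The sketch below matches outbound POST requests against a regex built from the URI format quoted above; the field names are hypothetical, and the example record stands in for the redacted values in the observed traffic.

```python
# Minimal sketch: flag outbound HTTP POSTs whose URIs match the staging/upload
# pattern quoted above (/upload?dir=.../c/users/public). Field names hypothetical.
import re

UPLOAD_PATTERN = re.compile(r"^/upload\?dir=.+/c/users/public", re.IGNORECASE)

def is_suspicious_egress(record: dict) -> bool:
    return record.get("method", "").upper() == "POST" and bool(
        UPLOAD_PATTERN.match(record.get("uri", ""))
    )

# "host-01" below is a hypothetical stand-in for the redacted segment of the URI.
example = {
    "method": "POST",
    "uri": "/upload?dir=NAME_FOLDER/KEY_KEY_KEY/host-01/c/users/public",
    "dest_ip": "203.0.113.10",
}
print(is_suspicious_egress(example))  # True
```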

Irregular RDP Activity and Scanning Behavior

Several instances within the campaign revealed anomalous or unexpected Remote Desktop Protocol (RDP) sessions occurring shortly after DNS requests to fake PuTTY domains. Unusual RDP connections frequently followed communication with Oyster backdoor C2 servers. Additionally, Darktrace detected patterns of RDP scanning, suggesting the attackers were actively probing for accessible systems within the network. This behavior indicates a move beyond initial compromise toward lateral movement and privilege escalation, common objectives once persistence is established.

The presence of unauthorized and administrative RDP sessions following Oyster infections aligns with the malware’s historical role as a gateway for broader impact. In previous campaigns, Oyster has often been leveraged to enable credential theft, lateral movement, and ultimately ransomware deployment. The observed RDP activity in this case suggests a similar progression, where the backdoor is not the final objective but rather a means to expand access and establish control over the target environment.
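A simple way to approximate this kind of RDP scanning detection is to count how many distinct hosts each source contacts on TCP/3389 within a window. The sketch below does exactly that over generic connection records; the record format and threshold are illustrative assumptions rather than a description of how Darktrace models this behavior.

```python
# Minimal sketch: flag possible RDP scanning by counting how many distinct
# internal hosts each source touches on TCP/3389. Record format and threshold
# are illustrative assumptions.
from collections import defaultdict

def rdp_scan_candidates(conn_records, threshold=20):
    """conn_records: iterable of dicts with 'src_ip', 'dest_ip', 'dest_port'."""
    targets = defaultdict(set)
    for rec in conn_records:
        if rec.get("dest_port") == 3389:
            targets[rec["src_ip"]].add(rec["dest_ip"])
    return {src: len(dsts) for src, dsts in targets.items() if len(dsts) >= threshold}

# Example: one source probing many hosts on 3389 stands out from normal use.
records = [{"src_ip": "10.0.0.5", "dest_ip": f"10.0.1.{i}", "dest_port": 3389} for i in range(30)]
records += [{"src_ip": "10.0.0.9", "dest_ip": "10.0.1.1", "dest_port": 3389}]
print(rdp_scan_candidates(records))   # {'10.0.0.5': 30}
```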

Cryptic User Agent Strings?

In multiple investigated cases, the user agent string identified in these connections featured formatting that appeared nonsensical or cryptic. One such string containing seemingly random Chinese-language characters translated into an unusual phrase: “Weihe river is where the water and river flow.” Legitimate software would not typically use such wording, suggesting that the string was intended as a symbolic marker rather than a technical necessity. Whether meant as a calling card or deliberately crafted to frame attribution, its presence highlights how subtle linguistic cues can complicate analysis.

Figure 4: Darktrace’s detection of malicious connections using a user agent with randomized Chinese-language formatting.

Strategic Implications

What makes this campaign particularly noteworthy is not simply the use of Oyster, but its delivery mechanism. SEO poisoning has traditionally been associated with cybercriminal operations focused on opportunistic gains, such as credential theft and fraud. Its strength lies in casting a wide net, luring unsuspecting users searching for popular software and tricking them into downloading malicious binaries. Unlike other campaigns, SEO poisoning is inherently indiscriminate, given that the attacker cannot control exactly who lands on their poisoned search results. However, in this case, the use of PuTTY as the luring mechanism possibly indicates a narrowed scope - targeting IT administrators and accounts with high privileges due to the nature of PuTTY’s functionalities.

This raises important implications when considered alongside Oyster. As a backdoor often linked to ransomware operations and persistent access frameworks, Oyster is far more valuable as an entry point into corporate or government networks than small-scale cybercrime. The presence of this malware in an SEO-driven delivery chain suggests a potential convergence between traditional cybercriminal delivery tactics and objectives often associated with more sophisticated attackers. If actors with state-sponsored or strategic objectives are indeed experimenting with SEO poisoning, it could signal a broadening of their targeting approaches. This trend aligns with the growing prominence of MaaS and the role of initial access brokers in today’s cybercrime ecosystem.

Whether the operators seek financial extortion through ransomware or longer-term espionage campaigns, the use of such techniques blurs the traditional distinctions. What looks like a mass-market infection vector might, in practice, be seeding footholds for high-value strategic intrusions.

Credit to Christina Kreza (Cyber Analyst) and Adam Potter (Senior Cyber Analyst)

Appendices

MITRE ATT&CK Mapping

·       T1071.001 – Command and Control – Web Protocols

·       T1008 – Command and Control – Fallback Channels

·       T0885 – Command and Control – Commonly Used Port

·       T1571 – Command and Control – Non-Standard Port

·       T1176 – Persistence – Browser Extensions

·       T1189 – Initial Access – Drive-by Compromise

·       T1566.002 – Initial Access – Spearphishing Link

·       T1574.001 – Persistence – DLL

Indicators of Compromise (IoCs)

·       85.239.52[.]99 – IP address

·       194.213.18[.]89/reg – IP address / URI

·       185.28.119[.]113/secure – IP address / URI

·       185.196.8[.]217 – IP address

·       185.208.158[.]119 – IP address

·       putty[.]run – Endpoint

·       putty-app[.]naymin[.]com – Endpoint

·       /api/jgfnsfnuefcnegfnehjbfncejfh

·       /api/kcehc

Darktrace Model Detections

·       Anomalous Connection / New User Agent to IP Without Hostname

·       Anomalous Connection / Posting HTTP to IP Without Hostname

·       Compromise / HTTP Beaconing to Rare Destination

·       Compromise / Large Number of Suspicious Failed Connections

·       Compromise / Beaconing Activity to External Rare

·       Compromise / Quick and Regular Windows HTTP Beaconing

·       Device / Large Number of Model Alerts

·       Device / Initial Attack Chain Activity

·       Device / Suspicious Domain

·       Device / New User Agent

·       Antigena / Network / Significant Anomaly / Antigena Breaches Over Time Block

·       Antigena / Network / External Threat / Antigena Suspicious Activity Block

·       Antigena / Network / Significant Anomaly / Antigena Significant Anomaly from Client Block

References

[1] https://malpedia.caad.fkie.fraunhofer.de/details/win.broomstick

[2] https://arcticwolf.com/resources/blog/malvertising-campaign-delivers-oyster-broomstick-backdoor-via-seo-poisoning-trojanized-tools/

[3] https://hunt.io/blog/oysters-trail-resurgence-infrastructure-ransomware-cybercrime

[4] https://www.crowdstrike.com/en-us/cybersecurity-101/social-engineering/seo-poisoning/

[5] https://blackpointcyber.com/blog/vanilla-tempest-oyster-backdoor-netsupport-unknown-infostealers-soc-incidents-blackpoint-apg/

[6] https://areteir.com/article/rhysida-using-oyster-backdoor-in-attacks/

The content provided in this blog is published by Darktrace for general informational purposes only and reflects our understanding of cybersecurity topics, trends, incidents, and developments at the time of publication. While we strive to ensure accuracy and relevance, the information is provided “as is” without any representations or warranties, express or implied. Darktrace makes no guarantees regarding the completeness, accuracy, reliability, or timeliness of any information presented and expressly disclaims all warranties.

Nothing in this blog constitutes legal, technical, or professional advice, and readers should consult qualified professionals before acting on any information contained herein. Any references to third-party organizations, technologies, threat actors, or incidents are for informational purposes only and do not imply affiliation, endorsement, or recommendation.

Darktrace, its affiliates, employees, or agents shall not be held liable for any loss, damage, or harm arising from the use of or reliance on the information in this blog.

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content without notice.


September 9, 2025

The benefits of bringing together network and email security


In many organizations, network and email security operate in isolation. Each solution is tasked with defending its respective environment, even though both are facing the same advanced, multi-domain threats.  

This siloed approach overlooks a critical reality: email remains the most common vector for initiating cyber-attacks, while the network is the primary stage on which those attacks progress. Without direct integration between these two domains, organizations risk leaving blind spots that adversaries can exploit.  

A modern security strategy needs to unify email and network defenses, not just in name, but in how they share intelligence, conduct investigations, and coordinate response actions. Let’s take a look at how this joined-up approach delivers measurable technical, operational, and commercial benefits.

Technical advantages

Pre-alert intelligence: Gathering data before the threat strikes

Most security tools start working when something goes wrong – an unusual login, a flagged attachment, a confirmed compromise. But by then, attackers may already be a step ahead.

By unifying network and email security under a single AI platform (like the Darktrace ActiveAI Security Platform), you can analyze patterns across both environments in real time, even when there are no alerts. This ongoing monitoring builds a behavioral understanding of every user, device, and domain in your ecosystem.

That means when an email arrives from a suspicious domain, the system already knows whether that domain has appeared on your network before – and whether its behavior has been unusual. Likewise, when new network activity involves a domain first spotted in an email, it’s instantly placed in the right context.

This intelligence isn’t built on signatures or after-the-fact compromise indicators – it’s built on live behavioral baselines, giving your defenses the ability to flag threats before damage is done.
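As a toy illustration of that kind of cross-domain context, the sketch below records domains seen in network telemetry and then enriches an inbound email's sender domain with whether, when, and how often it has been seen before. The data structures and field names are hypothetical and deliberately simplistic compared with a real behavioral platform.

```python
# Minimal sketch: correlate an inbound email's sender domain with domains
# already observed in network telemetry (DNS/TLS logs). The in-memory index
# and field names are hypothetical; real platforms maintain richer baselines.
from collections import defaultdict
from datetime import datetime

class DomainContext:
    def __init__(self):
        self.network_sightings = defaultdict(list)   # domain -> [timestamps]

    def record_network_event(self, domain: str, ts: datetime):
        self.network_sightings[domain.lower()].append(ts)

    def enrich_email(self, sender_domain: str) -> dict:
        sightings = self.network_sightings.get(sender_domain.lower(), [])
        return {
            "sender_domain": sender_domain,
            "seen_on_network_before": bool(sightings),
            "first_seen": min(sightings).isoformat() if sightings else None,
            "sighting_count": len(sightings),
        }

ctx = DomainContext()
ctx.record_network_event("example-vendor.com", datetime(2025, 6, 2, 9, 15))
print(ctx.enrich_email("example-vendor.com"))   # already-seen domain: extra context
print(ctx.enrich_email("never-seen.io"))        # brand-new domain: treated with more suspicion
```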

Alert-related intelligence: Connecting the dots in real time

Once an alert does fire, speed and context matter. The Darktrace Cyber AI Analyst can automatically investigate across both environments, piecing together network and email evidence into a single, cohesive incident.

Instead of leaving analysts to sift through fragmented logs, the AI links events like a phishing email to suspicious lateral movement on the recipient’s device, keeping the full attack chain intact. Investigations that might take hours – or even days – can be completed in minutes, with far fewer false positives to wade through.

This is more than a time-saver. It ensures defenders maintain visibility after the first sign of compromise, following the attacker as they pivot into network infrastructure, cloud services, or other targets. That cross-environment continuity is impossible to achieve with disconnected point solutions or siloed workflows.

Operational advantages

Streamlining SecOps across teams

In many organizations, email security is managed by IT, while network defense belongs to the SOC. The result? Critical information is scattered between tools and teams, creating blind spots just when you need clarity.

When email and network data flow into a single platform, everyone is working from the same source of truth. SOC analysts gain immediate visibility into email threats without opening another console or sending a request to another department. The IT team benefits from the SOC’s deeper investigative context.

The outcome is more than convenience: it’s faster, more informed decision-making across the board.

Reducing time-to-meaning and enabling faster response

A unified platform removes the need to manually correlate alerts between tools, reducing time-to-meaning for every incident. Built-in AI correlation instantly ties together related events, guiding analysts toward coordinated responses with higher confidence.

Instead of relying on manual SIEM rules or pre-built SOAR playbooks, the platform connects the dots in real time, and can even trigger autonomous response actions across both environments simultaneously. This ensures attacks are stopped before they can escalate, regardless of where they begin.

Commercial advantages

While purchasing "best-of-breed" tools for every function might sound appealing, it often leads to a patchwork of solutions with overlapping costs and gaps in coverage. However good a "best-of-breed" email security solution might be in the email realm, it won't be truly effective without visibility across domains and an AI analyst piecing intelligence together. That's why we think "best-in-suite" is the only "best-of-breed" approach that works: choosing a high-quality platform ensures that every new capability strengthens the whole system.

On top of that, security budgets are under constant pressure. Managing separate vendors for email and network defense means juggling multiple contracts, negotiating different SLAs, and stitching together different support models.

With a single provider for both, procurement and vendor management become far simpler. You deal with one account team, one support channel, and one unified strategy for both environments. If you choose to layer on managed services, you get consistent expertise across your whole security footprint.

Even more importantly, an integrated AI platform sets the stage for growth. Once email and network are under the same roof, adding coverage for other attack surfaces – like cloud or identity – is straightforward. You’re building on the same architecture, not bolting on new point solutions that create more complexity.

Check out the white paper, The Modern Security Stack: Why Your NDR and Email Security Solutions Need to Work Together, to explore these benefits in more depth, with real-world examples and practical steps for unifying your defenses.


About the author
Mikey Anderson
Product Marketing Manager, Network Detection & Response