Ryuk Ransomware Targeting Major Companies
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by Max Heinemeyer, Global Field CISO
01 Oct 2019
In recent years, cyber-criminals have increasingly directed their efforts toward sophisticated, long-haul attacks against major companies — a tactic known as “big game hunting.” Unlike standardized phishing campaigns that aim to deliver malware en masse, big game hunting involves exploiting the particular vulnerabilities of a single, high-value target. Catching such attacks requires AI-powered tools that learn what’s normal for each unique user and device, thereby shining a light on the subtle signs of unusual activity that they introduce.
In the threat detailed below, cyber-criminals targeted a major firm with Ryuk ransomware, which Darktrace observed during a trial deployment period. Often deployed in the final stage of such tailored attacks, Ryuk encrypts only the crucial assets in each targeted environment that the attackers have handpicked. Here's how this particular incident unfolded, as well as how AI Autonomous Response technology, had it been in active mode, would have contained the threat in seconds:
Incident overview
Figure 1: Clustering of alerts during intrusion (top right)
Rooted in its evolving understanding of ‘self’ for the targeted firm, Darktrace AI flagged myriad instances of anomalous behavior over the course of the incident — each represented by a dot in the visualization above. The anomalous activity is organized vertically according to how unusual each behavior was in comparison to “normal” for the users and devices involved. The colored dots represent particularly high-confidence detections, which should have prompted immediate investigation by the security team.
Compromised admin keys
The first sign of attack was the highly unusual use of an administrator account not previously seen on the network, suggesting that the attackers had gained access to the account outside the limited scope of the Darktrace trial before moving laterally to the monitored environments. Had Darktrace been deployed across the digital infrastructure, the initial hijacking of the account would have been obvious right away. Nevertheless, Darktrace alerted on the anomalous admin session repeatedly and in real time, as shown below:
Figure 2: Strong detections of compromised admin credential
This behavior is typical of big game hunting. Rather than firing their payload straight away upon accessing the network, the attackers engaged in a longer-term compromise to attain the best position for a crippling attack.
Infiltration via TrickBot
Darktrace then detected the infamous TrickBot banking trojan being downloaded onto the network. While the attackers already had access via the compromised admin credentials, TrickBot was used as a loader for further malicious files and as an additional command & control (C&C) channel, both among the most common post-exploitation steps.
Figure 3: Detection of later-stage Trickbot download
Command & Control communication
Once the TrickBot infection had begun, Darktrace observed C&C communication back to the attackers. While many devices exhibited anomalous behavior, Darktrace pinpointed one device at the nexus of the infection. The image below illustrates the plethora of suspicious connections detected on this single device:
Figure 4: Every coloured dot represents a Darktrace detection; clear chains of malicious activity are visible above
Using TLS Fingerprinting — also called JA3, the subject of a previous blog post — Darktrace detected a new piece of software making encrypted connections from this device to multiple unusual destinations, a behavior known as beaconing.
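For readers unfamiliar with JA3, the fingerprint is simply an MD5 hash over a fixed ordering of ClientHello fields (TLS version, cipher suites, extensions, elliptic curves, and point formats), so a new piece of client software tends to produce a new, rare hash. The sketch below illustrates the idea with placeholder values; it is not derived from data in this incident.

```python
# Minimal sketch of how a JA3 fingerprint is derived from TLS ClientHello fields.
# The numeric values below are illustrative placeholders, not data from this incident.
import hashlib

def ja3_fingerprint(version, ciphers, extensions, curves, point_formats):
    """Join the ClientHello fields in JA3 order and return the MD5 hex digest."""
    fields = [
        str(version),
        "-".join(str(c) for c in ciphers),
        "-".join(str(e) for e in extensions),
        "-".join(str(c) for c in curves),
        "-".join(str(p) for p in point_formats),
    ]
    ja3_string = ",".join(fields)
    return hashlib.md5(ja3_string.encode()).hexdigest()

# A hypothetical client profile; a new, rare hash appearing on a single device is
# what makes this fingerprint useful for spotting unfamiliar software.
print(ja3_fingerprint(771, [4865, 4866, 49195], [0, 10, 11, 35], [29, 23, 24], [0]))
```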
Figure 5: The communication in this graph is filtered down to unusual TLS connections — clearly showing a spike in communication during the compromise
Ransomware encryption commences
Following the establishment of the connection with the C&C infrastructure, the Ryuk ransomware was finally deployed. During this "noisy" period of suspicious SMB activity, Darktrace indicated the seriousness and extent of the attack even more clearly:
Figure 6: A sample of the different, non-signature-dependent ransomware detections that fired
In just 12 hours, Ryuk had encrypted more than 200,000 files. The entire incident took place over 36 hours — after that, the company shut down its network to prevent further damage.
Ransomware retrospective
Following the incident, the business traced the initial compromise back to a part of its network in another country that Darktrace did not have visibility over during the trial period. The infection spread until it reached a recently installed file server that Darktrace was, in fact, monitoring. The attackers likely gained access to an administrative account that had been used to build this server, giving them the access they needed to launch the Ryuk ransomware.
This incident put Darktrace in the unique position of observing a ransomware attack in which none of the alerts were seen or actioned by the internal IT team, demonstrating what such an attack can do absent any intervention and response. Had the company actively monitored its Darktrace deployment, the security team would have received and actioned the alerts in real time, as thousands of Darktrace customers do on a daily basis.
Autonomous Response to the rescue
Had the firm deployed Autonomous Response technology, the lack of attention afforded to Darktrace’s alerts would not have mattered. Whereas four hours passed from the executable download to the first encrypted file, Autonomous Response would have neutralized the threat within seconds, preventing widespread damage and giving the security team the crucial time to catch up.
The screenshot below shows an excerpt of Darktrace’s detections at the beginning of the file server compromise. The detections are listed in chronological order from bottom to top, along with the action that Darktrace’s AI Autonomous Response tool, Antigena, would have taken:
In sum, Antigena would have taken appropriate action by enforcing normal behavior, rather than applying a binary block (e.g. completely quarantining the device) as legacy tools would.
React2Shell: How Opportunist Attackers Exploited CVE-2025-55182 Within Hours
What is React2Shell?
CVE-2025-55182, also known as React2Shell, is a vulnerability in React Server Components that allows an unauthenticated attacker to gain remote code execution with a single request. The severity of the vulnerability and its ease of exploitation led threat actors to begin opportunistically exploiting it within days of its public disclosure.
Darktrace security researchers rapidly deployed a new honeypot using the Cloudypots system, allowing for the monitoring of exploitation of the vulnerability in the wild.
Cloudypots is a system that enables virtual instances of vulnerable applications to be deployed in the cloud and monitored for attack. This approach allows Darktrace to deploy high-interaction, realistic honeypots that appear to attackers as genuine deployments of vulnerable software.
This blog will explore one such campaign, nicknamed “Nuts & Bolts” based on the naming used in payloads.
Analysis of the React2Shell exploit
The React2Shell exploit relies on an insecure deserialization vulnerability within React Server Components’ “Flight” protocol. This protocol uses a custom serialization scheme that security researchers discovered could be abused to run arbitrary JavaScript by crafting the serialized data in a specific way. This is possible because the framework did not perform proper type checking, allowing an attacker to reference types that can be abused to craft a chain that resolves to an anonymous function, and then invoke it with the desired JavaScript as a promise chain.
This code execution can then be used to load the ‘child_process’ node module and execute any command on the target server.
The vulnerability was discovered on December 3, 2025, with a patch made available on the same day [1]. Within 30 hours of the patch, a publicly available proof of concept emerged that could be used to exploit any vulnerable server. This rapid timeline meant many servers remained unpatched by the time attackers began actively exploiting the vulnerability.
Initial access
The threat actor behind the “Nuts & Bolts” campaign uses a spreader server with IP 95.214.52[.]170 to infect victims. The IP appears to be located in Poland and is associated with a hosting provider known as MEVSPACE. The spreader is highly aggressive, launching exploitation attempts roughly every hour.
When scanning, the spreader primarily targets port 3000, which is the default port for a Next.js server in a default or development configuration. It is possible the attacker is avoiding ports 80 and 443, as these are more likely to sit behind reverse proxies or WAFs, which could disrupt exploitation attempts.
When the spreader finds a new host with port 3000 open, it begins by testing if it is vulnerable to React2Shell by sending a crafted request to run the ‘whoami’ command and store the output in an error digest that is returned to the attacker.
The above snippet is the core part of the crafted request that performs the execution. This allows the attacker to confirm that the server is vulnerable and to fetch the user account under which the Next.js process is running, useful information for determining whether a target is worth attacking.
From here, the attacker then sends an additional request to run the actual payload on the victim server.
This snippet attempts to download several payloads into the /dev directory using wget (or curl if wget fails) and execute them. The x86 binary is a Mirai variant that does not appear to have any major alterations from standard Mirai. The ‘nuts/bolts’ endpoint returns a bash script, which is then executed. The script includes several log statements throughout its execution to provide visibility into which parts ran successfully. As with the ‘whoami’ request, the output is placed in an error digest for the attacker to review.
In this case, the command-and-control (C2) IP, 89[.]144.31.18, is hosted on a different server operated by a German hosting provider named myPrepaidServer, which offers virtual private server (VPS) services and accepts cryptocurrency payments [2].
Figure 1: Logs observed in the Next.js console as a result of exploitation. In this case, the honeypot was attacked just two minutes after being deployed.
Nuts & Bolts script
This script’s primary purpose is to prepare the box for a cryptocurrency miner.
The script starts by attempting to terminate any competing cryptocurrency miner processes, using ‘pkill’ to match on specific names. It will check for and terminate:
xmrig
softirq (this also matches a system process, which it will fail to kill on each invocation)
watcher
/tmp/a.sh
health.sh
Following this, the script checks for a process named “fghgf”. If it is not running, the script retrieves hxxp://89[.]144.31.18/nuts/lc and writes it to /dev/ijnegrrinje.json, then retrieves hxxp://89[.]144.31.18/nuts/x and writes it to /dev/fghgf. The script then executes /dev/fghgf -c /dev/ijnegrrinje.json -B in the background; /dev/fghgf is an XMRig miner.
Figure 2: The XMRig deployment script.
The miner is configured to connect to two private pools at 37[.]114.37.94 and 37[.]114.37.82, using “poop” as both the username and password. The use of a private pool conceals the associated wallet address. From here, a short bash script is dropped to /dev/stink.sh. This script continuously crawls all running processes on the system and reads each one’s /proc/pid/exe path, which points to the original executable that was run. The ‘strings’ utility is run to extract all printable ASCII strings from the binary, and the script checks whether the output contains “xmrig”, “rondo” or “UPX 5”. If so, it sends a SIGKILL to terminate the process.
Additionally, it runs ‘ls -l’ on the exe path in case it is symlinked to a specific path or has been deleted. If the output contains any of the following strings, the script sends a SIGKILL to terminate the program:
(deleted) - Indicates that the original executable was deleted from the disk, a common tactic used by malware to evade detection.
xmrig
hash
watcher
/dev/a
softirq
rondo
UPX 5.02
Figure 3: The killer loop and the dropper. In this case ${R}/${K} resolves to /dev/stink.sh.
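For defenders who want to check hosts for this activity, the short Python sketch below looks for the dropped file paths and process names described above. It is a minimal triage aid written for this post, not part of the campaign tooling, and it assumes a Linux host with a readable /proc filesystem.

```python
# Illustrative triage check for the host artifacts described above. The paths and
# process names come from the analysis; the script itself is written for this post
# and assumes a Linux host with a readable /proc filesystem.
import os

ARTIFACT_PATHS = ["/dev/fghgf", "/dev/ijnegrrinje.json", "/dev/stink.sh"]
SUSPICIOUS_NAMES = {"fghgf", "stink.sh"}

def dropped_files():
    """Return any of the known dropped file paths that exist on this host."""
    return [p for p in ARTIFACT_PATHS if os.path.exists(p)]

def suspicious_processes():
    """Return (pid, name) pairs whose process name matches the campaign's binaries."""
    hits = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
        except OSError:
            continue  # process exited or access denied
        if name in SUSPICIOUS_NAMES:
            hits.append((pid, name))
    return hits

if __name__ == "__main__":
    print("Dropped files present:", dropped_files())
    print("Suspicious processes:", suspicious_processes())
```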
Darktrace observations in customer environments
Following the public disclosure of CVE-2025-55182 on December 3, Darktrace observed multiple exploitation attempts across customer environments beginning around December 4. Darktrace triage identified a series of consistent indicators of compromise (IoCs). By consolidating indicators across multiple deployments and repeat infrastructure clusters, Darktrace identified a consistent kill chain involving shell-script downloads and HTTP beaconing.
In one example, on December 5, Darktrace observed external connections to malicious IoC endpoints (172.245.5[.]61:38085, 5.255.121[.]141, 193.34.213[.]15), followed by additional connections to other potentially malicious endpoints. These appeared related to the IoCs detailed above, as one suspicious IP address shared the same ASN. After this suspicious external connectivity, Darktrace observed cryptomining-related activity. A few hours later, the device initiated potential lateral movement activity, attempting SMB and RDP sessions with other internal devices on the network. This chain of events suggests the activity was related to the campaign exploiting the React2Shell vulnerability.
Generally, outbound HTTP traffic was observed to ports in the range of 3000–3011, most notably port 3001. Requests frequently originated from scripted tools, with user agents such as curl/7.76.1, curl/8.5.0, Wget/1.21.4, and other generic HTTP signatures. The URIs associated with these requests included paths like /nuts/x86 and /n2/x86, as well as long, randomized shell script names such as /gfdsgsdfhfsd_ghsfdgsfdgsdfg.sh. In some cases, parameterized loaders were observed, using query strings like: /?h=<ip>&p=<port>&t=<proto>&a=l64&stage=true.
Infrastructure analysis revealed repeated callbacks to IP-only hosts linked to ASN AS200593 (Prospero OOO), a well-known “bulletproof” hosting provider often utilized by cyber criminals [3], including addresses such as 193.24.123[.]68:3001 and 91.215.85[.]42:3000, alongside other nodes hosting payloads and staging content.
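The network indicators above lend themselves to a quick retrospective log search. The sketch below flags lines in a plain-text web or proxy log that match the observed URI paths, the parameterized loader query string, scripted user agents, or the campaign IP addresses; the input format is an assumption, so adapt the parsing to your own telemetry.

```python
# Illustrative retrospective search for the network indicators described above.
# The regexes cover the observed URI paths, loader query string, scripted user
# agents, and (refanged) campaign IPs; adapt the input format to your own logs.
import re
import sys

INDICATORS = [
    re.compile(r"/nuts/\S+"),                                  # e.g. /nuts/x86, /nuts/bolts
    re.compile(r"/n2/x86"),
    re.compile(r"\?h=[\d.]+&p=\d+&t=\w+&a=l64&stage=true"),    # parameterized loader
    re.compile(r"\b(curl|Wget)/[\d.]+"),                       # scripted download tools
    re.compile(r"\b(95\.214\.52\.170|89\.144\.31\.18|193\.24\.123\.68|91\.215\.85\.42)\b"),
]

def scan(lines):
    """Yield any log line that matches one of the indicator patterns."""
    for line in lines:
        if any(pattern.search(line) for pattern in INDICATORS):
            yield line.rstrip()

if __name__ == "__main__":
    # Usage: python scan_iocs.py < access.log
    for hit in scan(sys.stdin):
        print(hit)
```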
Darktrace model coverage
Darktrace model coverage consistently highlighted behaviors indicative of exploitation. Among the most frequent detections were anomalous server activity on new, non-standard ports and HTTP requests posted to IP addresses without hostnames, often using uncommon application protocols. Models also flagged the appearance of new user agents such as curl and wget originating from internet-facing systems, representing an unusual deviation from baseline behavior.
Additionally, observed activity included the download of scripts and executable files from rare external sources, with Darktrace’s Autonomous Response capability intervening to block suspicious transfers, when enabled. Beaconing patterns were another strong signal, with detections for HTTP beaconing to new or rare IP addresses, sustained SSL or HTTP increases, and long-running compromise indicators such as “Beacon for 4 Days” and “Slow Beaconing.”
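As a simple illustration of what beaconing looks like statistically (and not a description of Darktrace's models), the toy sketch below scores how regular the intervals are between connections from one device to one destination; near-constant spacing across many connections is a classic beaconing signature.

```python
# A toy illustration of beaconing detection, not a description of Darktrace's models:
# flag a destination when a device contacts it many times at near-constant intervals.
from statistics import mean, pstdev

def beacon_score(timestamps, min_events=10):
    """Return the coefficient of variation of inter-connection gaps (lower = more regular)."""
    if len(timestamps) < min_events:
        return None
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    avg = mean(gaps)
    return pstdev(gaps) / avg if avg else None

# Hypothetical connection times (seconds) from one device to one rare external IP,
# roughly once a minute with a little jitter.
connection_times = [i * 60 + 0.5 * (i % 3) for i in range(20)]
score = beacon_score(connection_times)
if score is not None and score < 0.1:
    print(f"Possible beaconing: interval variation {score:.3f}")
```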
Conclusion
While this opportunistic campaign exploiting the React2Shell vulnerability is not particularly sophisticated, it demonstrates that attackers can rapidly prototype new methods to take advantage of novel vulnerabilities before widespread patching occurs. With a time to infection of only two minutes from the initial deployment of the honeypot, this serves as a clear reminder that applying patches as soon as they are released is paramount.
Credit to Nathaniel Bill (Malware Research Engineer), George Kim (Analyst Consulting Lead – AMS), Calum Hall (Technical Content Researcher), Tara Gould (Malware Research Lead), and Signe Zaharka (Principal Cyber Analyst).
Organizations are built on increasingly complex digital estates. Nowadays, the average IT ecosystem spans a large web of interconnected domains like identity, network, cloud, and email.
While these domain-specific technologies may boost business efficiency and scalability, they also provide blind spots where attackers can shelter undetected. Threat actors can slip past defenses because security teams often use different detection tools in each realm of their digital infrastructure. Adversaries will purposefully execute different stages of an attack across different domains, ensuring no single tool picks up too many traces of their malicious activity. Identifying and investigating this type of threat, known as a cross-domain attack, requires mastery in event correlation.
For example, one isolated network scan detected on your network may seem harmless at first glance. Only when it is stitched together with a rare O365 login, a new email rule and anomalous remote connections to an S3 bucket in AWS does it begin to manifest as an actual intrusion.
However, there are a whole host of other challenges that arise with detecting this type of attack. Accessing those alerts in the respective on-premise network, SaaS and IaaS environments, understanding them and identifying which ones are related to each other takes significant experience, skill and time. And time favours no one but the threat actor.
Figure 1: Anatomy of a cross domain attack
Diverse domains and empty grocery shelves
In April 2025, the UK faced a throwback to pandemic-era shortages when the supermarket giant Marks & Spencer (M&S) was crippled by a cyberattack, leaving empty shelves across its stores and massive disruptions to its online service.
The threat actors, a group called Scattered Spider, exploited multiple layers of the organization’s digital infrastructure. Notably, the group was able to bypass the perimeter not by exploiting a technical vulnerability, but by exploiting an identity. They used social engineering tactics to impersonate an M&S employee and successfully request a password reset.
Once authenticated on the network, they accessed the Windows domain controller and exfiltrated the NTDS.dit file – a critical file containing hashed passwords for all users in the domain. After cracking those hashes offline, they returned to the network with escalated privileges and set their sights on the M&S cloud infrastructure. They then launched the encryption payload on the company’s ESXi virtual machines.
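As an aside for defenders: extraction of NTDS.dit commonly relies on tooling such as ntdsutil's "ifm" mode or volume shadow copies, which leaves traces in process-creation logs. The sketch below is a hypothetical hunt over exported process-creation events; the one-object-per-line JSON layout and field names are assumptions and should be mapped to your own log schema.

```python
# Hypothetical hunt for common NTDS.dit extraction techniques (ntdsutil "ifm",
# volume shadow copies) in exported process-creation events. The one-JSON-object-
# per-line layout and field names are assumptions; map them to your own log schema.
import json
import re
import sys

SUSPICIOUS = [
    re.compile(r"ntdsutil.+ifm", re.IGNORECASE),
    re.compile(r"vssadmin\s+create\s+shadow", re.IGNORECASE),
    re.compile(r"ntds\.dit", re.IGNORECASE),
]

def hunt(path):
    """Yield (time, host, command line) for events whose command line looks suspicious."""
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            cmd = event.get("CommandLine", "")
            if any(pattern.search(cmd) for pattern in SUSPICIOUS):
                yield event.get("TimeCreated"), event.get("Computer"), cmd

if __name__ == "__main__":
    # Usage: python hunt_ntds.py process_creation_events.jsonl
    for when, host, cmd in hunt(sys.argv[1]):
        print(when, host, cmd, sep=" | ")
```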
To wrap up, the threat actors used a compromised employee’s email account to send an “abuse-filled” email to the M&S CEO, bragging about the hack and demanding payment. This was possibly more of a psychological attack on the CEO than a technically integral part of the cyber kill chain. However, it revealed yet another one of M&S’s domains had been compromised.
In summary, the group’s attack spanned four different domains:
Identity: Social engineering user impersonation
Network: Exfiltration of NTDS.dit file
Cloud: Ransomware deployed on ESXi VMs
Email: Compromise of user account to contact the CEO
Adept at exploiting nuance
This year alone, several high-profile cyber-attacks have been attributed to the same group, Scattered Spider, including the hacks on Victoria’s Secret, Adidas, Hawaiian Airlines, WestJet, the Co-op and Harrods. It raises the question: what has made this group so successful?
In the M&S attack, they showcased their advanced proficiency in social engineering, which they use to bypass identity controls and gain initial access. They demonstrated deep knowledge of cloud environments by deploying ransomware onto virtualised infrastructure. However, this does not exemplify a cookie-cutter template of attack methods that brings them success every time.
According to CISA, Scattered Spider typically use a remarkable variety of TTPs (tactics, techniques and procedures) across multiple domains to carry out their campaigns. From leveraging legitimate remote access tools in the network, to manipulating AWS EC2 cloud instances or spoofing email domains, the list of TTPs used by the group is eye-wateringly long. Additionally, the group reportedly evades detection by “frequently modifying their TTPs”.
If only they had better intentions. Any security director would be proud of a red team who not only has this depth and breadth of domain-centric knowledge but is also consistently upskilling.
Yet, staying ahead of adversaries who seamlessly move across domains and fluently exploit every system they encounter is just one of many hurdles security teams face when investigating cross-domain attacks.
Resource-heavy investigations
There was a significant delay in time to detection of the M&S intrusion. News outlet BleepingComputer reported that attackers infiltrated the M&S network as early as February 2025. They maintained persistence for weeks before launching the attack in late April 2025, indicating that early signs of compromise were missed or not correlated across domains.
While it’s unclear exactly why M&S missed the initial intrusion, one can speculate about the unique challenges investigating cross-domain attacks present.
Challenges of cross-domain investigation
First and foremost, correlation work is arduous because the string of malicious behaviour doesn’t always stem from the same device.
A hypothetical attack could begin with an O365 credential creating a new email rule. Weeks later, that same credential authenticates anomalously on two different devices. One device downloads an .exe file from a strange website, while the other starts beaconing every minute to a rare external IP address that no one else in the organisation has ever connected to. A month later, a third device downloads 1.3 GiB of data from a recently spun up S3 bucket and gradually transfers a similar amount of data to that same rare IP.
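To make the correlation problem concrete, the toy sketch below links hypothetical alerts from email, identity, network and cloud tools into a single incident whenever they share an entity such as a user, device or destination IP. It mirrors the scenario above but is purely illustrative; it is not how any particular product implements correlation.

```python
# A toy sketch of cross-domain correlation: alerts from different tools are linked
# into one incident whenever they share an entity (user, device or destination IP).
# The alerts below are hypothetical and mirror the scenario described above.
from collections import defaultdict
from itertools import combinations

alerts = [
    {"time": 0, "domain": "email",    "entities": {"user:alice"},                 "detail": "new inbox rule created"},
    {"time": 1, "domain": "identity", "entities": {"user:alice", "device:A"},     "detail": "anomalous login"},
    {"time": 1, "domain": "identity", "entities": {"user:alice", "device:B"},     "detail": "anomalous login"},
    {"time": 2, "domain": "network",  "entities": {"device:B", "ip:203.0.113.9"}, "detail": "beaconing to rare IP"},
    {"time": 3, "domain": "cloud",    "entities": {"device:C", "bucket:new-s3"},  "detail": "1.3 GiB downloaded from new bucket"},
    {"time": 3, "domain": "network",  "entities": {"device:C", "ip:203.0.113.9"}, "detail": "large upload to the same rare IP"},
]

# Union-find: merge alerts that share at least one entity.
parent = list(range(len(alerts)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

for i, j in combinations(range(len(alerts)), 2):
    if alerts[i]["entities"] & alerts[j]["entities"]:
        union(i, j)

clusters = defaultdict(list)
for idx, alert in enumerate(alerts):
    clusters[find(idx)].append(alert)

# Surface only clusters that span more than one domain, in time order.
for members in clusters.values():
    if len({a["domain"] for a in members}) > 1:
        print("Correlated cross-domain incident:")
        for a in sorted(members, key=lambda a: a["time"]):
            print(f"  t={a['time']} [{a['domain']}] {a['detail']}")
```

In practice, the hard part is producing reliable shared entities across telemetry sources in the first place, which is exactly where the time and skill costs described below accumulate.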
Amid a sea of alerts and false positives, connecting the dots of a malicious attack like this takes time and meticulous correlation. Factor in the nuanced telemetry data related to each domain and things get even more complex.
An analyst who specialises in network security may not understand the unique logging formats or API calls in the cloud environment. Perhaps they are proficient in protecting the Windows Active Directory but are unfamiliar with cloud IAM.
Cloud is also an inherently more difficult domain to investigate. With 89% of organizations now operating in multi-cloud environments, time must be spent collecting logs, snapshots and access records. Coupled with the threat of an ephemeral asset disappearing, the risk of missing a threat is high. These are some of the reasons why research shows that 65% of organisations spend 3-5 extra days investigating cloud incidents.
Helpdesk teams handling user requests over the phone require a different set of skills altogether. Imagine a threat actor posing as an employee and articulately requesting an urgent password reset or a temporary MFA deactivation. The junior Helpdesk agent— unfamiliar with the exception criteria, eager to help and feeling pressure from the persuasive manipulator at the end of the phoneline—could easily fall victim to this type of social engineering.
Empowering analysts through intelligent automation
Even the most skilled analysts can’t manually piece together every strand of malicious activity stretching across domains. But skill alone isn’t enough. The biggest hurdle in investigating these attacks often comes down to whether the team have the time, context, and connected visibility needed to see the full picture.
Many organizations attempt to bridge the gap by stitching together a patchwork of security tools. One platform for email, another for endpoint, another for cloud, and so on. But this fragmentation reinforces the very silos that cross-domain attacks exploit. Logs must be exported, normalized, and parsed across tools, a process that is not only error-prone but slow. By the time indicators are correlated, the intrusion has often already deepened.
That’s why automation and AI are becoming indispensable. The future of cross-domain investigation lies in systems that can:
Automatically correlate activity across domains and data sources, turning disjointed alerts into a single, interpretable incident.
Generate and test hypotheses autonomously, identifying likely chains of malicious behaviour without waiting for human triage.
Explain findings in human terms, reducing the knowledge gap between junior and senior analysts.
Operate within and across hybrid environments, from on-premise networks to SaaS, IaaS, and identity systems.
This is where Darktrace transforms alerting and investigations. Darktrace’s Cyber AI Analyst automates the process of correlation, hypothesis testing, and narrative building, not just within one domain, but across many. An anomalous O365 login, a new S3 bucket, and a suspicious beaconing host are stitched together automatically, surfacing the story behind the alerts rather than leaving it buried in telemetry.
Figure 2: How threat activity is correlated in Cyber AI Analyst
By analyzing events from disparate tools and sources, AI Analyst constructs a unified timeline of activity showing what happened, how it spread, and where to focus next. For analysts, it means investigation time is measured in minutes, not days. For security leaders, it means every member of the SOC, regardless of experience, can contribute meaningfully to a cross-domain response.
Figure 3: Correlation showcasing cross domains (SaaS and IaaS) in Cyber AI Analyst
Until now, forensic investigations were slow, manual, and reserved for only the largest organizations with specialized DFIR expertise. Darktrace / Forensic Acquisition & Investigation changes that by leveraging the scale and elasticity of the cloud itself to automate the entire investigation process. From capturing full disk and memory at detection to reconstructing attacker timelines in minutes, the solution turns fragmented workflows into streamlined investigations available to every team.
What once took days now takes minutes. Now, forensic investigations in the cloud are faster, more scalable, and finally accessible to every security team, no matter their size or expertise.