Blog / Cloud / September 8, 2025

Unpacking the Salesloft Incident: Insights from Darktrace Observations

Darktrace identified anomalous activity within its customer base linked to a supply chain attack exploiting Salesloft integrations in August 2025. Learn about the campaign and how Darktrace detects and responds to such threats using AI-driven threat detection and Autonomous Response capabilities.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Emma Foulger
Global Threat Research Operations Lead
Written by
Calum Hall
Technical Content Researcher

Introduction

On August 26, 2025, Google Threat Intelligence Group released a report detailing a widespread data theft campaign targeting the sales automation platform Salesloft via compromised OAuth tokens used by the third-party Drift AI chat agent [1][2]. The attack has been attributed to the threat actor UNC6395 by Google Threat Intelligence and Mandiant [1].

The attack is believed to have begun in early August 2025 and continued through mid-August 2025 [1], with the threat actor exporting significant volumes of data from multiple Salesforce instances [1] and then sifting through this data for anything that could be used to compromise victims’ environments, such as access keys, tokens, or passwords. This led Google Threat Intelligence Group to assess that the threat actor’s primary intent is credential harvesting; the group later reported that it was aware of more than 700 potentially impacted organizations [3].

Salesloft previously stated that, based on currently available data, customers that do not integrate with Salesforce are unaffected by this campaign [2]. However, on August 28, Google Threat Intelligence Group announced that “Based on new information identified by GTIG, the scope of this compromise is not exclusive to the Salesforce integration with Salesloft Drift and impacts other integrations” [2]. Google Threat Intelligence has since advised that any and all authentication tokens stored in or connected to the Drift platform be treated as potentially compromised [1].

This campaign demonstrates how attackers are increasingly exploiting trusted Software-as-a-Service (SaaS) integrations as a pathway into enterprise environments.

By abusing these integrations, threat actors were able to exfiltrate sensitive business data at scale, bypassing traditional security controls. Rather than relying on malware or obvious intrusion techniques, the adversaries leveraged valid credentials and API traffic that resembled routine Salesforce activity. This type of activity is far harder to detect with conventional security tools, since it blends in with the daily noise of business operations.

The incident underscores the growing importance of autonomous coverage within SaaS and third-party ecosystems. As businesses increasingly depend on interconnected platforms, visibility gaps emerge that conventional perimeter and endpoint defenses cannot close.

By developing a behavioral understanding of each organization’s distinct use of cloud services, anomalies can be detected, such as logins from unexpected locations, unusually high volumes of API requests, or unusual document activity. These indicators serve as an early alert system, even when intruders use legitimate tokens or accounts, enabling security teams to step in before extensive data exfiltration takes place.

What happened?

The campaign is believed to have started on August 8, 2025, with malicious activity continuing until at least August 18. The threat actor, tracked as UNC6395, gained access via compromised OAuth tokens associated with Salesloft Drift integrations into Salesforce [1]. Once tokens were obtained, the attackers were able to issue large volumes of Salesforce API requests, exfiltrating sensitive customer and business data.

Initial Intrusion

The attackers first established access by abusing OAuth and refresh tokens from the Drift integration. These tokens gave them persistent access into Salesforce environments without requiring further authentication [1]. To expand their foothold, the threat actor also made use of TruffleHog [4], an open-source secrets scanner, to hunt for additional exposed credentials. Logs later revealed anomalous IAM updates, including unusual UpdateAccessKey activity, which suggested attempts to ensure long-term persistence and control within compromised accounts.
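TruffleHog works by matching known credential formats (and high-entropy strings) against text. As a toy illustration of that pattern-matching idea, not TruffleHog itself, the snippet below uses two simplified stand-in regexes; real scanners combine hundreds of detectors with entropy analysis and live credential verification.

```python
import re

# Simplified detector patterns in the spirit of secrets scanners like
# TruffleHog. Illustrative only; real rules are far more extensive.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "slack_token": re.compile(r"\bxox[baprs]-[0-9A-Za-z-]{10,}\b"),
}

def scan_for_secrets(text: str) -> list:
    """Return (pattern_name, match) pairs found in a blob of exported data."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

# The key below is AWS's own documentation example, not a real credential.
sample = "support case notes: key=AKIAIOSFODNN7EXAMPLE attached by user"
print(scan_for_secrets(sample))  # → [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE')]
```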

Internal Reconnaissance & Data Exfiltration

Once inside, the adversaries began exploring the Salesforce environments. They ran queries designed to pull sensitive data fields, focusing on objects such as Cases, Accounts, Users, and Opportunities [1]. At the same time, the attackers sifted through this information to identify secrets that could enable access to other systems, including AWS keys and Snowflake credentials [4]. This phase demonstrated the opportunistic nature of the campaign, with the actors looking for any data that could be repurposed for further compromise.

Lateral Movement

Salesloft and Mandiant investigations revealed that the threat actor also created at least one new user account in early September. Although follow-up activity linked to this account was limited, the creation itself suggested a persistence mechanism designed to survive remediation efforts. By maintaining a separate identity, the attackers ensured they could regain access even if their stolen OAuth tokens were revoked.

Accomplishing the mission

The data taken from Salesforce environments included valuable business records, which attackers used to harvest credentials and identify high-value targets. According to Mandiant, once the data was exfiltrated, the actors actively sifted through it to locate sensitive information that could be leveraged in future intrusions [1]. In response, Salesforce and Salesloft revoked OAuth tokens associated with Drift integrations on August 20 [1], a containment measure aimed at cutting off the attackers’ primary access channel and preventing further abuse.

How did the attack bypass the rest of the security stack?

The campaign effectively bypassed security measures by using legitimate credentials and OAuth tokens through the Salesloft Drift integration. This rendered traditional security defenses like endpoint protection and firewalls ineffective, as the activity appeared non-malicious [1]. The attackers blended into normal operations by using common user agents and making queries through the Salesforce API, which made their activity resemble legitimate integrations and scripts. This allowed them to operate undetected in the SaaS environment, exploiting the trust in third-party connections and highlighting the limitations of traditional detection controls.

Darktrace Coverage

Darktrace identified anomalous activity across multiple customer deployments that appears to be associated with this campaign, including two cases involving US-based customers with Salesforce integrations, where the pattern of activity was notably similar.

On August 17, Darktrace observed an account belonging to one of these customers logging in from the rare endpoint 208.68.36[.]90, while the user was seen active from another location. This IP is a known indicator of compromise (IoC) reported by open-source intelligence (OSINT) for the campaign [2].

Figure 1: Cyber AI Analyst Incident summarizing the suspicious login seen for the account.

The login event was associated with the application Drift, further connecting the events to this campaign.

Figure 2: Advanced Search logs showing the Application used to login.

Following the login, the actor initiated a high volume of Salesforce API requests using methods such as GET, POST, and DELETE. The GET requests targeted endpoints like /services/data/v57.0/query and /services/data/v57.0/sobjects/Case/describe, where the former is used to retrieve records based on a specific criterion, while the latter provides metadata for the Case object, including field names and data types [5,6].

Subsequently, a POST request to /services/data/v57.0/jobs/query was observed, likely to initiate a Bulk API query job for extracting large volumes of data from the Ingest Job endpoint [7,8].

Finally, a DELETE request was observed removing an ingestion job batch, possibly an attempt to obscure traces of prior data access or manipulation.
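Taken together, the observed chain maps onto standard Salesforce REST and Bulk API 2.0 endpoints. The sketch below builds, without sending, a request sequence mirroring that activity; the SOQL string, job ID, and DELETE path are illustrative placeholders rather than recovered attacker values.

```python
API = "/services/data/v57.0"  # API version seen in the observed requests

def build_request_chain(soql: str, job_id: str) -> list:
    """(method, path) pairs mirroring the observed attacker activity."""
    return [
        # Retrieve records matching a SOQL query.
        ("GET", f"{API}/query?q={soql}"),
        # Pull Case object metadata: field names and data types.
        ("GET", f"{API}/sobjects/Case/describe"),
        # Create a Bulk API 2.0 query job for large-volume extraction.
        ("POST", f"{API}/jobs/query"),
        # Delete the job afterwards, removing traces of the data access
        # (placeholder path; the exact resource deleted was not published).
        ("DELETE", f"{API}/jobs/query/{job_id}"),
    ]

for method, path in build_request_chain("SELECT+Id+FROM+Case", "750XX0000000001"):
    print(method, path)
```

Hunting for this exact method/endpoint sequence within a one-minute window is one way to translate the incident narrative into a detection query.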

A case on another US-based customer took place a day later, on August 18. This again began with an account logging in from the rare IP 208.68.36[.]90 involving the application Drift. This was followed by Salesforce GET requests targeting the same endpoints as seen in the previous case, and then a POST to the Ingest Job endpoint and finally a DELETE request, all occurring within one minute of the initial suspicious login.

The chain of anomalous behaviors, including a suspicious login and delete request, resulted in Darktrace’s Autonomous Response capability suggesting a ‘Disable user’ action. However, the customer’s deployment configuration required manual confirmation for the action to take effect.

Figure 3: An example model alert for the user, triggered due to an anomalous API DELETE request.
Figure 4: Model Alert Event Log showing various model alerts for the account that ultimately led to an Autonomous Response model being triggered.

Conclusion

In conclusion, this incident underscores the escalating risks of SaaS supply chain attacks, where third-party integrations can become intrusion vectors, and demonstrates how adversaries can exploit legitimate OAuth tokens and API traffic to circumvent traditional defenses. It emphasizes the necessity of continuous monitoring of SaaS and cloud activity, beyond just endpoints and networks, while also reinforcing the importance of applying least privilege access and routinely reviewing OAuth permissions in cloud environments. More broadly, it illustrates the threat landscape’s shift toward credential and token abuse rather than malware-driven compromise.

Credit to Emma Foulger (Global Threat Research Operations Lead), Calum Hall (Technical Content Researcher), Signe Zaharka (Principal Cyber Analyst), Min Kim (Senior Cyber Analyst), Nahisha Nobregas (Senior Cyber Analyst), Priya Thapa (Cyber Analyst)

Appendices

Darktrace Model Detections

·      SaaS / Access / Unusual External Source for SaaS Credential Use

·      SaaS / Compromise / Login From Rare Endpoint While User Is Active

·      SaaS / Compliance / Anomalous Salesforce API Event

·      SaaS / Unusual Activity / Multiple Unusual SaaS Activities

·      Antigena / SaaS / Antigena Unusual Activity Block

·      Antigena / SaaS / Antigena Suspicious Source Activity Block

Customers should consider integrating Salesforce with Darktrace where possible. These integrations provide better visibility and correlation, helping to spot unusual behavior and potential threats.

IoC List

(IoC – Type)

·      208.68.36[.]90 – IP Address

References

1.     https://cloud.google.com/blog/topics/threat-intelligence/data-theft-salesforce-instances-via-salesloft-drift

2.     https://trust.salesloft.com/?uid=Drift+Security+Update%3ASalesforce+Integrations+%283%3A30PM+ET%29

3.     https://thehackernews.com/2025/08/salesloft-oauth-breach-via-drift-ai.html

4.     https://unit42.paloaltonetworks.com/threat-brief-compromised-salesforce-instances/

5.     https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_query.htm

6.     https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/resources_sobject_describe.htm

7.     https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/get_job_info.htm

8.     https://developer.salesforce.com/docs/atlas.en-us.api_asynch.meta/api_asynch/query_create_job.htm


Blog / Cloud / September 23, 2025

It’s Time to Rethink Cloud Investigations


Cloud Breaches Are Surging

Cloud adoption has revolutionized how businesses operate, offering speed, scalability, and flexibility. But for security teams, this transformation has introduced a new set of challenges, especially when it comes to incident response (IR) and forensic investigations.

Cloud-related breaches are skyrocketing: 82% of breaches now involve cloud-stored data (IBM Cost of a Data Breach, 2023). Yet incidents often go unnoticed for days. According to the Cybersecurity Insiders Cloud Security Report 2025, 65% of organizations experienced a cloud-related incident in the past year, but only 9% detected it within the first hour, and 62% took more than 24 hours to remediate.

Despite the shift to cloud, many investigation practices remain rooted in legacy on-prem approaches. According to a recent report, 65% of organizations spend approximately 3-5 days longer investigating an incident in the cloud than on premises.

Cloud investigations must evolve, or risk falling behind attackers who are already exploiting the cloud’s speed and complexity.

4 Reasons Cloud Investigations Are Broken

The cloud’s dynamic nature – with its ephemeral workloads and distributed architecture – has outpaced traditional incident response methods. What worked in static, on-prem environments simply doesn’t translate.

Here’s why:

  1. Ephemeral workloads
    Containers and serverless functions can spin up and vanish in minutes. Attackers know this as well – they’re exploiting short-lived assets for “hit-and-run” attacks, leaving almost no forensic footprint. If you’re relying on scheduled scans or manual evidence collection, you’re already too late.
  2. Fragmented tooling
    Each cloud provider has its own logs, APIs, and investigation workflows. In addition, not all logs are enabled by default, cloud providers typically limit the scope of their logs (both in terms of what data they collect and how long they retain it), and some logs are only available through undocumented APIs. This creates siloed views of attacker activity, making it difficult to piece together a coherent timeline. Now layer in SaaS apps, Kubernetes clusters, and shadow IT — suddenly you’re stitching together 20+ tools just to find out what happened. Analysts call it the ‘swivel-chair Olympics,’ and it’s burning hours they don’t have.
  3. SOC overload
    Analysts spend the bulk of their time manually gathering evidence and correlating logs rather than responding to threats. This slows down investigations and increases burnout. SOC teams are drowning in noise; they receive thousands of alerts a day, the majority of which never get touched. False positives eat hundreds of hours a month, and consequently burnout is rife.  
  4. Cost of delay
    The longer an investigation takes, the higher its cost. Breaches contained in under 200 days save an average of over $1M compared to those that linger (IBM Cost of a Data Breach 2025).

These challenges create a dangerous gap for threat actors to exploit. By the time evidence is collected, attackers may have already accessed or exfiltrated data, or entrenched themselves deeper into your environment.

What’s Needed: A New Approach to Cloud Investigations

It’s time to ditch the manual, reactive grind and embrace investigations that are automated, proactive, and built for the world you actually defend. Here’s what the next generation of cloud forensics must deliver:

  • Automated evidence acquisition
    Capture forensic-level data the moment a threat is detected and before assets disappear.
  • Unified multi-cloud visibility
    Stitch together logs, timelines, and context across AWS, Azure, GCP, and hybrid environments into a single unified view of the investigation.
  • Accelerated investigation workflows
    Reduce time-to-insight from hours or days to minutes with automated analysis of forensic data, enabling faster containment and recovery.
  • Empowered SOC teams
    Fully contextualised data and collaboration workflows between teams in the SOC ensure seamless handover, freeing up analysts from manual collection tasks so they can focus on what matters: analysis and response.

Attackers are already leveraging the cloud’s agility. Defenders must do the same — adopting solutions that match the speed and scale of modern infrastructure.

Cloud Changed Everything. It’s Time to Change Investigations.  

The cloud fundamentally reshaped how businesses operate. It’s time for security teams to rethink how they investigate threats.

Forensics can no longer be slow, manual, and reactive. It must be instant, automated, and cloud-first — designed to meet the demands of ephemeral infrastructure and multi-cloud complexity.

The future of incident response isn’t just faster. It’s smarter, more scalable, and built for the environments we defend today, not those of ten years ago.  

On October 9th, Darktrace is revealing the next big thing in cloud security. Don’t miss it – sign up for the webinar.

About the author
Kellie Regan
Director, Product Marketing - Cloud Security

Blog

/

Network

/

September 23, 2025

ShadowV2: An emerging DDoS for hire botnet


Introduction: ShadowV2 DDoS

Darktrace's latest investigation uncovered a novel campaign that blends traditional malware with modern DevOps technology.

At the center of this campaign is a Python-based command-and-control (C2) framework hosted on GitHub Codespaces. The campaign also uses a Python-based spreader with a multi-stage Docker deployment as the initial access vector.

The campaign further makes use of a Go-based Remote Access Trojan (RAT) that implements a RESTful registration and polling mechanism, enabling command execution and communication with its operators.

ShadowV2 attack techniques

What sets this campaign apart is the sophistication of its attack toolkit.

The threat actors employ advanced methods such as HTTP/2 rapid reset, a Cloudflare under attack mode (UAM) bypass, and large-scale HTTP floods, demonstrating a capability to combine distributed denial-of-service (DDoS) techniques with targeted exploitation.

With the inclusion of an OpenAPI specification (implemented with FastAPI and Pydantic) and a fully developed login panel and operator interface, the infrastructure resembles a “DDoS-as-a-service” platform rather than a traditional botnet, showing the extent to which modern malware increasingly mirrors legitimate cloud-native applications in both design and usability.

Analysis of a ShadowV2 attack

Initial access

The initial compromise originates from a Python script hosted on GitHub Codespaces. This can be inferred from the observed headers:

User-Agent: docker-sdk-python/7.1.0

X-Meta-Source-Client: github/codespaces

The user agent shows that the attacker is using the Python Docker SDK, a library for Python programs that allows them to interact with Docker to create containers. The X-Meta-Source-Client appears to have been injected by GitHub into the request to allow for attribution, although there is no documentation online about this header.

The IP the connections originate from is 23.97.62[.]139, which is a Microsoft IP based in Singapore. This aligns with expectations as GitHub is owned by Microsoft.

This campaign targets exposed Docker daemons, specifically those running on AWS EC2. Darktrace runs a number of honeypots across multiple cloud providers and has only observed attacks against honeypots running on AWS EC2. By default, Docker is not accessible from the internet; however, it can be configured to allow external access, which can be useful for managing complex deployments where remote access to the Docker API is needed.

Typically, most campaigns targeting Docker will either take an existing image from Docker Hub and deploy their tools within it, or upload their own pre-prepared image to deploy. This campaign works slightly differently; it first spawns a generic “setup” container and installs a number of tools within it. This container is then imaged and deployed as a live container with the malware arguments passed in via environmental variables.
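In Docker Engine API terms, that setup-then-deploy flow corresponds roughly to the sequence below. This is a sketch for illustration: the container ID `setup`, image tag `staged`, and install command are hypothetical, while the `MASTER_ADDR`/`VPS_NAME` variable names come from the observed malware; the requests are built but not sent.

```python
def build_deploy_sequence(master_addr: str, vps_name: str) -> list:
    """(method, endpoint, payload) triples approximating the two-stage flow
    against an exposed Docker daemon (Docker Engine API paths)."""
    return [
        # 1. Spawn a generic "setup" container from a stock Ubuntu image.
        ("POST", "/containers/create",
         {"Image": "ubuntu:latest", "Cmd": ["sleep", "infinity"]}),
        # 2. Install tooling inside it via exec (command simplified here).
        ("POST", "/containers/setup/exec",
         {"Cmd": ["sh", "-c", "apt-get update && apt-get install -y curl"]}),
        # 3. Image the prepared container on the victim host.
        ("POST", "/commit?container=setup&repo=staged", {}),
        # 4. Deploy the live container; malware config arrives via env vars.
        ("POST", "/containers/create",
         {"Image": "staged",
          "Env": [f"MASTER_ADDR={master_addr}", f"VPS_NAME={vps_name}"]}),
    ]

for method, endpoint, _ in build_deploy_sequence("c2.example[.]xyz", "node1"):
    print(method, endpoint)
```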

Figure 1: Attacker creates a blank container from an Ubuntu image.
Figure 2: Attacker sets up their tools for the attack.
Figure 3: Attacker deploys a new container using the image from the setup container.

It is unclear why the attackers chose this approach. One possibility is that the actor is attempting to avoid inadvertently leaving forensic artifacts, performing the build on the victim machine rather than building the image themselves and uploading it.

Malware analysis

The Docker container acts as a wrapper around a single binary, dropped in /app/deployment. This is an ELF binary written in Go, a popular choice for modern malware. Helpfully, the binary is unstripped, making analysis significantly easier.

The current version of the malware has not been reported by OSINT providers such as VirusTotal. Using the domain name from the MASTER_ADDR variable and other IoCs, we were able to locate two older versions of the malware that were submitted to VirusTotal on June 25 and July 30 respectively [1] [2]. Neither had any detections, and each was submitted only once via the web portal, from the US and Canada respectively. Darktrace first observed the attack against its honeypot on June 24, so the submissions could have come from a victim of this campaign. Given the proximity to the start of the attacks, they could also be the attacker testing for detections; however, it is not possible to know for certain.

The malware begins by phoning home, using the MASTER_ADDR and VPS_NAME identifiers passed in from the Docker run environmental variables. In addition, the malware derives a unique VPS_ID, which is the VPS_NAME concatenated with the current unix timestamp. The VPS_ID is used for all communications with the C2 server as the identifier for the specific implant. If the malware is restarted, or the victim is re-infected, the C2 server will inform the implant of its original VPS_ID to ensure continuity.

Figure 4: Snippet that performs the registration by sending a POST request to the C2 API with a JSON structure.

From there, the malware then spawns two main loops that will remain active for the lifetime of the implant. Every second, it sends a heartbeat to the C2 by sending the VPS_ID to hxxps://shadow.aurozacloud[.]xyz/api/vps/heartbeat via POST request. Every 5 seconds, it retrieves hxxps://shadow.aurozacloud[.]xyz/api/vps/poll/<VPS ID> via a GET request to poll for new commands.

Figure 5: The poll mechanism.

At this stage, Darktrace security researchers wrote a custom client, running on the server infected by the attacker, that mimicked the implant in order to intercept commands from the C2. Through it, the C2 was observed initiating an attack against chache08[.]werkecdn[.]me using a 120-thread HTTP/2 rapid reset attack. The target appears to be hosted on an Amsterdam VPS provided by FDCServers, a server hosting company. It was not possible to identify what normally runs on this site, as it returns a 403 Forbidden error when visited.
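A minimal sketch of such a protocol mimic, based on the registration, heartbeat, and poll behavior described above: the VPS_ID construction and endpoint paths come from the analysis, while the heartbeat body field name is an inferred assumption. The sketch builds the requests without sending them.

```python
import time

# Defanged C2 address, as listed in the IoC appendix.
C2 = "hxxps://shadow.aurozacloud[.]xyz"

def make_vps_id(vps_name: str, now=None) -> str:
    """VPS_ID is the VPS_NAME concatenated with the current Unix timestamp."""
    ts = now if now is not None else int(time.time())
    return f"{vps_name}{ts}"

def heartbeat_request(vps_id: str):
    # Sent every second; the body field name is assumed, not confirmed.
    return ("POST", f"{C2}/api/vps/heartbeat", {"vps_id": vps_id})

def poll_request(vps_id: str):
    # Sent every five seconds to fetch queued attack commands.
    return ("GET", f"{C2}/api/vps/poll/{vps_id}")
```

Running a client like this on the already-infected honeypot let researchers observe live tasking without executing the implant itself.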

Darktrace’s code analysis found that the returned commands contain the following fields:

  • Method (e.g. GET, POST)
  • A unique ID for the attack
  • A URL endpoint used to report attack statistics
  • The target URL & port
  • The duration of the attack
  • The number of threads to use
  • An optional proxy to send HTTP requests through

The malware then spins up several threads, each running a configurable number of HTTP clients using Valyala’s fasthttp library, an open source Go library for making high-performance HTTP requests. After this is complete, it uses these clients to perform an HTTP flood attack against the target.

Figure 6: A snippet showing the fasthttp client creation loop, as well as a function to report the worker count back to the C2.

In addition, it also features several flags to enable different bypass mechanisms to augment the malware:

  • WordPress bypass (does not appear to be implemented; the flag is not used anywhere)
  • Random query strings appended to the URL
  • Spoofed forwarding headers with random IP addresses
  • Cloudflare under-attack-mode (UAM) bypass
  • HTTP2 rapid reset

The most interesting of these is the Cloudflare UAM bypass mechanism. When this is enabled, the malware will attempt to use a bundled ChromeDP binary to solve the Cloudflare JavaScript challenge that is presented to new visitors. If this succeeds, the clearance cookie obtained is then included in subsequent requests. This is unlikely to work in most cases as headless Chrome browsers are often flagged, and a regular CAPTCHA is instead served.

Figure 7: The UAM bypass success snippet.

Additionally, the malware has a flag to enable an HTTP/2 rapid reset attack mode instead of a regular HTTP flood. In HTTP/2, a client can multiplex thousands of requests within a single connection, allowing sites to load faster. However, the number of concurrent request streams per connection is capped, so in a rapid reset attack many requests are made and then immediately cancelled to free slots for new requests. This allows a single client to execute vastly more requests per second and consume more server resources than it otherwise could, enabling more effective denial-of-service (DoS) attacks.

Figure 8: The HTTP2 rapid reset snippet from the main attack function.
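The amplification can be illustrated with back-of-envelope arithmetic; the stream cap and timing figures below are hypothetical, chosen only to show the shape of the effect.

```python
def flood_requests_per_second(max_streams: int, request_duration_s: float) -> float:
    """Ordinary HTTP/2 flood: a stream slot stays occupied for the full
    lifetime of its request, so throughput is capped by the stream limit."""
    return max_streams / request_duration_s

def rapid_reset_requests_per_second(frame_pair_time_s: float) -> float:
    """Rapid reset: each request is cancelled (RST_STREAM) immediately after
    its HEADERS frame, so a slot frees as fast as frames can be written."""
    return 1 / frame_pair_time_s

# Hypothetical figures: a 100-stream cap with 1 s requests yields 100 req/s,
# while 1 ms per HEADERS + RST_STREAM pair yields roughly 1,000 req/s from a
# single connection, before even accounting for multiple connections.
print(flood_requests_per_second(100, 1.0), rapid_reset_requests_per_second(0.001))
```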

API/C2 analysis

As mentioned throughout the malware analysis section, the malware communicates with its C2 server over HTTP. The server sits behind Cloudflare, which obscures its hosting location and hinders analysis. However, based on analysis of the spreader, it is likely running on GitHub Codespaces.

When sending a malformed request to the API, an error generated by the Pydantic library is returned:

{"detail":[{"type":"missing","loc":["body","vps_id"],"msg":"Field required","input":{"vps_name":"xxxxx"},"url":"https://errors.pydantic.dev/2.11/v/missing"}]}

This shows they are using Python for the API, which is the same language that the spreader is written in.
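The fingerprinting works because Pydantic v2 returns validation errors in a stable shape, down to a versioned errors.pydantic.dev URL. Parsing the error observed above with only the standard library shows what it gives away:

```python
import json

# The malformed-request response returned by the C2 API, as observed.
error = json.loads(
    '{"detail":[{"type":"missing","loc":["body","vps_id"],'
    '"msg":"Field required","input":{"vps_name":"xxxxx"},'
    '"url":"https://errors.pydantic.dev/2.11/v/missing"}]}'
)

first = error["detail"][0]
# The errors.pydantic.dev URL pins the library (and even its 2.11 version),
# while "loc" reveals the field names the API expects.
library_hint = first["url"].split("//")[1].split("/")[0]  # "errors.pydantic.dev"
missing_field = first["loc"][-1]                          # "vps_id"
print(library_hint, missing_field)
```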

One of the larger frameworks that ships with Pydantic is FastAPI, which also ships with Swagger. The malware author left the Swagger documentation publicly exposed, and Darktrace’s researchers were able to obtain a copy of the API documentation. The author appears to have since noticed, however, as subsequent attempts to access it now return an HTTP 404 Not Found error.

Figure 9: Swagger UI view based on the obtained OpenAPI spec.

This is useful to have as it shows all the API endpoints, including the exact fields they take and return, along with comments on each endpoint written by the attacker themselves.

It is very likely a DDoS-for-hire platform (or at the very least, designed for multi-tenant use) based on the extensive user API, which features authentication, distinct privilege levels (admin vs. user), and limitations on what types of attack a user can execute. The screenshot below shows the admin-only user creation endpoint, with the default limits.

Figure 10: The admin-only user create endpoint.

The endpoint used to launch attacks can also be seen, which lines up with the options previously observed in the malware itself. Interestingly, this endpoint requires a list of zombie systems to launch the attack from. This is unusual, as most DDoS-for-hire services decide this internally or simply launch the attack from every infected host (zombie). No endpoint that returns a list of zombies was found; however, one may exist, as return types are not documented for all of the API endpoints.

Figure 11: The attack start endpoint.

There is also an endpoint to manage a blacklist of hosts that cannot be attacked. This could be intended to stop users from launching attacks against sites operated by the malware author; however, it is also possible the author is attempting to sell protection to victims, as has been seen previously with other DDoS-for-hire services.

Figure 12: Blacklist endpoints.

Attempting to visit shadow[.]aurozacloud[.]xyz results in a seizure notice. This is most likely fake, as the same backend is still in use and all of the API endpoints continue to work. Appending /login to the path instead brings up the login screen for the DDoS platform, which describes itself as an “advanced attack platform”, highlighting that it is almost certainly a DDoS-for-hire service. The UI is high quality, written with Tailwind, and even features animations.

Figure 13: The fake seizure notice.
Figure 14: The login UI at /login.

Conclusion

By leveraging containerization, an extensive API, and a full user interface, this campaign shows the continued development of cybercrime-as-a-service. The ability to deliver modular functionality through a Go-based RAT and expose a structured API for operator interaction highlights how sophisticated some threat actors have become.

For defenders, the implications are significant. Effective defense requires deep visibility into containerized environments, continuous monitoring of cloud workloads, and behavioral analytics capable of identifying anomalous API usage and container orchestration patterns. The presence of a DDoS-as-a-service panel with full user functionality further emphasizes the need for defenders to think of these campaigns not as isolated tools but as evolving platforms.

Appendices

References

1. https://www.virustotal.com/gui/file/1b552d19a3083572bc433714dfbc2b75eb6930a644696dedd600f9bd755042f6

2. https://www.virustotal.com/gui/file/1f70c78c018175a3e4fa2b3822f1a3bd48a3b923d1fbdeaa5446960ca8133e9c

IoCs

Malware hashes (SHA256)

●      2462467c89b4a62619d0b2957b21876dc4871db41b5d5fe230aa7ad107504c99

●      1b552d19a3083572bc433714dfbc2b75eb6930a644696dedd600f9bd755042f6

●      1f70c78c018175a3e4fa2b3822f1a3bd48a3b923d1fbdeaa5446960ca8133e9c

C2 domain

●      shadow.aurozacloud[.]xyz

Spreader IPs

●      23.97.62[.]139

●      23.97.62[.]136

Yara rule

rule ShadowV2 {
    meta:
        author = "nathaniel.bill@darktrace.com"
        description = "Detects ShadowV2 botnet implant"

    strings:
        $string1 = "shadow-go"
        $string2 = "shadow.aurozacloud.xyz"
        $string3 = "[SHADOW-NODE]"
        $symbol1 = "main.registerWithMaster"
        $symbol2 = "main.handleStartAttack"
        $symbol3 = "attacker.bypassUAM"
        $symbol4 = "attacker.performHTTP2RapidReset"
        $code1 = { 48 8B 05 ?? ?? ?? ?? 48 8B 1D ?? ?? ?? ?? E8 ?? ?? ?? ?? 48 8D 0D ?? ?? ?? ?? 48 89 8C 24 38 01 00 00 48 89 84 24 40 01 00 00 48 8B 4C 24 40 48 BA 00 09 6E 88 F1 FF FF FF 48 8D 04 0A E8 ?? ?? ?? ?? 48 8D 0D ?? ?? ?? ?? 48 89 8C 24 48 01 00 00 48 89 84 24 50 01 00 00 48 8D 05 ?? ?? ?? ?? BB 05 00 00 00 48 8D 8C 24 38 01 00 00 BF 02 00 00 00 48 89 FE E8 ?? ?? ?? ?? }
        $code2 = { 48 89 35 ?? ?? ?? ?? 0F B6 94 24 80 02 00 00 88 15 ?? ?? ?? ?? 0F B6 94 24 81 02 00 00 88 15 ?? ?? ?? ?? 0F B6 94 24 82 02 00 00 88 15 ?? ?? ?? ?? 0F B6 94 24 83 02 00 00 88 15 ?? ?? ?? ?? 48 8B 05 ?? ?? ?? ?? }
        $code3 = { 48 8D 15 ?? ?? ?? ?? 48 89 94 24 68 04 00 00 48 C7 84 24 78 04 00 00 15 00 00 00 48 8D 15 ?? ?? ?? ?? 48 89 94 24 70 04 00 00 48 8D 15 ?? ?? ?? ?? 48 89 94 24 80 04 00 00 48 8D 35 ?? ?? ?? ?? 48 89 B4 24 88 04 00 00 90 }

    condition:
        uint16(0) == 0x457f and (2 of ($string*) or 2 of ($symbol*) or any of ($code*))
}

The content provided in this blog is published by Darktrace for general informational purposes only and reflects our understanding of cybersecurity topics, trends, incidents, and developments at the time of publication. While we strive to ensure accuracy and relevance, the information is provided “as is” without any representations or warranties, express or implied. Darktrace makes no guarantees regarding the completeness, accuracy, reliability, or timeliness of any information presented and expressly disclaims all warranties.

Nothing in this blog constitutes legal, technical, or professional advice, and readers should consult qualified professionals before acting on any information contained herein. Any references to third-party organizations, technologies, threat actors, or incidents are for informational purposes only and do not imply affiliation, endorsement, or recommendation.

Darktrace, its affiliates, employees, or agents shall not be held liable for any loss, damage, or harm arising from the use of or reliance on the information in this blog.

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content without notice.

About the author
Nate Bill
Threat Researcher