July 16, 2025

Introducing the AI Maturity Model for Cybersecurity

The AI Maturity Model for Cybersecurity is the most detailed guide of its kind, grounded in real use cases and expert insight. It empowers CISOs to make strategic decisions, not just about which AI to adopt, but about how to adopt it in a way that strengthens their organization over time and delivers successful outcomes.
Written by
Ashanka Iddya
Senior Director, Product Marketing

AI adoption in cybersecurity: Beyond the hype

Security operations today face a paradox. On one hand, artificial intelligence (AI) promises sweeping transformation, from automating routine tasks to augmenting threat detection and response. On the other hand, security leaders are under immense pressure to separate meaningful innovation from vendor hype.

To help CISOs and security teams navigate this landscape, we’ve developed the most in-depth and actionable AI Maturity Model in the industry. Built in collaboration with AI and cybersecurity experts, this framework provides a structured path to understanding, measuring, and advancing AI adoption across the security lifecycle.

Overview of AI maturity levels in cybersecurity

Why a maturity model? And why now?

In our conversations and research with security leaders, a recurring theme has emerged:

There’s no shortage of AI solutions, but there is a shortage of clarity and understanding of AI use cases.

In fact, Gartner estimates that “by 2027, over 40% of Agentic AI projects will be canceled due to escalating costs, unclear business value, or inadequate risk controls.” Teams are experimenting, but many aren’t seeing meaningful outcomes. The need for a standardized way to evaluate progress and make informed investments has never been greater.

That’s why we created the AI Security Maturity Model, a strategic framework that:

  • Defines five clear levels of AI maturity, from manual processes (L0) to full AI Delegation (L4)
  • Delineates the outcomes delivered by Agentic GenAI versus Specialized AI Agent Systems
  • Applies across core functions such as risk management, threat detection, alert triage, and incident response
  • Links AI maturity to real-world outcomes like reduced risk, improved efficiency, and scalable operations


How is maturity assessed in this model?

The AI Maturity Model for Cybersecurity is grounded in operational insights from nearly 10,000 global deployments of Darktrace's Self-Learning AI and Cyber AI Analyst. Rather than relying on abstract theory or vendor benchmarks, the model reflects what security teams are actually doing: where AI is being adopted, how it's being used, and what outcomes it’s delivering.

This real-world foundation allows the model to offer a practical, experience-based view of AI maturity. It helps teams assess their current state and identify realistic next steps based on how organizations like theirs are evolving.

Why Darktrace?

AI has been central to Darktrace’s mission since its inception in 2013, not just as a feature, but as the foundation. With over a decade of experience building and deploying AI in real-world security environments, we’ve learned where it works, where it doesn’t, and how to get the most value from it.

We've learned that modern businesses operate within a vast, interconnected ecosystem that introduces new complexities and vulnerabilities, making traditional cybersecurity approaches unsustainable. While many vendors use machine learning, not all AI tools are created equal.

Darktrace’s Self-Learning AI uses a multi-layered AI approach, learning your unique organization to deliver proactive and resilient defense against today’s sophisticated threats. By strategically integrating a diverse set of AI techniques, such as machine learning, deep learning, LLMs, and natural language processing, both sequentially and hierarchically, our multi-layered AI approach provides a robust defense mechanism that is unique to your organization and adapts to the evolving threat landscape.

This model reflects that insight, helping security leaders find the right path forward for their people, processes, and tools.

Security teams today are asking big, important questions:

  • What should we actually use AI for?
  • How are other teams using it — and what’s working?
  • What are vendors offering, and what’s just hype?
  • Will AI ever replace people in the SOC?

These questions are valid, and they’re not always easy to answer. That’s why we created this model: to help security leaders move past buzzwords and build a clear, realistic plan for applying AI across the SOC.

The structure: From experimentation to autonomy

The model outlines five levels of maturity:

L0 – Manual Operations: Processes are mostly manual with limited automation of some tasks.

L1 – Automation Rules: Manually maintained or externally-sourced automation rules and logic are used wherever possible.

L2 – AI Assistance: AI assists research but is not trusted to make good decisions. This includes GenAI agents requiring manual oversight for errors.

L3 – AI Collaboration: Specialized cybersecurity AI agent systems with business technology context are trusted with specific tasks and decisions. GenAI has limited uses where errors are acceptable.

L4 – AI Delegation: Specialized AI agent systems with far wider business operations and impact context perform most cybersecurity tasks and decisions independently, with only high-level oversight needed.

Each level reflects a shift, not only in technology, but in people and processes. As AI matures, analysts evolve from executors to strategic overseers.

Strategic benefits for security leaders

The maturity model isn’t just about technology adoption; it’s about aligning AI investments with measurable operational outcomes. Here’s what it enables:

SOC fatigue is real, and AI can help

Most teams still struggle with alert volume, investigation delays, and reactive processes. AI adoption is inconsistent and often siloed. When integrated well, AI can make a meaningful difference in making security teams more effective.

GenAI is error prone, requiring strong human oversight

While there is a lot of hype around Agentic GenAI systems, teams will need to account for their tendency toward inaccuracy and hallucination.

AI’s real value lies in progression

The biggest gains don’t come from isolated use cases, but from integrating AI across the lifecycle, from preparation through detection to containment and recovery.

Trust and oversight are key initially but evolve in later levels

Early-stage adoption keeps humans fully in control. By L3 and L4, AI systems act independently within defined bounds, freeing humans for strategic oversight.

People’s roles shift meaningfully

As AI matures, analyst roles consolidate and elevate from labor-intensive task execution to high-value decision-making, focusing on critical, high-impact business activities, improving processes, and strengthening AI governance.

Outcome, not hype, defines maturity

AI maturity isn’t about the presence of technology; it’s about measurable impact on risk reduction, response time, and operational resilience.


Outcomes across the AI Security Maturity Model

Security organizations experience an evolution of cybersecurity outcomes as teams progress from manual operations to AI delegation. Each level represents a step-change in efficiency, accuracy, and strategic value.

L0 – Manual Operations

At this stage, analysts manually handle triage, investigation, patching, and reporting using basic, non-automated tools. The result is reactive, labor-intensive operations where most alerts go uninvestigated and risk management remains inconsistent.

L1 – Automation Rules

At this stage, analysts manage rule-based automation tools like SOAR and XDR, which offer some efficiency gains but still require constant tuning. Operations remain constrained by human bandwidth and predefined workflows.

L2 – AI Assistance

At this stage, AI assists with research, summarization, and triage, reducing analyst workload but requiring close oversight due to potential errors. Detection improves, but trust in autonomous decision-making remains limited.

L3 – AI Collaboration

At this stage, AI performs full investigations and recommends actions, while analysts focus on high-risk decisions and refining detection strategies. Purpose-built agentic AI systems with business context are trusted with specific tasks, improving precision and prioritization.

L4 – AI Delegation

At this stage, Specialized AI Agent Systems perform most security tasks independently at machine speed, while human teams provide high-level strategic oversight. The human security team’s time and effort are focused on proactive activities, while AI handles routine cybersecurity tasks.

Specialized AI Agent Systems operate with deep business context, including impact context, to drive fast, effective decisions.

Join the webinar

Get a look at the minds shaping this model by joining our upcoming webinar. We’ll walk through real use cases, share lessons learned from the field, and show how security teams are navigating the path to operational AI safely, strategically, and successfully.

Find your place in the AI maturity model

Get the self-guided assessment designed to help you benchmark your current maturity level, identify key gaps, and prioritize next steps.


September 23, 2025

It’s Time to Rethink Cloud Investigations


Cloud Breaches Are Surging

Cloud adoption has revolutionized how businesses operate, offering speed, scalability, and flexibility. But for security teams, this transformation has introduced a new set of challenges, especially when it comes to incident response (IR) and forensic investigations.

Cloud-related breaches are skyrocketing – 82% of breaches now involve cloud-stored data (IBM Cost of a Data Breach, 2023). Yet incidents often go unnoticed for days: according to a 2025 report by Cybersecurity Insiders, 65% of organizations experienced a cloud-related incident in the past year, yet only 9% detected it within the first hour, and 62% took more than 24 hours to remediate it (Cybersecurity Insiders, Cloud Security Report 2025).

Despite the shift to cloud, many investigation practices remain rooted in legacy on-prem approaches. According to a recent report, 65% of organizations spend approximately 3-5 days longer when investigating an incident in the cloud vs. on premises.

Cloud investigations must evolve, or risk falling behind attackers who are already exploiting the cloud’s speed and complexity.

4 Reasons Cloud Investigations Are Broken

The cloud’s dynamic nature – with its ephemeral workloads and distributed architecture – has outpaced traditional incident response methods. What worked in static, on-prem environments simply doesn’t translate.

Here’s why:

  1. Ephemeral workloads
    Containers and serverless functions can spin up and vanish in minutes. Attackers know this as well – they’re exploiting short-lived assets for “hit-and-run” attacks, leaving almost no forensic footprint. If you’re relying on scheduled scans or manual evidence collection, you’re already too late.
  2. Fragmented tooling
    Each cloud provider has its own logs, APIs, and investigation workflows. In addition, not all logs are enabled by default, cloud providers typically limit the scope of their logs (both in terms of what data they collect and how long they retain it), and some logs are only available through undocumented APIs. This creates siloed views of attacker activity, making it difficult to piece together a coherent timeline. Now layer in SaaS apps, Kubernetes clusters, and shadow IT — suddenly you’re stitching together 20+ tools just to find out what happened. Analysts call it the ‘swivel-chair Olympics,’ and it’s burning hours they don’t have.
  3. SOC overload
    Analysts spend the bulk of their time manually gathering evidence and correlating logs rather than responding to threats. This slows down investigations and increases burnout. SOC teams are drowning in noise; they receive thousands of alerts a day, the majority of which never get touched. False positives eat hundreds of hours a month, and consequently burnout is rife.  
  4. Cost of delay
    The longer an investigation takes, the higher its cost. Breaches contained in under 200 days save an average of over $1M compared to those that linger (IBM Cost of a Data Breach 2025).

These challenges create a dangerous gap for threat actors to exploit. By the time evidence is collected, attackers may have already accessed or exfiltrated data, or entrenched themselves deeper into your environment.

What’s Needed: A New Approach to Cloud Investigations

It’s time to ditch the manual, reactive grind and embrace investigations that are automated, proactive, and built for the world you actually defend. Here’s what the next generation of cloud forensics must deliver:

  • Automated evidence acquisition
    Capture forensic-level data the moment a threat is detected and before assets disappear (a minimal sketch of this pattern follows this list).
  • Unified multi-cloud visibility
    Stitch together logs, timelines, and context across AWS, Azure, GCP, and hybrid environments into a single unified view of the investigation.
  • Accelerated investigation workflows
    Reduce time-to-insight from hours or days to minutes with automated analysis of forensic data, enabling faster containment and recovery.
  • Empowered SOC teams
    Fully contextualised data and collaboration workflows between teams in the SOC ensure seamless handover, freeing up analysts from manual collection tasks so they can focus on what matters: analysis and response.
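
As referenced above, here is a minimal sketch of automated evidence acquisition, assuming AWS, the boto3 SDK, and a detection pipeline that supplies the suspect instance ID. The function name, tag keys, and case identifier are illustrative, not part of any specific product workflow.

import boto3

def snapshot_instance_volumes(instance_id: str, case_id: str, region: str = "us-east-1"):
    # Snapshot every EBS volume attached to a suspect instance the moment a
    # detection fires, so evidence survives even if the workload is torn down.
    ec2 = boto3.client("ec2", region_name=region)

    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]

    snapshot_ids = []
    for vol in volumes:
        snap = ec2.create_snapshot(
            VolumeId=vol["VolumeId"],
            Description=f"IR evidence for case {case_id} ({instance_id})",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [
                    {"Key": "ir-case", "Value": case_id},
                    {"Key": "source-instance", "Value": instance_id},
                ],
            }],
        )
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids

The same pattern extends to memory captures and container filesystem exports taken before ephemeral assets disappear.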

Attackers are already leveraging the cloud’s agility. Defenders must do the same — adopting solutions that match the speed and scale of modern infrastructure.

Cloud Changed Everything. It’s Time to Change Investigations.  

The cloud fundamentally reshaped how businesses operate. It’s time for security teams to rethink how they investigate threats.

Forensics can no longer be slow, manual, and reactive. It must be instant, automated, and cloud-first — designed to meet the demands of ephemeral infrastructure and multi-cloud complexity.

The future of incident response isn’t just faster. It’s smarter, more scalable, and built for the environments we defend today, not those of ten years ago.  

On October 9th, Darktrace is revealing the next big thing in cloud security. Don’t miss it – sign up for the webinar.

About the author
Kellie Regan
Director, Product Marketing - Cloud Security

September 23, 2025

ShadowV2: An emerging DDoS for hire botnet


Introduction: ShadowV2 DDoS

Darktrace's latest investigation uncovered a novel campaign that blends traditional malware with modern DevOps technology.

At the center of this campaign is a Python-based command-and-control (C2) framework hosted on GitHub CodeSpaces. The campaign also utilizes a Python-based spreader with a multi-stage Docker deployment as the initial access vector.

The campaign further makes use of a Go-based Remote Access Trojan (RAT) that implements a RESTful registration and polling mechanism, enabling command execution and communication with its operators.

ShadowV2 attack techniques

What sets this campaign apart is the sophistication of its attack toolkit.

The threat actors employ advanced methods such as HTTP/2 rapid reset, a Cloudflare under attack mode (UAM) bypass, and large-scale HTTP floods, demonstrating a capability to combine distributed denial-of-service (DDoS) techniques with targeted exploitation.

With the inclusion of an OpenAPI specification, implemented with FastAPI and Pydantic, and a fully developed login panel and operator interface, the infrastructure seems to resemble a “DDoS-as-a-service” platform rather than a traditional botnet, showing the extent to which modern malware increasingly mirrors legitimate cloud-native applications in both design and usability.

Analysis of a ShadowV2 attack

Initial access

The initial compromise originates from a Python script hosted on GitHub CodeSpaces. This can be inferred from the observed headers:

User-Agent: docker-sdk-python/7.1.0

X-Meta-Source-Client: github/codespaces

The user agent shows that the attacker is using the Python Docker SDK, a library for Python programs that allows them to interact with Docker to create containers. The X-Meta-Source-Client appears to have been injected by GitHub into the request to allow for attribution, although there is no documentation online about this header.

The IP the connections originate from is 23.97.62[.]139, which is a Microsoft IP based in Singapore. This aligns with expectations as GitHub is owned by Microsoft.

This campaign targets exposed Docker daemons, specifically those running on AWS EC2. Darktrace runs a number of honeypots across multiple cloud providers and has only observed attacks against honeypots running on AWS EC2. By default, Docker is not accessible from the Internet; however, it can be configured to allow external access. This can be useful for managing complex deployments where remote access to the Docker API is needed.

Typically, most campaigns targeting Docker will either take an existing image from Docker Hub and deploy their tools within it, or upload their own pre-prepared image to deploy. This campaign works slightly differently; it first spawns a generic “setup” container and installs a number of tools within it. This container is then imaged and deployed as a live container, with the malware arguments passed in via environment variables.

Figure 1: Attacker creates a blank container from an Ubuntu image.
Figure 2: Attacker sets up their tools for the attack.
Figure 3: Attacker deploys a new container using the image from the setup container.

It is unclear why the attackers chose this approach - one possibility is that the actor is attempting to avoid inadvertently leaving forensic artifacts by performing the build on the victim machine, rather than building it themselves and uploading it.

Malware analysis

The Docker container acts as a wrapper around a single binary, dropped in /app/deployment. This is an ELF binary written in Go, a popular choice for modern malware. Helpfully, the binary is unstripped, making analysis significantly easier.

The current version of the malware has not been reported by OSINT providers such as VirusTotal. Using the domain name from the MASTER_ADDR variable and other IoCs, we were able to locate two older versions of the malware that were submitted to VirusTotal on June 25 and July 30 respectively [1] [2]. Neither had any detections, and each was submitted only once via the web portal, from the US and Canada respectively. Darktrace first observed the attack against its honeypot on June 24, so it could be a victim of this campaign submitting the malware to VirusTotal. Due to the proximity of the start of the attacks, it could also be the attacker testing for detections; however, it is not possible to know for certain.

The malware begins by phoning home, using the MASTER_ADDR and VPS_NAME identifiers passed in from the Docker run environmental variables. In addition, the malware derives a unique VPS_ID, which is the VPS_NAME concatenated with the current unix timestamp. The VPS_ID is used for all communications with the C2 server as the identifier for the specific implant. If the malware is restarted, or the victim is re-infected, the C2 server will inform the implant of its original VPS_ID to ensure continuity.

Figure 4: Snippet that performs the registration by sending a POST request to the C2 API with a JSON structure.

From there, the malware then spawns two main loops that will remain active for the lifetime of the implant. Every second, it sends a heartbeat to the C2 by sending the VPS_ID to hxxps://shadow.aurozacloud[.]xyz/api/vps/heartbeat via POST request. Every 5 seconds, it retrieves hxxps://shadow.aurozacloud[.]xyz/api/vps/poll/<VPS ID> via a GET request to poll for new commands.

Figure 5: The poll mechanism.

At this stage, Darktrace security researchers wrote a custom client that ran on the server infected by the attacker and mimicked their implant. The goal was to intercept commands from the C2. Based on this, the C2 was observed initiating a 120-thread HTTP2 rapid reset attack against chache08[.]werkecdn[.]me. This site appears to be hosted on an Amsterdam VPS provided by FDCServers, a server hosting company. It was not possible to identify what normally runs on this site, as it returns a 403 Forbidden error when visited.
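
A minimal sketch of such a fake implant is shown below, built only from the behaviour described above (register, heartbeat every second, poll every five seconds). The registration path and the exact JSON fields are assumptions inferred from the figure captions and the Pydantic error shown later, and the C2 domain is left defanged deliberately; only ever point a client like this at infrastructure you control.

import time
import requests

C2 = "https://shadow.aurozacloud[.]xyz"  # defanged on purpose; replace only in an isolated research environment

def run_fake_implant(vps_name: str):
    # The real implant derives its ID from VPS_NAME plus the current Unix timestamp.
    vps_id = f"{vps_name}{int(time.time())}"

    # Hypothetical registration call; field names taken from the Pydantic error shown later.
    requests.post(f"{C2}/api/vps/register",
                  json={"vps_id": vps_id, "vps_name": vps_name}, timeout=10)

    last_poll = 0.0
    while True:
        # Heartbeat every second, identified by the VPS_ID.
        requests.post(f"{C2}/api/vps/heartbeat", json={"vps_id": vps_id}, timeout=10)

        # Poll for new commands every 5 seconds and log them rather than acting on them.
        if time.time() - last_poll >= 5:
            resp = requests.get(f"{C2}/api/vps/poll/{vps_id}", timeout=10)
            if resp.ok and resp.content:
                print("command received:", resp.json())
            last_poll = time.time()

        time.sleep(1)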

Darktrace’s code analysis found that the returned commands contain the following fields:

  • Method (e.g. GET, POST)
  • A unique ID for the attack
  • A URL endpoint used to report attack statistics
  • The target URL & port
  • The duration of the attack
  • The number of threads to use
  • An optional proxy to send HTTP requests through

The malware then spins up several threads, each running a configurable number of HTTP clients using Valyala’s fasthttp library, an open source Go library for making high-performance HTTP requests. After this is complete, it uses these clients to perform an HTTP flood attack against the target.

Figure 6: A snippet showing the fasthttp client creation loop, as well as a function to report the worker count back to the C2.

In addition, it also features several flags to enable different bypass mechanisms to augment the malware:

  • WordPress bypass (does not appear to be implemented - the flag is not used anywhere)
  • Random query strings appended to the URL
  • Spoofed forwarding headers with random IP addresses
  • Cloudflare under-attack-mode (UAM) bypass
  • HTTP2 rapid reset

The most interesting of these is the Cloudflare UAM bypass mechanism. When this is enabled, the malware will attempt to use a bundled ChromeDP binary to solve the Cloudflare JavaScript challenge that is presented to new visitors. If this succeeds, the clearance cookie obtained is then included in subsequent requests. This is unlikely to work in most cases as headless Chrome browsers are often flagged, and a regular CAPTCHA is instead served.

Figure 7: The UAM bypass success snippet.

Additionally, the malware has a flag to enable an HTTP2 rapid reset attack mode instead of a regular HTTP flood. In HTTP2, a client can create thousands of requests within a single connection using multiplexing, allowing sites to load faster. The number of concurrent request streams per connection is capped, however, so in a rapid reset attack many requests are made and then immediately cancelled to free up room for more. This allows a single client to execute vastly more requests per second and consume more server resources than it otherwise would, enabling more effective denial-of-service (DoS) attacks.

Figure 8: The HTTP2 rapid reset snippet from the main attack function.

API/C2 analysis

As mentioned throughout the malware analysis section, the malware communicates with a C2 server using HTTP. The server is behind Cloudflare, which obscures its hosting location and prevents analysis. However, based on analysis of the spreader, it's likely running on GitHub CodeSpaces.

When sending a malformed request to the API, an error generated by the Pydantic library is returned:

{"detail":[{"type":"missing","loc":["body","vps_id"],"msg":"Field required","input":{"vps_name":"xxxxx"},"url":"https://errors.pydantic.dev/2.11/v/missing"}]}

This shows they are using Python for the API, which is the same language that the spreader is written in.
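
That error format is characteristic of request validation in Pydantic v2 as wired up by FastAPI (named explicitly below). A minimal sketch, purely illustrative and not the actor's code, shows how a missing field yields exactly this kind of response:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Heartbeat(BaseModel):
    vps_id: str
    vps_name: str

@app.post("/api/vps/heartbeat")
async def heartbeat(beat: Heartbeat):
    return {"status": "ok"}

# POSTing {"vps_name": "xxxxx"} without "vps_id" makes FastAPI return HTTP 422 with a
# "detail" list containing type "missing", loc ["body", "vps_id"], and an
# errors.pydantic.dev URL, the same shape as the response above.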

One of the larger frameworks that ships with Pydantic is FastAPI, which also ships with Swagger. The malware author left this publicly exposed, and Darktrace’s researchers were able to obtain a copy of their API documentation. The author appears to have noticed this, however, as subsequent attempts to access it now return an HTTP 404 Not Found error.

Figure 9: Swagger UI view based on the obtained OpenAPI spec.

This is useful to have as it shows all the API endpoints, including the exact fields they take and return, along with comments on each endpoint written by the attacker themselves.

It is very likely a DDoS for hire platform (or at the very least, designed for multi-tenant use) based on the extensive user API, which features authentication, distinctions between privilege levels (admin vs user), and limitations on the types of attack a user can execute. The screenshot below shows the admin-only user create endpoint, with the default limits.

Figure 10: The admin-only user create endpoint.

The endpoint used to launch attacks can also be seen, which lines up with the options previously seen in the malware itself. Interestingly, this endpoint requires a list of zombie systems to launch the attack from. This is unusual, as most DDoS for hire services decide this internally or simply launch the attack from every infected host (zombie). No endpoints that return a list of zombies were found; however, it’s possible one exists, as the return types are not documented for all of the API endpoints.

Figure 11: The attack start endpoint.

There is also an endpoint to manage a blacklist of hosts that cannot be attacked. This could be to stop users from launching attacks against sites operated by the malware author; however, it’s also possible the author is attempting to sell protection to victims, which has been seen previously with other DDoS for hire services.

Figure 12: Blacklist endpoints.

Attempting to visit shadow[.]aurozacloud[.]xyz results in a seizure notice. It is most likely fake, as the same backend is still in use and all of the API endpoints continue to work. Appending /login to the end of the path instead brings up the login screen for the DDoS platform. It describes itself as an “advanced attack platform”, which highlights that it is almost certainly a DDoS for hire service. The UI is high quality, written in Tailwind, and even features animations.

Figure 13: The fake seizure notice.
Figure 14: The login UI at /login.

Conclusion

By leveraging containerization, an extensive API, and a full user interface, this campaign shows the continued development of cybercrime-as-a-service. The ability to deliver modular functionality through a Go-based RAT and expose a structured API for operator interaction highlights how sophisticated some threat actors have become.

For defenders, the implications are significant. Effective defense requires deep visibility into containerized environments, continuous monitoring of cloud workloads, and behavioral analytics capable of identifying anomalous API usage and container orchestration patterns. The presence of a DDoS-as-a-service panel with full user functionality further emphasizes the need for defenders to think of these campaigns not as isolated tools but as evolving platforms.

Appendices

References

1. https://www.virustotal.com/gui/file/1b552d19a3083572bc433714dfbc2b75eb6930a644696dedd600f9bd755042f6

2. https://www.virustotal.com/gui/file/1f70c78c018175a3e4fa2b3822f1a3bd48a3b923d1fbdeaa5446960ca8133e9c

IoCs

Malware hashes (SHA256)

• 2462467c89b4a62619d0b2957b21876dc4871db41b5d5fe230aa7ad107504c99
• 1b552d19a3083572bc433714dfbc2b75eb6930a644696dedd600f9bd755042f6
• 1f70c78c018175a3e4fa2b3822f1a3bd48a3b923d1fbdeaa5446960ca8133e9c

C2 domain

• shadow.aurozacloud[.]xyz

Spreader IPs

• 23.97.62[.]139
• 23.97.62[.]136

Yara rule

rule ShadowV2 {
    meta:
        author = "nathaniel.bill@darktrace.com"
        description = "Detects ShadowV2 botnet implant"

    strings:
        $string1 = "shadow-go"
        $string2 = "shadow.aurozacloud.xyz"
        $string3 = "[SHADOW-NODE]"

        $symbol1 = "main.registerWithMaster"
        $symbol2 = "main.handleStartAttack"
        $symbol3 = "attacker.bypassUAM"
        $symbol4 = "attacker.performHTTP2RapidReset"

        $code1 = { 48 8B 05 ?? ?? ?? ?? 48 8B 1D ?? ?? ?? ?? E8 ?? ?? ?? ?? 48 8D 0D ?? ?? ?? ?? 48 89 8C 24 38 01 00 00 48 89 84 24 40 01 00 00 48 8B 4C 24 40 48 BA 00 09 6E 88 F1 FF FF FF 48 8D 04 0A E8 ?? ?? ?? ?? 48 8D 0D ?? ?? ?? ?? 48 89 8C 24 48 01 00 00 48 89 84 24 50 01 00 00 48 8D 05 ?? ?? ?? ?? BB 05 00 00 00 48 8D 8C 24 38 01 00 00 BF 02 00 00 00 48 89 FE E8 ?? ?? ?? ?? }
        $code2 = { 48 89 35 ?? ?? ?? ?? 0F B6 94 24 80 02 00 00 88 15 ?? ?? ?? ?? 0F B6 94 24 81 02 00 00 88 15 ?? ?? ?? ?? 0F B6 94 24 82 02 00 00 88 15 ?? ?? ?? ?? 0F B6 94 24 83 02 00 00 88 15 ?? ?? ?? ?? 48 8B 05 ?? ?? ?? ?? }
        $code3 = { 48 8D 15 ?? ?? ?? ?? 48 89 94 24 68 04 00 00 48 C7 84 24 78 04 00 00 15 00 00 00 48 8D 15 ?? ?? ?? ?? 48 89 94 24 70 04 00 00 48 8D 15 ?? ?? ?? ?? 48 89 94 24 80 04 00 00 48 8D 35 ?? ?? ?? ?? 48 89 B4 24 88 04 00 00 90 }

    condition:
        uint16(0) == 0x457f and (2 of ($string*) or 2 of ($symbol*) or any of ($code*))
}
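
For defenders who want to apply this rule to carved binaries (for example, files recovered from container filesystems such as /app/deployment), a minimal sketch using the yara-python bindings follows; the rule filename and scan paths are illustrative.

import sys
import yara

# Compile the ShadowV2 rule above (saved here as shadowv2.yar; the path is illustrative).
rules = yara.compile(filepath="shadowv2.yar")

# Scan each file passed on the command line and report which rules matched.
for path in sys.argv[1:]:
    matches = rules.match(filepath=path)
    if matches:
        print(f"{path}: matched {[m.rule for m in matches]}")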

The content provided in this blog is published by Darktrace for general informational purposes only and reflects our understanding of cybersecurity topics, trends, incidents, and developments at the time of publication. While we strive to ensure accuracy and relevance, the information is provided “as is” without any representations or warranties, express or implied. Darktrace makes no guarantees regarding the completeness, accuracy, reliability, or timeliness of any information presented and expressly disclaims all warranties.

Nothing in this blog constitutes legal, technical, or professional advice, and readers should consult qualified professionals before acting on any information contained herein. Any references to third-party organizations, technologies, threat actors, or incidents are for informational purposes only and do not imply affiliation, endorsement, or recommendation.

Darktrace, its affiliates, employees, or agents shall not be held liable for any loss, damage, or harm arising from the use of or reliance on the information in this blog.

The cybersecurity landscape evolves rapidly, and blog content may become outdated or superseded. We reserve the right to update, modify, or remove any content without notice.

About the author
Nate Bill
Threat Researcher