October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks, and how Darktrace products detect and prevent these threats.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments had employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less consideration, however, has been given to the impacts stemming from errors intrinsic to generative AI tools. One such error is the AI hallucination.

What is an AI hallucination?

AI “hallucination” is a term describing instances in which the predictive element of a generative AI or large language model (LLM) produces an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular and intended behavior of an AI model, which should provide a response based on the data it was trained upon.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations are common, as the AI models used in LLMs are merely predictive and focus on the most probable text or outcome rather than on factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI to optimize their scripts or code, or to find packages to import for various purposes. When asked for a recommendation on a specific piece of code or how to solve a specific problem, an LLM’s response will often point to a third-party package. Packages recommended by generative AI tools can themselves be hallucinations: the package may never have been published, or, more precisely, may not have been published prior to the cut-off date of the model’s training data. If a hallucination repeatedly suggests the same non-existent package, and developers copy the surrounding code snippet wholesale, they may leave themselves vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses provided by ChatGPT pertaining to Node.js at least one un-published package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.
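As a rough illustration of that discovery step, the short Python sketch below (a minimal example of ours, not drawn from the Vulcan research; the second package name is hypothetical) checks AI-suggested package names against PyPI’s public JSON API, which returns HTTP 404 for names that have never been published:

```python
import requests

# Package names as a generative AI tool might suggest them; the second
# name is a hypothetical, never-published example.
suggested_packages = ["requests", "flask-sqlalchemy-helper-pro"]

for name in suggested_packages:
    # PyPI's JSON API returns 404 for projects that have never been published
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"{name}: not on PyPI -- possible hallucination (and squattable)")
    else:
        print(f"{name}: exists on PyPI")
```

The same check works in both directions: a defender can use it to refuse any AI-suggested package that returns 404, while an attacker uses the 404 as a signal that the name is free to register.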

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios, the malicious actor does not need to manipulate the training data (data poisoning) to achieve the desired outcome; a complex and time-consuming attack phase is simply bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments and to autonomously perform inhibitive actions in response to such detections. These models trigger on connections to endpoints associated with generative AI tools; as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The goal the malicious package was designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as a vector to deliver a wide variety of malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact-check responses provided to them by generative AI tools. All packages recommended by generative AI should also be verified against non-generated data from external third-party or internal sources. It is also good practice to exercise caution when downloading packages with very few downloads, as this could indicate the package is untrustworthy or malicious.
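To make that advice concrete, the hedged Python sketch below combines two public data sources: PyPI’s JSON API for package metadata and the pypistats.org API for recent download counts (both publicly available at the time of writing; the download threshold is illustrative, not prescriptive):

```python
import requests

def vet_package(name: str, min_monthly_downloads: int = 1000) -> None:
    """Basic sanity checks on a package suggested by a generative AI tool."""
    meta = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if meta.status_code == 404:
        print(f"{name}: not on PyPI -- likely a hallucinated package name")
        return

    info = meta.json()["info"]
    print(f"{name}: latest version {info['version']}")

    # Recent download counts via the pypistats.org public API
    stats = requests.get(f"https://pypistats.org/api/packages/{name}/recent",
                         timeout=10)
    if stats.ok:
        monthly = stats.json()["data"]["last_month"]
        if monthly < min_monthly_downloads:
            print(f"  warning: only {monthly} downloads last month -- "
                  "verify before installing")
        else:
            print(f"  {monthly} downloads last month")

vet_package("requests")          # well-established package
vet_package("some-made-up-pkg")  # hypothetical hallucination
```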

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that AI tools will be embraced more widely in the workplace in the coming years as the technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/



August 8, 2025

Ivanti Under Siege: Investigating the Ivanti Endpoint Manager Mobile Vulnerabilities (CVE-2025-4427 & CVE-2025-4428)


Ivanti & Edge infrastructure exploitation

Exploitation of edge infrastructure continues to prevail in today’s cyber threat landscape. It was therefore no surprise that the recent Ivanti Endpoint Manager Mobile (EPMM) vulnerabilities CVE-2025-4427 and CVE-2025-4428 were exploited in May 2025 against organizations in critical sectors such as healthcare, telecommunications, and finance across the globe, including across the Darktrace customer base.

Exploiting these types of vulnerabilities remains a popular choice for threat actors seeking to enter an organization’s network to perform malicious activity such as cyber espionage, data exfiltration and ransomware detonation.

Vulnerabilities in Ivanti EPMM

Ivanti EPMM allows organizations to manage and configure enterprise mobile devices. On May 13, 2025, Ivanti published a security advisory [1] for Ivanti Endpoint Manager Mobile (EPMM) addressing a medium and a high severity vulnerability:

  • CVE-2025-4427, CVSS: 5.6: An authentication bypass vulnerability
  • CVE-2025-4428, CVSS: 7.2: Remote code execution vulnerability

Successfully chaining the two vulnerabilities could give an unauthenticated threat actor remote code execution, allowing them to control, manipulate, and compromise managed devices on a network [2].

Shortly after the disclosure of these vulnerabilities, external researchers uncovered evidence that they were being actively exploited in the wild and identified multiple indicators of compromise (IoCs) related to post-exploitation activities [2] [3]. Research drew particular attention to the infrastructure used in ongoing exploitation, such as leveraging the two vulnerabilities to deliver malware contained within ELF files hosted on Amazon Web Services (AWS) S3 bucket endpoints, including the KrustyLoader malware used for persistence. KrustyLoader is a Rust-based malware first observed in January 2024, when it was downloaded onto Ivanti Connect Secure systems compromised via the critical zero-day vulnerabilities CVE-2024-21887 and CVE-2023-46805 [10].

This suggests the involvement of the threat actor UNC5221, a suspected China-nexus espionage actor [3].

In addition to exploring the post-exploit tactics, techniques, and procedures (TTPs) observed for these vulnerabilities across Darktrace’s customer base, this blog will also examine the subtle changes and similarities in the exploitation of earlier Ivanti vulnerabilities—specifically Ivanti Connect Secure (CS) and Policy Secure (PS) vulnerabilities CVE-2023-46805 and CVE-2024-21887 in early 2024, as well as CVE-2025-0282 and CVE-2025-0283, which affected CS, PS, and Zero Trust Access (ZTA) in January 2025.

Darktrace Coverage

In May 2025, shortly after Ivanti disclosed vulnerabilities in their EPMM product, Darktrace’s Threat Research team identified attack patterns potentially linked to the exploitation of these vulnerabilities across multiple customer environments. The most noteworthy attack chain activity observed included exploit validation, payload delivery via AWS S3 bucket endpoints, subsequent delivery of script-based payloads, and connections to dpaste[.]com, possibly for dynamic payload retrieval. In a limited number of cases, connections were also made to an IP address associated with infrastructure linked to SAP NetWeaver vulnerability CVE-2025-31324, which has been investigated by Darktrace in an earlier case.

Exploit Validation

Darktrace observed devices within multiple customer environments making connections related to Out-of-Band Application Security Testing (OAST). These included a range of DNS requests and connections, most of which featured a user agent associated with the command-line tool cURL, directed toward associated endpoints. The hostnames of these endpoints consisted of a string of randomly generated characters followed by an OAST domain, such as 'oast[.]live', 'oast[.]pro', 'oast[.]fun', 'oast[.]site', 'oast[.]online', or 'oast[.]me'. OAST endpoints can be leveraged by malicious actors to trigger callbacks from targeted systems, such as for exploit validation. This activity, likely representing the initial phase of the attack chain observed across multiple environments, was also seen in the early stages of previous investigations into the exploitation of Ivanti vulnerabilities [4]. Darktrace also observed similar exploit validation activity during investigations conducted in January 2024 into the Ivanti CS vulnerabilities CVE-2023-46805 and CVE-2024-21887.
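As a simple illustration of what such a detection can key on, the Python sketch below (our own minimal example, not Darktrace’s detection logic; the log records and the leading random-label pattern are assumptions, and real DNS log schemas will differ) flags queries to the OAST domains listed above:

```python
import re

# OAST domains cited above; str.endswith() accepts a tuple of suffixes
OAST_SUFFIXES = (".oast.live", ".oast.pro", ".oast.fun",
                 ".oast.site", ".oast.online", ".oast.me")

# OAST callback hostnames typically lead with a long random label
RANDOM_LABEL = re.compile(r"^[a-z0-9]{10,}\.")

# Hypothetical (device IP, queried hostname) records from DNS logs
dns_log = [
    ("10.0.1.5", "cg6nv72qdmrd0cfa81t0.oast.fun"),
    ("10.0.1.7", "www.example.com"),
]

for device, hostname in dns_log:
    if hostname.endswith(OAST_SUFFIXES) and RANDOM_LABEL.match(hostname):
        print(f"possible OAST callback from {device}: {hostname}")
```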

Payload Delivery via AWS

Devices across multiple customer environments were subsequently observed downloading malicious ELF files—often with randomly generated filenames such as 'NVGAoZDmEe'—from AWS S3 bucket endpoints like 's3[.]amazonaws[.]com'. These downloads occurred over HTTP connections, typically using wget or cURL user agents. Some of the ELF files were later identified to be KrustyLoader payloads using open-source intelligence (OSINT). External researchers have reported that the KrustyLoader malware is executed in cases of Ivanti EPMM exploitation to gain and maintain a foothold in target networks [2].

In one customer environment, after connections were made to the endpoint fconnect[.]s3[.]amazonaws[.]com, Darktrace observed the target system downloading the ELF file mnQDqysNrlg via the user agent Wget/1.14 (linux-gnu). Further investigation of the file’s SHA1 hash (1dec9191606f8fc86e4ae4fdf07f09822f8a94f2) linked it to the KrustyLoader malware [5]. In another customer environment, connections were instead made to tnegadge[.]s3[.]amazonaws[.]com using the same user agent, from which the ELF file “/dfuJ8t1uhG” was downloaded. This file was also linked to KrustyLoader through its SHA1 hash (c47abdb1651f9f6d96d34313872e68fb132f39f5) [6].
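For teams triaging similar activity, the minimal Python sketch below matches files in a directory against the two KrustyLoader SHA1 hashes reported above (the quarantine directory path is hypothetical):

```python
import hashlib
from pathlib import Path

# SHA1 hashes of the KrustyLoader payloads referenced in this blog
KNOWN_BAD_SHA1 = {
    "1dec9191606f8fc86e4ae4fdf07f09822f8a94f2",  # mnQDqysNrlg
    "c47abdb1651f9f6d96d34313872e68fb132f39f5",  # dfuJ8t1uhG
}

def sha1_of(path: Path) -> str:
    """Stream the file in chunks so large ELF files don't exhaust memory."""
    digest = hashlib.sha1()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

quarantine = Path("/tmp/quarantine")  # hypothetical directory of suspect files
for path in quarantine.glob("*"):
    if path.is_file() and sha1_of(path) in KNOWN_BAD_SHA1:
        print(f"IoC match (KrustyLoader): {path}")
```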

The pattern of activity observed so far closely mirrors previous exploits associated with the Ivanti vulnerabilities CVE-2023-46805 and CVE-2024-21887 [4]. As in those cases, Darktrace observed exploit validation using OAST domains and services, along with the use of AWS endpoints to deliver ELF file payloads. However, in this instance, the delivered payload was identified as KrustyLoader malware.

Later-stage script file payload delivery

In addition to the ELF file downloads, Darktrace also detected other file downloads across several customer environments, potentially representing the delivery of later-stage payloads.

The downloaded files included script files with the .sh extension, featuring randomly generated alphanumeric filenames. One such example is “4l4md4r.sh”, which was retrieved during a connection to the IP address 15.188.246[.]198 using a cURL-associated user agent. This IP address was also linked to infrastructure associated with the SAP NetWeaver remote code execution vulnerability CVE-2025-31324, which enables remote code execution on NetWeaver Visual Composer. External reporting has attributed this infrastructure to a China-nexus state actor [7][8][9].

In addition to the script file downloads, devices on some customer networks were also observed making connections to pastebin[.]com and dpaste[.]com, two sites commonly used to host or share malicious payloads or exploitation instructions [2]. Exploits, including those targeting Ivanti EPMM vulnerabilities, can dynamically fetch malicious commands from sites like dpaste[.]com, enabling threat actors to update payloads. Unlike the previously detailed activity, this behavior was not identified in any prior Darktrace investigations into Ivanti-related vulnerabilities, suggesting a potential shift in the tactics used in post-exploitation stages of Ivanti attacks.
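A hunt for this behavior can be as simple as cross-referencing egress records against known paste services, as in the hedged sketch below (the log records and server inventory are hypothetical, and real proxy or firewall log formats will vary):

```python
# Paste services referenced above, plus a hypothetical inventory of
# server-class devices that should rarely, if ever, contact them
PASTE_SERVICES = {"pastebin.com", "dpaste.com"}
SERVER_DEVICES = {"10.0.2.10", "10.0.2.11"}

# Hypothetical (source IP, destination hostname) egress records
egress_log = [
    ("10.0.2.10", "dpaste.com"),
    ("10.0.5.23", "pastebin.com"),
]

for device, hostname in egress_log:
    if hostname in PASTE_SERVICES and device in SERVER_DEVICES:
        print(f"server {device} contacted paste service {hostname}: investigate")
```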

Conclusion

Edge infrastructure vulnerabilities, such as those found in Ivanti EPMM and investigated across customer environments with Darktrace / NETWORK, have become a key tool in the arsenal of attackers in today’s threat landscape. As highlighted in this investigation, while many of the tactics employed by threat actors following successful exploitation of vulnerabilities remain the same, subtle shifts in their methods can also be seen.

These subtle and often overlooked changes enable threat actors to remain undetected within networks, highlighting the critical need for organizations to maintain continuous extended visibility, leverage anomaly-based behavioral analysis, and deploy machine-speed intervention across their environments.

Credit to Nahisha Nobregas (Senior Cyber Analyst) and Anna Gilbertson (Senior Cyber Analyst)

Appendices

Mid-High Confidence IoCs

(IoC – Type – Description)

- trkbucket.s3.amazonaws[.]com – Hostname – C2 endpoint
- trkbucket.s3.amazonaws[.]com/NVGAoZDmEe – URL – Payload
- tnegadge.s3.amazonaws[.]com – Hostname – C2 endpoint
- tnegadge.s3.amazonaws[.]com/dfuJ8t1uhG – URL – Payload
- c47abdb1651f9f6d96d34313872e68fb132f39f5 – SHA1 File Hash – Payload
- 4abfaeadcd5ab5f2c3acfac6454d1176 – MD5 File Hash – Payload
- fconnect.s3.amazonaws[.]com – Hostname – C2 endpoint
- fconnect.s3.amazonaws[.]com/mnQDqysNrlg – URL – Payload
- 15.188.246[.]198 – IP address – C2 endpoint
- 15.188.246[.]198/4l4md4r.sh?grep – URL – Payload
- 185.193.125[.]65 – IP address – C2 endpoint
- 185.193.125[.]65/c4qDsztEW6/TIGHT_UNIVERSITY – URL – C2 endpoint
- d8d6fe1a268374088fb6a5dc7e5cbb54 – MD5 File Hash – Payload
- 64.52.80[.]21 – IP address – C2 endpoint
- 0d8da2d1.digimg[.]store – Hostname – C2 endpoint
- 134.209.107[.]209 – IP address – C2 endpoint

Darktrace Model Detections

- Compromise / High Priority Tunnelling to Bin Services (Enhanced Monitoring Model)
- Compromise / Possible Tunnelling to Bin Services
- Anomalous Server Activity / New User Agent from Internet Facing System
- Compliance / Pastebin
- Device / Internet Facing Device with High Priority Alert
- Anomalous Connection / Callback on Web Facing Device
- Anomalous File / Script from Rare External Location
- Anomalous File / Incoming ELF File
- Device / Suspicious Domain
- Device / New User Agent
- Anomalous Connection / Multiple Connections to New External TCP Port
- Anomalous Connection / New User Agent to IP Without Hostname
- Anomalous File / EXE from Rare External Location
- Anomalous File / Internet Facing System File Download
- Anomalous File / Multiple EXE from Rare External Locations
- Compromise / Suspicious HTTP and Anomalous Activity
- Device / Attack and Recon Tools
- Device / Initial Attack Chain Activity
- Device / Large Number of Model Alerts
- Device / Large Number of Model Alerts from Critical Network Device

References

1. https://forums.ivanti.com/s/article/Security-Advisory-Ivanti-Endpoint-Manager-Mobile-EPMM?language=en_US
2. https://blog.eclecticiq.com/china-nexus-threat-actor-actively-exploiting-ivanti-endpoint-manager-mobile-cve-2025-4428-vulnerability
3. https://www.wiz.io/blog/ivanti-epmm-rce-vulnerability-chain-cve-2025-4427-cve-2025-4428
4. https://www.darktrace.com/blog/the-unknown-unknowns-post-exploitation-activities-of-ivanti-cs-ps-appliances
5. https://www.virustotal.com/gui/file/ac91c2c777c9e8638ec1628a199e396907fbb7dcf9c430ca712ec64a6f1fcbc9/community
6. https://www.virustotal.com/gui/file/f3e0147d359f217e2aa0a3060d166f12e68314da84a4ecb5cb205bd711c71998/community
7. https://www.virustotal.com/gui/ip-address/15.188.246.198
8. https://blog.eclecticiq.com/china-nexus-nation-state-actors-exploit-sap-netweaver-cve-2025-31324-to-target-critical-infrastructures
9. https://www.darktrace.com/blog/tracking-cve-2025-31324-darktraces-detection-of-sap-netweaver-exploitation-before-and-after-disclosure
10. https://www.synacktiv.com/en/publications/krustyloader-rust-malware-linked-to-ivanti-connectsecure-compromises



August 7, 2025

How CDR & Automated Forensics Transform Cloud Incident Response


Introduction: Cloud investigations

In cloud security, speed, automation, and clarity are everything. For many SOC teams, however, responding to incidents in the cloud is very difficult, especially when attackers move fast, infrastructure is ephemeral, and forensic skills are scarce.

In this blog we will walk through an example that shows how Darktrace Cloud Detection and Response (CDR) and automated cloud forensics together solve these challenges, combining cloud detection and deep forensic investigation in a way that is fast, scalable, and deeply insightful.

The Problem: Cloud incidents are hard to investigate

Security teams often face three major hurdles when investigating cloud detections:

Lack of forensic expertise: Most SOCs and security teams aren’t natively staffed with forensics specialists.

Ephemeral infrastructure: Cloud assets spin up and down quickly, leaving little time to capture evidence.

Lack of existing automation: Gathering forensic-level data often requires manual effort and leaves teams scrambling during incidents to access logs, snapshots, and system states before they disappear. This process is slow and often blocked by permissions, tooling gaps, or lack of visibility.
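For comparison, the manual capture step that teams typically script themselves might look like the minimal boto3 sketch below (the instance ID is hypothetical, and the ec2:DescribeInstances and ec2:CreateSnapshot IAM permissions are assumed); this is exactly the work that gets skipped or rushed when infrastructure is ephemeral:

```python
import boto3

ec2 = boto3.client("ec2")  # region/credentials taken from the environment
instance_id = "i-0123456789abcdef0"  # hypothetical suspect instance

# Snapshot every EBS volume attached to the instance before it is terminated
reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        for mapping in instance.get("BlockDeviceMappings", []):
            volume_id = mapping["Ebs"]["VolumeId"]
            snapshot = ec2.create_snapshot(
                VolumeId=volume_id,
                Description=f"forensic capture of {instance_id}",
            )
            print(f"started snapshot {snapshot['SnapshotId']} for {volume_id}")
```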

How Darktrace augments cloud investigations

1. Darktrace’s CDR finds anomalous activity in the cloud

An alert is generated for a large outbound data transfer from an externally facing EC2 instance to a rare external endpoint. It’s anomalous, unexpected, and potentially serious.

2. AI-led investigation stitches together the incident for a SOC analyst to look into

When a security incident unfolds, Darktrace’s Cyber AI Analyst™ is the first to surface it, automatically correlating behaviors, flagging anomalies, and presenting a cohesive incident summary. It’s fast, detailed, and invaluable.

Once the incident is created, more questions are raised.

  • How were the impacted resources compromised?
  • How did the attack unfold over time – what tools and malware were used?
  • What data was accessed and exfiltrated?

What you’ll see as a SOC analyst: the investigation begins in Darktrace’s Threat Visualizer, where a Cyber AI Analyst incident has been generated automatically, highlighting a large anomalous data transfer to a suspicious external IP. This isn’t just another alert; it’s a high-fidelity signal backed by Darktrace’s Self-Learning AI.

Figure 1: Cyber AI Analyst incident created for anomalous outbound data transfer

The analyst can then immediately pivot to Darktrace / CLOUD’s architecture view (see below), gaining context on the asset’s environment, ingress/egress points, connected systems, potential attack paths and whether there are any current misconfigurations detected on the asset.

Figure 2: Darktrace / CLOUD architecture view providing critical cloud context

3. Automated forensic capture — No expertise required

Then comes the game-changer: Darktrace’s recent acquisition of Cado enhances its cloud forensics capabilities. From the moment the first alert triggered, Darktrace automatically processed and analyzed a full volume capture of the EC2 instance. Everything, past and present, is preserved, with no need for manual snapshots, CLI commands, or specialist intervention.

Darktrace then provides a clear timeline that highlights and preserves the evidence. In our example we identify:

  • A brute-force attempt on a file management app, followed by a successful login
  • A reverse shell used to gain unauthorized remote access to the EC2
  • A reverse TCP connection to the same suspicious IP flagged by Darktrace
  • Attacker commands showing how the data was split and prepared for exfiltration
  • A file (a.tar) created from two sensitive archives: product_plans.zip and research_data.zip

All of this is surfaced through the timeline view, ranked by significance using machine learning. The analyst can pivot through time, correlate events, and build a complete picture of the attack — without needing cloud forensics expertise.

Darktrace even gives the ability to:

  • Download and inspect gathered files in full detail, enabling teams to verify exactly what data was accessed or exfiltrated.
  • Interact with the file system as if it were live, allowing investigators to explore directories, uncover hidden artifacts, and understand attacker movement with precision.
Figure 3: Cado critical forensic investigation automated insights
Figure 4: Cado forensic file analysis of reverse shell and download option
Figure 5: a.tar created from two sensitive archives: product_plans.zip and research_data.zip
Figure 6: Traverse the full file system of the asset

Why this matters

This workflow solves the hardest parts of cloud investigation:

  1. Capturing evidence before it disappears
  2. Understanding attacker behavior in detail - automatically
  3. Linking detections to impact with full incident visibility

This kind of insight is invaluable for organizations, especially those in regulated industries, where knowing exactly what data was affected is critical for compliance and reporting. It is also a powerful tool for detecting insider threats, not just external attackers.

Together, Darktrace / CLOUD and Cado act as a force multiplier, helping with:

  • Reducing investigation time from hours to minutes
  • Preserving ephemeral evidence automatically
  • Empowering analysts with forensic-level visibility

Cloud threats aren’t slowing down. Your response shouldn’t either. Darktrace / CLOUD + Cado gives your SOC the tools to detect, contain, and investigate cloud incidents — automatically, accurately, and at scale.


About the author
Adam Stevens
Director of Product, Cloud Security