October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

AI “hallucination” is a term which refers to instances in which the predictive elements of a generative AI or large language model (LLM) produce an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular, intended behavior of an AI model, which should provide a response grounded in the data it was trained on.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations are far from rare: the models underlying LLMs are merely predictive and favor the most probable text or outcome, rather than factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import for various uses. When a developer asks an LLM for help with a specific piece of code or problem, the answer will often point to a third-party package. Packages recommended by generative AI tools can themselves be hallucinations: the package may never have been published, or, more precisely, may not have been published prior to the date at which the model’s training data halts. If a non-existent package is suggested frequently, and developers copy the code snippet wholesale, their projects are left vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses provided by ChatGPT pertaining to Node.js at least one un-published package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.
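A simple defensive habit follows from this: before installing a package an AI tool has recommended, confirm that it actually exists on the index. The sketch below queries PyPI’s public JSON metadata endpoint; it is an illustration of the check, not a complete vetting process.

```python
import urllib.error
import urllib.request

# PyPI's public metadata endpoint; it returns 404 for unpublished names.
PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def interpret_index_status(status: int) -> bool:
    """True if the status code means the package name is published.

    A 404 from the index means the name is unclaimed -- exactly the gap
    a hallucinated recommendation (and a would-be attacker) could occupy.
    """
    if status == 200:
        return True
    if status == 404:
        return False
    raise ValueError(f"status {status} is not evidence either way")

def package_exists_on_pypi(name: str) -> bool:
    """Check whether `name` is actually published before installing it."""
    try:
        with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=10) as resp:
            return interpret_index_status(resp.status)
    except urllib.error.HTTPError as err:
        return interpret_index_status(err.code)
```

Note that existence alone is not proof of safety: once an attacker has published a malicious package under a hallucinated name, this check will pass, so it should be combined with the vetting steps discussed later.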

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models will trigger based on connections to endpoints associated with generative AI tools, as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The goal the malicious package was designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a wide variety of malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.
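Darktrace’s models are proprietary, but the underlying idea of a pattern of life can be illustrated with a toy example: score a connection by how unusual the endpoint is for that particular device. The sketch below is purely illustrative and is not how Darktrace DETECT works internally.

```python
from collections import defaultdict

class PatternOfLife:
    """Toy rarity model: score how unusual an endpoint is for a device.

    Illustration only -- a real anomaly-detection system would weigh many
    features (timing, volume, peer devices), not just endpoint novelty.
    """

    def __init__(self):
        # device -> set of endpoints that device has contacted before
        self.history = defaultdict(set)

    def observe(self, device: str, endpoint: str) -> float:
        """Return 1.0 for a never-before-seen endpoint, else 0.0, then learn it."""
        score = 0.0 if endpoint in self.history[device] else 1.0
        self.history[device].add(endpoint)
        return score
```

A device that has never fetched code from an open-source repository suddenly doing so would score as anomalous on its first observation, while a developer workstation that does so daily would not.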

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact check responses provided to them by generative AI tools. All packages recommended by generative AI should also be checked by reviewing non-generated data from either external third-party or internal sources. It is also good practice to adopt caution when downloading packages with very few downloads as it could indicate the package is untrustworthy or malicious.
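Part of the vetting advice above can be automated. The sketch below flags risk signals from a package’s release history; the thresholds (MIN_AGE_DAYS, MIN_RELEASES) are illustrative assumptions for each organization to tune, not established cut-offs.

```python
from datetime import datetime, timezone

# Illustrative thresholds -- tune to your organization's risk appetite.
MIN_AGE_DAYS = 90
MIN_RELEASES = 3

def vet_package_metadata(releases: dict[str, str]) -> list[str]:
    """Flag risk signals from a package's release history.

    `releases` maps version -> upload time (ISO 8601), the shape you
    could assemble from a package index's metadata API. Returns a list
    of human-readable warnings; an empty list means no signal fired.
    """
    warnings = []
    if len(releases) < MIN_RELEASES:
        warnings.append(f"only {len(releases)} release(s) published")
    if releases:
        first_upload = min(
            datetime.fromisoformat(ts).replace(tzinfo=timezone.utc)
            for ts in releases.values()
        )
        age_days = (datetime.now(timezone.utc) - first_upload).days
        if age_days < MIN_AGE_DAYS:
            warnings.append(f"first published only {age_days} days ago")
    return warnings
```

A package that appeared days ago with a single release is exactly the profile a hallucination-squatting attacker would produce, so either warning is reason to pause and review the package manually.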

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/




October 24, 2025

Patch Smarter, Not Harder: Now Empowering Security Teams with Business-Aligned Threat Context Agents


Most risk management programs remain anchored in enumeration: scanning every asset, cataloging every CVE, and drowning in lists that rarely translate into action. Despite expensive scanners, annual pen tests, and countless spreadsheets, prioritization still falters at two critical points.

Context gaps at the device level: It’s hard to know which vulnerabilities actually matter to your business given a device’s existing privileges, the software it runs, and the controls that already reduce risk.

Business translation: Even when the technical priority is clear, justifying effort and spend in financial terms, especially across many affected devices, can delay action, particularly if it means halting other areas of the business that directly generate revenue.

The result is familiar: alert fatigue, “too many highs,” and remediation that trails behind the threat landscape. Darktrace / Proactive Exposure Management addresses this by pairing precise, endpoint‑level context with clear, financial insight so teams can prioritize confidently and mobilize faster.

A powerful combination: No-Telemetry Endpoint Agent + Cost-Benefit Analysis

Darktrace / Proactive Exposure Management now combines technical precision with business clarity in a single workflow. With this release, it unites device-level technical context and financial insight to drive proactive risk reduction: a single solution that helps security teams stay ahead of threats while reducing noise, delays, and complexity.

  • No-Telemetry Endpoint: Collects installed software data and maps it to known CVEs—without network traffic—providing device-level vulnerability context and operational relevance.
  • Cost-Benefit Analysis for Patching: Calculates ROI by comparing patching effort with potential exploit impact, factoring in headcount time, device count, patch difficulty, and automation availability.

Introducing the No-Telemetry Endpoint Agent

Darktrace’s new endpoint agent inventories installed software on devices and maps it to known CVEs, without collecting network data, so you can prioritize using real device context and available security controls.

By grounding vulnerability findings in the reality of each endpoint, including its software footprint and existing controls, teams can cut through generic severity scores and focus on what matters most. The agent is ideal for remote devices, BYOD-adjacent fleets, or environments standardizing on Darktrace, and is available without additional licensing cost.
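The inventory-to-CVE mapping step can be illustrated in miniature. The sketch below matches an installed-software inventory against a tiny, hand-written feed; the product/version pairs are illustrative, not an authoritative vulnerability database, and a real agent would consult a full feed such as the NVD.

```python
# Illustrative, tiny CVE feed: (product, fixed-in version) -> CVE id.
# The pairs below are for demonstration; a real agent would use a
# maintained vulnerability database rather than a hard-coded dict.
CVE_FEED = {
    ("openssl", "3.0.7"): "CVE-2022-3602",
    ("examplelib", "2.5.0"): "CVE-0000-0000",  # hypothetical entry
}

def version_tuple(v: str) -> tuple[int, ...]:
    """Parse a dotted version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def map_inventory_to_cves(inventory: dict[str, str]) -> dict[str, str]:
    """Map {product: installed version} to {CVE id: affected install}.

    A product matches when its installed version predates the fixed-in
    version recorded in the feed.
    """
    findings = {}
    for (product, fixed_in), cve in CVE_FEED.items():
        installed = inventory.get(product)
        if installed and version_tuple(installed) < version_tuple(fixed_in):
            findings[cve] = f"{product} {installed}"
    return findings
```

The value of doing this per device is that each finding arrives already tied to a specific asset and its installed version, rather than a generic severity score.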

Figure 1: Darktrace / Proactive Exposure Management user interface

Built-In Cost-Benefit Analysis for Patching

Security teams often know what needs fixing, but stakeholders need to understand why now. Darktrace’s new cost-benefit calculator compares the total cost to patch against the potential cost of an exploit, producing an ROI figure that expresses the patch action in clear financial terms.

Inputs like engineer time, number of affected devices, patch difficulty, and automation availability are factored in automatically. The result is a business-aligned justification for every patching decision—helping teams secure buy-in, accelerate approvals, and move work forward with one-click ticketing, CSV export, or risk acceptance.
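A cost-benefit calculation of this shape can be sketched in a few lines. The formula below (avoided expected loss versus patch cost) and all of its inputs are assumptions for illustration; Darktrace’s actual calculator is proprietary, and factors such as patch difficulty and automation availability would in practice scale the effort estimate.

```python
def patch_roi(
    hourly_rate: float,        # loaded engineer cost per hour
    hours_per_device: float,   # effort per device, after difficulty/automation scaling
    device_count: int,
    exploit_probability: float,  # estimated likelihood of exploitation if unpatched
    exploit_cost: float,         # estimated business impact of a successful exploit
) -> float:
    """Return ROI of patching: (avoided expected loss - patch cost) / patch cost.

    Illustrative formula only -- every input here is an assumption the
    caller must estimate for their own environment.
    """
    patch_cost = hourly_rate * hours_per_device * device_count
    expected_loss = exploit_probability * exploit_cost
    return (expected_loss - patch_cost) / patch_cost
```

For example, 200 devices at half an hour each and a $100/hour rate cost $10,000 to patch; against a 30% chance of a $250,000 incident, the avoided expected loss is $75,000, giving an ROI of 6.5.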

Figure 2: Darktrace / Proactive Exposure Management Cost Benefit Analysis

A Smarter, Faster Approach to Exposure Management

Together, the no-telemetry endpoint and Cost-Benefit Analysis advance the CTEM motion from theory to practice. You gain higher-fidelity discovery and validation signals at the device level, paired with business-ready justification that accelerates mobilization. The result is fewer distractions, clearer priorities, and faster, measurable risk reduction, achieved not by chasing every alert but by focusing on what moves the needle now.

  • Smarter Prioritization: Device‑level context trims noise and spotlights the exposures that matter for your business.
  • Faster Decisions: Built‑in ROI turns technical urgency into executive clarity—speeding approvals and action.
  • Practical Execution: Privacy‑conscious endpoint collection and ticketing/export options fit neatly into existing workflows.
  • Better Outcomes: Close the loop faster—discover, prioritize, validate, and mobilize—on the same operating surface.

Committed to innovation

These updates are part of the broader Darktrace release, which also included:

1. Major innovations in cloud security with the launch of the industry’s first fully automated cloud forensics solution, reinforcing Darktrace’s leadership in AI-native security.

2. Darktrace Network Endpoint eXtended Telemetry (NEXT) is revolutionizing NDR with the industry’s first mixed-telemetry agent using Self-Learning AI.

3. Improvements to our OT product: purpose-built for industrial infrastructure, Darktrace / OT now brings a dedicated OT dashboard, segmentation-aware risk modeling, and expanded visibility into edge assets and automation protocols.

Join our Live Launch Event

When? 

December 9, 2025

What will be covered?

Join our live broadcast to experience how Darktrace is eliminating blind spots for detection and response across your complete enterprise with new innovations in Agentic AI across our ActiveAI Security platform. Industry leaders from IDC will join Darktrace customers to discuss challenges in cross-domain security, with a live walkthrough reshaping the future of Network Detection & Response, Endpoint Detection & Response, Email Security, and SecOps in novel threat detection and autonomous investigations.



October 24, 2025

Darktrace Announces Extended Visibility Between Confirmed Assets and Leaked Credentials from the Deep and Dark Web


Why exposure management needs to evolve beyond scans and checklists

The modern attack surface changes faster than most security programs can keep up. New assets appear, environments change, and adversaries are increasingly aided by automation and AI. Traditional approaches like periodic scans, static inventories, or annual pen tests are no longer enough. Without a formal exposure program, many businesses are flying blind, unaware of where the next threat may emerge.

This is where Continuous Threat Exposure Management (CTEM) becomes essential. Introduced by Gartner, CTEM helps organizations continuously assess, validate, and improve their exposure to real-world threats. It reframes the problem: scope your true attack surface, prioritize based on business impact and exploitability, and validate what attackers can actually do today, not once a year.

With two powerful new capabilities, Darktrace / Attack Surface Management helps organizations evolve their CTEM programs to meet the demands of today’s threat landscape. These updates make CTEM a reality, not just a strategy.

Too much data, not enough direction

Modern Attack Surface Management tools excel at discovering assets such as cloud workloads, exposed APIs, and forgotten domains. But they often fall short when it comes to prioritization. They rely on static severity scores or generic CVSS ratings, which do not reflect real-world risk or business impact.

This leaves security teams with:

  • Alert fatigue from hundreds of “critical” findings
  • Patch paralysis due to unclear prioritization
  • Blind spots around attacker intent and external targeting

CISOs need more than visibility. They need confidence in what to fix first and context to justify those decisions to stakeholders.

Evolving Attack Surface Management

Attack Surface Management (ASM) must evolve from static lists and generic severity scores to actionable intelligence that helps teams make the right decision now.

Building on the recent addition of Exploit Prediction Assessment, which debuted in late June 2025, today we’re introducing two new capabilities; together with it, they push ASM into that next era:

  • Exploit Prediction Assessment: Continuously validates whether top-priority exposures are actually exploitable in your environment without waiting for patch cycles or formal pen tests.  
  • Deep & Dark Web Monitoring: Extends visibility across millions of sources in the deep and dark web to detect leaked credentials linked to your confirmed domains.
  • Confidence Score: Our newly developed AI classification platform compares newly discovered assets to assets known to belong to your organization; the more a newly discovered asset resembles your confirmed assets, the higher its score.

Together, these features compress the window from discovery to decision, so your team can act with precision, not panic. The result is a single solution that helps teams stay ahead of attackers without introducing new complexities.

Exploit Prediction Assessment

Traditional penetration tests are invaluable, but they are a snapshot of a single point in time, can be disruptive, and are still expected by compliance frameworks. Continuous validation closes the gap between them: when vulnerabilities are present, teams can act immediately rather than relying solely on CVSS scores or waiting for patch cycles.

Unlike full pen tests, which can be obtrusive and are usually run only a couple of times per year, Exploit Prediction Assessment is surgical, continuous, and focused only on top issues. Instead of waiting for vendor patches or the next pen-test window, it helps confirm whether a top-priority exposure is actually exploitable in your environment right now.

For more information on this visit our blog: Beyond Discovery: Adding Intelligent Vulnerability Validation to Darktrace / Attack Surface Management

Deep and Dark Web Monitoring: Extending the scope

Customers have been asking for this for years, and it is finally here: defense against the dark web. Darktrace / Attack Surface Management’s reach now spans millions of sources across the deep and dark web, including forums, marketplaces, breach repositories, paste sites, and other hard-to-reach communities, to detect leaked credentials linked to your confirmed domains.

Monitoring is continuous, so you’re alerted as soon as evidence of compromise appears. The surface web is only a fraction of the internet, and a sizable share of risk hides beyond it. Estimates suggest the surface web represents roughly 10% of all online content, with the rest gated or unindexed, and the TOR-accessible dark web hosts a high proportion of illicit material (a King’s College London study found that around 57% of surveyed onion sites contained illicit content). This underscores why credential leakage and brand abuse often appear in places traditional monitoring doesn’t reach, and why these spaces are high-value sources of early warning when credentials or brand assets surface. At stake are your company’s reputation, assets such as servers and systems, and top executives and employees.
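The matching step at the heart of such monitoring can be illustrated simply. The sketch below filters breach-dump lines against a hypothetical set of confirmed domains; real monitoring must also handle hashed credentials, deduplication across sources, and alert routing.

```python
# Hypothetical confirmed-domain inventory for illustration.
CONFIRMED_DOMAINS = {"example.com", "corp.example.net"}

def leaked_credentials_for_org(dump: list[str]) -> list[str]:
    """Filter breach-dump lines ("email:password") to confirmed domains.

    Returns the normalized email addresses that belong to the
    organization, in the order they appear in the dump.
    """
    hits = []
    for line in dump:
        email = line.split(":", 1)[0].strip().lower()
        domain = email.rpartition("@")[2]
        if domain in CONFIRMED_DOMAINS:
            hits.append(email)
    return hits
```

Because the match is keyed to confirmed domains, the same mechanism naturally scopes alerts to assets the organization has already claimed, rather than flooding teams with every credential in a dump.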

What changes for your team

Before:

  • Hundreds of findings, unclear what to start with
  • Reactive investigations triggered by incidents

After:

  • A prioritized backlog based on Confidence Score or Exploit Prediction Assessment verification
  • Proactive verification of exposures against real-world risk, without manual effort

Confidence Score: Prioritize based on the use-case you care most about

What is it?

Confidence Score is a metric that expresses the similarity of newly discovered assets to your confirmed asset inventory. Several self-learning algorithms compare asset features to calculate the score.

Why it matters

Traditional Attack Surface Management tools treat every new discovery equally, making it unclear which newly discovered assets matter most and potentially causing you to miss a spoofing domain or shadow IT that could impact your business.

How it helps your team

We’re dividing newly discovered assets into separate insight buckets that each cover a slightly different business case.

  • Low-scoring assets: cover phishing and spoofing domains (such as domain variants) that have just been registered and do not yet have content.
  • Medium-scoring assets: share more similarities with your digital estate, with closer matches to HTML, brand names, and keywords. These can still be phishing, but likely with content.
  • High-scoring assets: look most like the rest of your confirmed digital estate. Either it is phishing that needs the highest attention, or the asset belongs to your attack surface and requires asset-state confirmation so the platform can monitor it for risks.
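As a toy stand-in for this idea, the sketch below scores a newly discovered hostname by its maximum string similarity to a hypothetical confirmed inventory, then maps the score to the three buckets. The thresholds and the name-only comparison are illustrative assumptions; the real platform compares many asset features with self-learning algorithms.

```python
from difflib import SequenceMatcher

# Hypothetical confirmed inventory for illustration.
CONFIRMED_ASSETS = ["portal.example.com", "mail.example.com", "example.com"]

def confidence_score(candidate: str) -> float:
    """Toy stand-in for the Confidence Score: the maximum string
    similarity (0..1) between a newly discovered hostname and the
    confirmed inventory. Name similarity only -- an illustration, not
    the platform's actual feature comparison."""
    return max(
        SequenceMatcher(None, candidate, known).ratio()
        for known in CONFIRMED_ASSETS
    )

def bucket(score: float) -> str:
    """Map a score to the low/medium/high insight buckets.
    The 0.5 and 0.8 cut-offs are illustrative assumptions."""
    if score < 0.5:
        return "low"
    return "medium" if score < 0.8 else "high"
```

A lookalike such as a freshly registered domain variant would land in the medium or high bucket and surface for review, while an unrelated hostname would score low.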

Smarter Exposure Management for CTEM Programs

Recent updates to Darktrace / Attack Surface Management directly advance the core phases of Continuous Threat Exposure Management (CTEM): scope, discover, prioritize, validate, and mobilize. The new Exploit Prediction Assessment helps teams validate and prioritize vulnerabilities based on real-world exploitability, while Deep & Dark Web Monitoring extends discovery into hard-to-reach areas where stolen data and credentials often surface. Together, these capabilities reduce noise, accelerate remediation, and help organizations maintain continuous visibility over their expanding attack surface.

Building on these innovations, Darktrace / Attack Surface Management empowers security teams to focus on what truly matters. By validating exploitability, it cuts through the noise of endless vulnerability lists—helping defenders concentrate on exposures that represent genuine business risk. Continuous monitoring for leaked credentials across the deep and dark web further extends visibility beyond traditional asset discovery, closing critical blind spots where attackers often operate. Crucially, these capabilities complement, not replace, existing security controls such as annual penetration tests, providing continuous, low-friction validation between formal assessments. The result is a more adaptive, resilient security posture that keeps pace with an ever-evolving threat landscape.

If you’re building or maturing a CTEM program and want fewer open exposures, faster remediation, and better outcomes, Darktrace / Attack Surface Management’s new Exploit Prediction Assessment and Deep & Dark Web Monitoring are ready to help.

  • Want a more in-depth look at how Exploit Prediction Assessment functions? Read more here

