October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The culmination of development ongoing since 2018, ChatGPT represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

AI “hallucination” is a term arising from the predictive nature of generative AI and large language models (LLMs): a hallucination occurs when an AI model gives an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular, intended behavior of an AI model, which should provide a response grounded in the data it was trained upon.

Why are AI hallucinations a problem?

Although the term suggests a rare phenomenon, hallucinations are common, as the AI models used in LLMs are merely predictive and focus on the most probable text or outcome rather than on factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. Asking an LLM for help with a specific piece of code or a specific problem will often lead to a third-party package recommendation. However, packages recommended by generative AI tools could themselves be AI hallucinations: the packages may never have been published, or, more accurately, may not have been published prior to the cut-off date of the model’s training data. If such hallucinations consistently suggest the same non-existent package, and a developer copies the code snippet wholesale, the resulting code is left vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses provided by ChatGPT pertaining to Node.js at least one un-published package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.
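Defenders can invert this reconnaissance: before trusting an AI-suggested package name, check it against names already known and trusted. The sketch below is illustrative only; the small `KNOWN_PACKAGES` set stands in for a real lookup against a registry such as PyPI or an internal allowlist, and the 0.8 similarity cutoff is an arbitrary assumption to tune. A close-but-inexact match to a popular package is a classic typosquatting red flag:

```python
import difflib

# Stand-in for a real registry or allowlist lookup; in practice this
# set would come from your organization's approved-package inventory.
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "flask", "boto3"}

def vet_package(name: str) -> str:
    """Classify an AI-suggested package name before installing it."""
    if name in KNOWN_PACKAGES:
        return "known"
    # A near-miss against a popular package suggests typosquatting, or a
    # hallucinated name an attacker may have registered with a payload.
    close = difflib.get_close_matches(name, KNOWN_PACKAGES, n=1, cutoff=0.8)
    if close:
        return f"suspicious: resembles '{close[0]}'"
    return "unknown: verify manually before installing"

print(vet_package("requests"))   # known
print(vet_package("requessts"))  # suspicious: resembles 'requests'
print(vet_package("darkutilx"))  # unknown: verify manually before installing
```

In a real pipeline the same check could run as a pre-install hook, with the allowlist replaced by a query to the package registry itself.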

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models will trigger based on connections to endpoints associated with generative AI tools, as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

Whatever goal the malicious package has been designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a large variety of different malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact check responses provided to them by generative AI tools. All packages recommended by generative AI should also be checked by reviewing non-generated data from either external third-party or internal sources. It is also good practice to adopt caution when downloading packages with very few downloads as it could indicate the package is untrustworthy or malicious.
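Some of these vetting steps can be partly automated at install time. The sketch below is a hypothetical heuristic, not a Darktrace feature: the metadata fields (`first_release`, `release_count`, `weekly_downloads`) and the thresholds are illustrative stand-ins for whatever data a given package registry actually exposes:

```python
from datetime import datetime, timezone

# Illustrative thresholds; tune to your organization's risk appetite.
MIN_AGE_DAYS = 90
MIN_RELEASES = 3
MIN_WEEKLY_DOWNLOADS = 100

def risk_flags(metadata: dict, now: datetime) -> list:
    """Return heuristic warnings for a package's registry metadata.

    `metadata` uses hypothetical field names standing in for whatever
    your registry exposes (e.g. via PyPI's JSON API or npm's registry).
    """
    flags = []
    age_days = (now - metadata["first_release"]).days
    if age_days < MIN_AGE_DAYS:
        flags.append(f"first published only {age_days} days ago")
    if metadata["release_count"] < MIN_RELEASES:
        flags.append("very few releases")
    if metadata["weekly_downloads"] < MIN_WEEKLY_DOWNLOADS:
        flags.append("low download count")
    return flags

now = datetime(2023, 10, 30, tzinfo=timezone.utc)
suspect = {
    "first_release": datetime(2023, 10, 1, tzinfo=timezone.utc),
    "release_count": 1,
    "weekly_downloads": 12,
}
# → ['first published only 29 days ago', 'very few releases', 'low download count']
print(risk_flags(suspect, now))
```

A brand-new, rarely downloaded package recommended by a generative AI tool would trip all three flags, prompting the manual review suggested above rather than a blind install.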

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/



December 3, 2025

Darktrace Named as a Leader in 2025 Gartner® Magic Quadrant™ for Email Security Platforms


Darktrace is proud to be named as a Leader in the Gartner® Magic Quadrant™ for Email Security Platforms (ESP). We believe this recognition reflects what our customers already know: our product is exceptional – and so is the way we deliver it.

In July 2025, Darktrace was named a Customers’ Choice in the Gartner® Peer Insights™ Voice of the Customer for Email Security, a distinction given to vendors who have scores that meet or exceed the market average for both axes (User Interest and Adoption, and Overall Experience). To us, both achievements are testament to the customer-first approach that has fueled our rapid growth. We feel this new distinction from Gartner validates the innovation, efficacy, and customer-centric delivery that set Darktrace apart.

A Gartner Magic Quadrant is a culmination of research in a specific market, giving you a wide-angle view of the relative positions of the market’s competitors. CIOs and CISOs can use this research to make informed decisions about which email security platform can best accomplish their goals. We encourage our customers to read the full report to get the complete picture.

This acknowledgement follows the recent recognition of Darktrace / NETWORK, also designated a Leader in the Gartner Magic Quadrant for Network Detection & Response and named the only Customers’ Choice in its category.

Why do we believe Darktrace is leading in the email security market?

Our relentless innovation which drives proven results  

At Darktrace we continue to push the frontier of email security, with industry-first AI-native detection and response capabilities that go beyond traditional SEG approaches. How do we do it?

  • With a proven approach that gets results. Darktrace’s unique business-centric anomaly detection catches advanced phishing, supply chain compromises, and BEC attacks – detecting them on average 13 days earlier than attack-centric solutions. That’s why 75% of our customers have removed their SEG and now rely on their native email security provider combined with Darktrace.
  • By offering comprehensive protection beyond the inbox. Darktrace / EMAIL goes further than traditional inbound filtering, delivering account and messaging protection, DLP, and DMARC capabilities, ensuring best-in-class security across inbound, outbound, and domain protection scenarios.  
  • Continuous innovation. We are ranked second highest in the Gartner Critical Capabilities research for core email security functions, which we attribute to our product strategy and rapid pace of innovation. We’ve released major capabilities twice a year for nearly five years, including advanced AI models and expanded coverage for collaboration platforms.

We deliver exceptional customer experiences worldwide

Darktrace’s leadership isn’t just about excelling in technology, it’s about delivering an outstanding experience that customers value. Let’s dig into what makes our customers tick.

  • Proven loyalty from our base. Recognition from Gartner Peer Insights as a Customers’ Choice, combined with a 4.8-star rating (based on 340 reviews as of November 2025), demonstrates for us the trust of thousands of organizations worldwide, not just the analysts.  
  • Customer-first support. Darktrace goes beyond ticket-only models with dedicated account teams and award-winning service, backed by significant headcount growth in technical support and analytics roles over the past year.
  • Local expertise. With offices spanning continents, Darktrace is able to provide regional language support and tailored engagement from teams on the ground, ensuring personalized service and a human-first experience.

Darktrace enhances security stacks with a partner-first architecture

There are plenty of tools out there that encourage a siloed approach. Darktrace / EMAIL plays well with others, enhancing your native security provider and allowing you to slim down your stack. It’s designed to set you up for future growth, with:

  • A best-in-breed platform approach. Natively built on Self-Learning AI, Darktrace / EMAIL delivers deep integration with our / NETWORK, / IDENTITY, and / CLOUD products as part of a unified platform that enables and enhances comprehensive enterprise-wide security.
  • Optimized workflows. Darktrace integrates tightly with an extended ecosystem of security tools – including a strategic partnership with Microsoft enabling unified threat response and quarantine capabilities – bringing constant innovation to all of your SOC workflows.  
  • A channel-first strategy. Darktrace is making significant investments in partner-driven architectures, enabling integrated ecosystems that deliver maximum value and future-ready security for our customers.

Analyst recognized. Customer approved.  

Darktrace / EMAIL is not just another inbound email security tool; it’s an advanced email security platform trusted by thousands of users to protect them against advanced phishing, messaging, and account-level attacks.  

As a Leader, we believe we owe our positioning to our customers and partners for supporting our growth. In the upcoming years we will continue to innovate to serve the organizations who depend on Darktrace for threat protection.  

To learn more about Darktrace’s position as a Leader, view a complimentary copy of the Magic Quadrant report, register for the Darktrace Innovation Webinar on 9 December, 2025, or simply request a demo.

Gartner, Gartner® Magic Quadrant™ for Email Security Platforms, Max Taggett, Nikul Patel, 3 December 2025

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates and is used herein with permission. All rights reserved.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Darktrace.


About the author
Carlos Gray
Senior Product Marketing Manager, Email


December 2, 2025

Protecting the Experience: How a global hospitality brand stays resilient with Darktrace


For the Global Chief Technology Officer (CTO) of a leading experiential leisure provider, security is mission critical to protecting a business built on reputation, digital innovation, and guest experience. The company operates large-scale immersive venues across the UK and US, blending activity-driven hospitality with premium dining and vibrant spaces designed for hundreds of guests. With a lean, centrally managed IT team responsible for securing locations worldwide, the challenge is balancing robust cybersecurity with operational efficiency and customer experience.

Brand buzz attracts attention – and attacks

Mid-sized, fast-growing hospitality organizations face a unique risk profile. When systems go down in a venue, the impact is immediate: hundreds of disrupted guest experiences, lost revenue during peak hours, and potential long-term reputation damage. Each time the organization opened a new venue, the surge of marketing buzz attracted attention in local markets and waves of sophisticated cyberattacks, including:

  • Phishing campaigns leveraging brand momentum to lure employees into clicking on malicious links.
  • AI-enhanced impersonation using advanced techniques to create AI-generated video calls and deeply researched, contextualized emails.
  • Fake domains targeting leadership with AI-generated messages that contained insider context gleaned from public information.

“Our endpoint security and antivirus tools were powerless against these sophisticated AI-powered campaigns. We didn’t want to manage incidents anymore. We wanted to prevent them from ever happening.”  - Global CTO

Proactive, preventative security with Darktrace AI

The company’s cybersecurity vision was clear: “Proactive, preventative – that was our mandate,” said the CTO. With a lean and busy IT group, the business evaluated several security solutions using deep-dive workshops. Darktrace proved the best fit for supporting the organization’s proactive mindset, offering:

  • Autonomy without added headcount: Darktrace provided powerful AI-driven detection and autonomous response functions with minimal manual oversight required.
  • Modular adoption: The company could start with core email and network protection and expand into cloud and endpoint coverage, aligning spend with growth.
  • Partnership and responsiveness: “We wanted people we trust, respect, and know will show up when we need them. Darktrace did just that,” said the CTO.
  • Affordability at scale: Darktrace offered reasonable upfront costs plus predictable, sustainable economics as the company and IT infrastructure expanded.  

“The combination of AI capabilities, a scalable model, and a strong engagement team tipped the balance in Darktrace’s favor, and we have not been disappointed,” said the CTO.

Phased deployment builds trust

To minimize disruption to critical hospitality systems like global Point of Sales (POS) terminals and Audio-Visual (AV) infrastructure, deployment was phased:

  1. Observation and human-led response: Initially, Darktrace was deployed in detection-only mode. Alerts were manually reviewed.
  2. Incremental autonomous response: Darktrace Autonomous Response was enabled on select models, taking action on low-risk scenarios. Higher-risk subnets and devices remained under human control.
  3. Full autonomous coverage: With tuning and reinforcement, autonomous response was expanded across domains, trusted to take decisive action in real time. Analysts retained the ability to review and contextualize incidents.

“Darktrace managed the rollout through detailed, professional, and responsive project management – ensuring a smooth, successful adoption and creating a standardized cybersecurity playbook for future venue launches,” said the CTO.  

AI delivers the outcomes that matter  

Measurable efficiency replaces endless alerts

Darktrace autonomous response significantly decreased false alerts and noise. “If it’s quiet, we’re confident there isn’t a problem,” said the CTO. Within six months, Darktrace conducted 3,599 total investigations, detected and contained 320 incidents indicative of an attack, resolved 91% of those events autonomously, and escalated only 9% to human analysts. The efficiency gains were enormous, saving analysts 740 hours on investigations within a single month.  

Precision AI turns inbox chaos into calm

Darktrace Self-Learning AI modeled sender/recipient norms, content/linguistic baselines, and communication patterns unique to the organization’s launch cadence, resulting in:

  • Automated holds and neutralizations of anomalous executive-style messages
  • Rapid detection of novel templates and tone shifts that deviated from the organization’s lived email graph, even when indicators were not yet on any feed
  • Downstream reduction in help-desk escalations tied to suspicious email

Full visibility fuels real-time response

Darktrace gives IT direct visibility without extra licensing, and it surfaces ground truth across every venue, including:

  • Device geolocation and placement drift: Darktrace exposed devices and users operating outside approved zones, prompting new segmentation and access-control policies.
  • Guest Wi-Fi realities: Darktrace AI uncovered high-risk activity on guest networks, like crypto-mining and dark-web traffic, driving stricter VLAN separation and access hygiene.
  • Lateral-movement containment: Autonomous response fenced suspicious activity in real time, buying time for human investigation while keeping POS and AV systems unaffected.

Smarter endpoints for a smarter network

Endpoints once relied on static agents effective only against known signatures. Darktrace’s behavioral models now detect subtle anomalies at the endpoint process level that EDRs often miss, such as misuse of legitimate applications (commonly used in living-off-the-land attacks), unapproved application usage and policy violations. This increases the accuracy and fidelity of network-based investigations by adding endpoint process context alongside existing EDR alerts.

Autonomous response for continuous compliance

Across PCI, GDPR, and cross-border privacy obligations, Darktrace’s native evidencing is helping the team demonstrate control rather than merely assert it:

  • Asset and flow awareness: Knowing “what is where” and “who talks to what” underpins PCI scoping and data-flow diagrams.
  • Layered safeguards: Showing autonomous prevention, network segmentation, and rapid containment supports risk registers and control attestations.
  • Audit-ready artifacts: Investigations and autonomous actions produce artifacts that “tick the box” without additional tooling.  

Defining the next era of resilience with AI

With rapid global expansion underway, the company is using its cybersecurity playbook to streamline and secure future venue launches. In the near term, IT is focused on strengthening prevention, using Darktrace insights to guide new policy updates and infrastructure changes like imposing stricter guest-network posture and refining venue device baselines.

For tech leaders charting their path to proactive cyber defense, the CTO stresses success won’t come from sidestepping AI, but from turning it into a core capability.

“AI isn’t optional – it’s operational. The real risk to your business is trying to out-scale automated adversaries with human speed alone. When applied to the right use case, AI becomes a catalyst for efficiency, resilience, and business growth.” - Global CTO
About the author
The Darktrace Community