October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks. Read how Darktrace products detect and prevent these threats!

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018; it represented the latest innovation in the generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments had employees using generative AI tools in the workplace [2].

However, new tools bring new opportunities for threat actors to exploit and use maliciously, expanding their arsenal.

Much consideration has been given to mitigating the increased linguistic complexity of social engineering and phishing attacks that results from generative AI tool use. Darktrace observed a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less consideration, however, has been given to the impact of errors intrinsic to generative AI tools. One of these errors is the AI hallucination.

What is an AI hallucination?

AI “hallucination” is a term referring to instances in which the predictive elements of a generative AI or large language model (LLM) produce an unexpected or factually incorrect response that does not align with the model’s training data [4]. This differs from the regular and intended behavior of an AI model, which should provide a response grounded in the data it was trained upon.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations are far from uncommon: the AI models underlying LLMs are merely predictive and favor the most probable text or outcome, rather than factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. When developers ask an LLM for recommendations on a specific piece of code or how to solve a specific problem, the answer will often point to a third-party package. Packages recommended by generative AI tools can, however, be AI hallucinations: the package may never have been published, or, more accurately, may not have been published prior to the cut-off date of the model’s training data. If a model repeatedly suggests the same non-existent package, and developers copy the code snippet wholesale, this may leave their projects vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses pertaining to Node.js, ChatGPT included at least one unpublished package, whilst the figure sat at around 35% for Python [4].
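A simple defensive habit follows directly from this finding: verify that a recommended package actually exists on the official registry before installing it. The sketch below is a minimal illustration of such a check in Python, using PyPI’s public JSON API (which returns HTTP 404 for names that have never been published); the package names queried here are hypothetical examples, not real AI recommendations.

```python
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published package on PyPI.

    PyPI's JSON API returns HTTP 404 for names that have never
    been published -- exactly the signal a hallucinated package
    name would produce.
    """
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Hypothetical usage: vet every package an AI tool suggested
# before running `pip install` on any of them.
suggested = ["requests", "definitely-not-a-real-package-12345"]
for name in suggested:
    status = "published" if package_exists_on_pypi(name) else "NOT FOUND - possible hallucination"
    print(f"{name}: {status}")
```

Note that existence alone is not proof of safety; as the next section describes, an attacker may already have published a package under a hallucinated name.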

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages already exist. As such, attacks using this vector are unlikely to target specific organizations, instead posing a more widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered the hallucination of an unpublished package, they are able to create a package with the same name, include a malicious payload, and publish it. This is known as a malicious package.
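Part of what makes this vector dangerous is that, in ecosystems such as Python’s, a package can execute arbitrary code at install time, before any of its modules are ever imported. The deliberately benign sketch below illustrates that execution point; the package name is hypothetical, and a real malicious package would place its payload where the print statement sits.

```python
# setup.py for a hypothetical package squatting a hallucinated name.
# When installed from a source distribution, everything at module
# level in this file runs the moment a user types `pip install ...`,
# before any of the package's code is imported.
from setuptools import setup

# A real malicious package would execute its payload here, e.g.
# fetching a second stage or exfiltrating credentials. This benign
# stand-in only demonstrates where that execution would occur.
print("setup.py code executed at install time")

setup(
    name="hallucinated-package-name",  # the name the AI tool suggested
    version="0.0.1",
    packages=[],
)
```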

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios the malicious actor does not need to manipulate the model’s training data (data poisoning) to achieve the desired outcome; a complex and time-consuming attack phase can simply be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models trigger based on connections to endpoints associated with generative AI tools. As such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The goal the malicious package was designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a large variety of different malware types.

As GitHub is a service commonly used by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies that define expectations around the use of such tools. It is simple, yet critical, for example, for employees to fact-check responses provided to them by generative AI tools. All packages recommended by generative AI should also be verified against non-generated data from external third-party or internal sources. It is also good practice to exercise caution when downloading packages with very few downloads, as this could indicate that the package is untrustworthy or malicious.
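One lightweight way to apply this advice programmatically is to inspect a package’s registry metadata before installing it: a package that was first published very recently, or that has only one or two releases, deserves extra scrutiny. The sketch below assumes PyPI’s public JSON API and its `releases` metadata; the package name is illustrative, and the thresholds are arbitrary example values, not established cut-offs.

```python
from datetime import datetime, timezone

import requests

def vet_package(name: str, min_age_days: int = 90, min_releases: int = 3) -> list[str]:
    """Return a list of warnings about a PyPI package's metadata."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        return [f"'{name}' is not published on PyPI - possible hallucination"]

    warnings = []
    releases = resp.json().get("releases", {})
    if len(releases) < min_releases:
        warnings.append(f"only {len(releases)} release(s) - possibly a throwaway package")

    # Find the earliest upload time across all release files.
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if upload_times:
        age_days = (datetime.now(timezone.utc) - min(upload_times)).days
        if age_days < min_age_days:
            warnings.append(f"first published only {age_days} day(s) ago")
    return warnings

# Hypothetical usage: flag risky packages before `pip install`.
for warning in vet_package("some-ai-suggested-package"):
    print("WARNING:", warning)
```

Checks like these complement, rather than replace, network-level detection of the kind described above.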

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst.

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.

March 19, 2025

Global Technology Provider Transforms Email Threat Detection with Darktrace


At a glance

  • Within just one month of using Darktrace / EMAIL, the volume of suspicious emails requiring analyst attention dropped by 75%, saving analysts 45 hours per month on analysis and investigation.
  • By offloading most manual, repetitive tasks to Darktrace / EMAIL, the company’s skilled security analysts can focus on developing new capabilities and tackling more complex, rewarding projects.
  • Darktrace recently detected and blocked a highly sophisticated and personalized phishing email that spoofed a Microsoft SharePoint and Teams website and used advanced social engineering to impersonate the school of an employee’s family member.
  • The transition from the incumbent solution to Darktrace / EMAIL was seamless and undetectable to the company’s vast ecosystem of customers and partners, reinforcing the security organization’s role as a business enabler: protecting the company and reducing risk without adding friction.

Securing a complex, distributed business without disruption

The company remains at the forefront of technological innovation and transformation; however, its success and ambitions come with the challenges of managing a distributed global business—balancing digital advancements, existing technology investments, and evolving compliance requirements.

Optimizing a complex tech stack for scalable growth

The organization operates a diverse technology stack spanning Windows, Mac, Linux, and multiple cloud environments, creating a complex and challenging IT landscape. The company’s Chief Information Security Officer (CISO) emphasizes the need for efficiency and agility. “Our goal is to scale and deliver new capabilities without increasing headcount, ensuring that costs remain proportionate to growth.”

Balancing security, governance, and business agility

Committed to responsible practices, this industry leader prioritizes secure and trustworthy technology for its customers who rely on its solutions. “Balancing business agility with governance is a constant challenge," said the CISO. "There’s always a natural push and pull, which I believe is healthy—but achieving the right balance is delicate.”

Protecting critical workflows without impacting productivity

For the organization, email is much more than just a communication tool. “Email plays a critical role in our engineering workflows and is fundamental to how we build our products.” Because of this, the company is extremely cautious about implementing any solution that could introduce friction or disrupt productivity. “There is zero tolerance for disruption, which is why we take a deliberate and methodical approach when evaluating, selecting, and deploying our tools and solutions,” he said.  

More than a vendor: A security partner invested in success

To ensure an optimal security infrastructure, the enterprise security team regularly evaluates market technologies against its existing solutions. With the rapidly evolving threat landscape, the CISO said they “wanted to validate whether we still had best-in-class protection and the right controls in place to secure our organization. It was about assessing whether we could do better in our ongoing effort to fine-tune our approach to achieve the best possible outcome.”

The team evaluated 15 different email security vendors based on the following criteria:

  1. Efficacy to detect threats
  2. Ability to integrate with existing tooling
  3. Ease of use
  4. A vendor’s approach to partnership  

They initially narrowed the list to five vendors, conducting demo sessions for deeper evaluations before selecting three finalists for a proof of value (POV). “We analyzed actual malicious emails with each vendor to assess the accuracy of their detections, allowing for an objective comparison,” said the CISO. Through this rigorous process, the Darktrace / EMAIL security solution emerged as the best fit for their business. “Darktrace’s product performed well and showed a genuine commitment to partnering with us in the long term to ensure our success.”

The team objectively understood where there were gaps across the different vendors, where they were strong, and where they could use improvement. “Based on the analysis, we knew that Darktrace / EMAIL could deliver in our specific use cases, as the data supported it.”

Partnership, integrity and respect

Throughout the evaluation process, the importance of partnership and mutual respect remained an essential factor to the CISO. “I wanted a company we could develop a long-term strategic partnership with, one that could extend far deeper than just email.” A key factor in choosing Darktrace was the commitment and engagement of its team at every level of the organization. “Darktrace showed integrity, patience and a genuine investment in building a strong relationship with my team.  That's why we're here today.”

“Together, we've delivered some fantastic outcomes”

For the organization, Darktrace / EMAIL has played a crucial role in reducing risk, empowering analysts, and enabling a lean, effective security strategy. “Together, we've delivered some fantastic outcomes,” said the CISO.  

Reducing risk. Empowering analysts

“Within that first month, we saw a 75% drop in suspicious emails that required manual review, which reduced the time my team spent analyzing and investigating by 45 hours per month,” said the CISO. The security team values Darktrace / EMAIL not only for its ease of use but also for the time it frees up for more meaningful work. “Giving my team the opportunity to tackle complex challenges they enjoy and find more stimulating is important to me.” As they continue to fine-tune and optimize balance levels within Darktrace / EMAIL, he expects even greater efficiency gains in the coming months.

Maximizing protection while staying lean

It’s important for the security group to be proportionate with their spending, said the CISO. “It's all about what is enough security to enable the business. And that means, as our organization grows, it's important that we are as lean and as efficient as possible to deliver the best outcomes for the business.”  Embracing an AI-powered automated approach is an essential component to achieving that goal. By offloading most manual, repetitive tasks to Darktrace / EMAIL, the company’s skilled security analysts can focus on more strategic and proactive initiatives that enable the business.  

Protecting employees from advanced social engineering threats

Recently, Darktrace detected a malicious email targeting an employee, disguised as a spoofed Microsoft SharePoint and Teams website. What made this attack particularly sophisticated was its personalization — it impersonated the school where the employee’s family member attended. Unlike mass malicious emails sent to thousands of people, this was a highly targeted attack, leveraging advanced social engineering tactics to exploit connections within the education system and between family members.  

Protecting without disrupting

A seamless migration is often overlooked but is critical to success for any organization, said the CISO. With a wide ecosystem of partners, email is a highly visible, business-critical function for the organization — "any friction or downtime would have an immediate impact and could throttle the entire business,” he said. However, the transition from their previous solution to Darktrace / EMAIL was exceptionally smooth. “No one realized we changed providers because there was no disruption — no incidents at all. I cannot emphasize just how important that is when I'm trying to position our security organization as an enabling function for the business that protects and reduces risk without adding friction.”

A security partnership for the future

“To survive as a business over the next few years, adopting AI is no longer optional—it’s essential,” said the CISO. However, with the cybersecurity market becoming increasingly saturated, selecting the right solutions and vendors can be overwhelming. He stresses the importance of choosing strategic partners who not only deliver the outcomes you need, but also deeply understand your organization’s unique environment. “You’re only as strong as your partners. Technology innovation and the cybersecurity market are always changing.  At some point every solution will face a challenge—it’s inevitable. The differentiator will be how people respond when that happens.”  


March 19, 2025

Survey findings: How is AI Impacting the SOC?


There’s no question that AI is already impacting the SOC – augmenting, assisting, and filling the gaps left by staff and skills shortages. We surveyed over 1,500 cybersecurity professionals from around the world to uncover their attitudes to AI cybersecurity in 2025. Our findings revealed striking trends in how AI is changing the way security leaders think about hiring and SOC transformation. Download the full report for the big picture, available now.


The AI-human conundrum

Let’s start with some context. As the cybersecurity sector has rapidly evolved to integrate AI into all elements of cyber defense, the pace of technological advancement is outstripping the development of necessary skills. Given the ongoing challenges in security operations, such as employee burnout, high turnover rates, and talent shortages, recruiting personnel to bridge these skills gaps remains an immense challenge in today’s landscape.

But here, our main findings on this topic seem to contradict each other.

There’s no question over the impact of AI-powered threats – nearly three-quarters (74%) agree that AI-powered threats now pose a significant challenge for their organization.  

When we look at how security leaders are defending against AI-powered threats, over 3 out of 5 (62%) see insufficient personnel to manage tools and alerts as the biggest barrier.  

Yet at the same time, increasing cyber security staff is at the bottom of the priority list for survey participants, with only 11% planning to increase cybersecurity staff in 2025 – less than in 2024. What 64% of stakeholders are committed to, however, is adding new AI-powered tools onto their existing security stacks.

With burnout pervasive, the talent deficit reaching a new peak, and growing numbers of companies unable to fill cybersecurity positions, it may be that stakeholders realize they simply cannot hire enough personnel to solve this problem, no matter how much they may want to. As a result, leaders are looking for methods beyond increasing staff to overcome security obstacles.

Meanwhile, the results show that defensive AI is becoming integral to the SOC as a means of augmenting understaffed teams.

How is AI plugging skills shortages in the SOC?

As explored in our recent white paper, the CISO’s Guide to Navigating the Cybersecurity Skills Shortage, 71% of organizations report unfilled cybersecurity positions, leading to the estimation that less than 10% of alerts are thoroughly vetted. In this scenario, AI has become an essential multiplier to relieve the burden on security teams.

95% of respondents agree that AI-powered solutions can significantly improve the speed and efficiency of their defenses. But how?

The area security leaders expect defensive AI to have the biggest impact is on improving threat detection, followed by autonomous response to threats and identifying exploitable vulnerabilities.

Interestingly, the areas that participants ranked less highly (reducing alert fatigue and running phishing simulations) are the tasks that AI already does well and can therefore already be used to relieve the burden of manual, repetitive work on the SOC.

Different perspectives from different sides of the SOC

CISOs and SecOps teams aren’t necessarily aligned on the AI defense question – while CISOs tend to see it as a strategic game-changer, SecOps teams on the front lines may be more sceptical, wary of its real-world reliability and integration into workflows.  

From the data, we see that while less than a quarter of executives doubt that AI-powered solutions will block and automatically respond to AI threats, about half of SecOps aren’t convinced. And only 17% of CISOs lack confidence in the ability of their teams to implement and use AI-powered solutions, whereas over 40% of those in the team doubt their own ability to do so.

This gap feeds into the enthusiasm that executives share about adding AI-driven tools into the stack, while day-to-day users of the tools are more interested in improving security awareness training and improving cybersecurity tool integration.

Levels of AI understanding in the SOC

AI is only as powerful as the people who use it, and levels of AI expertise in the SOC can make or break its real-world impact. If security leaders want to unlock AI’s full potential, they must bridge the knowledge gap—ensuring teams understand not just the different types of AI, but where it can be applied for maximum value.

Only 42% of security professionals are confident that they fully understand all the types of AI in their organization’s security stack.

This data varies between job roles – executives report higher levels of understanding (60% say they know exactly which types of AI are being used) than participants in other roles. Despite having a working knowledge of using the tools day-to-day, SecOps practitioners were more likely to report having a “reasonable understanding” of the types of AI in use in their organization (42%).  

It is hard to say whether this reflects general confidence among executives rather than technical proficiency, but it speaks to the importance of AI-human collaboration: introducing AI tools for cybersecurity to plug the gaps in human teams will only be effective if security professionals are supported with the correct education and training.

Download the full report to explore these findings in depth

The full report for Darktrace’s State of AI Cybersecurity is out now. Download the paper to dig deeper into these trends, and see how results differ by industry, region, organization size, and job title.  
