June 12, 2023

How Darktrace AI Protects 8,400 Customers

This blog describes how Darktrace DETECT and RESPOND can help organizations reduce privacy and security risks related to generative AI.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Jack Stockdale OBE FREng
Chief Technology Officer

Generative AI and Large Language Model (LLM) tools have entered the mainstream of public consciousness this year, with people using the likes of OpenAI’s ChatGPT and Google Bard for everything from refining web searches to driving efficiency in the workplace.

At Darktrace, we have long understood the potential for AI to be one of the most transformative technological opportunities of our time. Our Darktrace Cyber AI Research Centre in Cambridge has been researching and developing AI tools for over a decade – tools like Darktrace DETECT™ and RESPOND™, which use a range of AI technologies to keep 8,400 customers around the world safe from cyber disruption.

As pioneers of AI who understand its potential to change the world, we recognize that in 2023 the AI genie is out of the bottle. AI tools are rapidly becoming part of our day-to-day lives.

74% of active customer deployments have employees using generative AI tools in the workplace [1]

While generative AI tools have the power to increase productivity and augment human creativity, businesses need to move quickly to keep up with the pace of innovation. These tools carry potential privacy and security risks if used incorrectly or without proper policies in place that match the unique needs of the business – creating challenges for CISOs.

Privacy and Security Risks with Generative AI 

Government agencies like the UK’s National Cyber Security Centre (NCSC) have already issued guidance on managing risk when using generative AI tools and other LLMs in the workplace. In the United States, the Cybersecurity and Infrastructure Security Agency (CISA) has also raised concerns about the security implications of generative AI.

One reason is that LLMs can learn from your prompts: information you enter may be stored and used as training data. Once that data is in the system, a suitably crafted prompt could cause the LLM to reproduce your company’s data in response to someone else’s query.

And if the information you entered contains sensitive files or data, such as intellectual property or know-how, financial reports, confidential internal documents, or sales figures, it could become part of the third-party AI model and potentially available to others, creating privacy, intellectual property, and security risks if the appropriate guardrails are not in place.

How Darktrace Helps Manage Generative AI Use 

In response to the growing use of generative AI tools, Darktrace has announced new risk and compliance models to help Darktrace customers address concerns around the risk of IP loss and data leakage.

We’re excited about how powerful these generative AI tools are and about their capacity to help people and businesses work efficiently. But like any other technology, they risk being inadvertently misused if not managed or monitored correctly. That’s why the new risk and compliance models for Darktrace DETECT™ and RESPOND™ make it easier for customers to put guardrails in place to monitor, and when necessary respond to, activity and connections to generative AI and LLM tools such as AutoGPT, ChatGPT, Stable Diffusion, Claude, and more.

Each business will have its own distinct policies and needs related to generative AI tools, so we’ve also made it easier for customers to add their own list of tools to monitor.
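To illustrate the general idea of a customer-maintained watchlist, here is a minimal sketch in Python. Everything in it – the domains, the log format, and the function names – is an illustrative assumption, not Darktrace’s actual configuration or code:

```python
# Illustrative sketch: flag DNS queries to generative AI services against a
# configurable watchlist. Domains and log format are assumptions for the example.

GENAI_WATCHLIST = {
    "chat.openai.com",
    "bard.google.com",
    "claude.ai",
}

def matches_watchlist(domain: str, watchlist: set[str]) -> bool:
    """True if the queried domain is a watchlist entry or a subdomain of one."""
    domain = domain.lower().rstrip(".")
    return any(domain == w or domain.endswith("." + w) for w in watchlist)

def flag_genai_queries(dns_log: list[dict], watchlist: set[str] = GENAI_WATCHLIST):
    """Return (device, domain) pairs for DNS queries that hit the watchlist."""
    return [(entry["device"], entry["query"])
            for entry in dns_log
            if matches_watchlist(entry["query"], watchlist)]
```

Because the watchlist is just data, a business could extend it with any internal list of tools it wants to track, without changing the matching logic.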

Darktrace’s Self-Learning AI makes it possible to detect generative AI activity that may deviate from company policies or best practices. We bring our AI to each customer’s data, and it learns the day-to-day workings of every user, asset, and device – building an understanding of your business’s unique ‘pattern of life’.  That’s why it can detect even subtle anomalies that could indicate a threat to your business and autonomously respond, containing the threat in seconds.  

In May 2023, Darktrace Self-Learning AI detected and prevented an upload of over 1GB of data to a generative AI tool at one of its customers. [2]
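The kind of volumetric anomaly behind that detection can be illustrated with a toy baseline check. This is not Darktrace’s algorithm – the statistics and threshold below are illustrative assumptions – but it shows why an unusually large upload stands out against a device’s learned history:

```python
import statistics

def is_upload_anomalous(history_bytes, new_upload_bytes, z_threshold=3.0):
    """Toy baseline check: flag an upload far outside a device's own history.

    history_bytes: past per-session upload sizes (bytes) for this device.
    Returns True when the new upload's z-score exceeds the threshold.
    """
    mean = statistics.mean(history_bytes)
    stdev = statistics.pstdev(history_bytes) or 1.0  # avoid division by zero
    z_score = (new_upload_bytes - mean) / stdev
    return z_score > z_threshold
```

Against a history of uploads in the low-megabyte range, a 1 GB transfer produces an enormous z-score and is flagged immediately, while ordinary fluctuations pass unnoticed.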

With these guardrails in place, Darktrace customers can take advantage of the opportunities that generative AI and LLMs provide, while remaining protected against the potential security, IP, and privacy risks.

Using AI Safely and Responsibly

At Darktrace, we believe that recent advances in generative AI and LLMs are an important addition to the growing arsenal of AI techniques that will transform cyber security. After all, we have been utilizing AI, including LLMs and generative AI, across all of our products for years – including in Cyber AI Analyst for real-time analysis of incidents, helping Darktrace customers use the power of AI to stay protected from cyber threats.

But we also believe in the responsible development and deployment of different AI techniques, which is why we are providing the tools customers need to use AI safely and responsibly. 

Our Self-Learning AI has been helping businesses fight back against cyber threats and disruption for the past ten years, and today protects more than 8,400 of them. With these new tools, CISOs can ensure that generative AI boosts productivity without creating unmanaged security risk. Our AI learns the business in real time, all the time – and the impact we’ve seen on security outcomes has been enormous.

Self-Learning AI informs Darktrace’s Cyber AI Loop, an interconnected, comprehensive set of dynamically related capabilities working together autonomously in a continuous feedback loop to prevent, detect, respond to, and heal from cyber-attacks, ensuring that data, people, and businesses stay protected.

Figure 1: Darktrace Cyber AI Loop

References

[1] Based on data obtained on June 2nd, 2023, from active customer deployments with Call Home enabled, where Darktrace detected generative AI activity at some point.

[2]  Based on data obtained on June 2nd, 2023, from active customer deployments with Call Home enabled, where Darktrace detected generative AI activity at some point.


March 2, 2026

What the Darktrace Annual Threat Report 2026 Means for Security Leaders


The challenge for today’s CISOs

At the broadest level, the defining characteristic of cybersecurity in 2026 is the sheer pace of change shaping the environments we protect. Organizations are operating in ecosystems that are larger, more interconnected, and more automated than ever before – spanning cloud platforms, distributed identities, AI-driven systems, and continuous digital workflows.  

The velocity of this expansion has outstripped the slower, predictable patterns security teams once relied on. What used to be a stable backdrop is now a living, shifting landscape where technology, risk, and business operations evolve simultaneously. From this vantage point, the central challenge for security leaders isn’t reacting to individual threats, but maintaining strategic control and clarity as the entire environment accelerates around them.

Strategic takeaways from the Annual Threat Report

The Darktrace Annual Threat Report 2026 reinforces a reality every CISO feels: the center of gravity isn’t the perimeter, vulnerability management, or malware, but trust abused via identity. For example, our analysis found that nearly 70% of incidents in the Americas region begin with stolen or misused accounts, reflecting the global shift toward identity‑led intrusions.

Mass adoption of AI agents, cloud-native applications, and machine decision-making means CISOs now oversee systems that act on their own. This creates an entirely new responsibility: ensuring those systems remain safe, predictable, and aligned to business intent, even under adversarial pressure.

Attackers increasingly exploit trust boundaries, not firewalls – leveraging cloud entitlements, SaaS identity transitions, supply-chain connectivity, and automation frameworks. The rise of non-human identities intensifies this: credentials, tokens, and agent permissions now form the backbone of operational risk.

Boards are now evaluating CISOs on business continuity, operational recovery, and whether AI systems and cloud workloads can fail safely without cascading or causing catastrophic impact.

In this environment, detection accuracy, autonomous response, and blast radius minimization matter far more than traditional control coverage or policy checklists.

Every organization will face setbacks; resilience is measured by how quickly security teams can rise, respond, and resume momentum. In 2026, success will belong to those that adapt fastest.

Managing business security in the age of AI

CISO accountability in 2026 has expanded far beyond controls and tooling. Whether we asked for it or not, we now own outcomes tied to business resilience, AI trust, cloud assurance, and continuous availability. The role is less about certainty and more about recovering control in an environment that keeps accelerating.

Every major 2026 initiative – AI agents, third-party risk, cloud, or comms protection – connects to a single board-level question: Are we still in control as complexity and automation scale faster than humans?

Attackers are not just getting more sophisticated; they are becoming more automated. AI changes the economics of attack, lowering cost and increasing speed. That asymmetry is what CISOs are being measured against.

CISOs are no longer evaluated on tool coverage, but on the ability to assure outcomes – trust in AI adoption, resilience across cloud and identity, and being able to respond to unknown and unforeseen threats.

Boards are now explicitly asking whether we can defend against AI-driven threats. No one can predict every new behavior – survival depends on detecting malicious deviations from normal fast and responding autonomously.  

Agents introduce decision-making at machine speed. Governance, CI/CD scanning, posture management, red teaming, and runtime detection are no longer differentiators but the baseline.

Cloud security is no longer architectural; it is operational. Identity, control planes, and SaaS exposure now sit firmly with the CISO.

AI-speed threats already reshaping security in 2026

We’re already seeing clear examples of how quickly the threat landscape has shifted in 2026. Darktrace’s work on React2Shell exposed just how unforgiving the new tempo is: a honeypot stood up with an exposed React instance was hit in under two minutes. There was no recon phase, no gradual probing – just immediate, automated exploitation the moment the code appeared publicly. Exposure now equals compromise unless defenses can detect, interpret, and act at machine speed. Traditional operational rhythms simply don’t map to this reality.

We’re also facing the first wave of AI-authored malware, where LLMs generate code that mutates on demand. This removes the historic friction from the attacker side: no skill barrier, no time cost, no limit on iteration. Malware families can regenerate themselves, shift structure, and evade static controls without a human operator behind the keyboard. This forces CISOs to treat adversarial automation as a core operational risk and ensure that autonomous systems inside the business remain predictable under pressure.

The CVE-2026-1731 BeyondTrust exploitation wave reinforced the same pattern. The gap between disclosure and active, global exploitation compressed into hours. Automated scanning, automated payload deployment, coordinated exploitation campaigns, all spinning up faster than most organizations can push an emergency patch through change control. The vulnerability-to-exploit window has effectively collapsed, making runtime visibility, anomaly detection, and autonomous containment far more consequential than patching speed alone.

These cases aren’t edge scenarios; they represent the emerging norm. Complexity and automation have outpaced human-scale processes, and attackers are weaponizing that asymmetry.  

The real differentiator for CISOs in 2026 is less about knowing everything and more about knowing immediately when something shifts – and having systems that can respond at the same speed.


About the author
Mike Beck
Global CISO

March 2, 2026

CVE-2026-1731: How Darktrace Sees the BeyondTrust Exploitation Wave Unfolding


Note: Darktrace's Threat Research team is publishing now to help defenders. We will continue updating this blog as our investigations unfold.

Background

On February 6, 2026, Identity and Access Management vendor BeyondTrust announced patches for CVE-2026-1731, a vulnerability that enables unauthenticated remote code execution via specially crafted requests. The vulnerability affects BeyondTrust Remote Support (RS) and certain older versions of Privileged Remote Access (PRA) [1].

A Proof of Concept (PoC) exploit for this vulnerability was released publicly on February 10, and open-source intelligence (OSINT) reported exploitation attempts within 24 hours [2].

Previous intrusions against BeyondTrust technology have been attributed to nation-state attackers, including a 2024 breach targeting the U.S. Treasury Department. That incident prompted emergency directives from the Cybersecurity and Infrastructure Security Agency (CISA), and subsequent investigation showed that attackers had chained previously unknown vulnerabilities to achieve their goals [3].

Additionally, there appears to be infrastructure overlap with the React2Shell mass exploitation previously observed by Darktrace: the command-and-control (C2) domain avg.domaininfo[.]top was seen in potential post-exploitation activity for BeyondTrust, as well as in a React2Shell exploitation case involving possible EtherRAT deployment.

Darktrace Detections

Darktrace’s Threat Research team has identified highly anomalous activity across several customer environments since February 10, 2026, that may relate to exploitation of this BeyondTrust vulnerability. Observed activities include:

Outbound connections and DNS requests for endpoints associated with Out-of-Band Application Security Testing; these services are commonly abused by threat actors for exploit validation.  Associated Darktrace models include:

  • Compromise / Possible Tunnelling to Bin Services

Suspicious executable file downloads. Associated Darktrace models include:

  • Anomalous File / EXE from Rare External Location

Outbound beaconing to rare domains. Associated Darktrace models include:

  • Compromise / Agent Beacon (Medium Period)
  • Compromise / Agent Beacon (Long Period)
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / Beacon to Young Endpoint
  • Anomalous Server Activity / Rare External from Server
  • Compromise / SSL Beaconing to Rare Destination

Unusual cryptocurrency mining activity. Associated Darktrace models include:

  • Compromise / Monero Mining
  • Compromise / High Priority Crypto Currency Mining

Other associated model alerts include:

  • Compromise / Rare Domain Pointing to Internal IP

IT Defenders: As part of best practices, we highly recommend employing an automated containment solution in your environment. For Darktrace customers, please ensure that Autonomous Response is configured correctly. More guidance regarding this activity and suggested actions can be found in the Darktrace Customer Portal.  

Appendices

Potential indicators of post-exploitation behavior:

  • 217.76.57[.]78 – IP address – Likely C2 server
  • hXXp://217.76.57[.]78:8009/index.js – URL – Likely payload
  • b6a15e1f2f3e1f651a5ad4a18ce39d411d385ac7 – SHA1 – Likely payload
  • 195.154.119[.]194 – IP address – Likely C2 server
  • hXXp://195.154.119[.]194/index.js – URL – Likely payload
  • avg.domaininfo[.]top – Hostname – Likely C2 server
  • 104.234.174[.]5 – IP address – Possible C2 server
  • 35da45aeca4701764eb49185b11ef23432f7162a – SHA1 – Possible payload
  • hXXp://134.122.13[.]34:8979/c – URL – Possible payload
  • 134.122.13[.]34 – IP address – Possible C2 server
  • 28df16894a6732919c650cc5a3de94e434a81d80 – SHA1 – Possible payload
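Defenders who retain flow or proxy logs may want to sweep them against these indicators. The sketch below is illustrative: the log format is an assumption to adapt to your own telemetry, and the indicators are stored defanged exactly as published above, refanged only in memory for matching:

```python
# Sweep connection logs against the published indicators. IPs and hostnames
# are stored defanged ("[.]") as in the advisory and refanged for comparison.

IOC_DEFANGED = {
    "217.76.57[.]78", "195.154.119[.]194",
    "104.234.174[.]5", "134.122.13[.]34",
    "avg.domaininfo[.]top",
}

def refang(indicator: str) -> str:
    """Convert a defanged indicator back to its live form for matching."""
    return indicator.replace("[.]", ".")

IOC_SET = {refang(i) for i in IOC_DEFANGED}

def find_ioc_hits(log_entries):
    """log_entries: iterable of dicts with 'src' and 'dest' keys (assumed format).
    Returns the entries whose destination matches a known indicator."""
    return [entry for entry in log_entries if entry.get("dest") in IOC_SET]
```

File-hash indicators (the SHA1 values above) would be matched against endpoint or download telemetry rather than connection logs, so they are omitted here.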

References:

[1] https://nvd.nist.gov/vuln/detail/CVE-2026-1731

[2] https://www.securityweek.com/beyondtrust-vulnerability-targeted-by-hackers-within-24-hours-of-poc-release/

[3] https://www.rapid7.com/blog/post/etr-cve-2026-1731-critical-unauthenticated-remote-code-execution-rce-beyondtrust-remote-support-rs-privileged-remote-access-pra/

About the author
Emma Foulger
Global Threat Research Operations Lead