Blog
/
Network
/
July 4, 2024

A Busy Agenda: Darktrace's Detection of Qilin Ransomware as a Service Operator

This blog breaks down how Darktrace detected and analyzed Qilin, a Ransomware-as-a-Service group behind recent high-impact attacks. You’ll see how Qilin affiliates customize attacks with flexible encryption, process termination, and double-extortion techniques, as well as why its cross-platform builds in Rust and Golang make it especially evasive. Darktrace highlights three real-world cases where its AI identified likely Qilin activity across customer environments, offering insights into how behavioral detection can spot novel ransomware before disruption occurs. Readers will gain a clear view of Qilin’s toolkit, tactics, and how self-learning defense adapts to these evolving threats.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Alexandra Sentenac
Cyber Analyst
Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
04
Jul 2024

What is Qilin Ransomware and what's its impact?

Qilin ransomware has recently dominated discussions across the cyber security landscape following its deployment in an attack on Synnovis, a UK-based medical laboratory company. The ransomware attack ultimately affected patient services at multiple National Health Service (NHS) hospitals that rely on Synnovis diagnostic and pathology services. Qilin’s origins, however, date back further to October 2022 when the group was observed seemingly posting leaked data from its first known victim on its Dedicated Leak Site (DLS) under the name Agenda[1].

The Darktrace Threat Research team investigated network artifacts related to Qilin and identified three probable cases of the ransomware across the Darktrace customer base between June 2022 and May 2024.

How Qilin Ransowmare Operates as RaaS

Qilin operates as a Ransomware-as-a-Service (RaaS) that employs double extortion tactics, whereby harvested data is exfiltrated and threatened of publication on the group's DLS, which is hosted on Tor. Qilin ransomware has samples written in both the Golang and Rust programming languages, making it compilable with various operating systems, and is highly customizable.

Techniques Qilin Ransomware uses to avoid detection

When building Qilin ransomware variants to be used on their target(s), affiliates can configure settings such as:

  • Encryption modes (skip-step, percent, or speed)
  • File extensions, directories, or processes to exclude
  • Unique company IDs used as extensions on encrypted files
  • Services or processes to terminate during execution [1] [2].
  • Trend Micro analysts, who were the first to discover Qilin samples in August 2022, when the name "Agenda" was still used in ransom notes, found that each analyzed sample was customized for the intended victims and that "unique company IDs were used as extensions of encrypted files" [3]. This information is configurable from within the Qilin's affiliate panel's 'Targets' section, shown below.

    Qilin's affiliate panel and branding

    The panel's background image features the eponym Chinese legendary chimerical creature Qilin (pronounced “Ke Lin”). Despite this Chinese mythology reference, Russian language was observed being used by a Qilin operator in an underground forum post aimed at hiring affiliates and advertising their RaaS operation[2].

    Figure 1: Qilin ransomware’s affiliate panel.

    Qilin’s affiliate payment model

    Qilin's RaaS program purportedly has an attractive affiliates' payment structure,

    • Affiliates earn 80% of ransom payments under USD 3 million
    • Affiliates earn 85% of ransom payments above USD 3 million [2]

    Publication of stolen data and ransom payment negotiations are purportedly handled by Qilin operators. Qilin affiliates have been known to target companies located around the world and within a variety of industries, including critical sectors such as healthcare and energy.

    Qilin target industries and victims

    As Qilin is a RaaS operation, the choice of targets does not necessarily reflect Qilin operators' intentions, but rather that of its affiliates.  

    Similarly, the tactics, techniques, procedures (TTPs) and indicators of compromise (IoC) identified by Darktrace are associated with the given affiliate deploying Qilin ransomware for their own purpose, rather than TTPs and IoCs of the Qilin group. Likewise, initial vectors of infection may vary from affiliate to affiliate.

    Previous studies show that initial access to networks were gained via spear phishing emails or by leveraging exposed applications and interfaces.

    Differences have been observed in terms of data exfiltration and potential C2 external endpoints, suggesting the below investigations are not all related to the same group or actor(s).

    [related-resource]

    Darktrace’s threat research investigation

    Qlin ransomware attack breakdown

    June 2022: Qilin ransomware attack exploiting VPN and SCCM servers

    Key findings:

    • Initial access: VPN and compromised admin account
    • Lateral movement: SCCM and VMware ESXi hosts
    • Malware observed: SystemBC, Tofsee
    • Ransom notes: Linked to Qilin naming conventions
    • Darktrace visibility: Analysts worked with customer via Ask the Expert (ATE) to expand coverage, revealing unusual scanning, rare external connections, and malware indicators tied to Qilin

    Full story:

    Darktrace first detected an instance of Qilin ransomware back in June 2022, when an attacker was observed successfully accessing a customer’s Virtual Private Network (VPN) and compromising an administrative account, before using RDP to gain access to the customer’s Microsoft System Center Configuration Manager (SCCM) server.

    From there, an attack against the customer's VMware ESXi hosts was launched. Fortunately, a reboot of their virtual machines (VM) caught the attention of the security team who further uncovered that custom profiles had been created and remote scripts executed to change root passwords on their VM hosts. Three accounts were found to have been compromised and three systems encrypted by ransomware.  

    Unfortunately, Darktrace was not configured to monitor the affected subnets at the time of the attack. Despite this, the customer was able to work directly with Darktrace analysts via the Ask the Expert (ATE) service to add the subnets in question to Darktrace’s visibility, allowing it to monitor for any further unusual behavior.

    Once visibility over the compromised SCCM server was established, Darktrace observed:

    • A series of unusual network scanning activities  
    • The use of Kali (a Linux distribution designed for digital forensics and penetration testing).
    • Connections to multiple rare external hosts. Many of which were using the “[.]ru” Top Level Domain (TLD).

    One of the external destinations the server was attempting to connect was found to be related to SystemBC, a malware that turns infected hosts into SOCKS5 proxy bots and provides command-and-control (C2) functionality.

    Additionally, the server was observed making external connections over ports 993 and 143 (typically associated with the use of the Interactive Message Access Protocol (IMAP) to multiple rare external endpoints. This was likely due to the presence of Tofsee malware on the device.

    After the compromise had been contained, Darktrace identified several ransom notes following the naming convention “README-RECOVER-<extension/company_id>.txt”” on the network. This naming convention, as well as the similar “<company_id>-RECOVER-README.txt” have been referenced by open-source intelligence (OSINT) providers as associated with Qilin ransom notes[5] [6] [7].

    April 2023: Manufacturing sector breach with large-scale exfiltration

    Key findings:

    • Initial access & movement: Extensive scanning and lateral movement via SMB, RDP, and WMI
    • Credential abuse: Use of default credentials (admin, administrator)
    • Malware/Indicators: Evidence of Cobalt Strike; suspicious WebDAV user agent and JA3 fingerprint
    • Data exfiltration: ~30 GB stolen via SSL to MEGA cloud storage
    • Darktrace analysis: Detected anomalous SMB and DCE-RPC traffic from domain controller, high-volume RDP activity, and rare external connectivity to IPs tied to command-and-control (C2). Confirmed ransom notes followed Qilin naming conventions.

    Full story:

    The next case of Qilin ransomware observed by Darktrace took place in April 2023 on the network of a customer in the manufacturing sector in APAC. Unfortunately for the customer in this instance, Darktrace's Autonomous Response was not active on their environment and no autonomous actions were taken to contain the compromise.

    Over the course of two days, Darktrace identified a wide range of malicious activity ranging from extensive initial scanning and lateral movement attempts to the writing of ransom notes that followed the aforementioned naming convention (i.e., “README-RECOVER-<extension/company_id>.txt”).

    Darktrace observed two affected devices attempting to move laterally through the SMB, DCE-RPC and RDP network protocols. Default credentials (e.g., UserName, admin, administrator) were also observed in the large volumes of SMB sessions initiated by these devices. One of the target devices of these SMB connections was a domain controller, which was subsequently seen making suspicious WMI requests to multiple devices over DCE-RPC and enumerating SMB shares by binding to the ‘server service’ (srvsvc) named pipe to a high number of internal devices within a short time frame. The domain controller was further detected establishing an anomalously high number of connections to several internal devices, notably using the RDP administrative protocol via a default admin cookie.  

    Repeated connections over the HTTP and SSL protocol to multiple newly observed IPs located in the 184.168.123.0/24 range were observed, indicating C2 connectivity.  WebDAV user agent and a JA3 fingerprint potentially associated with Cobalt Strike were notably observed in these connections. A few hours later, Darktrace detected additional suspicious external connections, this time to IPs associated with the MEGA cloud storage solution. Storage solutions such as MEGA are often abused by attackers to host stolen data post exfiltration. In this case, the endpoints were all rare for the network, suggesting this solution was not commonly used by legitimate users. Around 30 GB of data was exfiltrated over the SSL protocol.

    Darktrace did not observe any encryption-related activity on this customer’s network, suggesting that encryption may have taken place locally or within network segments not monitored by Darktrace.

    May 2024: US enterprise compromise

    Key findings:

    • Initial access & movement: Abuse of administrative and default credentials; lateral movement via DCE-RPC and RDP
    • Malware/Indicators: Suspicious executables (‘a157496.exe’, ‘83b87b2.exe’); abuse of RPC service LSM_API_service
    • Data exfiltration: Large amount of data exfiltrated via FTP and other channels to rare external endpoint (194.165.16[.]13)
    • C2 communications: HTTP/SSL traffic linked to Cobalt Strike, including PowerShell request for sihost64.dll
    • Darktrace analysis: Flagged unusual SMB writes, malicious file transfers, and large-scale exfiltration as highly anomalous. Confirmed widespread encryption activity targeting numerous devices and shares.

    Full story:

    The most recent instance of Qilin observed by Darktrace took place in May 2024 and involved a customer in the US.

    In this case, Darktrace initially detected affected devices using unusual administrative and default credentials. Then Darktrace observed additional Internal systems conducting abnormal activity such as:

    • Making extensive suspicious DCE-RPC requests to a range of internal locations
    • Performing network scanning
    • Making unusual internal RDP connections
    • And transferring suspicious executable files like 'a157496.exe' and '83b87b2.exe'.  

    SMB writes of the file "LSM_API_service" were also observed, activity which was considered 100% unusual by Darktrace; this is an RPC service that can be abused to enumerate logged-in users and steal their tokens. Various repeated connections likely representative of C2 communications were detected via both HTTP and SSL to rare external endpoints linked in OSINT to Cobalt Strike use. During these connections, HTTP GET requests for the following URIs were observed:

    /asdffHTTPS

    /asdfgdf

    /asdfgHTTP

    /download/sihost64.dll

    Notably, this included a GET request a DLL file named "sihost64.dll" from a domain controller using PowerShell.  

    Over 102 GB of data may have been transferred to another previously unseen endpoint, 194.165.16[.]13, via the unencrypted File Transfer Protocol (FTP). Additionally, many non-FTP connections to the endpoint could be observed, over which more than 783 GB of data was exfiltrated. Regarding file encryption activity, a wide range of destination devices and shares were targeted.

    Figure 2: Advanced Search graph displaying the total volume of data transferred over FTP to a malicious IP.

    During investigations, Darktrace’s Threat Research team identified an additional customer, also based in the United States, where similar data exfiltration activity was observed in April 2024. Although no indications of ransomware encryption were detected on the network, multiple similarities were observed with the case discussed just prior. Notably, the same exfiltration IP and protocol (194.165.16[.]13 and FTP, respectively) were identified in both cases. Additional HTTP connectivity was further observed to another IP using a self-signed certificate (i.e., CN=ne[.]com,OU=key operations,O=1000,L=,ST=,C=KM) located within the same ASN (i.e., AS48721 Flyservers S.A.). Some of the URIs seen in the GET requests made to this endpoint were the same as identified in that same previous case.

    Information regarding another device also making repeated connections to the same IP was described in the second event of the same Cyber AI Analyst incident. Following this C2 connectivity, network scanning was observed from a compromised domain controller, followed by additional reconnaissance and lateral movement over the DCE-RPC and SMB protocols. Darktrace again observed SMB writes of the file "LSM_API_service", as in the previous case, activity which was also considered 100% unusual for the network. These similarities suggest the same actor or affiliate may have been responsible for activity observed, even though no encryption was observed in the latter case.

    Figure 3: First event of the Cyber AI Analyst investigation following the compromise activity.

    According to researchers at Microsoft, some of the IoCs observed on both affected accounts are associated with Pistachio Tempest, a threat actor reportedly associated with ransomware distribution. The Microsoft threat actor naming convention uses the term "tempest" to reference criminal organizations with motivations of financial gain that are not associated with high confidence to a known non-nation state or commercial entity. While Pistachio Tempest’s TTPs have changed over time, their key elements still involve ransomware, exfiltration, and extortion. Once they've gained access to an environment, Pistachio Tempest typically utilizes additional tools to complement their use of Cobalt Strike; this includes the use of the SystemBC RAT and the SliverC2 framework, respectively. It has also been reported that Pistacho Tempest has experimented with various RaaS offerings, which recently included Qilin ransomware[4].

    Conclusion

    Qilin is a RaaS group that has gained notoriety recently due to high-profile attacks perpetrated by its affiliates. Despite this, the group likely includes affiliates and actors who were previously associated with other ransomware groups. These individuals bring their own modus operandi and utilize both known and novel TTPs and IoCs that differ from one attack to another.

    Darktrace’s anomaly-based technology is inherently threat-agnostic, treating all RaaS variants equally regardless of the attackers’ tools and infrastructure. Deviations from a device’s ‘learned’ pattern of behavior during an attack enable Darktrace to detect and contain potentially disruptive ransomware attacks.

    [related-resource]

    Credit to: Alexandra Sentenac, Emma Foulger, Justin Torres, Min Kim, Signe Zaharka for their contributions.

    References

    [1] https://www.sentinelone.com/anthology/agenda-qilin/  

    [2] https://www.group-ib.com/blog/qilin-ransomware/

    [3] https://www.trendmicro.com/en_us/research/22/h/new-golang-ransomware-agenda-customizes-attacks.html

    [4] https://www.microsoft.com/en-us/security/security-insider/pistachio-tempest

    [5] https://www.trendmicro.com/en_us/research/22/h/new-golang-ransomware-agenda-customizes-attacks.html

    [6] https://www.bleepingcomputer.com/forums/t/790240/agenda-qilin-ransomware-id-random-10-char;-recover-readmetxt-support/

    [7] https://github.com/threatlabz/ransomware_notes/tree/main/qilin

    Darktrace Model Detections

    Internal Reconnaissance

    Device / Suspicious SMB Scanning Activity

    Device / Network Scan

    Device / RDP Scan

    Device / ICMP Address Scan

    Device / Suspicious Network Scan Activity

    Anomalous Connection / SMB Enumeration

    Device / New or Uncommon WMI Activity

    Device / Attack and Recon Tools

    Lateral Movement

    Device / SMB Session Brute Force (Admin)

    Device / Large Number of Model Breaches from Critical Network Device

    Device / Multiple Lateral Movement Model Breaches

    Anomalous Connection / Unusual Admin RDP Session

    Device / SMB Lateral Movement

    Compliance / SMB Drive Write

    Anomalous Connection / New or Uncommon Service Control

    Anomalous Connection / Anomalous DRSGetNCChanges Operation

    Anomalous Server Activity / Domain Controller Initiated to Client

    User / New Admin Credentials on Client

    C2 Communication

    Anomalous Server Activity / Outgoing from Server

    Anomalous Connection / Multiple Connections to New External TCP Port

    Anomalous Connection / Anomalous SSL without SNI to New External

    Anomalous Connection / Rare External SSL Self-Signed

    Device / Increased External Connectivity

    Unusual Activity / Unusual External Activity

    Compromise / New or Repeated to Unusual SSL Port

    Anomalous Connection / Multiple Failed Connections to Rare Endpoint

    Device / Suspicious Domain

    Device / Increased External Connectivity

    Compromise / Sustained SSL or HTTP Increase

    Compromise / Botnet C2 Behaviour

    Anomalous Connection / POST to PHP on New External Host

    Anomalous Connection / Multiple HTTP POSTs to Rare Hostname

    Anomalous File / EXE from Rare External Location

    Exfiltration

    Unusual Activity / Enhanced Unusual External Data Transfer

    Anomalous Connection / Data Sent to Rare Domain

    Unusual Activity / Unusual External Data Transfer

    Anomalous Connection / Uncommon 1 GiB Outbound

    Unusual Activity / Unusual External Data to New Endpoint

    Compliance / FTP / Unusual Outbound FTP

    File Encryption

    Compromise / Ransomware / Suspicious SMB Activity

    Anomalous Connection / Sustained MIME Type Conversion

    Anomalous File / Internal / Additional Extension Appended to SMB File

    Compromise / Ransomware / Possible Ransom Note Write

    Compromise / Ransomware / Possible Ransom Note Read

    Anomalous Connection / Suspicious Read Write Ratio

    IoC List

    IoC – Type – Description + Confidence

    93.115.25[.]139 IP C2 Server, likely associated with SystemBC

    194.165.16[.]13 IP Probable Exfiltration Server

    91.238.181[.]230 IP C2 Server, likely associated with Cobalt Strike

    ikea0[.]com Hostname C2 Server, likely associated with Cobalt Strike

    lebondogicoin[.]com Hostname C2 Server, likely associated with Cobalt Strike

    184.168.123[.]220 IP Possible C2 Infrastructure

    184.168.123[.]219 IP Possible C2 Infrastructure

    184.168.123[.]236 IP Possible C2 Infrastructure

    184.168.123[.]241 IP Possible C2 Infrastructure

    184.168.123[.]247 IP Possible C2 Infrastructure

    184.168.123[.]251 IP Possible C2 Infrastructure

    184.168.123[.]252 IP Possible C2 Infrastructure

    184.168.123[.]229 IP Possible C2 Infrastructure

    184.168.123[.]246 IP Possible C2 Infrastructure

    184.168.123[.]230 IP Possible C2 Infrastructure

    gfs440n010.userstorage.me ga.co[.]nz Hostname Possible Exfiltration Server. Not inherently malicious; associated with MEGA file storage.

    gfs440n010.userstorage.me ga.co[.]nz Hostname Possible Exfiltration Server. Not inherently malicious; associated with MEGA file storage.

    Get the latest insights on emerging cyber threats

    This report explores the latest trends shaping the cybersecurity landscape and what defenders need to know in 2025

    Inside the SOC
    Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
    Written by
    Alexandra Sentenac
    Cyber Analyst

    More in this series

    No items found.

    Blog

    /

    AI

    /

    December 23, 2025

    How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents

    How to secure AI in the enterprise: A practical framework for models, data, and agents Default blog imageDefault blog image

    Introduction: Why securing AI is now a security priority

    AI adoption is at the forefront of the digital movement in businesses, outpacing the rate at which IT and security professionals can set up governance models and security parameters. Adopting Generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed but also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?  

    While demand for adoption increases, securing AI use in the enterprise is still an abstract concept to many and operationalizing its use goes far beyond just having visibility. Practitioners need to also consider how AI is sourced, built, deployed, used, and governed across the enterprise.

    The goal for security teams: Implement a clear, lifecycle-based AI security framework. This blog will demonstrate the variety of AI use cases that should be considered when developing this framework and how to frame this conversation to non-technical audiences.  

    What does “securing AI” actually mean?

    Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.

    Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.

    AI does not fit neatly into any of these categories. An AI system is simultaneously:

    • An application that executes logic
    • A data processor that ingests and generates sensitive information
    • A decision-making layer that influences or automates actions
    • A dynamic system that changes behavior over time

    As a result, the security risks introduced by AI cuts across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk all within the same workflow. This is where the traditional lines between security functions begin to blur.

    For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.

    In each case, no single security team “owns” the risk outright.

    This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.

    At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.

    The five categories of AI risk in the enterprise

    A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem  

    How to Secure AI in the Enterprise:

    • Defending against misuse and emergent behaviors
    • Monitoring and controlling AI in operation
    • Protecting AI development and infrastructure
    • Securing the AI supply chain
    • Strengthening readiness and oversight

    Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.

    1. Defending against misuse and emergent AI behaviors

    Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.

    Key risks include:

    • Malicious prompt injection designed to coerce unwanted actions
    • Unauthorized or unintended use cases that bypass guardrails
    • Exposure of sensitive data through prompt histories
    • Hallucinated or malicious outputs that influence human behavior

    Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes

    2. Monitoring and controlling AI in operation

    Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.

    Operational AI risks include:

    • Agents using permissions in unintended ways
    • Uncontrolled outbound connections to external services or agents
    • Loss of forensic visibility into ephemeral AI components
    • Non-compliant data transmission across jurisdictions

    Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.

    3. Protecting AI development and infrastructure

    Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.

    Common risks include:

    • Misconfigured permissions and guardrails
    • Insecure or overly complex agent architectures
    • Infrastructure-as-Code introducing silent misconfigurations
    • Vulnerabilities in AI-generated code and dependencies

    AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.

    4. Securing the AI supply chain

    AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.

    Key supply chain risks include:

    • Shadow AI tools used outside approved controls
    • External AI agents granted internal access
    • Suppliers applying AI to enterprise data without disclosure
    • Compromised models, training data, or dependencies

    Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.

    5. Strengthening readiness and oversight

    Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.

    Oversight risks include:

    • Lack of meaningful AI risk reporting
    • Untested AI systems in production
    • Security teams untrained in AI-specific threats

    Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.

    Reframing AI security for the boardroom

    AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.

    Effective communication with leadership focuses on:

    • Trust: confidence in data integrity, model behavior, and outputs
    • Accountability: clear ownership across teams and suppliers
    • Resilience: the ability to operate, audit, and adapt under attack or regulation

    Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.

    Conclusion: Securing AI is a lifecycle challenge

    The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.

    Securing AI requires restoring clarity. Knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.

    The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.

    Continue reading
    About the author
    Brittany Woodsmall
    Product Marketing Manager, AI & Attack Surface

    Blog

    /

    AI

    /

    December 22, 2025

    The Year Ahead: AI Cybersecurity Trends to Watch in 2026

    2026 cyber threat trendsDefault blog imageDefault blog image

    Introduction: 2026 cyber trends

    Each year, we ask some of our experts to step back from the day-to-day pace of incidents, vulnerabilities, and headlines to reflect on the forces reshaping the threat landscape. The goal is simple:  to identify and share the trends we believe will matter most in the year ahead, based on the real-world challenges our customers are facing, the technology and issues our R&D teams are exploring, and our observations of how both attackers and defenders are adapting.  

    In 2025, we saw generative AI and early agentic systems moving from limited pilots into more widespread adoption across enterprises. Generative AI tools became embedded in SaaS products and enterprise workflows we rely on every day, AI agents gained more access to data and systems, and we saw glimpses of how threat actors can manipulate commercial AI models for attacks. At the same time, expanding cloud and SaaS ecosystems and the increasing use of automation continued to stretch traditional security assumptions.

    Looking ahead to 2026, we’re already seeing the security of AI models, agents, and the identities that power them becoming a key point of tension – and opportunity -- for both attackers and defenders. Long-standing challenges and risks such as identity, trust, data integrity, and human decision-making will not disappear, but AI and automation will increase the speed and scale of the cyber risk.  

    Here's what a few of our experts believe are the trends that will shape this next phase of cybersecurity, and the realities organizations should prepare for.  

    Agentic AI is the next big insider risk

    In 2026, organizations may experience their first large-scale security incidents driven by agentic AI behaving in unintended ways—not necessarily due to malicious intent, but because of how easily agents can be influenced. AI agents are designed to be helpful, lack judgment, and operate without understanding context or consequence. This makes them highly efficient—and highly pliable. Unlike human insiders, agentic systems do not need to be socially engineered, coerced, or bribed. They only need to be prompted creatively, misinterpret legitimate prompts, or be vulnerable to indirect prompt injection. Without strong controls around access, scope, and behavior, agents may over-share data, misroute communications, or take actions that introduce real business risk. Securing AI adoption will increasingly depend on treating agents as first-class identities—monitored, constrained, and evaluated based on behavior, not intent.

    -- Nicole Carignan, SVP of Security & AI Strategy

    Prompt Injection moves from theory to front-page breach

    We’ll see the first major story of an indirect prompt injection attack against companies adopting AI either through an accessible chatbot or an agentic system ingesting a hidden prompt. In practice, this may result in unauthorized data exposure or unintended malicious behavior by AI systems, such as over-sharing information, misrouting communications, or acting outside their intended scope. Recent attention on this risk—particularly in the context of AI-powered browsers and additional safety layers being introduced to guide agent behavior—highlights a growing industry awareness of the challenge.  

    -- Collin Chapleau, Senior Director of Security & AI Strategy

    Humans are even more outpaced, but not broken

    When it comes to cyber, people aren’t failing; the system is moving faster than they can. Attackers exploit the gap between human judgment and machine-speed operations. The rise of deepfakes and emotion-driven scams that we’ve seen in the last few years reduce our ability to spot the familiar human cues we’ve been taught to look out for. Fraud now spans social platforms, encrypted chat, and instant payments in minutes. Expecting humans to be the last line of defense is unrealistic.

    Defense must assume human fallibility and design accordingly. Automated provenance checks, cryptographic signatures, and dual-channel verification should precede human judgment. Training still matters, but it cannot close the gap alone. In the year ahead, we need to see more of a focus on partnership: systems that absorb risk so humans make decisions in context, not under pressure.

    -- Margaret Cunningham, VP of Security & AI Strategy

    AI removes the attacker bottleneck—smaller organizations feel the impact

    One factor that is currently preventing more companies from breaches is a bottleneck on the attacker side: there’s not enough human hacker capital. The number of human hands on a keyboard is a rate-determining factor in the threat landscape. Further advancements of AI and automation will continue to open that bottleneck. We are already seeing that. The ostrich approach of hoping that one’s own company is too obscure to be noticed by attackers will no longer work as attacker capacity increases.  

    -- Max Heinemeyer, Global Field CISO

    SaaS platforms become the preferred supply chain target

    Attackers have learned a simple lesson: compromising SaaS platforms can have big payouts. As a result, we’ll see more targeting of commercial off-the-shelf SaaS providers, which are often highly trusted and deeply integrated into business environments. Some of these attacks may involve software with unfamiliar brand names, but their downstream impact will be significant. In 2026, expect more breaches where attackers leverage valid credentials, APIs, or misconfigurations to bypass traditional defenses entirely.

    -- Nathaniel Jones, VP of Security & AI Strategy

    Increased commercialization of generative AI and AI assistants in cyber attacks

    One trend we’re watching closely for 2026 is the commercialization of AI-assisted cybercrime. For example, cybercrime prompt playbooks sold on the dark web—essentially copy-and-paste frameworks that show attackers how to misuse or jailbreak AI models. It’s an evolution of what we saw in 2025, where AI lowered the barrier to entry. In 2026, those techniques become productized, scalable, and much easier to reuse.  

    -- Toby Lewis, Global Head of Threat Analysis

    Conclusion

    Taken together, these trends underscore that the core challenges of cybersecurity are not changing dramatically -- identity, trust, data, and human decision-making still sit at the core of most incidents. What is changing quickly is the environment in which these challenges play out. AI and automation are accelerating everything: how quickly attackers can scale, how widely risk is distributed, and how easily unintended behavior can create real impact. And as technology like cloud services and SaaS platforms become even more deeply integrated into businesses, the potential attack surface continues to expand.  

    Predictions are not guarantees. But the patterns emerging today suggest that 2026 will be a year where securing AI becomes inseparable from securing the business itself. The organizations that prepare now—by understanding how AI is used, how it behaves, and how it can be misused—will be best positioned to adopt these technologies with confidence in the year ahead.

    Learn more about how to secure AI adoption in the enterprise without compromise by registering to join our live launch webinar on February 3, 2026.  

    Continue reading
    About the author
    The Darktrace Community
    Your data. Our AI.
    Elevate your network security with Darktrace AI