Blog
/
Network
/
March 19, 2024

Pikabot Malware: Insights, Impact, & Attack Analysis

Learn about Pikabot malware and its rapid evolution in the wild, impacting organizations and how to defend against this growing threat.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brianna Leddy
Director of Analyst Operations
Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
19
Mar 2024

How does Loader Malware work?

Throughout 2023, the Darktrace Threat Research team identified and investigated multiple strains of loader malware affecting customers across its fleet. These malicious programs typically serve as a gateway for threat actors to gain initial access to an organization’s network, paving the way for subsequent attacks, including additional malware infections or disruptive ransomware attacks.

How to defend against loader malware

The prevalence of such initial access threats highlights the need for organizations to defend against multi-phase compromises, where modular malware swiftly progresses from one stage of an attack to the next. One notable example observed in 2023 was Pikabot, a versatile loader malware used for initial access and often accompanied by secondary compromises like Cobalt Strike and Black Basta ransomware.

While Darktrace initially investigated multiple instances of campaign-like activity associated with Pikabot during the summer of 2023, a new campaign emerged in October which was observed targeting a Darktrace customer in Europe. Thanks to the timely detection by Darktrace DETECT™ and the support of Darktrace’s Security Operations Center (SOC), the Pikabot compromise was quickly shut down before it could escalate into a more disruptive attack.

What is Pikabot?

Pikabot is one of the latest modular loader malware strains that has been active since the first half of 2023, with several evolutions in its methodology observed in the months since. Initial researchers noted similarities to the Qakbot aka Qbot or Pinkslipbot and Mantanbuchus malware families, and while Pikabot appears to be a new malware in early development, it shares multiple commonalities with Qakbot [1].

First, both Pikabot and Qakbot have similar distribution methods, can be used for multi-stage attacks, and are often accompanied by downloads of Cobalt Strike and other malware strains. The threat actor known as TA577, which has also been referred to as Water Curupira, has been seen to use both types of malware in spam campaigns which can lead to Black Basta ransomware attacks [2] [3].Notably, a rise in Pikabot campaigns were observed in September and October 2023, shortly after the takedown of Qakbot in Operation Duck Hunt, suggesting that Pikabot may be serving as a replacement for initial access to target network [4].

How does Pikabot malware work?

Many Pikabot infections start with a malicious email, particularly using email thread hijacking; however, other cases have been distributed via malspam and malvertising [5]. Once downloaded, Pikabot runs anti-analysis techniques and checks the system’s language, self-terminating if the language matches that of a Commonwealth of Independent States (CIS) country, such as Russian or Ukrainian. It will then gather key information to send to a command-and-control (C2) server, at which point additional payload downloads may be observed [2]. Early response to a Pikabot infection is important for organizations to prevent escalation to a significant compromise such as ransomware.

Darktrace’s Coverage of Pikabot malware

Between April and July 2023, the Darktrace Threat Research team investigated Pikabot infections affected more than 15 customer environments; these attacks primarily targeted US and European organizations spanning multiple industries, and most followed the below lifecycle:

  1. Initial access via malspam or email, often outside of Darktrace’s scope
  2. Suspicious executable download from a URI in the format /\/[a-z0-9A-Z]{3,}\/[a-z0-9A-Z]{5,}/ and using a Windows PowerShell user agent
  3. C2 connections to IP addresses on uncommon ports including 1194 and 2078
  4. Some cases involved further C2 activity to Cobalt Strike endpoints

In October 2023, a second campaign emerged that largely followed the same attack pattern, with a notable difference that cURL was used for the initial payload download as opposed to PowerShell. All the Pikabot cases that Darktrace has observed since October 2023 have used cURL, which could indicate a shift in approach from targeting Windows devices to multi-operating system environments.

Figure 1: Timeline of the Pikabot infection over a 2-hour period.

On October 17, 2023, Darktrace observed a Pikabot infection on the network of a European customer after an internal user seemingly clicked a malicious link in a phishing email, thereby compromising their device. As the customer did not have Darktrace/Email™ deployed on their network, Darktrace did not have visibility over the email. Despite this, DETECT was still able to provide full visibility over the network-based activity that ensued.

Darktrace observed the device using a cURL user agent when initiating the download of an unusual executable (.exe) file from an IP address that had never previously been observed on the network. Darktrace further recognized that the executable file was attempting to masquerade as a different file type, likely to evade the detection of security teams and their security tools. Within one minute, the device began to communicate with additional unusual IP addresses on uncommon ports (185.106.94[.]174:5000 and 80.85.140[.]152:5938), both of which have been noted by open-source intelligence (OSINT) vendors as Pikabot C2 servers [6] [7].

Figure 2: Darktrace model breach Event Log showing the initial file download, immediately followed by a connection attempt to a Pikabot C2 server.

Around 40 minutes after the initial download, Darktrace detected the device performing suspicious DNS tunneling using a pattern that resembled the Cobalt Strike Beacon. This was accompanied by beaconing activity to a rare domain, ‘wordstt182[.]com’, which was registered only 4 days prior to this activity [8]. Darktrace observed additional DNS connections to the endpoint, ‘building4business[.]net’, which had been linked to Black Basta ransomware [2].

Figure 3: The affected device making successful TXT DNS requests to known Black Basta endpoints.

As this customer had integrated Darktrace with the Microsoft Defender, Defender was able to contextualize the DETECT model breaches with endpoint insights, such as known threats and malware, providing customers with unparalleled visibility of the host-level detections surrounding network-level anomalies.

In this case, the behavior of the affected device triggered multiple Microsoft Defender alerts, including one alert which linked the activity to the threat actor Storm-0464, another name for TA577 and Water Curupira. These insights were presented to the customer in the form of a Security Integration alert, allowing them to build a full picture of the ongoing incident.

Figure 4: Security Integration alert from Microsoft Defender in Darktrace, linking the observed activity to the threat group Storm-0464.

As the customer had subscribed to Darktrace’s Proactive Threat Notification (PTN) service, the customer received timely alerts from Darktrace’s SOC notifying them of the suspicious activity associated with Pikabot. This allowed the customer’s security team to quickly identify the affected device and remove it from their environment for remediation.

Although the customer did have Darktrace RESPOND™ enabled on their network, it was configured in human confirmation mode, requiring manual application for any RESPOND actions. RESPOND had suggested numerous actions to interrupt and contain the attack, including blocking connections to the observed Pikabot C2 addresses, which were manually actioned by the customer’s security team after the fact. Had RESPOND been enabled in autonomous response mode during the attack, it would have autonomously blocked these C2 connections and prevented the download of any suspicious files, effectively halting the escalation of the attack.

Nonetheless, Darktrace DETECT’s prompt identification and alerting of this incident played a crucial role in enabling the customer to mitigate the threat of Pikabot, preventing it from progressing into a disruptive ransomware attack.

Figure 5: Darktrace RESPOND actions recommended from the initial file download and throughout the C2 traffic, ranging from blocking specific connections to IP addresses and ports to enforcing a normal pattern of life for the source device.

Conclusion

Pikabot is just one recent example of a modular strain of loader known for its adaptability and speed, seamlessly changing tactics from one campaign to the next and utilizing new infrastructure to initiate multi-stage attacks. Leveraging commonly used tools and services like Windows PowerShell and cURL, alongside anti-analysis techniques, this malware can evade the detection and often bypass traditional security tools.

In this incident, Darktrace detected a Pikabot infection in its early stages, identifying an anomalous file download using a cURL user agent, a new tactic for this particular strain of malware. This timely detection, coupled with the support of Darktrace’s SOC, empowered the customer to quickly identify the compromised device and act against it, thwarting threat actors attempting to connect to malicious Cobalt Strike and Black Basta servers. By preventing the escalation of the attack, including potential ransomware deployment, the customer’s environment remained safeguarded.

Had Darktrace RESPOND been enabled in autonomous response mode at the time of this attack, it would have been able to further support the customer by applying targeted mitigative actions to contain the threat of Pikabot at its onset, bolstering their defenses even more effectively.

Credit to Brianna Leddy, Director of Analysis, Signe Zaharka, Senior Cyber Security Analyst

Appendix

Darktrace DETECT Models

Anomalous Connection / Anomalous SSL without SNI to New External

Anomalous Connection / Application Protocol on Uncommon Port

Anomalous Connection / Multiple Connections to New External TCP Port

Anomalous Connection / New User Agent to IP Without Hostname

Anomalous Connection / Powershell to Rare External

Anomalous Connection / Rare External SSL Self-Signed

Anomalous Connection / Repeated Rare External SSL Self-Signed

Anomalous File / EXE from Rare External Location

Anomalous File / Masqueraded File Transfer

Anomalous File / Multiple EXE from Rare External Locations

Compromise / Agent Beacon to New Endpoint

Compromise / Beacon to Young Endpoint

Compromise / Beaconing Activity To External Rare

Compromise / DNS / DNS Tunnel with TXT Records

Compromise / New or Repeated to Unusual SSL Port

Compromise / SSL Beaconing to Rare Destination

Compromise / Suspicious Beaconing Behaviour

Compromise / Suspicious File and C2

Device / Initial Breach Chain Compromise

Device / Large Number of Model Breaches

Device / New PowerShell User Agent

Device / New User Agent

Device / New User Agent and New IP

Device / Suspicious Domain

Security Integration / C2 Activity and Integration Detection

Security Integration / Egress and Integration Detection

Security Integration / High Severity Integration Detection

Security Integration / High Severity Integration Incident

Security Integration / Low Severity Integration Detection

Security Integration / Low Severity Integration Incident

Antigena / Network / External Threat / Antigena File then New Outbound Block

Antigena / Network / External Threat / Antigena Suspicious Activity Block

Antigena / Network / External Threat / Antigena Suspicious File Block

Antigena / Network / Significant Anomaly / Antigena Breaches Over Time Block

Antigena / Network / Significant Anomaly / Antigena Controlled and Model Breach

Antigena / Network / Significant Anomaly / Antigena Enhanced Monitoring from Client Block

Antigena / Network / Significant Anomaly / Antigena Significant Anomaly from Client Block

Antigena / Network / Significant Anomaly / Antigena Significant Security Integration and Network Activity Block

List of Indicators of Compromise (IoC)

IOC - TYPE - DESCRIPTION + CONFIDENCE

128.140.102[.]132 - IP Address - Pikabot Download

185.106.94[.]174:5000 - IP Address: Port - Pikabot C2 Endpoint

80.85.140[.]152:5938 - IP Address: Port - Pikabot C2 Endpoint

building4business[.]net - Hostname - Cobalt Strike DNS Beacon

wordstt182[.]com - Hostname - Cobalt Strike Server

167.88.166[.]109 - IP Address - Cobalt Strike Server

192.9.135[.]73 - IP - Pikabot C2 Endpoint

192.121.17[.]68 - IP - Pikabot C2 Endpoint

185.87.148[.]132 - IP - Pikabot C2 Endpoint

129.153.22[.]231 - IP - Pikabot C2 Endpoint

129.153.135[.]83 - IP - Pikabot C2 Endpoint

154.80.229[.]76 - IP - Pikabot C2 Endpoint

192.121.17[.]14 - IP - Pikabot C2 Endpoint

162.252.172[.]253 - IP - Pikabot C2 Endpoint

103.124.105[.]147 - IP - Likely Pikabot Download

178.18.246[.]136 - IP - Pikabot C2 Endpoint

86.38.225[.]106 - IP - Pikabot C2 Endpoint

198.44.187[.]12 - IP - Pikabot C2 Endpoint

154.12.233[.]66 - IP - Pikabot C2 Endpoint

MITRE ATT&CK Mapping

TACTIC - TECHNIQUE

Defense Evasion - Masquerading: Masquerade File Type (T1036.008)

Command and Control - Application Layer Protocol: Web Protocols (T1071.001)

Command and Control - Non-Standard Port (T1571)

Command and Control - Application Layer Protocol: DNS (T1071.004)

Command and Control - Protocol Tunneling (T1572)

References

[1] https://news.sophos.com/en-us/2023/06/12/deep-dive-into-the-pikabot-cyber-threat/?&web_view=true  

[2] https://www.trendmicro.com/en_be/research/24/a/a-look-into-pikabot-spam-wave-campaign.html

[3] https://thehackernews.com/2024/01/alert-water-curupira-hackers-actively.html

[4] https://www.darkreading.com/cyberattacks-data-breaches/pikabot-malware-qakbot-replacement-black-basta-attacks

[5] https://www.redpacketsecurity.com/pikabot-distributed-via-malicious-ads-6/

[6] https://www.virustotal.com/gui/ip-address/185.106.94.174/detection

[7] https://www.virustotal.com/gui/ip-address/80.85.140.152/detection

[8] https://www.domainiq.com/domain?wordstt182.com

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Brianna Leddy
Director of Analyst Operations

More in this series

No items found.

Blog

/

AI

/

December 23, 2025

How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents

How to secure AI in the enterprise: A practical framework for models, data, and agents Default blog imageDefault blog image

Introduction: Why securing AI is now a security priority

AI adoption is at the forefront of the digital movement in businesses, outpacing the rate at which IT and security professionals can set up governance models and security parameters. Adopting Generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed but also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?  

While demand for adoption increases, securing AI use in the enterprise is still an abstract concept to many and operationalizing its use goes far beyond just having visibility. Practitioners need to also consider how AI is sourced, built, deployed, used, and governed across the enterprise.

The goal for security teams: Implement a clear, lifecycle-based AI security framework. This blog will demonstrate the variety of AI use cases that should be considered when developing this framework and how to frame this conversation to non-technical audiences.  

What does “securing AI” actually mean?

Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.

Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.

AI does not fit neatly into any of these categories. An AI system is simultaneously:

  • An application that executes logic
  • A data processor that ingests and generates sensitive information
  • A decision-making layer that influences or automates actions
  • A dynamic system that changes behavior over time

As a result, the security risks introduced by AI cuts across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk all within the same workflow. This is where the traditional lines between security functions begin to blur.

For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.

In each case, no single security team “owns” the risk outright.

This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.

At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.

The five categories of AI risk in the enterprise

A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem  

How to Secure AI in the Enterprise:

  • Defending against misuse and emergent behaviors
  • Monitoring and controlling AI in operation
  • Protecting AI development and infrastructure
  • Securing the AI supply chain
  • Strengthening readiness and oversight

Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.

1. Defending against misuse and emergent AI behaviors

Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.

Key risks include:

  • Malicious prompt injection designed to coerce unwanted actions
  • Unauthorized or unintended use cases that bypass guardrails
  • Exposure of sensitive data through prompt histories
  • Hallucinated or malicious outputs that influence human behavior

Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes

2. Monitoring and controlling AI in operation

Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.

Operational AI risks include:

  • Agents using permissions in unintended ways
  • Uncontrolled outbound connections to external services or agents
  • Loss of forensic visibility into ephemeral AI components
  • Non-compliant data transmission across jurisdictions

Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.

3. Protecting AI development and infrastructure

Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.

Common risks include:

  • Misconfigured permissions and guardrails
  • Insecure or overly complex agent architectures
  • Infrastructure-as-Code introducing silent misconfigurations
  • Vulnerabilities in AI-generated code and dependencies

AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.

4. Securing the AI supply chain

AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.

Key supply chain risks include:

  • Shadow AI tools used outside approved controls
  • External AI agents granted internal access
  • Suppliers applying AI to enterprise data without disclosure
  • Compromised models, training data, or dependencies

Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.

5. Strengthening readiness and oversight

Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.

Oversight risks include:

  • Lack of meaningful AI risk reporting
  • Untested AI systems in production
  • Security teams untrained in AI-specific threats

Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.

Reframing AI security for the boardroom

AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.

Effective communication with leadership focuses on:

  • Trust: confidence in data integrity, model behavior, and outputs
  • Accountability: clear ownership across teams and suppliers
  • Resilience: the ability to operate, audit, and adapt under attack or regulation

Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.

Conclusion: Securing AI is a lifecycle challenge

The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.

Securing AI requires restoring clarity. Knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.

The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.

Continue reading
About the author
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface

Blog

/

AI

/

December 22, 2025

The Year Ahead: AI Cybersecurity Trends to Watch in 2026

2026 cyber threat trendsDefault blog imageDefault blog image

Introduction: 2026 cyber trends

Each year, we ask some of our experts to step back from the day-to-day pace of incidents, vulnerabilities, and headlines to reflect on the forces reshaping the threat landscape. The goal is simple:  to identify and share the trends we believe will matter most in the year ahead, based on the real-world challenges our customers are facing, the technology and issues our R&D teams are exploring, and our observations of how both attackers and defenders are adapting.  

In 2025, we saw generative AI and early agentic systems moving from limited pilots into more widespread adoption across enterprises. Generative AI tools became embedded in SaaS products and enterprise workflows we rely on every day, AI agents gained more access to data and systems, and we saw glimpses of how threat actors can manipulate commercial AI models for attacks. At the same time, expanding cloud and SaaS ecosystems and the increasing use of automation continued to stretch traditional security assumptions.

Looking ahead to 2026, we’re already seeing the security of AI models, agents, and the identities that power them becoming a key point of tension – and opportunity -- for both attackers and defenders. Long-standing challenges and risks such as identity, trust, data integrity, and human decision-making will not disappear, but AI and automation will increase the speed and scale of the cyber risk.  

Here's what a few of our experts believe are the trends that will shape this next phase of cybersecurity, and the realities organizations should prepare for.  

Agentic AI is the next big insider risk

In 2026, organizations may experience their first large-scale security incidents driven by agentic AI behaving in unintended ways—not necessarily due to malicious intent, but because of how easily agents can be influenced. AI agents are designed to be helpful, lack judgment, and operate without understanding context or consequence. This makes them highly efficient—and highly pliable. Unlike human insiders, agentic systems do not need to be socially engineered, coerced, or bribed. They only need to be prompted creatively, misinterpret legitimate prompts, or be vulnerable to indirect prompt injection. Without strong controls around access, scope, and behavior, agents may over-share data, misroute communications, or take actions that introduce real business risk. Securing AI adoption will increasingly depend on treating agents as first-class identities—monitored, constrained, and evaluated based on behavior, not intent.

-- Nicole Carignan, SVP of Security & AI Strategy

Prompt Injection moves from theory to front-page breach

We’ll see the first major story of an indirect prompt injection attack against companies adopting AI either through an accessible chatbot or an agentic system ingesting a hidden prompt. In practice, this may result in unauthorized data exposure or unintended malicious behavior by AI systems, such as over-sharing information, misrouting communications, or acting outside their intended scope. Recent attention on this risk—particularly in the context of AI-powered browsers and additional safety layers being introduced to guide agent behavior—highlights a growing industry awareness of the challenge.  

-- Collin Chapleau, Senior Director of Security & AI Strategy

Humans are even more outpaced, but not broken

When it comes to cyber, people aren’t failing; the system is moving faster than they can. Attackers exploit the gap between human judgment and machine-speed operations. The rise of deepfakes and emotion-driven scams that we’ve seen in the last few years reduce our ability to spot the familiar human cues we’ve been taught to look out for. Fraud now spans social platforms, encrypted chat, and instant payments in minutes. Expecting humans to be the last line of defense is unrealistic.

Defense must assume human fallibility and design accordingly. Automated provenance checks, cryptographic signatures, and dual-channel verification should precede human judgment. Training still matters, but it cannot close the gap alone. In the year ahead, we need to see more of a focus on partnership: systems that absorb risk so humans make decisions in context, not under pressure.

-- Margaret Cunningham, VP of Security & AI Strategy

AI removes the attacker bottleneck—smaller organizations feel the impact

One factor that is currently preventing more companies from breaches is a bottleneck on the attacker side: there’s not enough human hacker capital. The number of human hands on a keyboard is a rate-determining factor in the threat landscape. Further advancements of AI and automation will continue to open that bottleneck. We are already seeing that. The ostrich approach of hoping that one’s own company is too obscure to be noticed by attackers will no longer work as attacker capacity increases.  

-- Max Heinemeyer, Global Field CISO

SaaS platforms become the preferred supply chain target

Attackers have learned a simple lesson: compromising SaaS platforms can have big payouts. As a result, we’ll see more targeting of commercial off-the-shelf SaaS providers, which are often highly trusted and deeply integrated into business environments. Some of these attacks may involve software with unfamiliar brand names, but their downstream impact will be significant. In 2026, expect more breaches where attackers leverage valid credentials, APIs, or misconfigurations to bypass traditional defenses entirely.

-- Nathaniel Jones, VP of Security & AI Strategy

Increased commercialization of generative AI and AI assistants in cyber attacks

One trend we’re watching closely for 2026 is the commercialization of AI-assisted cybercrime. For example, cybercrime prompt playbooks sold on the dark web—essentially copy-and-paste frameworks that show attackers how to misuse or jailbreak AI models. It’s an evolution of what we saw in 2025, where AI lowered the barrier to entry. In 2026, those techniques become productized, scalable, and much easier to reuse.  

-- Toby Lewis, Global Head of Threat Analysis

Conclusion

Taken together, these trends underscore that the core challenges of cybersecurity are not changing dramatically -- identity, trust, data, and human decision-making still sit at the core of most incidents. What is changing quickly is the environment in which these challenges play out. AI and automation are accelerating everything: how quickly attackers can scale, how widely risk is distributed, and how easily unintended behavior can create real impact. And as technology like cloud services and SaaS platforms become even more deeply integrated into businesses, the potential attack surface continues to expand.  

Predictions are not guarantees. But the patterns emerging today suggest that 2026 will be a year where securing AI becomes inseparable from securing the business itself. The organizations that prepare now—by understanding how AI is used, how it behaves, and how it can be misused—will be best positioned to adopt these technologies with confidence in the year ahead.

Learn more about how to secure AI adoption in the enterprise without compromise by registering to join our live launch webinar on February 3, 2026.  

Continue reading
About the author
The Darktrace Community
Your data. Our AI.
Elevate your network security with Darktrace AI