Blog
/
AI
/
February 10, 2025

From Hype to Reality: How AI is Transforming Cybersecurity Practices

AI hype is everywhere, but not many vendors are getting specific. Darktrace’s multi-layered AI combines various machine learning techniques for behavioral analytics, real-time threat detection, investigation, and autonomous response.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Nicole Carignan
SVP, Security & AI Strategy, Field CISO
Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
10
Feb 2025

AI is everywhere, predominantly because it has changed the way humans interact with data. AI is a powerful tool for data analytics, predictions, and recommendations, but accuracy, safety, and security are paramount for operationalization.

In cybersecurity, AI-powered solutions are becoming increasingly necessary to keep up with modern business complexity and this new age of cyber-threat, marked by attacker innovation, use of AI, speed, and scale. The emergence of these new threats calls for a varied and layered approach in AI security technology to anticipate asymmetric threats.

While many cybersecurity vendors are adding AI to their products, they are not always communicating the capabilities or data used clearly. This is especially the case with Large Language Models (LLMs). Many products are adding interactive and generative capabilities which do not necessarily increase the efficacy of detection and response but rather are aligned with enhancing the analyst and security team experience and data retrieval.

Consequently, many  people erroneously conflate generative AI with other types of AI. Similarly, only 31% of security professionals report that they are “very familiar” with supervised machine learning, the type of AI most often applied in today’s cybersecurity solutions to identify threats using attack artifacts and facilitate automated responses. This confusion around AI and its capabilities can result in suboptimal cybersecurity measures, overfitting, inaccuracies due to ineffective methods/data, inefficient use of resources, and heightened exposure to advanced cyber threats.

Vendors must cut through the AI market and demystify the technology in their products for safe, secure, and accurate adoption. To that end, let’s discuss common AI techniques in cybersecurity as well as how Darktrace applies them.

Modernizing cybersecurity with AI

Machine learning has presented a significant opportunity to the cybersecurity industry, and many vendors have been using it for years. Despite the high potential benefit of applying machine learning to cybersecurity, not every AI tool or machine learning model is equally effective due to its technique, application, and data it was trained on.

Supervised machine learning and cybersecurity

Supervised machine models are trained on labeled, structured data to facilitate automation of a human-led trained tasks. Some cybersecurity vendors have been experimenting with supervised machine learning for years, with most automating threat detection based on reported attack data using big data science, shared cyber-threat intelligence, known or reported attack behavior, and classifiers.

In the last several years, however, more vendors have expanded into the behavior analytics and anomaly detection side. In many applications, this method separates the learning, when the behavioral profile is created (baselining), from the subsequent anomaly detection. As such, it does not learn continuously and requires periodic updating and re-training to try to stay up to date with dynamic business operations and new attack techniques. Unfortunately, this opens the door for a high rate of daily false positives and false negatives.

Unsupervised machine learning and cybersecurity

Unlike supervised approaches, unsupervised machine learning does not require labeled training data or human-led training. Instead, it independently analyzes data to detect compelling patterns without relying on knowledge of past threats. This removes the dependency of human input or involvement to guide learning.

However, it is constrained by input parameters, requiring a thoughtful consideration of technique and feature selection to ensure the accuracy of the outputs. Additionally, while it can discover patterns in data as they are anomaly-focused, some of those patterns may be irrelevant and distracting.

When using models for behavior analytics and anomaly detection, the outputs come in the form of anomalies rather than classified threats, requiring additional modeling for threat behavior context and prioritization. Anomaly detection performed in isolation can render resource-wasting false positives.

LLMs and cybersecurity

LLMs are a major aspect of mainstream generative AI, and they can be used in both supervised and unsupervised ways. They are pre-trained on massive volumes of data and can be applied to human language, machine language, and more.

With the recent explosion of LLMs in the market, many vendors are rushing to add generative AI to their products, using it for chatbots, Retrieval-Augmented Generation (RAG) systems, agents, and embeddings. Generative AI in cybersecurity can optimize data retrieval for defenders, summarize reporting, or emulate sophisticated phishing attacks for preventative security.

But, since this is semantic analysis, LLMs can struggle with the reasoning necessary for security analysis and detection consistently. If not applied responsibly, generative AI can cause confusion by “hallucinating,” meaning referencing invented data, without additional post-processing to decrease the impact or by providing conflicting responses due to confirmation bias in the prompts written by different security team members.

Combining techniques in a multi-layered AI approach

Each type of machine learning technique has its own set of strengths and weaknesses, so a multi-layered, multi-method approach is ideal to enhance functionality while overcoming the shortcomings of any one method.

Darktrace’s Self-Learning AI is a multi-layered engine is powered by multiple machine learning approaches, which operate in combination for cyber defense. This allows Darktrace to protect the entire digital estates of the organizations it secures, including corporate networks, cloud computing services, SaaS applications, IoT, Industrial Control Systems (ICS), and email systems.

Plugged into the organization’s infrastructure and services, our AI engine ingests and analyzes the raw data and its interactions within the environment and forms an understanding of the normal behavior, right down to the granular details of specific users and devices. The system continually revises its understanding about what is normal based on evolving evidence, continuously learning as opposed to baselining techniques.

This dynamic understanding of normal partnered with dozens of anomaly detection models means that the AI engine can identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. Understanding anomalies through the lens of many models as well as autonomously fine-tuning the models’ performances gives us a higher understanding and confidence in anomaly detection.

The next layer provides event correlation and threat behavior context to understand the risk level of an anomalous event(s). Every anomalous event is investigated by Cyber AI Analyst that uses a combination of unsupervised machine learning models to analyze logs with supervised machine learning trained on how to investigate. This provides anomaly and risk context along with investigation outcomes with explainability.

The ability to identify activity that represents the first footprints of an attacker, without any prior knowledge or intelligence, lies at the heart of the AI system’s efficacy in keeping pace with threat actor innovations and changes in tactics and techniques. It helps the human team detect subtle indicators that can be hard to spot amid the immense noise of legitimate, day-to-day digital interactions. This enables advanced threat detection with full domain visibility.

Digging deeper into AI: Mapping specific machine learning techniques to cybersecurity functions

Visibility and control are vital for the practical adoption of AI solutions, as it builds trust between human security teams and their AI tools. That is why we want to share some specific applications of AI across our solutions, moving beyond hype and buzzwords to provide grounded, technical explanations.

Darktrace’s technology helps security teams cover every stage of the incident lifecycle with a range of comprehensive analysis and autonomous investigation and response capabilities.

  1. Behavioral prediction: Our AI understands your unique organization by learning normal patterns of life. It accomplishes this with multiple clustering algorithms, anomaly detection models, Bayesian meta-classifier for autonomous fine-tuning, graph theory, and more.
  2. Real-time threat detection: With a true understanding of normal, our AI engine connects anomalous events to risky behavior using probabilistic models. 
  3. Investigation: Darktrace performs in-depth analysis and investigation of anomalies, in particular automating Level 1 of a SOC team and augmenting the rest of the SOC team through prioritization for human-led investigations. Some of these methods include supervised and unsupervised machine learning models, semantic analysis models, and graph theory.
  4. Response: Darktrace calculates the proportional action to take in order to neutralize in-progress attacks at machine speed. As a result, organizations are protected 24/7, even when the human team is out of the office. Through understanding the normal pattern of life of an asset or peer group, the autonomous response engine can isolate the anomalous/risky behavior and surgically block. The autonomous response engine also has the capability to enforce the peer group’s pattern of life when rare and risky behavior continues.
  5. Customizable model editor: This layer of customizable logic models tailors our AI’s processing to give security teams more visibility as well as the opportunity to adapt outputs, therefore increasing explainability, interpretability, control, and the ability to modify the operationalization of the AI output with auditing.

See the complete AI architecture in the paper “The AI Arsenal: Understanding the Tools Shaping Cybersecurity.”

Figure 1. Alerts can be customized in the model editor in many ways like editing the thresholds for rarity and unusualness scores above.

Machine learning is the fundamental ally in cyber defense

Traditional security methods, even those that use a small subset of machine learning, are no longer sufficient, as these tools can neither keep up with all possible attack vectors nor respond fast enough to the variety of machine-speed attacks, given their complexity compared to known and expected patterns.

Security teams require advanced detection capabilities, using multiple machine learning techniques to understand the environment, filter the noise, and take action where threats are identified.

Darktrace’s Self-Learning AI comes together to achieve behavioral prediction, real-time threat detection and response, and incident investigation, all while empowering your security team with visibility and control.

Learn how AI is Applied in Cybersecurity

Discover specifically how Darktrace applies different types of AI to improve cybersecurity efficacy and operations in this technical paper.

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Nicole Carignan
SVP, Security & AI Strategy, Field CISO

More in this series

No items found.

Blog

/

Network

/

March 26, 2026

Phantom Footprints: Tracking GhostSocks Malware

Default blog imageDefault blog image

Why are attackers using residential proxies?

In today's threat landscape, blending in to normal activity is the key to success for attackers and the growing reliance on residential proxies shows a significant shift in how threat actors are attempting to bypass IP detection tools.

The increasing dependency on residential proxies has exposed how prevalent proxy services are and how reliant a diverse range of threat actors are on them. From cybercriminal groups to state‑sponsored actors, the need to bypass IP detection tools is fundamental to the success of these groups. One malware that has quietly become notorious for its ability to avoid anomaly detection is GhostSocks, a malware that turns compromised devices into residential proxies.

What is GhostSocks?

Originally marketed on the Russian underground forum xss[.]is as a Malware‑as‑a‑Service (MaaS), GhostSocks enables threat actors to turn compromised devices into residential proxies, leveraging the victim's internet bandwidth to route malicious traffic through it.

How does Ghostsocks malware work? 

The malware offers the threat actor a “clean” IP address, making it look like it is coming from a household user. This enables the bypassing of geographic restrictions and IP detection tools, a perfect tool for avoiding anomaly detection. It wasn’t until 2024, when a partnership was announced with the infamous information stealer Lumma Stealer, that GhostSocks surged into widespread adoption and alluded to who may be the author of the proxy malware.

Written in GoLang, GhostSocks utilizes the SOCKS5 proxy protocol, creating a SOCKS5 connection on infected devices. It uses a relay‑based C2 implementation, where an intermediary server sits in between the real command-and-control (C2) server and the infected device.

How does Ghostsocks malware evade detection?

To further increase evasion, the Ghostsocks malware wraps its SOCKS5 tunnels in TLS encryption, allowing its malicious traffic to blend into normal network traffic.

Early variants of GhostSocks do not implement a persistence mechanism; however, later versions achieve persistence via registry run keys, ensuring sustained proxy operational time [1].

While proxying is its primary purpose, GhostSocks also incorporates backdoor functionality, enabling malicious actors to run arbitrary commands and download and deploy additional malicious payloads. This was evident with the well‑known ransomware group Black Basta, which reportedly used GhostSocks as a way of maintaining long‑term access to victims’ networks [1].

Darktrace’s detection of GhostSocks Malware

Darktrace observed a steady increase in GhostSocks activity across its customer base from late 2025, with its Threat Research team identifying multiple incidents involving the malware. In one notable case from December 2025, Darktrace detected GhostSocks operating alongside Lumma Stealer, reinforcing that the partnership between Lumma and GhostSocks remains active despite recent attempts to disrupt Lumma’s infrastructure.

Darktrace’s first detection of GhostSocks‑related activity came when a device on the network of a customer in the education sector began making connections to an endpoint with a suspicious self‑signed certificate that had never been seen on the network before.

The endpoint in question, 159.89.46[.]92 with the hostname retreaw[.]click, has been flagged by multiple open‑source intelligence (OSINT) sources as being associated with Lumma Stealer’s C2 infrastructure [2], indicating its likely role in the delivery of malicious payloads.

Darktrace’s detection of suspicious SSL connections to retreaw[.]click, indicating an attempted link to Lumma C2 infrastructure.
Figure 1: Darktrace’s detection of suspicious SSL connections to retreaw[.]click, indicating an attempted link to Lumma C2 infrastructure.

Less than two minutes later, Darktrace observed the same device downloading the executable (.exe) file “Renewable.exe” from the IP 86.54.24[.]29, which Darktrace recognized as 100% rare for this network.

Darktrace’s detection of a device downloading the unusual executable file “Renewable.exe”.
Figure 2: Darktrace’s detection of a device downloading the unusual executable file “Renewable.exe”.

Both the file MD5 hash and the executable itself have been identified by multiple OSINT vendors as being associated with the GhostSocks malware [3], with the executable likely the backdoor component of the GhostSocks malware, facilitating the distribution of additional malicious payloads [4].

Following this detection, Darktrace’s Autonomous Response capability recommended a blocking action for the device in an early attempt to stop the malicious file download. In this instance, Darktrace was configured in Human Confirmation Mode, meaning the customer’s security team was required to manually apply any mitigative response actions. Had Autonomous Response been fully enabled at the time of the attack, the connections to 86.54.24[.]29 would have been blocked, rendering the malware ineffective at reaching its C2 infrastructure and halting any further malicious communication.

 Darktrace’s Autonomous Response capability suggesting blocking the suspicious connections to the unusual endpoint from which the malicious executable was downloaded.
Figure 3: Darktrace’s Autonomous Response capability suggesting blocking the suspicious connections to the unusual endpoint from which the malicious executable was downloaded.

As the attack was able to progress, two days later the device was detected downloading additional payloads from the endpoint www.lbfs[.]site (23.106.58[.]48), including “Setup.exe”, “,.exe”, and “/vp6c63yoz.exe”.

Darktrace’s detection of a malicious payload being downloaded from the endpoint www.lbfs[.]site.
Figure 4: Darktrace’s detection of a malicious payload being downloaded from the endpoint www.lbfs[.]site.

Once again, Darktrace recognized the anomalous nature of these downloads and suggested that a “group pattern of life” be enforced on the offending device in an attempt to contain the activity. By enforcing a pattern of life on a device, Darktrace restricts its activity to connections and behaviors similar to those performed by peer devices within the same group, while still allowing it to carry out its expected activity, effectively preventing deviations indicative of compromise while minimizing disruption. As mentioned earlier, these mitigative actions required manual implementation, so the activity was able to continue. Darktrace proceeded to suggest further actions to contain subsequent malicious downloads, including an attempt to block all outbound traffic to stop the attack from progressing.

An overview of download activity and the Autonomous Response actions recommended by Darktrace to block the downloads.
Figure 5: An overview of download activity and the Autonomous Response actions recommended by Darktrace to block the downloads.

Around the same time, a third executable download was detected, this time from the hostname hxxp[://]d2ihv8ymzp14lr.cloudfront.net/2021-08-19/udppump[.]exe, along with the file “udppump.exe”.While GhostSocks may have been present only to facilitate the delivery of additional payloads, there is no indication that these CloudFront endpoints or files are functionally linked to GhostSocks. Rather, the evidence points to broader malicious file‑download activity.

Shortly after the multiple executable files had been downloaded, Darktrace observed the device initiating a series of repeated successful connections to several rare external endpoints, behavior consistent with early-stage C2 beaconing activity.

Cyber AI Analyst’s investigation

Darktrace’s detection of additional malicious file downloads from malicious CloudFront endpoints.
Figure 7: Darktrace’s detection of additional malicious file downloads from malicious CloudFront endpoints.

Throughout the course of this attack, Darktrace’s Cyber AI Analyst carried out its own autonomous investigation, piecing together seemingly separate events into one wider incident encompassing the first suspicious downloads beginning on December 4, the unusual connectivity to many suspicious IPs that followed, and the successful beaconing activity observed two days later. By analyzing these events in real-time and viewing them as part of the bigger picture, Cyber AI Analyst was able to construct an in‑depth breakdown of the attack to aid the customer’s investigation and remediation efforts.

Cyber AI Analyst investigation detailing the sequence of events on the compromised device, highlighting its extensive connectivity to rare endpoints, the related malicious file‑download activity, and finally the emergence of C2 beaconing behavior.
Figure 8: Cyber AI Analyst investigation detailing the sequence of events on the compromised device, highlighting its extensive connectivity to rare endpoints, the related malicious file‑download activity, and finally the emergence of C2 beaconing behavior.

Conclusion

The versatility offered by GhostSocks is far from new, but its ability to convert compromised devices into residential proxy nodes, while enabling long‑term, covert network access—illustrates how threat actors continue to maximise the value of their victims’ infrastructure. Its growing popularity, coupled with its ongoing partnership with Lumma, demonstrates that infrastructure takedowns alone are insufficient; as long as threat actors remain committed to maintaining anonymity and can rapidly rebuild their ecosystems, related malware activity is likely to persist in some form.

Credit to Isabel Evans (Cyber Analyst), Gernice Lee (Associate Principal Analyst & Regional Consultancy Lead – APJ)
Edited by Ryan Traill (Content Manager)

Appendices

References

1.    https://bloo.io/research/malware/ghostsocks

2.    https://www.virustotal.com/gui/domain/retreaw.click/community

3.    https://synthient.com/blog/ghostsocks-from-initial-access-to-residential-proxy

4.    https://www.joesandbox.com/analysis/1810568/0/html

5. https://www.virustotal.com/gui/url/fab6525bf6e77249b74736cb74501a9491109dc7950688b3ae898354eb920413

Darktrace Model Detections

Real-time Detection Models

Anomalous Connection / Suspicious Self-Signed SSL

Anomalous Connection / Rare External SSL Self-Signed

Anomalous File / EXE from Rare External Location

Anomalous File / Multiple EXE from Rare External Locations

Compromise / Possible Fast Flux C2 Activity

Compromise / Large Number of Suspicious Successful Connections

Compromise / Large Number of Suspicious Failed Connections

Compromise / Sustained SSL or HTTP Increase

Autonomous Response Models

Antigena / Network / Significant Anomaly / Antigena Significant Anomaly from Client Block

Antigena / Network / External Threat / Antigena Suspicious File Block

Antigena / Network / Significant Anomaly / Antigena Controlled and Model Alert

Antigena / Network / External Threat / Antigena File then New Outbound Block

Antigena / Network / Significant Anomaly / Antigena Alerts Over Time Block

Antigena / Network / External Threat / Antigena Suspicious Activity Block

MITRE ATT&CK Mapping

Tactic – Technique – Sub-Technique

Resource Development – T1588 - Malware

Initial Access - T1189 - Drive-by Compromise

Persistence – T1112 – Modify Registry

Command and Control – T1071 – Application Layer Protocol

Command and Control – T1095 – Non-application Layer Protocol

Command and Control – T1071 – Web Protocols

Command and Control – T1571 – Non-Standard Port

Command and Control – T1102 – One-Way Communication

List of Indicators of Compromise (IoCs)

86.54.24[.]29 - IP - Likely GhostSocks C2

http[://]86.54.24[.]29/Renewable[.]exe - Hostname - GhostSocks Distribution Endpoint

http[://]d2ihv8ymzp14lr.cloudfront[.]net/2021-08-19/udppump[.]exe - CDN - Payload Distribution Endpoint

www.lbfs[.]site - Hostname - Likely C2 Endpoint

retreaw[.]click - Hostname - Lumma C2 Endpoint

alltipi[.]com - Hostname - Possible C2 Endpoint

w2.bruggebogeyed[.]site - Hostname - Possible C2 Endpoint

9b90c62299d4bed2e0752e2e1fc777ac50308534 - SHA1 file hash – Likely GhostSocks payload

3d9d7a7905e46a3e39a45405cb010c1baa735f9e - SHA1 file hash - Likely follow-up payload

10f928e00a1ed0181992a1e4771673566a02f4e3 - SHA1 file hash - Likely follow-up payload

Continue reading
About the author
Gernice Lee
Associate Principal Analyst & Regional Consultancy Lead

Blog

/

AI

/

March 26, 2026

State of AI Cybersecurity 2026: 92% of security professionals concerned about the impact of AI agents

Default blog imageDefault blog image

The findings in this blog are taken from Darktrace's annual State of AI Cybersecurity Report 2026.

AI is already embedded in day-to-day enterprise activity, with 78% of participants in one recent survey reporting that their organizations are using generative AI in at least one business function. Generative AI now acts as an always-on assistant, researcher, creator, and coach across an expanding array of departments and functions. Autonomous agents are performing multi-step operational workflows from end to end. AI features have been layered on top of every SaaS application. And vibe coding is making it possible for employees without deep technical expertise to build their own AI-powered automations.

According to Gartner, more than 80% of enterprises will have deployed GenAI models, applications, or APIs in production environments by the end of this year, up from less than 5% in 2023. Companies report a 130% increase in spending on AI over the same period, with 72% of business leaders using AI tools at least weekly. The outsized efficiency and productivity gains that were once a future vision are quickly becoming everyday reality.

AI is currently driving business growth and innovation, and organizations risk falling behind peers if they don’t keep up with the pace of adoption, but it is also quietly expanding the enterprise attack surface. The modern CISO is challenged to both enable innovation and protect the business from these emerging threats.

AI agents introduce new risks and vulnerabilities

AI agents are playing growing roles in enterprise production environments. In many cases, these agents act with broad permissions across multiple software systems and platforms. This means they’re granted far-reaching access – to sensitive data, business-critical applications, tokens and APIs, and IT and security tools. With this access comes risk for security leaders – 92% are concerned about the use of AI agents across the workforce and their impact on security.

These agents must be governed as identities, with least-privilege access and ongoing monitoring. They can’t be thought of as invisible aspects of the application estate. Understanding how AI agents behave, and how to manage their permissions, control their behavior, and limit their data access will be a top security priority throughout 2026.

Generative AI prompts: The next frontier

Prompts are how users – both human and agentic – interact with AI systems, and they’re where natural language gets translated into model behavior. Natural language is infinite in its potential combinations and permutations, making this aspect of the attack surface open-ended and far more complex than traditional CVEs. With carefully crafted prompts, bad actors may be able to coax models into disclosing sensitive data, bypassing guardrails, or initiating undesirable actions.

Among security leaders, the biggest worries about AI usage in their environments all involve ways that systems might be manipulated to bypass traditional controls.

  • 61% are most concerned about the exposure of sensitive data
  • 56% are most concerned about potential data security and policy violations
  • 51% are most concerned about the misuse or abuse of AI tools

The more employees rely on AI in their day-to-day workflows, the more critical it becomes for security teams to understand how prompt behavior determines model behavior – and where that behavior could go wrong.

What does “securing AI” mean in practice?

AI adoption opens new security risks that blur the boundaries between traditional security disciplines. A single malicious interaction with an AI model could involve identity misuse, sensitive data exposure, application logic abuse, and supply chain risk – all within a single workflow. Protecting this dynamic and rapidly evolving attack surface requires an approach that spans identity security, cloud security, application security, data security, software development security, and more.

The task for security leaders is to implement the tools, policies, and frameworks to mitigate these novel, expansive, and cross-disciplinary risks.

However, within most enterprises, AI policy creation remains in its infancy. Just 37% of security leaders report that their organization has a formal AI policy, representing a small but worrisome decrease from last year. Conversations about AI abound: in 52% of organizations, there’s discussion about an AI policy. Still, talk is cheap, and leaders will need to take action if they’re to successfully enable secure AI innovation.

To govern and protect their AI systems, organizations must take a multi-pronged approach. This requires building out policies, but it also demands that they are able to:

  • Monitor the prompts driving GenAI assistants and agents in real time. Organizations must be able to inspect prompts, sessions, and responses across enterprise GenAI tools, low- and high-code environments, and SaaS and SASE so that they can detect clever conversational prompt attacks and malicious chaining.
  • Secure all business AI agent identities. Security teams need to identify all the agents acting within their environment and supply chain, map their connections and interactions via MCP and services like Amazon S3, and audit their behavior across the cloud, SaaS environments, and on the network and endpoint devices.
  • Maintain centralized, comprehensive visibility. Understanding intent, assessing risks, and enforcing policies all require that security teams have a single view that spans AI interactions across the entire business.
  • Discover and control shadow AI. Teams need to be able to identify unsanctioned AI activities, distinguish the misuse of legitimate tools from their appropriate use, and apply policies to protect data, while guiding users towards approved solutions.

Scaling AI safely and responsibly

The approach that most cybersecurity vendors have taken – using historical patterns to predict future threats – doesn’t work well for AI systems. Because AI changes its behavior in response to the information it encounters while taking action, previous patterns don’t indicate what it will do next. Looking at past attacks can’t tell you how complex models will behave in your individual business.

Securing AI requires interpreting ambiguous interactions, uncovering subtleties that reveal intent within extended conversations, understanding how access accumulates over time, and recognizing when behavior – both human and machine – begins to drift towards areas of risk. To do this, you need to understand what “normal” looks like in each unique organization: how users, systems, applications, and AI agents behave, how they communicate, and how data flows between them.

Darktrace has spent more than a decade designing AI-powered solutions that can understand and adapt to evolving behavior in complex environments. This technology learns directly from the environment it protects, identifying malicious actions that deviate from normal operations, so that it can stop AI-related threats on the very first encounter.

As AI adoption reshapes enterprise operations, humans and machines will collaborate more and more often. This collaboration might dramatically expand the attack surface, but it also has the potential to be a force multiplier for defenders.

Explore the full State of AI Cybersecurity 2026 report for deeper insights into how security leaders are responding to AI-driven risks.

Learn more about securing AI in your enterprise.

[related-resource]

Continue reading
About the author
The Darktrace Community
Your data. Our AI.
Elevate your network security with Darktrace AI