Blog / Network / October 18, 2022

Kill Chain Insights: Detecting AutoIT Malware Compromise

Discover how AutoIt malware operates and learn strategies to combat this emerging threat in our latest blog post.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Joel Davidson
Cyber Analyst

Introduction 

Good defence is like an onion: it has layers. Each part of a security implementation should have checks built in so that if one wall is breached, there are further contingencies. Security aficionados call this ‘defence in depth’, a military concept introduced to the cyber-sphere in 2009 [1]. Since then, it has remained a central tenet when designing secure systems, digital or otherwise [2]. Despite this, the attacker’s advantage is ever-present thanks to the continued development of malware and zero-day exploits. No matter how many layers a security platform has, how can organisations be expected to protect against a threat they do not know or even understand? 

Take the case of one Darktrace customer, a government-contracted manufacturing company located in the Americas. This company possesses a modern OT and IT network comprising several thousand devices. They have dozens of servers, a few of which host Microsoft Exchange. Every week, these few mail servers receive hundreds of malicious payloads which will ultimately attempt to make their way into over a thousand different inboxes while dodging different security gateways. Had the RESPOND portion of Darktrace for Email been properly enabled, this is where the story would have ended. However, in June 2022 an employee made an instinctual decision that could have potentially cost the company its time, money, and reputation as a government contractor. Their crime: opening an unknown HTML file attached to a compelling phishing email. 

Following this misstep, a download was initiated which resulted in compromise of the system via vulnerable Microsoft admin tools from endpoints largely unknown to conventional OSINT sources. Using these tools, further malicious connectivity was accomplished before finally petering out. Fortunately, their existing Microsoft security gateway was up to date on the command and control (C2) domains observed in this breach and refused the connections.

Darktrace detected this activity at every turn, from the initial email to the download and subsequent attempted C2. Cyber AI Analyst stitched the events together for easy understanding and detected Indicators of Compromise (IOCs) that were not yet flagged in the greater intelligence community and, critically, did this all at machine speed. 

So how did the attacker evade action for so long? The answer is product misconfiguration: they had not refined their ‘layers’.  

Attack Details

On the night of June 8th an employee received a malicious email. Darktrace detected that this email contained an HTML attachment which itself contained links to endpoints 100% rare to the network. The email also originated from a never-before-seen sender. Although it would usually have been withheld based on these factors, the customer’s Darktrace/Email deployment was set to Advisory Mode, meaning the email continued through to the inbox. Late the next day, the user opened the attachment, which routed them to the 100% rare endpoint ‘xberxkiw[.]club’, a probable landing page for malware that did not register on OSINT available at the time.
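The detection logic here hinges on link rarity. As a rough illustration of the general idea (not Darktrace's actual implementation), the sketch below extracts link targets from an HTML attachment with Python's standard library and flags any domain absent from a hypothetical set of previously observed domains:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags in an HTML attachment."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def rare_domains(html_body, known_domains):
    """Return link domains never before observed on the network."""
    parser = LinkExtractor()
    parser.feed(html_body)
    domains = {urlparse(link).hostname for link in parser.links}
    return {d for d in domains if d and d not in known_domains}

# The attacker's landing page stands out against known traffic
attachment = '<a href="http://xberxkiw.club/login">Review document</a>'
print(rare_domains(attachment, {"example.com", "office.com"}))
# -> {'xberxkiw.club'}
```

In practice the "known domains" set would be built from the network's observed traffic history; here it is a hand-written stand-in.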

Figure 1- VirusTotal, a popular OSINT source, showing zero hits against the rare endpoint 

Only seconds after reaching the endpoint, Darktrace detected the Microsoft BITS user agent reaching out to another 100% rare endpoint ‘yrioer[.]mikigertxyss[.]com’, which generated a DETECT/Network model breach, ‘Unusual BITS Activity’. This was immediately suspicious since BITS is a legacy Windows admin tool which has been known to facilitate the movement of malicious payloads into and around a network. Upon successfully establishing a connection, the affected device began downloading a self-professed .zip file. However, Darktrace detected this file to be an extension-swapped .exe file. A PCAP of this activity can be seen below in Figure 2.

Figure 2- PCAP highlighting BITS service connections and the false .zip (.exe) download
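The extension-swap trick can be illustrated with file magic numbers: a genuine ZIP archive begins with the bytes PK\x03\x04, while a Windows PE executable begins with MZ. The following sketch illustrates the general technique (it is not Darktrace's model logic) by comparing a file's claimed extension against its leading bytes:

```python
# Map well-known magic numbers to the extension they imply
MAGIC = {
    b"MZ": ".exe",          # Windows PE executable
    b"PK\x03\x04": ".zip",  # ZIP archive
}

def detect_mismatch(filename, first_bytes):
    """Return the real extension if it contradicts the claimed one,
    otherwise None."""
    claimed = "." + filename.rsplit(".", 1)[-1].lower()
    for magic, real_ext in MAGIC.items():
        if first_bytes.startswith(magic) and real_ext != claimed:
            return real_ext
    return None

print(detect_mismatch("payload.zip", b"MZ\x90\x00"))      # -> .exe
print(detect_mismatch("archive.zip", b"PK\x03\x04rest"))  # -> None
```

A self-professed .zip whose content starts with MZ, as in this incident, is flagged immediately.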

This activity also triggered a correlating breach of the ‘Masqueraded File Transfer’ model and pushed a high-fidelity alert to the Darktrace Proactive Threat Notification (PTN) service. This ensured both Darktrace and the customer’s SOC team were alerted to the anomalous activity.

At this stage the local SOC were likely beginning their triage. However, further connections were being made to extend the compromise of the employee’s device and the network. The file they downloaded was later revealed to be ‘AutoIT3.exe’, the default filename of the AutoIt script interpreter. AutoIt scripts do have legitimate use cases but are often associated with malicious activity for their ability to interact with the Windows GUI and bypass client protections. After opening, these scripts launch on the host device and probe for other weaknesses. In this case, the script may have attempted to hunt for passwords/default credentials, scan the local directory for common sensitive files, or scout local antivirus software on the device. It would then share any information gathered via established C2 channels.  

After the successful download of this file with its mismatched MIME type, the device began attempting to further establish C2 to the endpoint ‘dirirxhitoq[.]kialsoyert[.]tk’. Even though OSINT still did not flag this endpoint, Darktrace detected this outreach as suspicious and initiated its first Cyber AI Analyst investigation into the beaconing activity. Following the sixth connection made to this endpoint on the 10th of June, the infected device breached C2 models such as ‘Agent Beacon (Long Period)’ and ‘HTTP Beaconing to Rare Destination’. 
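Models such as ‘Agent Beacon (Long Period)’ key on the regularity of outbound connections: timer-driven C2 tends to produce near-constant gaps between connections, unlike bursty human browsing. One simple way to illustrate the underlying idea is the coefficient of variation of inter-connection gaps; the thresholds below are illustrative, not Darktrace's:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_cv=0.1, min_connections=6):
    """Flag a connection series whose inter-arrival gaps are
    suspiciously regular (low coefficient of variation)."""
    if len(timestamps) < min_connections:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    cv = pstdev(gaps) / mean(gaps)  # spread relative to the mean gap
    return cv <= max_cv

# Six connections roughly every 300 seconds vs. bursty human browsing
beacon = [0, 300, 601, 899, 1200, 1502]
human = [0, 4, 9, 300, 310, 1900]
print(looks_like_beaconing(beacon), looks_like_beaconing(human))
# -> True False
```

Real beacons add jitter and sleep intervals precisely to defeat naive versions of this check, which is why anomaly models weigh many more signals than interval regularity alone.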

As the beaconing continued, it was clear that internal reconnaissance from AutoIt was not widely achieved, although similar IOCs could be detected on at least two other internal devices. This may represent other users opening the same malicious email, or successful lateral movement and infection propagation from the initial user/device. Comparatively, however, these devices did not experience the same level of infection as the first employee’s machine and never downloaded any malicious executables. AutoIt has a history of being used to deliver information stealers, which suggests a possible motivation had wider network compromise been successful [3].

Thankfully, after the 10th of June no further exploitation was observed. This was likely due to the combined awareness and action brought by the PTN alerting, static security gateways and action from the local security team. The company were protected thanks to defence in depth.  

Darktrace Coverage

Despite this, the role of Darktrace itself cannot be overstated. Darktrace/Email was integral to the early detection process and provided insight into the vector and delivery methods used by this attacker. Post-compromise, Darktrace/Network also observed the full range of suspicious activity brought about by this incursion. In particular, the Cyber AI Analyst feature played a major role in reducing the time needed for the SOC team to triage by detecting and flagging key information regarding some of the earliest IOCs.

Figure 3- Sample information pulled by AI analyst about one of the involved endpoints

Alongside the early detection, there were several instances where RESPOND/Network would have intervened; however, autonomous actions were limited to a small test group and not enabled widely throughout the customer’s deployment. As such, this activity continued unimpeded: a weak layer. Figure 4 highlights the first Darktrace RESPOND action which would have been taken.

Figure 4- Upon detecting the download of a mismatched mime from a rare endpoint, Darktrace RESPOND would have blocked all connections to the rare endpoint on the relevant port in a targeted manner

This Darktrace RESPOND action provides a precise and limited response by blocking the anomalous file download. However, after continued anomalous activity, RESPOND would have strengthened its posture and enforced stronger curbs across the wider range of suspicious behaviour. This stronger enforcement is a measure designed to relegate a device to its established norm. The breach which would generate this response can be seen below:

Figure 5- After a prolonged period of anomalous activity, Darktrace RESPOND would have stepped in to enforce the typical pattern of life observed on this device

Although Darktrace RESPOND was not fully enabled, this company had an extra layer of security in the PTN service, which alerted them just minutes after the initial file download was detected, alongside details relevant to the investigation. This ensured both Darktrace analysts and their own could review the activity and begin to isolate and remediate the threat. 

Concluding Insights

Thankfully, with multiple layers in their security, the customer managed to escape this incident largely unscathed. Quick and comprehensive email and network detection, customer alerting, and the local gateway’s blocking of C2 connections ensured that the infection did not have leeway to propagate laterally throughout the network. However, even though this infection did not lead to catastrophe, the fact that it happened in the first place should be a learning point. 

Had RESPOND/Email been properly configured, this threat would have been stopped before reaching its intended recipients, removing the need to rely on end-users as a security measure. Furthermore, had RESPOND/Network been utilized beyond a limited test group, this activity would have been blocked at every other step of the network-level kill chain. From the anomalous MIME download to the establishment of C2, Darktrace RESPOND would have been able to effectively isolate and quarantine this activity to the host device, without any reliance on slow-to-update OSINT sources. RESPOND allows for the automation of time-sensitive security decisions and adds a powerful layer of defence that conventional security solutions cannot provide. Although it can be difficult to relinquish human ownership of these decisions, doing so is necessary to prevent unknown attackers from infiltrating using unknown vectors to achieve unknown ends.  

In conclusion, this incident provides an effective case study in detecting a threat with novel IOCs. However, it is also a reminder that a company’s security makeup can always be improved. Overall, when building security layers in a company’s ‘onion’, it is great to have the best tools, but it is even greater to use them in the best way. Only with continued refinement can organisations guarantee defence in depth. 

Thanks to Connor Mooney and Stefan Rowe for their contributions.

Appendices

Darktrace Model Detections

· Anomalous File / EXE from Rare External Location

· Compromise / Agent Beacon (Long Period)

· Compromise / HTTP Beaconing to Rare Destination

· Device / Large Number of Model Breaches

· Device / Suspicious Domain

· Device / Unusual BITS Activity

· Enhanced Monitoring: Anomalous File / Masqueraded File Transfer


Blog / Network / March 26, 2026

Phantom Footprints: Tracking GhostSocks Malware


Why are attackers using residential proxies?

In today's threat landscape, blending into normal activity is the key to success for attackers, and the growing reliance on residential proxies shows a significant shift in how threat actors are attempting to bypass IP detection tools.

The increasing dependency on residential proxies has exposed how prevalent proxy services are and how reliant a diverse range of threat actors are on them. From cybercriminal groups to state‑sponsored actors, the need to bypass IP detection tools is fundamental to the success of these groups. One strain that has quietly become notorious for its ability to avoid anomaly detection is GhostSocks, a malware that turns compromised devices into residential proxies.

What is GhostSocks?

Originally marketed on the Russian underground forum xss[.]is as a Malware‑as‑a‑Service (MaaS), GhostSocks enables threat actors to turn compromised devices into residential proxies, leveraging the victim's internet bandwidth to route malicious traffic through it.

How does GhostSocks malware work? 

The malware offers the threat actor a “clean” IP address, making traffic look like it is coming from a household user. This enables the bypassing of geographic restrictions and IP detection tools, making it a perfect tool for avoiding anomaly detection. It wasn’t until 2024, when a partnership was announced with the infamous information stealer Lumma Stealer, that GhostSocks surged into widespread adoption, a partnership which also hinted at who may be the author of the proxy malware.

Written in Go, GhostSocks utilizes the SOCKS5 proxy protocol, creating a SOCKS5 connection on infected devices. It uses a relay‑based C2 implementation, where an intermediary server sits between the real command-and-control (C2) server and the infected device.
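For reference, the SOCKS5 handshake defined in RFC 1928 opens with a fixed client greeting: a version byte 0x05, a method count, then that many authentication-method bytes. A sensor inspecting clear-text traffic could recognise that greeting with a check like the sketch below (purely an illustration of the protocol, not a detection product's logic):

```python
def is_socks5_greeting(payload: bytes) -> bool:
    """Check whether a TCP payload matches the SOCKS5 client greeting
    (RFC 1928): version 0x05, method count N, then exactly N methods."""
    if len(payload) < 3 or payload[0] != 0x05:
        return False
    n_methods = payload[1]
    return n_methods >= 1 and len(payload) == 2 + n_methods

# 0x05, one auth method offered, method 0x00 (no authentication)
print(is_socks5_greeting(b"\x05\x01\x00"))      # -> True
print(is_socks5_greeting(b"GET / HTTP/1.1"))    # -> False
```

Such byte-level signatures only work on unencrypted traffic, which is precisely why proxy malware tends to wrap its tunnels in TLS.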

How does GhostSocks malware evade detection?

To further increase evasion, GhostSocks wraps its SOCKS5 tunnels in TLS encryption, allowing its malicious traffic to blend into normal network traffic.

Early variants of GhostSocks do not implement a persistence mechanism; however, later versions achieve persistence via registry run keys, ensuring sustained proxy operational time [1].

While proxying is its primary purpose, GhostSocks also incorporates backdoor functionality, enabling malicious actors to run arbitrary commands and download and deploy additional malicious payloads. This was evident with the well‑known ransomware group Black Basta, which reportedly used GhostSocks as a way of maintaining long‑term access to victims’ networks [1].

Darktrace’s detection of GhostSocks Malware

Darktrace observed a steady increase in GhostSocks activity across its customer base from late 2025, with its Threat Research team identifying multiple incidents involving the malware. In one notable case from December 2025, Darktrace detected GhostSocks operating alongside Lumma Stealer, reinforcing that the partnership between Lumma and GhostSocks remains active despite recent attempts to disrupt Lumma’s infrastructure.

Darktrace’s first detection of GhostSocks‑related activity came when a device on the network of a customer in the education sector began making connections to an endpoint with a suspicious self‑signed certificate that had never been seen on the network before.

The endpoint in question, 159.89.46[.]92 with the hostname retreaw[.]click, has been flagged by multiple open‑source intelligence (OSINT) sources as being associated with Lumma Stealer’s C2 infrastructure [2], indicating its likely role in the delivery of malicious payloads.
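A quick heuristic for the "self-signed" property: the certificate's issuer equals its subject (note that trusted root CAs are also self-signed, so this is a signal, not a verdict). The sketch below applies the check to the dictionary shape used by Python's ssl.getpeercert(); the field values are hypothetical, for illustration only:

```python
def is_self_signed(peercert: dict) -> bool:
    """A certificate whose issuer equals its subject is self-signed.
    The dict shape mirrors what ssl.getpeercert() returns."""
    return peercert.get("issuer") == peercert.get("subject")

# Hypothetical certificate fields for illustration only
suspicious = {
    "subject": ((("commonName", "retreaw.click"),),),
    "issuer": ((("commonName", "retreaw.click"),),),
}
print(is_self_signed(suspicious))  # -> True
```

The combination that mattered here was self-signed *and* never seen on the network before; either property alone is common in benign traffic.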

Figure 1: Darktrace’s detection of suspicious SSL connections to retreaw[.]click, indicating an attempted link to Lumma C2 infrastructure.

Less than two minutes later, Darktrace observed the same device downloading the executable (.exe) file “Renewable.exe” from the IP 86.54.24[.]29, which Darktrace recognized as 100% rare for this network.

Figure 2: Darktrace’s detection of a device downloading the unusual executable file “Renewable.exe”.

Both the file MD5 hash and the executable itself have been identified by multiple OSINT vendors as being associated with the GhostSocks malware [3], with the executable likely the backdoor component of the GhostSocks malware, facilitating the distribution of additional malicious payloads [4].
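Hash-based IOC matching of this kind is straightforward to sketch. The blocklist below contains only the well-known EICAR test-file MD5 as a stand-in for a real threat-intelligence feed:

```python
import hashlib

# Hypothetical blocklist; in practice sourced from threat intel feeds.
# The single entry is the well-known EICAR test-file MD5.
KNOWN_BAD_MD5 = {"44d88612fea8a8f36de82e1278abb02f"}

def md5_hits_blocklist(file_bytes: bytes) -> bool:
    """Hash downloaded content and compare against known-bad MD5s."""
    return hashlib.md5(file_bytes).hexdigest() in KNOWN_BAD_MD5

eicar = (b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-"
         b"ANTIVIRUS-TEST-FILE!$H+H*")
print(md5_hits_blocklist(eicar))  # -> True
```

The limitation is the same one this incident demonstrates: a hash lookup only catches payloads already known to the intelligence community, whereas novel files like “Renewable.exe” must be caught behaviourally.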

Following this detection, Darktrace’s Autonomous Response capability recommended a blocking action for the device in an early attempt to stop the malicious file download. In this instance, Darktrace was configured in Human Confirmation Mode, meaning the customer’s security team was required to manually apply any mitigative response actions. Had Autonomous Response been fully enabled at the time of the attack, the connections to 86.54.24[.]29 would have been blocked, rendering the malware ineffective at reaching its C2 infrastructure and halting any further malicious communication.

Figure 3: Darktrace’s Autonomous Response capability suggesting blocking the suspicious connections to the unusual endpoint from which the malicious executable was downloaded.

As the attack progressed, two days later the device was detected downloading additional payloads from the endpoint www.lbfs[.]site (23.106.58[.]48), including “Setup.exe”, “,.exe”, and “/vp6c63yoz.exe”.

Figure 4: Darktrace’s detection of a malicious payload being downloaded from the endpoint www.lbfs[.]site.

Once again, Darktrace recognized the anomalous nature of these downloads and suggested that a “group pattern of life” be enforced on the offending device in an attempt to contain the activity. By enforcing a pattern of life on a device, Darktrace restricts its activity to connections and behaviors similar to those performed by peer devices within the same group, while still allowing it to carry out its expected activity, effectively preventing deviations indicative of compromise while minimizing disruption. As mentioned earlier, these mitigative actions required manual implementation, so the activity was able to continue. Darktrace proceeded to suggest further actions to contain subsequent malicious downloads, including an attempt to block all outbound traffic to stop the attack from progressing.

Figure 5: An overview of download activity and the Autonomous Response actions recommended by Darktrace to block the downloads.

Around the same time, a third executable download was detected, this time from the URL hxxp[://]d2ihv8ymzp14lr.cloudfront.net/2021-08-19/udppump[.]exe, delivering the file “udppump.exe”. While GhostSocks may have been present only to facilitate the delivery of additional payloads, there is no indication that these CloudFront endpoints or files are functionally linked to GhostSocks. Rather, the evidence points to broader malicious file‑download activity.

Shortly after the multiple executable files had been downloaded, Darktrace observed the device initiating a series of repeated successful connections to several rare external endpoints, behavior consistent with early-stage C2 beaconing activity.

Figure 7: Darktrace’s detection of additional malicious file downloads from malicious CloudFront endpoints.

Cyber AI Analyst’s investigation

Throughout the course of this attack, Darktrace’s Cyber AI Analyst carried out its own autonomous investigation, piecing together seemingly separate events into one wider incident encompassing the first suspicious downloads beginning on December 4, the unusual connectivity to many suspicious IPs that followed, and the successful beaconing activity observed two days later. By analyzing these events in real time and viewing them as part of the bigger picture, Cyber AI Analyst was able to construct an in‑depth breakdown of the attack to aid the customer’s investigation and remediation efforts.

Figure 8: Cyber AI Analyst investigation detailing the sequence of events on the compromised device, highlighting its extensive connectivity to rare endpoints, the related malicious file‑download activity, and finally the emergence of C2 beaconing behavior.

Conclusion

The versatility offered by GhostSocks is far from new, but its ability to convert compromised devices into residential proxy nodes while enabling long‑term, covert network access illustrates how threat actors continue to maximise the value of their victims’ infrastructure. Its growing popularity, coupled with its ongoing partnership with Lumma, demonstrates that infrastructure takedowns alone are insufficient; as long as threat actors remain committed to maintaining anonymity and can rapidly rebuild their ecosystems, related malware activity is likely to persist in some form.

Credit to Isabel Evans (Cyber Analyst) and Gernice Lee (Associate Principal Analyst & Regional Consultancy Lead – APJ)
Edited by Ryan Traill (Content Manager)

Appendices

References

1. https://bloo.io/research/malware/ghostsocks

2. https://www.virustotal.com/gui/domain/retreaw.click/community

3. https://synthient.com/blog/ghostsocks-from-initial-access-to-residential-proxy

4. https://www.joesandbox.com/analysis/1810568/0/html

5. https://www.virustotal.com/gui/url/fab6525bf6e77249b74736cb74501a9491109dc7950688b3ae898354eb920413

Darktrace Model Detections

Real-time Detection Models

Anomalous Connection / Suspicious Self-Signed SSL

Anomalous Connection / Rare External SSL Self-Signed

Anomalous File / EXE from Rare External Location

Anomalous File / Multiple EXE from Rare External Locations

Compromise / Possible Fast Flux C2 Activity

Compromise / Large Number of Suspicious Successful Connections

Compromise / Large Number of Suspicious Failed Connections

Compromise / Sustained SSL or HTTP Increase

Autonomous Response Models

Antigena / Network / Significant Anomaly / Antigena Significant Anomaly from Client Block

Antigena / Network / External Threat / Antigena Suspicious File Block

Antigena / Network / Significant Anomaly / Antigena Controlled and Model Alert

Antigena / Network / External Threat / Antigena File then New Outbound Block

Antigena / Network / Significant Anomaly / Antigena Alerts Over Time Block

Antigena / Network / External Threat / Antigena Suspicious Activity Block

MITRE ATT&CK Mapping

Tactic – Technique – Sub-Technique

Resource Development – T1588 - Malware

Initial Access - T1189 - Drive-by Compromise

Persistence – T1112 – Modify Registry

Command and Control – T1071 – Application Layer Protocol

Command and Control – T1095 – Non-application Layer Protocol

Command and Control – T1071.001 – Web Protocols

Command and Control – T1571 – Non-Standard Port

Command and Control – T1102 – One-Way Communication

List of Indicators of Compromise (IoCs)

86.54.24[.]29 - IP - Likely GhostSocks C2

http[://]86.54.24[.]29/Renewable[.]exe - URL - GhostSocks Distribution Endpoint

http[://]d2ihv8ymzp14lr.cloudfront[.]net/2021-08-19/udppump[.]exe - URL - Payload Distribution Endpoint

www.lbfs[.]site - Hostname - Likely C2 Endpoint

retreaw[.]click - Hostname - Lumma C2 Endpoint

alltipi[.]com - Hostname - Possible C2 Endpoint

w2.bruggebogeyed[.]site - Hostname - Possible C2 Endpoint

9b90c62299d4bed2e0752e2e1fc777ac50308534 - SHA1 file hash – Likely GhostSocks payload

3d9d7a7905e46a3e39a45405cb010c1baa735f9e - SHA1 file hash - Likely follow-up payload

10f928e00a1ed0181992a1e4771673566a02f4e3 - SHA1 file hash - Likely follow-up payload
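The indicators above are defanged (hxxp, [.]) so they cannot be followed accidentally. Before feeding them into a blocklist they need to be "refanged"; a minimal sketch of that normalization (the example URL is a placeholder, not one of the IoCs above):

```python
def refang(indicator: str) -> str:
    """Reverse common defanging so indicators can feed a blocklist."""
    return (indicator.replace("hxxp", "http")
                     .replace("[.]", ".")
                     .replace("[://]", "://"))

print(refang("retreaw[.]click"))
# -> retreaw.click
print(refang("hxxp[://]example[.]com/udppump[.]exe"))
# -> http://example.com/udppump.exe
```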


Blog / AI / March 27, 2026

State of AI Cybersecurity 2026: 92% of security professionals concerned about the impact of AI agents


The findings in this blog are taken from Darktrace's annual State of AI Cybersecurity Report 2026.

AI is already embedded in day-to-day enterprise activity, with 78% of participants in one recent survey reporting that their organizations are using generative AI in at least one business function. Generative AI now acts as an always-on assistant, researcher, creator, and coach across an expanding array of departments and functions. Autonomous agents are performing multi-step operational workflows from end to end. AI features have been layered on top of every SaaS application. And vibe coding is making it possible for employees without deep technical expertise to build their own AI-powered automations.

According to Gartner, more than 80% of enterprises will have deployed GenAI models, applications, or APIs in production environments by the end of this year, up from less than 5% in 2023. Companies report a 130% increase in spending on AI over the same period, with 72% of business leaders using AI tools at least weekly. The outsized efficiency and productivity gains that were once a future vision are quickly becoming everyday reality.

AI is driving business growth and innovation, and organizations risk falling behind peers if they don’t keep up with the pace of adoption. At the same time, it is quietly expanding the enterprise attack surface. The modern CISO is challenged to both enable innovation and protect the business from these emerging threats.

AI agents introduce new risks and vulnerabilities

AI agents are playing growing roles in enterprise production environments. In many cases, these agents act with broad permissions across multiple software systems and platforms. This means they’re granted far-reaching access – to sensitive data, business-critical applications, tokens and APIs, and IT and security tools. With this access comes risk for security leaders – 92% are concerned about the use of AI agents across the workforce and their impact on security.

These agents must be governed as identities, with least-privilege access and ongoing monitoring. They can’t be thought of as invisible aspects of the application estate. Understanding how AI agents behave, and how to manage their permissions, control their behavior, and limit their data access will be a top security priority throughout 2026.

Generative AI prompts: The next frontier

Prompts are how users – both human and agentic – interact with AI systems, and they’re where natural language gets translated into model behavior. Natural language is infinite in its potential combinations and permutations, making this aspect of the attack surface open-ended and far more complex than traditional CVEs. With carefully crafted prompts, bad actors may be able to coax models into disclosing sensitive data, bypassing guardrails, or initiating undesirable actions.

Among security leaders, the biggest worries about AI usage in their environments all involve ways that systems might be manipulated to bypass traditional controls.

  • 61% are most concerned about the exposure of sensitive data
  • 56% are most concerned about potential data security and policy violations
  • 51% are most concerned about the misuse or abuse of AI tools

The more employees rely on AI in their day-to-day workflows, the more critical it becomes for security teams to understand how prompt behavior determines model behavior – and where that behavior could go wrong.

What does “securing AI” mean in practice?

AI adoption opens new security risks that blur the boundaries between traditional security disciplines. A single malicious interaction with an AI model could involve identity misuse, sensitive data exposure, application logic abuse, and supply chain risk – all within a single workflow. Protecting this dynamic and rapidly evolving attack surface requires an approach that spans identity security, cloud security, application security, data security, software development security, and more.

The task for security leaders is to implement the tools, policies, and frameworks to mitigate these novel, expansive, and cross-disciplinary risks.

However, within most enterprises, AI policy creation remains in its infancy. Just 37% of security leaders report that their organization has a formal AI policy, representing a small but worrisome decrease from last year. Conversations about AI abound: in 52% of organizations, there’s discussion about an AI policy. Still, talk is cheap, and leaders will need to take action if they’re to successfully enable secure AI innovation.

To govern and protect their AI systems, organizations must take a multi-pronged approach. This requires building out policies, but it also demands that they are able to:

  • Monitor the prompts driving GenAI assistants and agents in real time. Organizations must be able to inspect prompts, sessions, and responses across enterprise GenAI tools, low- and high-code environments, and SaaS and SASE so that they can detect clever conversational prompt attacks and malicious chaining.
  • Secure all business AI agent identities. Security teams need to identify all the agents acting within their environment and supply chain, map their connections and interactions via MCP and services like Amazon S3, and audit their behavior across the cloud, SaaS environments, and on the network and endpoint devices.
  • Maintain centralized, comprehensive visibility. Understanding intent, assessing risks, and enforcing policies all require that security teams have a single view that spans AI interactions across the entire business.
  • Discover and control shadow AI. Teams need to be able to identify unsanctioned AI activities, distinguish the misuse of legitimate tools from their appropriate use, and apply policies to protect data, while guiding users towards approved solutions.
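The last point, discovering shadow AI, can be approximated by checking proxy-log domains against sanctioned-tool lists. The domain sets in the sketch below are purely illustrative; real deployments would use curated and regularly updated feeds:

```python
# Illustrative domain lists; real deployments would use curated feeds
AI_SERVICE_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED = {"chat.openai.com"}

def shadow_ai_domains(proxy_log_domains):
    """Return AI-service domains seen in traffic but not sanctioned."""
    seen = set(proxy_log_domains)
    return (seen & AI_SERVICE_DOMAINS) - SANCTIONED

log = ["chat.openai.com", "claude.ai", "intranet.corp"]
print(shadow_ai_domains(log))  # -> {'claude.ai'}
```

A static lookup like this only covers known AI services; distinguishing misuse of a *sanctioned* tool from its appropriate use requires the behavioural context described above.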

Scaling AI safely and responsibly

The approach that most cybersecurity vendors have taken – using historical patterns to predict future threats – doesn’t work well for AI systems. Because AI changes its behavior in response to the information it encounters while taking action, previous patterns don’t indicate what it will do next. Looking at past attacks can’t tell you how complex models will behave in your individual business.

Securing AI requires interpreting ambiguous interactions, uncovering subtleties that reveal intent within extended conversations, understanding how access accumulates over time, and recognizing when behavior – both human and machine – begins to drift towards areas of risk. To do this, you need to understand what “normal” looks like in each unique organization: how users, systems, applications, and AI agents behave, how they communicate, and how data flows between them.

Darktrace has spent more than a decade designing AI-powered solutions that can understand and adapt to evolving behavior in complex environments. This technology learns directly from the environment it protects, identifying malicious actions that deviate from normal operations, so that it can stop AI-related threats on the very first encounter.

As AI adoption reshapes enterprise operations, humans and machines will collaborate more and more often. This collaboration might dramatically expand the attack surface, but it also has the potential to be a force multiplier for defenders.

Explore the full State of AI Cybersecurity 2026 report for deeper insights into how security leaders are responding to AI-driven risks.

Learn more about securing AI in your enterprise.


About the author
The Darktrace Community