April 22, 2021

Darktrace Identifies APT35 in Pre-Infected State

Learn how Darktrace identified APT35 (Charming Kitten) in a pre-infected environment. Gain insights into the detection and mitigation of this threat.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO

What is APT35?

APT35, sometimes referred to as Charming Kitten, Imperial Kitten, or Tortoiseshell, is a notorious cyber-espionage group which has been active for nearly 10 years. Famous for stealing scripts from HBO’s Game of Thrones in 2017 and suspected of interfering in the U.S. presidential election last year, it has launched extensive campaigns against organizations and officials across North America and the Middle East. Public attribution has associated APT35 with an Iran-based nation state threat actor.

Darktrace regularly detects attacks by many known threat actors including Evil Corp and APT41, alongside large amounts of malicious but uncategorized activity from sophisticated attack groups. As Cyber AI doesn’t rely on pre-defined rules, signatures, or threat intelligence to detect cyber-attacks, it often detects new and previously unknown threats.

This blog post examines a real-world instance of APT35 activity in an organization in the EMEA region. Darktrace observed this activity last June, but due to ongoing investigations, details are only now being shared with the wider community. It represents an interesting case for the value of self-learning AI in two key ways:

  • Identifying ‘low and slow’ attacks: How do you spot an attacker that is lying low and conducts very little detectable activity?
  • Detecting pre-existing infections without signatures: What if a threat actor is already inside the system when Cyber AI is activated?

Advanced Persistent Threats (APTs) lying low

APT35 had already infected a single corporate device, likely via a spear phishing email, when Cyber AI was deployed in the company’s digital estate for the first time.

The infected device exhibited no other signs of malicious activity beyond continued command and control (C2) beaconing, awaiting instructions from the attackers for several days. This is what we call ‘lying low’ – where the hacker stays present within a system, but remains under the radar, avoiding detection either intentionally, or because they’re focusing on another victim while being content with backdoor access into the organization.
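The regularity that eventually gives this kind of beaconing away can be sketched in a few lines of Python. This is a toy illustration, not Darktrace's actual detection logic: real C2 frameworks often add jitter to their check-in intervals precisely to defeat naive interval checks, so a production system would combine many weak signals.

```python
import statistics

def beacon_score(timestamps):
    """Score how 'beacon-like' a series of connection times is.

    Automated C2 check-ins produce near-constant gaps between
    connections, so a low coefficient of variation in the
    inter-arrival times suggests beaconing rather than
    human-driven browsing.
    """
    if len(timestamps) < 3:
        return 0.0
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return 0.0
    cv = statistics.stdev(gaps) / mean  # coefficient of variation
    return max(0.0, 1.0 - cv)           # 1.0 = perfectly regular

# A device phoning home roughly every 300 seconds scores close to 1;
# bursty, human-like activity scores near 0.
regular = [0, 300, 601, 899, 1200, 1502]
bursty = [0, 5, 9, 600, 605, 1900]
assert beacon_score(regular) > 0.9
assert beacon_score(bursty) < 0.5
```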

Either way, this is a nightmare scenario for a security team and any security vendor: an APT which has established a foothold and is lying in wait to continue their attack – undetected.

Finding the infected device

When Darktrace’s AI was first activated, it spent five business days learning the unique ‘patterns of life’ for the organization. After this initial, short learning period, Darktrace immediately flagged the infected device and the C2 activity.

Although the breach device had been beaconing since before Darktrace was implemented, Cyber AI automatically clusters devices into ‘peer groups’ based on similar behavioral patterns, enabling Darktrace to identify the continued C2 traffic coming from the device as highly unusual in comparison to the wider, automatically identified peer group. None of its behaviorally close neighbors were doing anything remotely similar, and Darktrace was therefore able to determine that the activity was malicious, and that it represented C2 beaconing.
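The peer-group idea above can be made concrete with a minimal sketch. The single "rare-endpoint connections per day" feature and the rates below are invented for illustration; a real system compares many behavioral dimensions at once, not one number.

```python
import math

def is_peer_outlier(device_rate, peer_rates, threshold=3.0):
    """Flag a device whose rate of connections to rare external
    endpoints deviates strongly from its behavioral peer group,
    using a simple z-score."""
    mean = sum(peer_rates) / len(peer_rates)
    var = sum((p - mean) ** 2 for p in peer_rates) / len(peer_rates)
    std = math.sqrt(var) or 1e-9          # avoid division by zero
    return abs(device_rate - mean) / std > threshold

# Rare-endpoint connections per day across the device's peer group:
peers = [0, 1, 0, 2, 1, 0, 1]
assert is_peer_outlier(40, peers)         # sustained C2 stands out
assert not is_peer_outlier(1, peers)      # normal traffic does not
```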

Darktrace detected the APT35 C2 activity without the use of any signatures or threat intelligence on multiple levels. Responding to the alerts, the internal security team quickly isolated the device and verified with the Darktrace system that no further reconnaissance, lateral movement, or data exfiltration had taken place.

APT35 ‘Charming Kitten’ analysis

Once the C2 was detected, Cyber AI Analyst immediately began analyzing the infected device. The Cyber AI Analyst only highlights the most severe incidents in any given environment and automates many of the typical level one and level two SOC tasks. This includes reviewing all alerts, investigating the scope and nature of each event, and reducing time to triage by 92%.

Figure 1: Similar Cyber AI Analyst report observing C2 communications

Numerous factors made the C2 activity stand out strongly to Darktrace. Combining all those small anomalies, Darktrace was able to autonomously prioritize this behavior and classify it as the most significant security incident in the week.

Figure 2: Example list of C2 detections for an APT35 attack

Some of the command and control destinations were known to threat intelligence and open-source intelligence (OSINT) – for instance, the domain cortanaservice[.]com is a known C2 domain for APT35.

However, the presence of a known malicious domain does not guarantee detection. In fact, the organization had a very mature security stack, yet they failed to discover the existing APT35 infection until Darktrace was activated in their environment.

Assessing the impact of the intrusion

Once an intrusion has been identified, it is important to understand the extent of it – such as whether lateral movement is occurring and what connectivity the infected device has in general. Asset management is never perfect, so it can be very hard for organizations to determine what damage a compromised device is capable of inflicting.

Darktrace presents this information in real time, and from a bird’s-eye perspective, making the assessment very simple. It immediately highlights which subnet the device is located in and any further context.

Figure 3: Darktrace’s Threat Visualizer displaying the connectivity of a device

Based on this information, the organization confirmed that it was a corporate device that had been infected by APT35. As Darktrace shows any credentials associated with the device, a quick assessment could be made of potentially compromised accounts.

Figure 4: Similar and associated credentials of a device

Luckily, only a single local user account was associated with the device.

The exact level of privileges and connectivity the infected device had, as well as the extent to which the intrusion might have spread from it, was still uncertain. The device's event log made this clear within minutes.

Filtering first for internal connections only (excluding any connections going to the Internet) gave a good idea of the device's internal connectivity. The device made DNS requests to the internal domain controller and successful connections internally over ports 135 (RPC) and 139 (NetBIOS).

By filtering further in the event log, it quickly became clear that in this time the device had not used any administrative channels, such as RDP, SSH, Telnet, or SMB. This is a strong indicator that no lateral movement over common channels had taken place.
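This triage filter is easy to picture in code. A toy sketch with invented connection records; the port-to-protocol mapping covers the common administrative channels named above.

```python
# Map well-known administrative ports to the lateral-movement
# channels an analyst would check first.
ADMIN_PORTS = {22: "SSH", 23: "Telnet", 445: "SMB", 3389: "RDP"}

def admin_channels_used(events):
    """Return the administrative protocols a device actually used,
    to help rule lateral movement over common channels in or out."""
    return sorted({ADMIN_PORTS[e["dst_port"]]
                   for e in events if e["dst_port"] in ADMIN_PORTS})

# Hypothetical event log for the infected device:
events = [
    {"dst": "10.0.0.5", "dst_port": 53},   # DNS to domain controller
    {"dst": "10.0.0.7", "dst_port": 135},  # RPC
    {"dst": "10.0.0.7", "dst_port": 139},  # NetBIOS session
]
assert admin_channels_used(events) == []   # no RDP/SSH/Telnet/SMB seen
```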

It is more difficult to assess whether the device was performing any other suspicious activity, like stealthy reconnaissance or staging data from other internal devices. Darktrace provided another capability to assess this quickly – filtering the device’s network connections to show only unusual or new connections.

Figure 5: Event device log filtered to show unusual connections only

Darktrace assesses each individual connection for every entity observed in context, using its unsupervised machine learning to evaluate how unusual a given connection is. This could be a single new failed internal connection attempt, indicating stealthy reconnaissance, or a connection over SMB at an unusual time to a new internal destination, implying lateral movement or data staging.
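A stripped-down sketch of that "new or unusual connection" scoring, assuming per-device bookkeeping of (destination, port) pairs. The device name and addresses are invented; real scoring is context-aware across many more dimensions.

```python
from collections import defaultdict

class ConnectionRarity:
    """Track how often each device uses each (destination, port)
    pair, so new or rare connections can be surfaced for triage."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, device, dst, port):
        self.counts[device][(dst, port)] += 1

    def rarity(self, device, dst, port):
        """1.0 for a never-before-seen connection, approaching 0
        as the connection becomes routine for this device."""
        return 1.0 / (1 + self.counts[device][(dst, port)])

model = ConnectionRarity()
for _ in range(99):
    model.observe("laptop-17", "10.0.0.5", 53)   # routine DNS

assert model.rarity("laptop-17", "10.0.0.5", 53) == 0.01
assert model.rarity("laptop-17", "10.0.0.9", 445) == 1.0  # brand-new SMB
```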

By filtering for only unusual or new connections, Darktrace’s AI produces further leads that can be pursued extremely quickly, thanks to the context and added visibility.

No further suspicious internal connections were observed, strengthening the hypothesis that APT35 was lying low at that time.

Unprecedented but not unpreventable

Darktrace’s 24/7 monitoring service, Proactive Threat Notifications, would have alerted on and escalated the incident. Darktrace RESPOND would have responded autonomously and enforced normal activity for the device, preventing the C2 traffic without interrupting regular business workflows.

It is impossible to predefine where the next attack will come from. APT35 is just one of the many sophisticated threat actors on the scene, and with such a diverse and volatile threat landscape, unsupervised machine learning is crucial in spotting and defending against anomalies, no matter what form they take.

This case study helps illustrate how Darktrace detects pre-existing infections and ‘low and slow’ attacks, and further shows how Darktrace can be used to quickly understand the scope and extent of an intrusion.

Learn how Cyber AI Analyst detected APT41 two weeks before public attribution

Shortened list of C2 detections over four days on the infected device:

  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / Beaconing Meta Model
  • Compromise / Beaconing Activity To External Rare
  • Compromise / SSL Beaconing To Rare Destination
  • Compromise / Slow Beaconing To External Rare
  • Compromise / High Volume of Connections with Beacon Score
  • Compromise / Unusual Connections to Rare Lets Encrypt
  • Compromise / Beacon for 4 Days
  • Compromise / Agent Beacon



April 21, 2025

Why Asset Visibility and Signature-Based Threat Detection Fall Short in ICS Security


In the realm of Industrial Control System (ICS) security, two concepts often dominate discussions:

  1. Asset visibility
  2. Signature-based threat detection

While these are undoubtedly important components of a cybersecurity strategy, many organizations focus on them as the primary means to enhance ICS security. However, this is a short-term approach, and these organizations often realize too late that these efforts do not translate into an actually secure environment.

To truly secure their environments, organizations should focus their efforts on anomaly detection across core network segments. This shift enables enhanced threat detection while also providing a more meaningful and dynamic view of asset communication.

By prioritizing anomaly detection, organizations can build a more resilient security posture, detecting and mitigating threats before they escalate into serious incidents.

The shortcomings of asset visibility and signature-based threat detection

Asset visibility is frequently touted as the foundation of ICS security. The idea is that you cannot protect what you cannot see.

However, organizations that invest heavily in asset discovery tools often end up with extensive inventories of connected devices but little actionable insight into their security posture or risk level, let alone any indication as to whether these assets have been compromised.

Simply knowing what assets exist does not equate to securing them.

Worse, asset discovery is often a time-consuming, static process. By the time practitioners complete their inventory, not only are their assets likely to have changed, but the threat landscape may have already evolved, introducing new vulnerabilities and attack vectors that were not previously accounted for.

Signature-based detection is reactive, not proactive

Traditional signature-based threat detection relies on known attack patterns and predefined signatures to identify malicious activity. This approach is fundamentally reactive because it can only detect threats that have already been identified elsewhere.
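At its core, signature matching is exact-match lookup against a list of known-bad indicators, which is why it is so brittle. A deliberately trivial sketch; the indicator values are invented for illustration.

```python
# Hypothetical, minimal signature matcher with invented indicators.
KNOWN_BAD = {"baddomain.example", "payload-hash-2f5c"}

def signature_hit(indicator):
    """Exact-match lookup: only indicators already on the list hit."""
    return indicator in KNOWN_BAD

assert signature_hit("baddomain.example")
# A trivially altered indicator sails straight past the list,
# which is exactly the reactive blind spot described above:
assert not signature_hit("badd0main.example")
```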

In an ICS environment where cyber-attacks on OT systems have become more frequent, sophisticated, and destructive, signature-based detection provides a false sense of security while failing to detect sophisticated, previously unseen threats.

Additionally, adversaries often dwell within OT networks for extended periods, studying their specific conditions to identify the most effective way to cause disruption. As a result, it is unlikely that any attack within an OT network will look the same as a previous attack.

Implementation effort vs. actual security gains

Many organizations spend considerable time and resources implementing asset visibility solutions and signature-based detection systems, only to find themselves constantly tuning and adjusting the sensitivity of those solutions.

Despite these efforts, these tools often fail to deliver the level of protection expected, leaving gaps in detection, an overwhelming amount of asset data, and a constant stream of false positives and false negatives from signature-based systems.

A more effective approach: Anomaly detection at core network segments

While it's important to understand the type of device involved during alert triage, organizations should shift their focus from static asset visibility and threat signatures to anomaly detection across critical network segments. This method provides a superior approach to ICS security for several reasons:

Proactive threat detection

Anomaly detection monitors network behavior in real time and identifies deviations from established patterns. This means that even novel or previously unseen threats can be detected based on unusual network activity, rather than relying on predefined signatures.
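The learn-then-flag idea can be sketched in a few lines. This is a toy single-metric detector (for example, bytes out per hour), assuming a fixed learning window; production systems model many metrics with far richer statistics.

```python
import statistics

class AnomalyDetector:
    """Learn a per-device baseline during a training window, then
    flag readings far outside the learned range. A toy stand-in
    for unsupervised 'pattern of life' modeling."""
    def __init__(self, window=20, tolerance=4.0):
        self.window, self.tolerance = window, tolerance
        self.history = []

    def update(self, x):
        if len(self.history) < self.window:
            self.history.append(x)          # still learning: never alert
            return False
        mean = statistics.mean(self.history)
        std = statistics.stdev(self.history) or 1.0
        anomalous = abs(x - mean) > self.tolerance * std
        if not anomalous:
            self.history.append(x)          # keep adapting to normal drift
        return anomalous

d = AnomalyDetector(window=20)
for x in [100, 102, 98, 101, 99] * 4:       # normal traffic levels
    assert not d.update(x)
assert d.update(10_000)                     # sudden spike is flagged
assert not d.update(100)                    # normal reading is not
```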

Granular security insights

By analyzing traffic patterns across key network segments, organizations can gain deeper insights into how assets interact. This not only improves threat detection but also organically enhances asset visibility. Instead of simply cataloging devices, organizations gain meaningful visibility into how they behave within the network, understanding their unique pattern of life, and making it easier to detect malicious activity.

Efficiency and scalability

Implementing anomaly detection allows security teams to focus on real threats rather than sifting through massive inventories of assets or managing signature updates. It scales better with evolving threats and provides continuous monitoring without requiring constant manual intervention.

Enhanced threat detection for critical infrastructure

Unlike traditional security approaches that rely on static baselines or threat intelligence that doesn't reflect the unique behaviors of your OT environment, Darktrace / OT uses multiple AI techniques to continuously learn and adapt to your organization’s real-world activity across IT, OT, and IoT.

By building a dynamic understanding of each device’s pattern of life, it detects threats at every stage of the kill chain — from known malware to zero-days and insider attacks — without overwhelming your team with false positives or unnecessary alerts. This ensures scalable protection as your environment evolves, without a significant increase in operational overhead.

About the author
Jeffrey Macre
Industrial Security Solutions Architect


April 16, 2025

Introducing Version 2 of Darktrace’s Embedding Model for Investigation of Security Threats (DEMIST-2)


DEMIST-2 is Darktrace’s latest embedding model, built to interpret and classify security data with precision. It performs highly specialized tasks and can be deployed in any environment. Unlike generative language models, DEMIST-2 focuses on providing reliable, high-accuracy detections for critical security use cases.

DEMIST-2 Core Capabilities:  

  • Enhances Cyber AI Analyst’s ability to triage and reason about security incidents by providing expert representation and classification of security data, as part of our broader multi-layered AI system
  • Classifies and interprets security data, in contrast to language models that generate unpredictable open-ended text responses  
  • Incorporates new innovations in language model development and architecture, optimized specifically for cybersecurity applications
  • Deployable across cloud, on-prem, and edge environments, delivering low-latency, high-accuracy results wherever it runs and enabling inference anywhere

Cybersecurity is constantly evolving, but the need to build precise and reliable detections remains constant in the face of new and emerging threats. Darktrace’s Embedding Model for Investigation of Security Threats (DEMIST-2) addresses these critical needs and is designed to create stable, high-fidelity representations of security data while also serving as a powerful classifier. For security teams, this means faster, more accurate threat detection with reduced manual investigation. DEMIST-2's efficiency also reduces the need to invest in massive computational resources, enabling effective protection at scale without added complexity.  

As an embedding language model, DEMIST-2 classifies and creates meaning out of complex security data. This equips our Self-Learning AI with the insights to compare, correlate, and reason with consistency and precision. Classifications and embeddings power core capabilities across our products where accuracy is not optional, as a part of our multi-layered approach to AI architecture.

Perhaps most importantly, DEMIST-2 features a compact architecture that delivers analyst-level insights while meeting diverse deployment needs across cloud, on-prem, and edge environments. Trained on a mixture of general and domain-specific data and designed to support task specialization, DEMIST-2 provides privacy-preserving inference anywhere, while outperforming larger general-purpose models in key cybersecurity tasks.

This proprietary language model reflects Darktrace's ongoing commitment to continually innovate our AI solutions to meet the unique challenges of the security industry. We approach AI differently, integrating diverse insights to solve complex cybersecurity problems. DEMIST-2 shows that a refined, optimized, domain-specific language model can deliver outsized results in an efficient package. We are redefining possibilities for cybersecurity, but our methods transfer readily to other domains. We are eager to share our findings to accelerate innovation in the field.  

The evolution of DEMIST-2

Key concepts:  

  • Tokens: The smallest units processed by language models. Text is split into fragments based on frequency patterns allowing models to handle unfamiliar words efficiently
  • Low-Rank Adaptors (LoRA): Small, trainable components added to a model that allow it to specialize in new tasks without retraining the full system. These components learn task-specific behavior while the original foundation model remains unchanged. This approach enables multiple specializations to coexist, and work simultaneously, without drastically increasing processing and memory requirements.
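The LoRA mechanic described above can be shown with plain Python and tiny matrices. This is a generic illustration of the technique, not DEMIST-2's implementation; the matrix sizes and values are invented.

```python
def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def matmul(P, Q):
    """Multiply two matrices given as lists of rows."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

# Frozen base weight (3x3), shared by every specialization.
W = [[1.0, 0.5, 0.0],
     [0.0, 1.0, 0.5],
     [0.5, 0.0, 1.0]]

# Rank-1 adapter: the product B (3x1) @ A (1x3) adds only 6
# trainable numbers, versus 9 for retraining W in full; the
# saving grows dramatically at realistic model sizes.
A = [[0.1, 0.2, 0.3]]
B = [[0.0], [0.0], [0.0]]   # zero-init: adapter starts as a no-op

def forward(x, A, B):
    delta = matmul(B, A)    # low-rank update, never merged into W
    return [w + d for w, d in zip(matvec(W, x), matvec(delta, x))]

x = [1.0, 2.0, 3.0]
# With B all zeros, the adapted model matches the frozen base exactly,
# so many task adapters can coexist without touching W.
assert forward(x, A, B) == matvec(W, x)
```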

Darktrace began using large language models in our products in 2022. DEMIST-2 reflects significant advancements in our continuous experimentation and adoption of innovations in the field to address the unique needs of the security industry.  

It is important to note that Darktrace uses a range of language models throughout its products, but each one is chosen for the task at hand. Many others in the artificial intelligence (AI) industry are focused on broad application of large language models (LLMs) for open-ended text generation tasks. Our research shows that using LLMs for classification and embedding offers better, more reliable results for core security use cases. We’ve found that using LLMs for open-ended outputs can introduce uncertainty through inaccurate and unreliable responses, which is detrimental in environments where precision matters. Generative AI should not be applied to use cases where the results deeply matter, such as investigation and threat detection. Thoughtful applications of generative AI, such as drafting decoy phishing emails or crafting non-consequential summaries, are helpful but still require careful oversight.

Data is perhaps the most important factor for building language models. The data used to train DEMIST-2 balanced the need for general language understanding with security expertise. We used both publicly available and proprietary datasets.  Our proprietary dataset included privacy-preserving data such as URIs observed in customer alerts, anonymized at source to remove PII and gathered via the Call Home and aianalyst.darktrace.com services. For additional details, read our Technical Paper.  

DEMIST-2 is our way of addressing the unique challenges posed by security data. It recognizes that security data follows its own patterns that are distinct from natural language. For example, hostnames, HTTP headers, and certificate fields often appear in predictable ways, but not necessarily in a way that mirrors natural language. General-purpose LLMs tend to break down when used in these types of highly specialized domains. They struggle to interpret structure and context, fragmenting important patterns during tokenization in ways that can have a negative impact on performance.  

DEMIST-2 was built to understand the language and structure of security data using a custom tokenizer built around a security-specific vocabulary of over 16,000 words. This tokenizer allows the model to more accurately process inputs such as encoded payloads, file paths, subdomain chains, and command-line arguments, types of data that are often misinterpreted by general-purpose models.

When the tokenizer encounters unfamiliar or irregular input, it breaks the data into smaller pieces so it can still be processed. The ability to fall back to individual bytes is critical in cybersecurity contexts where novel or obfuscated content is common. This approach combines precision with flexibility, supporting specialized understanding with resilience in the face of unpredictable data.  
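A greedy longest-match tokenizer with a fallback captures the shape of this behavior. A sketch only: the five-entry `vocab` stands in for DEMIST-2's ~16,000-entry vocabulary, and single characters stand in for true byte-level fallback.

```python
def tokenize(text, vocab):
    """Greedy longest-match tokenization with character fallback.

    Known terms are kept whole; any unfamiliar or obfuscated run
    degrades gracefully to single characters instead of failing.
    """
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try the longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:                               # fallback: one character
            tokens.append(text[i])
            i += 1
    return tokens

# Hypothetical security vocabulary:
vocab = {"http", "://", "cmd.exe", "/c ", "powershell"}
assert tokenize("cmd.exe/c powershell", vocab) == ["cmd.exe", "/c ", "powershell"]
# Obfuscated junk still tokenizes, one character at a time:
assert tokenize("cmd.exe%41", vocab) == ["cmd.exe", "%", "4", "1"]
```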

Along with our custom tokenizer, we made changes to support task specialization without increasing model size. To do this, DEMIST-2 uses LoRA, a technique that integrates lightweight components with the base model to allow it to perform specific tasks while keeping memory requirements low. By using LoRA, our proprietary representation of security knowledge can be shared and reused as a starting point for more highly specialized models; for example, understanding hostnames requires a different type of specialization than understanding sensitive filenames. DEMIST-2 dynamically adapts to these needs and performs them with purpose.

The result is that DEMIST-2 is like having a room of specialists working on difficult problems together, while sharing a basic core set of knowledge that does not need to be repeated or reintroduced to every situation. Sharing a consistent base model also improves its maintainability and allows efficient deployment across diverse environments without compromising speed or accuracy.  

Tokenization and task specialization represent only a portion of the updates we have made to our embedding model. In conjunction with the changes described above, DEMIST-2 integrates several updated modeling techniques that reduce latency and improve detections. To learn more about these details, our training data and methods, and a full write-up of our results, please read our scientific whitepaper.

DEMIST-2 in action

In this section, we highlight DEMIST-2's embeddings and performance. First, we show a visualization of how DEMIST-2 classifies and interprets hostnames, and second, we present its performance in a hostname classification task in comparison to other language models.  

Embeddings can often feel abstract, so let’s make them real. Figure 1 below is a 2D visualization of how DEMIST-2 classifies and understands hostnames. In reality, these hostnames exist across many more dimensions, capturing details like their relationships with other hostnames, usage patterns, and contextual data. The colors and positions in the diagram represent a simplified view of how DEMIST-2 organizes and interprets these hostnames, providing insights into their meaning and connections. Just like an experienced human analyst can quickly identify and group hostnames based on patterns and context, DEMIST-2 does the same at scale.  
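To make the "nearby hostnames are related" intuition tangible, here is a deliberately crude embedding built from character trigrams. DEMIST-2's embeddings are learned and far richer, but even trigram counts capture enough surface structure for similar hostnames to score closer together; the hostnames below (other than the cortanaservice domain mentioned earlier) are invented.

```python
from collections import Counter
import math

def embed(hostname, n=3):
    """Toy embedding: a sparse vector of character-trigram counts."""
    return Counter(hostname[i:i + n] for i in range(len(hostname) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Structurally similar hostnames land closer together than unrelated ones.
a = embed("shop-florist-leeds.example.com")
b = embed("shop-bakery-leeds.example.com")
c = embed("x9q2vk.cortanaservice.com")
assert cosine(a, b) > cosine(a, c)
```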

Figure 1: DEMIST-2 visualization of hostname relationships from a large web dataset.

Next, let’s zoom in on two distinct clusters that DEMIST-2 recognizes. One cluster represents small businesses (Figure 2) and the other, Russian and Polish sites with similar numerical formats (Figure 3). These clusters demonstrate how DEMIST-2 can identify specific groupings based on real-world attributes such as regional patterns in website structures, common formats used by small businesses, and other properties such as its understanding of how websites relate to each other on the internet.

Figure 2: Cluster of small businesses
Figure 3: Cluster of Russian and Polish sites with a similar numerical format

The previous figures provided a view of how DEMIST-2 works. Figure 4 highlights DEMIST-2’s performance in a security-related classification task. The chart shows how DEMIST-2, with just 95 million parameters, achieves nearly 94% accuracy, making it the highest-performing model in the chart despite being the smallest. In comparison, the largest model, with 2.78 billion parameters, achieves only about 89% accuracy. Size does not always mean better performance, and small size does not mean poor performance: for many security-related tasks, DEMIST-2 outperforms much larger models.

Figure 4: Hostname classification task performance comparison against comparable open source foundation models

With these examples of DEMIST-2 in action, we’ve shown how it excels in embedding and classifying security data while delivering high performance on specialized security tasks.  

The DEMIST-2 advantage

DEMIST-2 was built for precision and reliability. Our primary goal was to create a high-performance model capable of tackling complex cybersecurity tasks. Optimizing for efficiency and scalability came second, but it is a natural outcome of our commitment to building a strong, effective solution that is available to security teams working across diverse environments. It is an enormous benefit that DEMIST-2 is orders of magnitude smaller than many general-purpose models. However, and much more importantly, it significantly outperforms models in its capabilities and accuracy on security tasks.  

Finding a product that fits into an environment’s unique constraints used to mean that some teams had to settle for less powerful or less performant products. With DEMIST-2, data can remain local to the environment, is entirely separate from the data of other customers, and can even operate in environments without network connectivity. The size of our model allows for flexible deployment options while at the same time providing measurable performance advantages for security-related tasks.  

As security threats continue to evolve, we believe that purpose-built AI systems like DEMIST-2 will be essential tools for defenders, combining the power of modern language modeling with the specificity and reliability that builds trust and partnership between security practitioners and AI systems.

Conclusion

DEMIST-2 has additional architectural and deployment updates that improve performance and stability. These innovations contribute to our ability to minimize model size and memory constraints and reflect our dedication to meeting the data handling and privacy needs of security environments. In addition, these choices reflect our dedication to responsible AI practices.

DEMIST-2 is available in Darktrace 6.3, along with a new DIGEST model that uses GNNs and RNNs to score and prioritize threats with expert-level precision.

About the author
Margaret Cunningham, PhD
Director, Security & AI Strategy, Field CISO