July 18, 2023

Understanding Email Security & the Psychology of Trust

We explore how psychological research into the nature of trust relates to our relationship with technology - and what that means for AI solutions.

When security teams discuss the possibility of phishing attacks targeting their organization, the first reaction is often to assume compromise is inevitable because of the users. Users are typically referenced in cyber security conversations as organizations’ greatest weakness, cited as the cause of many grave cyber-attacks because they click links, open attachments, or approve multi-factor authentication requests without verifying their purpose.

While for many the weakness of the user may feel like a fact rather than a theory, there is significant evidence to suggest that users are psychologically incapable of protecting themselves from exploitation by phishing attacks, with or without regular cyber awareness training. The psychology of trust and the nature of human reliance on technology make preparing users for the exploitation of that trust in technology very difficult – if not impossible.

This Darktrace long read will highlight principles of psychological and sociological research on the nature of trust, the elements of trust that relate to technology, and how the human brain is wired to rely on implicit trust. These principles all point to the same outcome: humans cannot be relied upon to identify phishing. Email security driven by machine augmentation, such as AI anomaly detection, is the clearest solution to that challenge.

What is the psychology of trust?

Psychological and sociological theories on trust largely centre around the importance of dependence and a two-party system: the trustor and the trustee. Most research has studied the impacts of trust decisions on interpersonal relationships, and the characteristics which make those relationships more or less likely to succeed. In behavioural terms, the elements most frequently referenced in trust decisions are emotional characteristics such as benevolence, integrity, competence, and predictability.1

Most behavioural evaluations of trust decisions survey why someone chooses to trust another person, how they made that decision, and how quickly they arrived at their choice. However, these micro-choices about trust sit within a wider context: trust is essential to human survival. Trust decisions are rooted in many of the same survival instincts that require the brain to categorize information and determine possible dangers. More broadly, successful trust relationships are essential to maintaining the fabric of human society, critical to every element of human life.

Trust can be compared to dark matter (Rotenberg, 2018): the extensive but often difficult-to-observe material that binds planets and earthly matter. In the same way, trust is an integral but often silent component of human life, connecting people and enabling social functioning.2

Defining implicit and routine trust

As briefly mentioned earlier, dependence is an essential element of a trusting relationship. A routine of trust, based on the maintenance rather than the establishment of trust, becomes implicit within everyday life. For example, speaking to a friend about personal issues and life developments is often a subconscious reaction to the events occurring, rather than an explicit choice to trust that friend anew with each new experience.

Active and passive levels of cognition are important to recognize in decision-making, including trust choices. Decision-making is often an active cognitive process requiring significant resources from the brain. However, many decisions occur passively, especially if they are not new choices, e.g. habits or routines. The brain’s focus turns to immediate tasks while relegating habitual choices to subconscious thought processes: passive cognition. Passive cognition leaves the brain open to inattentional blindness, wherein the individual may be abstractly aware of a choice without it being the focus of their thought processes or actively acknowledged as a decision. These levels of cognition are most often discussed as “attention” within the brain’s cognition and processing.3

This is, in essence, implicit trust: trust that occurs as a background thought process rather than as active decision-making. Implicit trust extends to multiple areas of human life, including interpersonal relationships, but also habitual choices and lifestyle. Combined with dependence on people and services, this implicit trust creates a haze of cognition in which trust is implied and assumed, rather than actively chosen, across a myriad of scenarios.

Trust and technology

As researchers at the University of Cambridge highlight in their research into trust and technology, ‘In a fundamental sense, all technology depends on trust.’4 The same implicit trust systems that allow us to navigate social interactions by subconsciously choosing to trust also apply to interactions with technology. The implied trust in technology and services is perhaps most easily explained by a metaphor.

Most people have a favourite brand of soda. They will routinely purchase that soda and drink it without testing it for chemicals or bacteria, and without reading reviews to ensure the company that produces it has not changed its quality standards. This is a helpful, representative example of routine trust: the trust choice is implicit through habitual action, and the person is not actively thinking about the ramifications of continuing to use and trust the product.

The principle of dependence is especially important in discussions of trust and technology, because the modern human is entirely reliant on technology and so has no way to avoid trusting it.5 This matters especially in workplace scenarios, where employees are given a mandatory set of technologies, from programs to devices and services, which they must interact with on a daily basis. Over time, the same implicit trust that would form between two people forms between the user and the technology. The key difference between interpersonal trust and technological trust is that deception is often much more difficult to identify in the latter.

The implicit trust in workplace technology

To provide a bit of workplace-specific context, organizations rely on technology providers for the operation (and often the security) of their devices. The organizations also rely on the employees (users) to use those technologies within the accepted policies and operational guidelines. The employees rely on the organization to determine which products and services are safe or unsafe.

Within this context, implicit trust occurs at every layer of the organization and its technological holdings, but the trust choice is often made only annually, by a small security team, rather than continually re-evaluated. Systems and programs remain in place for years and are used because “that’s the way it’s always been done.” Against that backdrop, the exploitation of that trust by threat actors impersonating or compromising those technologies or services is extremely difficult for a human to identify.

For example, many organizations use email communications to promote software updates to employees. Typically, this consists of an email prompting employees to update versions directly from the vendors or from public marketplaces, such as the App Store on Mac or the Microsoft Store on Windows. If that kind of email were impersonated, spoofing an update and including a malicious link or attachment, there would be no reason for the employee to question it, given the implicit trust reinforced through habitual use of that service and program.

Inattentional blindness: How the brain ignores change

Users are psychologically predisposed to trust routinely used technologies and services, with most of those trust choices continuing subconsciously. Changes to these technologies are often subject to inattentional blindness, a psychological phenomenon wherein the brain overwrites sensory information with what it expects to see rather than what is actually perceived.

A great example of inattentional blindness6 is the following experiment, which asks individuals to count the number of times a ball is passed between multiple people. While the counting is under way, something else is going on in the background which, statistically, those tested will not see. The shocking part of the experiment comes afterwards, when the researcher reveals that the unnoticed background event was a person in a gorilla suit moving back and forth through the group. This highlights how the brain can overlook significant details and “overwrite” them with other sensory information. Applied to technology, inattentional blindness and implicit trust make spotting changes in behaviour, or indicators that a trusted technology or service has been compromised, nearly impossible for most humans.

With all this in mind, how can you prepare users to correctly anticipate or identify a violation of that trust when their brains subconsciously make trust decisions and unintentionally ignore cues to suggest a change in behaviour? The short answer is, it’s difficult, if not impossible.

How threats exploit our implicit trust in technology

Most cyber threats are built around exploiting the implicit trust humans place in technology. Whether through techniques like “living off the land”, wherein programs normally associated with expected activity are leveraged to execute an attack, or through more overt psychological manipulation like phishing campaigns and scams, many cyber threats are predicated on the exploitation of human trust, rather than simply evading technological safeguards or building backdoors into programs.

In the case of phishing, it is easy to identify the attempts to leverage users’ trust in technology and services. The most common example is spoofing, one of the most frequent tactics observed by Darktrace/Email. Spoofing means mimicking a trusted user or service, and it can be accomplished through a variety of mechanisms, be it the creation of a fake domain meant to mirror a trusted link, or the creation of an email account that appears to be a Human Resources, Internal Technology, or Security service.
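To make the mechanics concrete, here is a minimal sketch of one heuristic a defender could apply to spot lookalike domains: compare each sender domain against a set of trusted domains and flag near-matches. The trusted list, threshold, and example domains are illustrative assumptions, not Darktrace's detection logic.

```python
# A minimal lookalike-domain check, assuming a hypothetical
# TRUSTED_DOMAINS list and similarity threshold. Illustrative only.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"microsoft.com", "dropbox.com", "yourcompany.com"}

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means the strings are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_suspected_spoof(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that nearly match a trusted domain but are not it."""
    for trusted in TRUSTED_DOMAINS:
        if sender_domain.lower() != trusted and similarity(sender_domain, trusted) >= threshold:
            return True
    return False

print(is_suspected_spoof("micros0ft.com"))  # True: homoglyph near-match
print(is_suspected_spoof("microsoft.com"))  # False: exact trusted domain
```

Real detection weighs many more signals than string similarity, but the example shows why a human eye skimming “micros0ft.com” subconsciously resolves it to the trusted brand while a machine comparison does not.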

In the case of a falsified internal service, often dubbed a “Fake Support Spoof”, the user is exploited by following instructions from an accepted organizational authority figure and service provider, whose instructions would normally be adhered to. These cases are often difficult to spot from the sender’s address or the text of the email alone, and they become even harder to detect if an account from one of those services is compromised, so that the sender’s address is legitimate and expected for correspondence. Especially given the context of implicit trust, detecting deception in these cases is extremely difficult.

How email security solutions can solve the problem of implicit trust

How can an organization prepare for this exploitation? How can it mitigate threats which are designed to exploit implicit trust? The answer is by using email security solutions that leverage behavioural analysis via anomaly detection, rather than traditional email gateways.

Expecting humans to identify the exploitation of their own trust is a high-risk, low-reward endeavour, especially when that exploitation takes different forms, affects different users or portions of the organization differently, and doesn’t always carry obvious red flags marking it as suspicious. Cue email security using anomaly detection as the key answer to this evolving problem.

Anomaly detection enabled by machine learning and artificial intelligence (AI) removes the inattentional blindness that plagues human users and security teams, enabling the identification of departures from the norm, even those designed to mimic expected activity. Anomaly detection mitigates multiple human cognitive biases that might prevent teams from identifying evolving threats. It may occasionally surface benign anomalous activity to security teams, but in exchange it ensures that no threat, no matter how novel or cleverly packaged, goes unidentified and unraised to the human security team.

Utilizing machine learning, especially unsupervised machine learning, mimics the benefits of human decision-making while identifying patterns and categorizing information without the framing and biases that allow trust to be leveraged and exploited.
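As a toy illustration of the unsupervised approach, the sketch below fits an anomaly detector to simple email metadata, assuming scikit-learn is available. The features and synthetic data are invented for the example and far simpler than any production baseline.

```python
# A toy unsupervised anomaly detector over simple email metadata,
# assuming scikit-learn. Features and data are synthetic illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# One row per email: [send hour, number of links, sender-rarity score]
normal_mail = np.column_stack([
    rng.normal(11, 2, 500),    # sent during business hours
    rng.poisson(1, 500),       # few links per message
    rng.uniform(0, 0.2, 500),  # senders seen frequently before
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_mail)

# A 3 a.m. message stuffed with links from a never-seen sender
suspect = np.array([[3.0, 8.0, 0.99]])
print(model.predict(suspect))            # [-1] means flagged as anomalous
print(model.decision_function(suspect))  # more negative = more anomalous
```

Note that the model is trained only on the organization's own “normal”, with no catalogue of known-bad patterns, which is what lets it flag attacks it has never seen before.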

For example, say a cleverly written email is sent from an address that appears to belong to a Microsoft affiliate, suggesting that the user needs to patch their software due to the discovery of a new vulnerability. The sender’s address appears legitimate, and news stories are circulating on major media outlets about a new Microsoft vulnerability causing organizations a lot of problems. The link, if clicked, forwards the user to a login page to verify their Microsoft credentials before downloading the new version of the software. After logging in, the program is available for download and requires only a few minutes to install. Whether this email was created by a generative AI service like ChatGPT or written by a person, acting on it would give the threat actor(s) the user’s credentials, and downloading the software would activate malware on the device and possibly the broader network.

If we rely on users to identify this as unusual, there are many evidence points reinforcing their implicit trust in Microsoft services that make them want to comply with the email rather than question it. Anomaly detection-driven email security, by comparison, would flag the unusualness of the source: the email would likely not come from a Microsoft-owned IP address, and the sender would be unusual for an organization that does not normally receive mail from them. The language might indicate solicitation, an attempt to entice the user to act, and the link could be flagged for containing a hidden redirect or tailored information the user cannot see, whether hidden beneath text like “Click Here” or behind a shortened link. All of this information is present and discoverable in the phishing email, but it is often invisible to human users because of trust decisions made months or even years ago about known products and services.
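One of those signals, visible link text that hides or misrepresents the true destination, can be sketched with nothing but the Python standard library. The heuristics below are illustrative assumptions, not a production parser.

```python
# A standard-library sketch of one signal: anchor text that hides or
# misrepresents a link's real destination. Heuristics are illustrative.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (visible text, href) pairs that look deceptive."""

    def __init__(self):
        super().__init__()
        self.findings = []
        self._href = None
        self._text = ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            dest = urlparse(self._href).netloc
            text = self._text.strip()
            # Generic calls to action, or visible text naming a domain
            # other than the true destination, are worth flagging.
            if "click here" in text.lower() or ("." in text and dest and dest not in text):
                self.findings.append((text, self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example/patch">Click Here to update</a>')
print(auditor.findings)  # [('Click Here to update', 'http://evil.example/patch')]
```

The point is not the specific heuristics but that the deceptive structure is fully present in the message for a machine to inspect, even when the rendered email gives a human nothing suspicious to look at.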

AI-driven Email Security: The Way Forward

Email security solutions employing anomaly detection are critical weapons for security teams in the fight to stay ahead of evolving threats and varied kill chains, which grow more complex year on year. The intertwining nature of technology, coupled with massive social reliance on it, guarantees that implicit trust will be exploited more and more, giving threat actors a variety of avenues into an organization. The changing nature of phishing and social engineering made possible by generative AI is just a drop in the ocean of possible threats organizations face, and most will involve a trusted product or service being leveraged as an access point or attack vector. Anomaly detection and AI-driven email security are the most practical solution for security teams aiming to prevent, detect, and mitigate attacks that target users and technology through the exploitation of trust.

References

1. https://www.kellogg.northwestern.edu/trust-project/videos/waytz-ep-1.aspx

2. Rotenberg, K.J. (2018). The Psychology of Trust. Routledge.

3. https://www.cognifit.com/gb/attention

4. https://www.trusttech.cam.ac.uk/perspectives/technology-humanity-society-democracy/what-trust-technology-conceptual-bases-common

5. Tyler, T.R. and Kramer, R.M. (2001). Trust in Organizations: Frontiers of Theory and Research. Thousand Oaks, CA: Sage Publications, pp. 39–49.

6. https://link.springer.com/article/10.1007/s00426-006-0072-4

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Author
Hanah Darley
Director of Threat Research


February 18, 2025

Unifying IT & OT With AI-Led Investigations for Industrial Security


As industrial environments modernize, IT and OT networks are converging to improve efficiency, but this connectivity also creates new attack paths. Previously isolated OT systems are now linked to IT and cloud assets, making them more accessible to attackers.

While organizations have traditionally relied on air gaps, firewalls, data diodes, and access controls to separate IT and OT, these measures alone aren’t enough. Threat actors often infiltrate IT/enterprise networks first, then exploit segmentation gaps, compromised credentials, or shared IT/OT systems to move laterally, escalate privileges, and ultimately enter the OT network.

To defend against these threats, organizations must first ensure they have complete visibility across IT and OT environments.

Visibility: The first piece of the puzzle

Visibility is the foundation of effective industrial cybersecurity, but it’s only the first step. Without visibility across both IT and OT, security teams risk missing the key alerts that indicate a threat targeting OT at its earliest stages.

In attacks targeting OT, early-stage exploits often originate in IT environments: adversaries perform internal reconnaissance, among other tactics and procedures, and then move laterally into OT, first affecting the IT devices, servers, and workstations within the OT network. If visibility is limited, these threats go undetected. To stay ahead of attackers, organizations need full-spectrum visibility that connects IT and OT security, ensuring no early warning signs are missed.

However, visibility alone isn’t enough. More visibility also means more alerts, which doesn’t just make it harder to separate real threats from routine activity; it bogs down analysts, who have to investigate all of these alerts to determine their criticality.

Investigations: The real bottleneck

While visibility is essential, it also introduces a new challenge: alert fatigue. Without the right tools, analysts are often occupied investigating alerts with little to no context, forcing them to manually piece together information and determine whether an attack is unfolding. This slows response times and increases the risk of missing critical threats.

Figure 1: Example ICS attack scenario

With siloed visibility across IT and OT, each of the events shown above would be alerted on individually by a detection engine, with little to no context or correlation, and an analyst would have to piece the events together manually. Traditional security tools struggle to keep pace with the sophistication of these threats, resulting in an alarming statistic: less than 10% of alerts are thoroughly vetted, leaving organizations vulnerable to undetected breaches. As a result, incidents inevitably follow.

Darktrace’s Cyber AI Analyst uses AI-led investigations to improve analyst workflows by automatically correlating alerts wherever they occur across both IT and OT. The multi-layered AI engine identifies high-priority incidents and provides analysts with clear, actionable insights, reducing noise and highlighting meaningful threats. The AI significantly alleviates workloads, enabling teams to respond faster and more effectively before an attack escalates.
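The core correlation idea can be illustrated in miniature: group alerts that share an involved asset within a time window into a single incident, so that an IT credential anomaly, an IT-to-OT connection, and an unusual Modbus write surface as one narrative rather than three isolated alerts. The sketch below is a deliberate simplification with invented alert names, not Darktrace's correlation engine.

```python
# A deliberately simplified correlation sketch: alerts sharing an
# involved asset within a one-hour window are stitched into a single
# incident. Alert names and the window are invented for illustration.
from dataclasses import dataclass

@dataclass
class Alert:
    timestamp: int           # epoch seconds
    assets: frozenset[str]   # devices involved in the alert
    summary: str

WINDOW = 3600  # seconds

def correlate(alerts: list[Alert]) -> list[list[Alert]]:
    incidents: list[list[Alert]] = []
    for alert in sorted(alerts, key=lambda a: a.timestamp):
        for incident in incidents:
            # Join an incident if any member shares an asset recently
            if any(a.assets & alert.assets and
                   alert.timestamp - a.timestamp <= WINDOW
                   for a in incident):
                incident.append(alert)
                break
        else:
            incidents.append([alert])
    return incidents

alerts = [
    Alert(1000, frozenset({"it-server-01"}), "unusual admin credential use"),
    Alert(1900, frozenset({"it-server-01", "plc-07"}), "new IT-to-OT connection"),
    Alert(2500, frozenset({"plc-07"}), "unusual Modbus write to controller"),
]
for n, incident in enumerate(correlate(alerts), start=1):
    print(f"Incident {n}: {[a.summary for a in incident]}")
# One incident emerges, reading like a lateral-movement narrative
```

Even this toy version shows the payoff: the Modbus write is only alarming in the context of the earlier IT activity, and correlation is what supplies that context automatically.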

Overcoming organizational challenges across IT and OT

Beyond technical challenges like visibility and alert management, organizational dynamics further complicate IT-OT security efforts. Fundamental differences in priorities, workflows, and risk perspectives create challenges that can lead to misalignment between teams:

Non-transferable practices: IT professionals might assume that cybersecurity practices from IT environments can be directly applied to OT environments. This can lead to issues, as OT systems and workflows may not handle IT security processes as expected. It's crucial to recognize and respect the unique requirements and constraints of OT environments.

Segmented responsibilities: IT and OT teams often operate under separate organizational structures, each with distinct priorities, goals, and workflows. While IT focuses on data security, network integrity, and enterprise applications, OT prioritizes uptime, reliability, and physical processes.

Different risk perspectives: While IT teams focus on preventing cyber threats and regulatory violations, OT teams prioritize uptime and operational reliability, which draws them towards asset inventory tools that provide no threat detection capability.

Result: a combination of disparate, ineffective tools and misaligned teams can make any progress toward risk reduction at an organization seem impossible. The right tools should both free up time for collaboration and prompt better communication between IT and OT teams where it is needed. However, operations of different sizes structure their IT and OT teams differently, which shapes the priorities of each team.

In real-world scenarios, small IT teams struggle to manage security across both IT and OT, while larger organizations with dedicated OT security teams face alert fatigue and numerous false positives, slowing down investigations and hindering effective communication with the IT security teams.

By unifying visibility and investigations, Darktrace / OT helps organizations of all sizes detect threats earlier, streamline workflows, and enhance security across both IT and OT environments. The following examples illustrate how AI-driven investigations can transform security operations, improving detection, investigation, and response.

Before and after AI-led investigation

Before: Small manufacturing company

At a small manufacturing company, a 1-3 person IT team juggles everything from email security to network troubleshooting. An analyst might see unusual traffic through the firewall:

  • Unusual repeated outbound traffic from an IP within their OT network, destined for an unidentifiable external IP.

With no dedicated OT security tools and limited visibility into the industrial network, they don’t know what the internal device in question is, whether it is beaconing to a malicious external IP, or what it may be doing to other devices within the OT network. Without a centralized dashboard, they must manually check logs, ask operators about changes, and hunt for anomalies across different systems.

After a day of investigation, they conclude that the traffic is not expected activity. They stop production within their small OT network, update their firewall rules, and factory-reset all OT devices and systems within the blast radius of the device in question.

After: Faster, automated response with Cyber AI Analyst

With Darktrace / OT and Cyber AI Analyst, the IT team moves from reactive, manual investigations to proactive, automated threat detection:

  • Cyber AI Analyst connects alerts across their IT and OT infrastructure, temporally mapping them to attack frameworks, and provides contextual analysis of how the alerts are linked, revealing in real time attackers attempting lateral movement from IT to OT.
  • A human-readable incident report explains the full scope of the incident, eliminating hours of manual investigation.
  • The team triages faster, led directly to prioritized, high-criticality alerts, and can respond immediately instead of wasting valuable time hunting for answers.

By reducing noise, providing context, and automating investigations, Cyber AI Analyst transforms OT security, enabling small IT teams to detect, understand, and respond to threats—without deep OT cybersecurity expertise.

Before: Large critical infrastructure organization

In large critical infrastructure operations, OT and IT teams work in separate silos. The OT security team needs to quickly assess and prioritize alerts, but their system floods them with notifications:

  • Multiple “new device connected to the ICS network” alerts
  • Multiple “failed logins to HMI detected” alerts
  • Multiple “unusual Modbus/TCP commands detected” alerts
  • Repeated outbound OT traffic to IT destinations

At first glance, these alerts seem important, but without context, it’s unclear whether they indicate a routine error, a misconfiguration, or an active cyber-attack. They might ask:

  • Are the failed logins just a mistake, or a brute-force attempt?
  • Is the outbound traffic part of a scheduled update, or data exfiltration?

Without correlation across events, the engineer must manually investigate each one—checking logs, cross-referencing network activity, and contacting operators—wasting valuable time. Meanwhile, if it’s a coordinated attack, the adversary may already be disrupting operations.

After: A new workflow with Cyber AI Analyst

With Cyber AI Analyst, the OT security team gets clear, automated correlation of security events, making investigations faster and more efficient:

  • Automated correlation of OT threats: Instead of isolated alerts, Cyber AI Analyst stitches together related events, providing a single, high-confidence incident report that highlights key details.
  • Faster time to meaning: The system connects anomalous behaviors (e.g., failed logins, unusual traffic from an HMI, and unauthorized PLC modifications) into a cohesive narrative, eliminating hours of manual log analysis.
  • Prioritized and actionable alerts: OT security receives clear, ranked incidents, immediately highlighting what matters most.
  • Rapid threat understanding: Security teams know within minutes whether an event is a misconfiguration or a cyber-attack, allowing for faster containment.

With Cyber AI Analyst, large organizations cut through alert noise, accelerate investigations, and detect threats faster—without disrupting OT operations.

An AI-led approach to industrial cybersecurity

Security vendors with a primary focus on IT may lack insight into OT threats. Even OT-focused vendors have limited visibility into IT device exploitation within OT networks, leaving them unable to detect early indicators of compromise. A comprehensive solution must account for the unique characteristics of various OT environments.

In a world where industrial security is no longer just about protecting OT but about securing the entire digital-physical ecosystem as it interacts with the OT network, Darktrace / OT is an AI-driven solution that unifies visibility across IT, IoT, OT, and cloud into one cohesive defense strategy.

Whether an attack originates from an external breach, an insider threat, or a supply chain compromise, and whether it begins in the cloud, OT, or IT domains, Cyber AI Analyst ensures that security teams see the full picture - before disruption occurs.

Learn more about Darktrace / OT 

  • Unify IT and OT security under a single platform, ensuring seamless communication and protection for all interconnected devices.
  • Maintain uptime with AI-driven threat containment, stopping attacks without disrupting production.
  • Mitigate risks with or without patches, leveraging MITRE mitigations to reduce attack opportunities.

Download the solution brief to see how Darktrace secures critical infrastructure.

About the author
Daniel Simonds
Director of Operational Technology

February 13, 2025

Why Darktrace / EMAIL excels against APTs


What are APTs?

An Advanced Persistent Threat (APT) describes an adversary with sophisticated levels of expertise and significant resources, with the ability to carry out targeted cyber campaigns. These campaigns may penetrate an organization and remain undetected for long periods, allowing attackers to gather intelligence or cause damage over time.

Over the last few decades, the term APT has evolved from being almost exclusively associated with nation-state actors to a broader definition that includes highly skilled, well-resourced threat groups. While still distinct from mass, opportunistic cybercrime or "spray and pray" attacks, APT now refers to the elite tier of adversaries, whether state-sponsored or not, who demonstrate advanced capabilities, persistence, and a clear strategic focus. This shift reflects the growing sophistication of cyber threats, where non-state actors can now rival nation-states in executing covert, methodical intrusions to achieve long-term objectives.

These attacks are resource-intensive for threat actors to execute, but the potential rewards—ranging from financial gain to sensitive data theft—can be significant. In 2020, Business Email Compromise (BEC) attacks netted cybercriminals over $1.8 billion.1

And recently, the advent of AI has helped to automate launching these attacks, lowering the barriers to entry and making it more efficient to orchestrate the kind of attack that might previously have taken weeks to create. Research shows that AI can do 90% of a threat actor’s work2 – reducing time-to-target by automating tasks rapidly and avoiding errors in phishing communications. Email remains the most popular vector for initiating these sophisticated attacks, making it a critical battleground for cyber defense.

What makes APTs so successful?

The success of Advanced Persistent Threats (APTs) lies in their precision, persistence, and ability to exploit human and technical vulnerabilities. These attacks are carefully tailored to specific targets, using techniques like social engineering and spear phishing to gain initial access.

Once inside, attackers move laterally through networks, often remaining undetected for months or even years, silently gathering intelligence or preparing for a decisive strike. Alternatively, they might linger inside an account within the M365 environment, which can be even more valuable for gathering information – in 2023, the average time to identify a breach was 204 days.3

The subtle, long-term nature of APTs makes them highly effective, as traditional security measures often fail to identify the faint signs of compromise.

How Darktrace’s approach is designed to catch the most advanced threats

Luckily for our customers, Darktrace’s AI approach is uniquely equipped to detect and neutralize APTs. Unlike the majority of email security solutions, which rely on static rules and signatures or train their AI on previously known-bad attack patterns, Darktrace leverages Self-Learning AI that baselines normal patterns of behavior within an organization to immediately detect unusual activity that may signal an APT in progress.
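To illustrate the "baseline normal, flag deviation" idea in its simplest possible form (a toy, not Darktrace's Self-Learning AI; the class and scoring are assumptions for the example), consider a per-organization record of how often each sender has been seen:

```python
# A toy "learn normal, flag deviation" baseline: score each sender by
# how novel it is for this organization. Illustrative assumptions only.
from collections import Counter

class SenderBaseline:
    def __init__(self):
        self.seen = Counter()
        self.total = 0

    def observe(self, sender: str) -> None:
        """Update the baseline with a delivered, benign message."""
        self.seen[sender] += 1
        self.total += 1

    def rarity(self, sender: str) -> float:
        """1.0 = never seen before; near 0.0 = routine correspondent."""
        if self.total == 0:
            return 1.0
        return 1.0 - self.seen[sender] / self.total

baseline = SenderBaseline()
for _ in range(200):
    baseline.observe("supplier@partner.example")

print(baseline.rarity("supplier@partner.example"))      # 0.0, routine
print(baseline.rarity("it-support@m1crosoft.example"))  # 1.0, novel sender
```

Because the baseline is learned from the organization's own traffic rather than a shared blocklist, a sender can be "normal" for one customer and highly anomalous for another, which is precisely what signature-based approaches cannot express.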

But in the modern era of email threats, no email security solution can guarantee 100% effectiveness. Because attackers operate with great sophistication, carefully adapting their tactics to evade detection – whether by altering attachments, leveraging compromised accounts, or moving laterally across an organization – a siloed security approach risks missing these subtle, multi-domain threats. That’s why a robust defense-in-depth strategy is essential to mitigate APTs.

Real-world threat finds: Darktrace / EMAIL in action

Let’s take a look at some real-world scenarios where Darktrace / EMAIL stopped tactics associated with APT campaigns in their tracks – from adversary-in-the-middle attacks to suspicious lateral movement.

1: How Darktrace disrupted an adversary-in-the-middle attack by identifying abnormal login redirects and blocking credential exfiltration

In October 2024, Darktrace detected an adversary-in-the-middle (AiTM) attack targeting a Darktrace customer. The attack began with a phishing email from a seemingly legitimate Dropbox address, which contained multiple link payloads inviting the recipient to access a file. Other solutions would have struggled to catch this attack, as the initial AiTM payload was a malicious URL delivered through a trusted vendor or service. Once the account was compromised, the threat actor could have laid low, gathering reconnaissance without detection by the email security solution.

Darktrace / EMAIL identified the abnormal login redirects and flagged the suspicious activity. Darktrace / IDENTITY then detected unusual login patterns and blocked credential exfiltration attempts, effectively disrupting the attack and preventing the adversary from gaining unauthorized access. Read more.

Figure 1: Overview of the malicious email in the Darktrace / EMAIL console, highlighting Dropbox associated content/link payloads

2: How Darktrace stopped lateral movement to block NTLM hash theft

In early 2024, Darktrace detected an attack by the TA577 threat group, which aimed to steal NTLM hashes to gain unauthorized access to systems. The attack began with phishing emails containing ZIP files that connected to malicious infrastructure.  

A traditional email security solution would likely have missed this attack by focusing too heavily on analyzing the ZIP file payloads, or by relying on reputation analysis that could not register the infrastructure as bad before this activity became a recognized IoC.

Because it correlates activity across domains, Darktrace identified unusual lateral movement within the network and promptly blocked the attempts to steal NTLM hashes, effectively preventing the attackers from accessing sensitive credentials and securing the network. Read more.

Figure 2: A summary of anomaly indicators seen for a campaign email sent by TA577, as detected by Darktrace / EMAIL

3: How Darktrace prevented the WarmCookie backdoor deployment embedded in phishing emails

In mid-2024, Darktrace identified a phishing campaign targeting organizations with emails impersonating recruitment firms. These emails contained malicious links that, when clicked, deployed the WarmCookie backdoor.  

These emails are difficult to detect, as they use social engineering tactics to manipulate users into engaging with them and following the embedded malicious links – and if a security solution is not analysing content and context, they can be allowed through.

In several observed cases across customer environments, Darktrace detected and blocked the suspicious behavior associated with WarmCookie that had already managed to evade customers’ native email security. By using behavioral analysis to correlate anomalous activity across the digital estate, Darktrace was able to identify the backdoor malware strain and notify customers. Read more.

Conclusion

These threat examples highlight a key principle of the Darktrace approach – that a backwards-facing approach grounded in threat intelligence will always be one step behind.

Most threat actors operate in campaigns, carefully crafting attacks and testing them across multiple targets. Once a campaign is identified, good defenders and traditional security solutions quickly update their defenses with new threat intelligence, rules, and signatures. However, APTs have the resources to rapidly adapt – spinning up new infrastructure, modifying payloads and altering their attack footprint to evade detection.

This is where Darktrace / EMAIL excels. Only by analyzing each user, message, and interaction can an email security solution hope to catch the types of highly sophisticated attacks that have the potential to cause major reputational and financial damage. Darktrace / EMAIL ensures that even the most subtle threats are detected and blocked with autonomous response before causing impact – helping organizations remain one step ahead of increasingly adaptive threat actors.

Download the Darktrace / EMAIL Solution Brief

Discover the most advanced cloud-native AI email security solution to protect your domain and brand while preventing phishing, novel social engineering, business email compromise, account takeover, and data loss.

  • Gain up to 13 days of earlier threat detection and maximize ROI on your current email security
  • Experience 20-25% more threat blocking power with Darktrace / EMAIL
  • Stop the 58% of threats bypassing traditional email security

References

[1] FBI Internet Crime Report 2020

[2] https://www.optiv.com/insights/discover/blog/future-security-automation-how-ai-machine-learning-and-automation-are

[3] IBM Cost of a Data Breach Report 2023

About the author
Carlos Gray
Product Manager