
Email Security and the Psychology of Trust: Why Users Face a Losing Game of “Spot the Fake”

18 Jul 2023

When security teams discuss the possibility of phishing attacks targeting their organization, the first reaction is often to assume that compromise is inevitable because of the users. Users are typically referenced in cyber security conversations as organizations’ greatest weakness, cited as the cause of many grave cyber-attacks because they click links, open attachments, or approve multi-factor authentication (MFA) requests without verifying their purpose.

While for many the weakness of the user may feel like fact rather than theory, there is significant evidence to suggest that users are psychologically incapable of protecting themselves from exploitation by phishing attacks, with or without regular cyber awareness training. The psychology of trust and the nature of human reliance on technology make it very difficult – if not impossible – to prepare users for the exploitation of their trust in technology.

This Darktrace long read will highlight principles of psychological and sociological research on the nature of trust, the elements of trust that relate to technology, and how the human brain is wired to rely on implicit trust. These principles all point to the same outcome: humans cannot be relied upon to identify phishing. Email security driven by machine augmentation, such as AI anomaly detection, is the clearest solution to that challenge.

What is the psychology of trust?

Psychological and sociological theories on trust largely centre around the importance of dependence and a two-party system: the trustor and the trustee. Most research has studied the impacts of trust decisions on interpersonal relationships, and the characteristics which make those relationships more or less likely to succeed. In behavioural terms, the elements most frequently referenced in trust decisions are emotional characteristics such as benevolence, integrity, competence, and predictability [1].

Most behavioural evaluations of trust decisions examine why someone chooses to trust another person, how they made that decision, and how quickly they arrived at their choice. However, these micro-choices about trust need to be understood in a wider context: trust is essential to human survival. Trust decisions are rooted in many of the same survival instincts which require the brain to categorize information and determine possible dangers. More broadly, successful trust relationships are essential to maintaining the fabric of human society and critical to every element of human life.

Trust can be compared to dark matter (Rotenberg, 2018): the extensive but difficult-to-observe material that binds the universe together. In the same way, trust is an integral but often silent component of human life, connecting people and enabling social functioning [2].

Defining implicit and routine trust

As briefly mentioned earlier, dependence is an essential element of a trusting relationship. Once trust is established, maintaining it becomes a routine, and that routine becomes implicit within everyday life. For example, speaking to a friend about personal issues and life developments is often a subconscious reaction to the events occurring, rather than an explicit choice to trust that friend anew with each new experience.

Active and passive levels of cognition are important to recognize in decision-making, including trust choices. Decision-making is often an active cognitive process requiring significant resources from the brain. Many decisions, however, occur passively, especially if they are not new choices, e.g. habits or routines. The brain’s focus turns to immediate tasks while relegating habitual choices to subconscious thought processes: passive cognition. Passive cognition leaves the brain open to the effects of inattentional blindness, wherein the individual may be abstractly aware of a choice, but it is not the focus of their thought processes or actively acknowledged as a decision. These levels of cognition are usually discussed as “attention” in the study of the brain’s cognition and processing [3].

This is essentially the concept of implicit trust: trust which occurs as a background thought process rather than as active decision-making. Implicit trust extends to multiple areas of human life, including interpersonal relationships, but also habitual choices and lifestyle. When combined with dependence on people and services, implicit trust creates a haze of cognition in which trust is implied and assumed, rather than actively chosen, across a myriad of scenarios.

Trust and technology

As researchers at the University of Cambridge highlight in their research into trust and technology, ‘In a fundamental sense, all technology depends on trust’ [4]. The same implicit trust systems which allow us to navigate social interactions by subconsciously choosing to trust also govern our interactions with technology. The implied trust in technology and services is perhaps most easily explained by a metaphor.

Most people have a favourite brand of soda. They will routinely purchase that soda and drink it without testing it for chemicals or bacteria, and without reading reviews to ensure the producer has not changed its quality standards. This is a representative example of routine trust: the trust choice is implicit in the habitual action, and the person is not actively weighing the ramifications of continuing to use and trust the product.

The principle of dependence is especially important in discussions of trust and technology, because the modern human is entirely reliant on technology and so has no way to avoid trusting it [5]. This is particularly significant in the workplace, where employees are given a mandatory set of technologies, from programs to devices and services, which they must interact with on a daily basis. Over time, the same implicit trust that would form between two people forms between the user and the technology. The key difference between interpersonal trust and technological trust is that deception is often much more difficult to identify.

The implicit trust in workplace technology

To provide some workplace-specific context: organizations rely on technology providers for the operation (and often the security) of their devices; organizations also rely on employees (users) to use those technologies within accepted policies and operational guidelines; and employees rely on the organization to determine which products and services are safe or unsafe.

Within this context, implicit trust is occurring at every layer of the organization and its technological holdings, yet the trust choice is often made only annually, by a small security team, rather than continually re-evaluated. Systems and programs remain in place for years and are used because “that’s the way it’s always been done.” Against that backdrop, the exploitation of that trust by threat actors impersonating or compromising those technologies or services is extremely difficult for a human to identify.

For example, many organizations use email to promote software updates to employees. Typically, these emails prompt employees to update to new versions, either from the vendors directly or from public marketplaces such as the App Store on Mac or the Microsoft Store on Windows. If that kind of email were impersonated, spoofing an update and including a malicious link or attachment, there would be no reason for the employee to question it, given the implicit trust reinforced through habitual use of that service and program.

Inattentional blindness: How the brain ignores change

Users are psychologically predisposed to trust routinely used technologies and services, with most of those trust choices made subconsciously. Changes to these technologies are therefore often subject to inattentional blindness, a psychological phenomenon wherein the brain overwrites sensory information with what it expects to see rather than what is actually perceived.

A well-known demonstration of inattentional blindness [6] is an experiment which asks individuals to count the number of times a ball is passed between several people. While the counting is occurring, something else is going on in the background which, statistically, most of those tested will not see. The shocking part of the experiment comes afterwards, when the researcher reveals that the unseen background event was a person in a gorilla suit moving back and forth through the group. This highlights how significant details can be overlooked by the brain and “overwritten” with other sensory information. Applied to technology, inattentional blindness and implicit trust make changes in behaviour, or indicators that a trusted technology or service has been compromised, nearly impossible for most humans to spot.

With all this in mind, how can you prepare users to correctly anticipate or identify a violation of trust when their brains subconsciously make trust decisions and unintentionally ignore cues that suggest a change in behaviour? The short answer: it’s difficult, if not impossible.

How threats exploit our implicit trust in technology

Most cyber threats are built around the idea of exploiting the implicit trust humans place in technology. Whether it’s techniques like “living off the land”, wherein programs normally associated with expected activities are leveraged to execute an attack, or through more overt psychological manipulation like phishing campaigns or scams, many cyber threats are predicated on the exploitation of human trust, rather than simply avoiding technological safeguards and building backdoors into programs.

In the case of phishing, the attempts to leverage users’ trust in technology and services are easy to identify. The most prevalent example is spoofing, one of the tactics most commonly observed by Darktrace/Email. Spoofing mimics a trusted user or service and can be accomplished through a variety of mechanisms, whether the creation of a fake domain meant to mirror a trusted one, or the creation of an email account which appears to belong to a Human Resources, Internal Technology or Security service.
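To make the lookalike-domain mechanism concrete, below is a minimal sketch of how such a check might work, using plain edit distance. This is an illustration only, not Darktrace’s implementation; the trusted-domain list and distance threshold are invented assumptions.

```python
# Sketch: flag sender domains that closely resemble, but do not exactly match,
# a set of trusted domains. Domain list and threshold are illustrative only.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = {"microsoft.com", "docusign.com", "example-corp.com"}

def looks_spoofed(sender_domain: str, max_distance: int = 2) -> bool:
    """A small but non-zero edit distance to a trusted domain suggests a lookalike."""
    if sender_domain in TRUSTED_DOMAINS:
        return False
    return any(levenshtein(sender_domain, d) <= max_distance
               for d in TRUSTED_DOMAINS)

print(looks_spoofed("rnicrosoft.com"))  # True: 'rn' visually mimics 'm'
```

A check like this catches near-miss domains and nothing else; it is one weak signal among the many that an anomaly-based system would weigh together.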

In the case of a falsified internal service, often dubbed a “Fake Support Spoof”, the user is exploited by following instructions from what appears to be an accepted organizational authority figure and service provider, whose instructions would normally be followed. These cases are often difficult to spot from the sender’s address or the text of the email alone, and they become even harder to detect if an account belonging to one of those services is compromised, so that the sender’s address is legitimate and expected for correspondence. Especially given the context of implicit trust, detecting deception in these cases would be extremely difficult.

How email security solutions can solve the problem of implicit trust

How can an organization prepare for this exploitation? How can it mitigate threats which are designed to exploit implicit trust? The answer is by using email security solutions that leverage behavioural analysis via anomaly detection, rather than traditional email gateways.

Expecting humans to identify the exploitation of their own trust is a high-risk, low-reward endeavour, especially when that exploitation takes different forms, affects different users or parts of the organization differently, and doesn’t always carry obvious red flags to mark it as suspicious. Cue email security using anomaly detection as the key answer to this evolving problem.

Anomaly detection enabled by machine learning and artificial intelligence (AI) removes the inattentional blindness that plagues human users and security teams, enabling the identification of departures from the norm, even those designed to mimic expected activity. Anomaly detection mitigates the many human cognitive biases which might prevent teams from identifying evolving threats. It does mean that security teams may sometimes be alerted to benign anomalous activity, but it also means that no threat, however novel or cleverly packaged, is waved through simply because it looks familiar: anomalies are identified and raised to the human security team.

Utilizing machine learning, especially unsupervised machine learning, mimics the benefits of human decision-making while enabling the identification of patterns and the categorization of information without the framing and biases which allow trust to be leveraged and exploited.
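As a rough illustration of the idea (the feature set below is invented and far simpler than anything a real product models), an off-the-shelf unsupervised learner such as an isolation forest can score how far a new email deviates from an organization’s baseline without ever seeing labelled “bad” examples:

```python
# Toy unsupervised anomaly detection over invented email metadata.
# Features per row: [links_in_email, sender_rarity, hour_sent, attachments]
import numpy as np
from sklearn.ensemble import IsolationForest

history = np.array([
    [1, 0.1,  9, 0], [0, 0.0, 10, 1], [2, 0.2, 14, 0],
    [1, 0.1, 11, 0], [0, 0.1, 16, 2], [1, 0.0,  9, 0],
] * 20)  # repeated to mimic a larger baseline of normal traffic

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_email = np.array([[5, 0.9, 3, 0]])     # many links, rare sender, 3 a.m.
print(model.decision_function(new_email))  # more negative = more anomalous
print(model.predict(new_email))            # -1 flags an outlier
```

Because such a model learns only what is normal, it carries no preconception of what an attack “should” look like, which is precisely what sidesteps the framing biases described above.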

For example, say a cleverly written email is sent from an address which appears to be a Microsoft affiliate, suggesting that the user needs to patch their software due to the discovery of a new vulnerability. The sender’s address appears legitimate, and news stories are circulating on major media outlets about a new Microsoft vulnerability causing organizations serious problems. The link, if clicked, forwards the user to a login page to verify their Microsoft credentials before downloading the new version of the software. After logging in, the program is available for download and takes only a few minutes to install. Whether this email was written by a person or generated by a service like ChatGPT (generative AI), acting on it would hand the threat actor(s) the user’s credentials, and downloading the software would activate malware on the device and possibly the broader network.

If we rely on users to identify this as unusual, there are plenty of evidence points reinforcing their implicit trust in Microsoft services that make them want to comply with the email rather than question it. Anomaly detection-driven email security, by comparison, would flag the unusualness of the source: the email would likely not be coming from a Microsoft-owned IP address, and the sender would be unusual for an organization which does not normally receive mail from that address. The language might indicate solicitation, an attempt to entice the user to act, and the link could be flagged because it contains a hidden redirect or tailored information the user cannot see, whether hidden beneath text like “Click Here” or behind link shortening. All of this information is present and discoverable in the phishing email, but it is often invisible to human users because of trust decisions made months or even years ago about known products and services.
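Pulling those signals together, a greatly simplified sketch of the aggregation might look like the following. The signal names, weights and threshold are invented for illustration; a real system learns and contextualizes these rather than hard-coding them.

```python
# Toy aggregation of the signals described above into a hold/deliver decision.
SIGNALS = {
    "source_ip_not_vendor_owned": 0.35,
    "sender_unknown_to_org":      0.25,
    "solicitation_language":      0.15,
    "link_target_hidden":         0.25,
}

def anomaly_score(observed: set[str]) -> float:
    """Sum the weights of every signal observed on the email."""
    return sum(w for name, w in SIGNALS.items() if name in observed)

email_signals = {"source_ip_not_vendor_owned", "sender_unknown_to_org",
                 "link_target_hidden"}
score = anomaly_score(email_signals)
print(f"score={score:.2f} -> {'hold' if score >= 0.5 else 'deliver'}")  # hold
```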

AI-driven Email Security: The Way Forward

Email security solutions employing anomaly detection are critical weapons for security teams in the fight to stay ahead of evolving threats and varied kill chains, which grow more complex year on year. The intertwining nature of technology, coupled with society’s massive reliance on it, guarantees that implicit trust will be exploited more and more, giving threat actors a variety of avenues into an organization. The changing nature of phishing and social engineering made possible by generative AI is just a drop in the ocean of the possible threats organizations face, and most will involve a trusted product or service being leveraged as an access point or attack vector. Anomaly detection and AI-driven email security are the most practical solution for security teams aiming to prevent, detect, and mitigate attacks that target users and technology by exploiting trust.

References

[1] https://www.kellogg.northwestern.edu/trust-project/videos/waytz-ep-1.aspx

[2] Rotenberg, K.J. (2018). The Psychology of Trust. Routledge.

[3] https://www.cognifit.com/gb/attention

[4] https://www.trusttech.cam.ac.uk/perspectives/technology-humanity-society-democracy/what-trust-technology-conceptual-bases-common

[5] Tyler, T.R. and Kramer, R.M. (2001). Trust in Organizations: Frontiers of Theory and Research. Thousand Oaks, CA: Sage Publications, pp. 39–49.

[6] https://link.springer.com/article/10.1007/s00426-006-0072-4

Author: Hanah Darley, Director of Threat Research


Protecting Prospects: How Darktrace Detected an Account Hijack Within Days of Deployment

28 Sep 2023

Cloud Migration Expanding the Attack Surface

Cloud migration is here to stay. Accelerated by pandemic lockdowns, the use of public cloud services continues to increase, and Gartner has forecast that worldwide public cloud spending will grow by around 20% in 2023 to reach almost USD 600 billion [1]. With more and more organizations utilizing cloud services and moving their operations to the cloud, there has been a corresponding shift in malicious activity targeting cloud-based software and services, including Microsoft 365, a prominent and oft-used Software-as-a-Service (SaaS) platform.

With the adoption and implementation of more SaaS products, the overall attack surface of an organization increases, giving malicious actors additional opportunities to exploit and compromise a network and necessitating proper controls. This increased attack surface can leave organizations open to cyber risks like cloud misconfigurations, supply chain attacks and zero-day vulnerabilities [2]. To achieve full visibility over cloud activity and prevent SaaS compromise, it is paramount for security teams to deploy sophisticated security measures that can learn an organization’s SaaS environment and detect suspicious activity at the earliest stage.

Darktrace Immediately Detects Hijacked Account

In May 2023, Darktrace observed a chain of suspicious SaaS activity on the network of a customer who was about to begin their trial of Darktrace/Cloud™ and Darktrace/Email™. Despite being deployed on the network for less than a week, Darktrace DETECT™ recognized that the legitimate SaaS account, belonging to an executive at the organization, had been hijacked. Darktrace/Email was able to provide full visibility over inbound and outbound mail and identified that the compromised account was subsequently used to launch an internal spear-phishing campaign.

Had Darktrace RESPOND™ been enabled in autonomous response mode at the time of the compromise, it would have taken swift preventative action to disrupt the account compromise and prevent the ensuing phishing attack.

Account Hijack Attack Overview

Unusual External Sources for SaaS Credentials

On May 9, 2023, Darktrace DETECT/Cloud detected the first in a series of anomalous activities performed by a Microsoft 365 user account that was indicative of compromise, namely a failed login from an external IP address located in Virginia.

Figure 1: The failed login notice, as seen in Darktrace DETECT/Cloud. The notice includes additional context about the failed login attempt to the SaaS account.

Just a few minutes later, Darktrace observed the same user credential being used to successfully login from the same unusual IP address, with multi-factor authentication (MFA) requirements satisfied.

Figure 2: The “Unusual External Source for SaaS Credential Use” model breach summary, showing the successful login to the SaaS user account (with MFA), from the rare external IP address.

A few hours after this, the user credential was once again used to login from a different city in the state of Virginia, with MFA requirements successfully met again. Around the time of this activity, the SaaS user account was also observed previewing various business-related files hosted on Microsoft SharePoint, behavior that, taken in isolation, did not appear to be out of the ordinary and could have represented legitimate activity.

The following day, May 10, however, additional login attempts were observed from two different US states, namely Texas and Florida. Darktrace understood that this activity was extremely suspicious, as it was highly improbable that the legitimate user would be able to travel over 2,500 miles in such a short period of time. Both login attempts were successful and passed MFA requirements, suggesting that the malicious actor was employing techniques to bypass MFA. Such techniques include inserting malicious infrastructure between the user and the application to intercept user credentials and session tokens, or compromising browser cookies to bypass authentication controls [3]. There have also been high-profile cases in recent years of legitimate users mistakenly (and perhaps even instinctively) accepting MFA prompts on their token or mobile device, believing the process to be legitimate despite not having performed the login themselves.
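The geographic improbability flagged here is often called an “impossible travel” check. A minimal sketch of the underlying arithmetic follows; the coordinates, timestamps and 500 mph speed threshold are illustrative assumptions, not values taken from this incident.

```python
# Sketch of an "impossible travel" check: flag consecutive logins whose
# implied speed exceeds what a human traveller could plausibly achieve.
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class Login:
    when: datetime
    lat: float
    lon: float

def miles_between(a: Login, b: Login) -> float:
    """Great-circle (haversine) distance in miles."""
    dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 3959 * 2 * asin(sqrt(h))  # Earth's radius is ~3,959 miles

def impossible_travel(prev: Login, curr: Login, max_mph: float = 500.0) -> bool:
    hours = (curr.when - prev.when).total_seconds() / 3600
    return hours > 0 and miles_between(prev, curr) / hours > max_mph

virginia = Login(datetime(2023, 5, 9, 14, 0), 37.43, -78.66)  # hypothetical
texas    = Login(datetime(2023, 5, 9, 16, 0), 31.97, -99.90)  # hypothetical
print(impossible_travel(virginia, texas))  # True: ~1,250 miles in two hours
```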

New Email Rule

On the evening of May 10, following the successful logins from multiple US states, Darktrace observed the Microsoft 365 user creating a new inbox rule, named “.”, in Microsoft Outlook from an IP address located in Florida. Threat actors are often observed naming new email rules with single characters, likely to evade detection, but also for the sake of expediency, so as not to spend additional time creating meaningful labels.

In this case, the newly created email rule included several suspicious properties, including ‘AlwaysDeleteOutlookRulesBlob’, ‘StopProcessingRules’ and ‘MoveToFolder’.

Firstly, ‘AlwaysDeleteOutlookRulesBlob’ suppresses or hides the warning messages that typically appear when modifications are made to email rules [4]. Here, the malicious actor likely applied this property to obfuscate the creation of the new email rule.

‘StopProcessingRules’ meant that any subsequent email rules created by the legitimate user would be overridden by the rule created by the malicious actor [5]. Finally, ‘MoveToFolder’ would allow the malicious actor to automatically move all outgoing emails from the “Sent” folder to the “Deleted Items” folder, further obfuscating their malicious activities [6]. These email rule properties are frequently observed during account hijackings, as they allow attackers to delete and/or forward key emails, delete evidence of exploitation, and launch phishing campaigns [7].

In this incident, the new email rule would likely have enabled the malicious actor to evade the detection of traditional security measures and achieve greater persistence using the Microsoft 365 account.
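For defenders who want to hunt for this pattern themselves, the sketch below audits inbox-rule metadata for the traits discussed above. The input shape is an assumption (e.g. rules exported from an audit log and parsed into dictionaries); the property names mirror those seen in this incident.

```python
# Sketch: audit exported inbox rules for suspicious, hijack-associated traits.
SUSPICIOUS_FOLDERS = {"Deleted Items", "RSS Feeds", "Archive"}

def audit_rule(rule: dict) -> list[str]:
    """Return human-readable findings for one parsed inbox rule."""
    findings = []
    if len(rule.get("name", "").strip()) <= 1:
        findings.append("single-character or empty rule name")
    if rule.get("AlwaysDeleteOutlookRulesBlob"):
        findings.append("suppresses rule-change warning messages")
    if rule.get("StopProcessingRules"):
        findings.append("halts evaluation of the user's later rules")
    if rule.get("MoveToFolder") in SUSPICIOUS_FOLDERS:
        findings.append(f"auto-moves mail to '{rule['MoveToFolder']}'")
    return findings

rule = {"name": ".", "AlwaysDeleteOutlookRulesBlob": True,
        "StopProcessingRules": True, "MoveToFolder": "Deleted Items"}
for finding in audit_rule(rule):
    print(f"[!] {finding}")
```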

Figure 3: Screenshot of the “New Email Rule” model breach. The Office365 properties associated with the newly modified Microsoft Outlook inbox rule, “.”, are highlighted in red.

Account Update

A few hours after the creation of the new email rule, Darktrace observed the threat actor successfully changing the Microsoft 365 user’s account password, this time from a new IP address in Texas. As a result of this action, the attacker would have locked out the legitimate user, effectively gaining full control over the SaaS account.

Figure 4: The model breach event log showing the user password and token change updates performed by the compromised SaaS account.

Phishing Emails

The compromised SaaS account was then observed sending a high volume of suspicious emails to both internal and external email addresses. Darktrace identified that the emails were attempting to impersonate the legitimate service DocuSign and contained a malicious link prompting users to click on the text “Review Document”. Upon clicking this link, users would be redirected to a site hosted on Adobe Express, namely hxxps://express.adobe[.]com/page/A9ZKVObdXhN4p/.

Adobe Express is a free service that allows users to create web pages which can be hosted and shared publicly; it is likely that the threat actor leveraged the service for use in this phishing campaign. When clicked, such links could result in a device unwittingly downloading malware hosted on the site, or direct unsuspecting users to a spoofed login page attempting to harvest user credentials by imitating legitimate companies like Microsoft.

Figure 5: Screenshot of the phishing email, containing a malicious link hidden behind the “Review Document” text. The embedded link directs to a now-defunct page that was hosted on Adobe Express.

The malicious site hosted on Adobe Express was subsequently taken down by Adobe, possibly in response to user reports. Unfortunately, platforms like this that offer free web-hosting services can easily and repeatedly be abused by malicious actors: simply by creating new pages at different addresses, actors are able to continue carrying out such phishing attacks against unsuspecting users.

In addition to the suspicious SaaS and email activity that took place between May 9 and May 10, Darktrace/Email also detected the compromised account sending and receiving suspicious emails starting on May 4, just two days after Darktrace’s initial deployment on the customer’s environment. It is probable that the SaaS account was compromised around this time, or even prior to Darktrace’s deployment on May 2, likely via a phishing and credential harvesting campaign similar to the one detailed above.

Figure 6: Event logs of the compromised SaaS user, seen here breaching several Darktrace/Email models on May 4.

Darktrace Coverage

As the customer was soon to begin their trial period, Darktrace RESPOND was set in “human confirmation” mode, meaning that any preventative RESPOND actions required manual application by the customer’s security team.

If Darktrace RESPOND had been enabled in autonomous response mode during this incident, it would have taken swift mitigative action by logging the suspicious user out of the SaaS account and disabling the account for a defined period of time, in doing so disrupting the attack at the earliest possible stage and giving the customer the necessary time to perform remediation steps.  As it was, however, these RESPOND actions were suggested to the customer’s security team for them to manually apply.

Figure 7: Example of Darktrace RESPOND notices, in response to the anomalous user activity.

Nevertheless, with Darktrace DETECT/Cloud in place, visibility over the anomalous cloud-based activities was significantly increased, enabling the swift identification of the chain of suspicious activities involved in this compromise.

In this case, the prospective customer reached out to Darktrace directly through the Ask the Expert (ATE) service. Darktrace’s expert analyst team then conducted a timely and comprehensive investigation into the suspicious activity surrounding this SaaS compromise, and shared these findings with the customer’s security team.

Conclusion

Ultimately, this example of SaaS account compromise highlights Darktrace’s unique ability to learn an organization’s digital environment and recognize activity that is deemed to be unexpected, within a matter of days.

Due to the lack of obvious or known indicators of compromise (IoCs) associated with the malicious activity in this incident, this account hijack would likely have gone unnoticed by traditional security tools that rely on a rules and signatures-based approach to threat detection. However, Darktrace’s Self-Learning AI enables it to detect the subtle deviations in a device’s behavior that could be indicative of an ongoing compromise.

Despite being newly deployed on a prospective customer’s network, Darktrace DETECT was able to identify unusual login attempts from geographically improbable locations, suspicious email rule updates, password changes, as well as the subsequent mounting of a phishing campaign, all before the customer’s trial of Darktrace had even begun.

When enabled in autonomous response mode, Darktrace RESPOND would be able to take swift preventative action against such activity as soon as it is detected, effectively shutting down the compromise and mitigating any subsequent phishing attacks.

With the full deployment of Darktrace’s suite of products, including Darktrace/Cloud and Darktrace/Email, customers can rest assured their critical data and systems are protected, even in the case of hybrid and multi-cloud environments.

Credit: Samuel Wee, Senior Analyst Consultant & Model Developer

Appendices

References

[1] https://www.gartner.com/en/newsroom/press-releases/2022-10-31-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-reach-nearly-600-billion-in-2023

[2] https://www.upguard.com/blog/saas-security-risks

[3] https://www.microsoft.com/en-us/security/blog/2022/11/16/token-tactics-how-to-prevent-detect-and-respond-to-cloud-token-theft/

[4] https://learn.microsoft.com/en-us/powershell/module/exchange/disable-inboxrule?view=exchange-ps

[5] https://learn.microsoft.com/en-us/dotnet/api/microsoft.exchange.webservices.data.ruleactions.stopprocessingrules?view=exchange-ews-api

[6] https://learn.microsoft.com/en-us/dotnet/api/microsoft.exchange.webservices.data.ruleactions.movetofolder?view=exchange-ews-api

[7] https://blog.knowbe4.com/check-your-email-rules-for-maliciousness

Darktrace Model Detections

Darktrace DETECT/Cloud and RESPOND Models Breached:

SaaS / Access / Unusual External Source for SaaS Credential Use

SaaS / Unusual Activity / Multiple Unusual External Sources for SaaS Credential

Antigena / SaaS / Antigena Unusual Activity Block (RESPOND Model)

SaaS / Compliance / New Email Rule

Antigena / SaaS / Antigena Significant Compliance Activity Block

SaaS / Compromise / Unusual Login and New Email Rule (Enhanced Monitoring Model)

Antigena / SaaS / Antigena Suspicious SaaS Activity Block (RESPOND Model)

SaaS / Compromise / SaaS Anomaly Following Anomalous Login (Enhanced Monitoring Model)

SaaS / Compromise / Unusual Login and Account Update

Antigena / SaaS / Antigena Suspicious SaaS Activity Block (RESPOND Model)

IoC – Type – Description & Confidence

hxxps://express.adobe[.]com/page/A9ZKVObdXhN4p/ – URL – Probable Phishing Page (Now Defunct)

37.19.221[.]142 – IP Address – Unusual Login Source

35.174.4[.]92 – IP Address – Unusual Login Source

MITRE ATT&CK Mapping

Tactic - Techniques

INITIAL ACCESS, PRIVILEGE ESCALATION, DEFENSE EVASION, PERSISTENCE

T1078.004 – Cloud Accounts

DISCOVERY

T1538 – Cloud Service Dashboards

CREDENTIAL ACCESS

T1539 – Steal Web Session Cookie

RESOURCE DEVELOPMENT

T1586 – Compromise Accounts

PERSISTENCE

T1137.005 – Outlook Rules

Probability yardstick used to communicate the probability that statements or explanations given are correct.
Author: Min Kim, Cyber Security Analyst


Darktrace/Email in Action: Why AI-Driven Email Security is the Best Defense Against Sustained Phishing Campaigns

26 Sep 2023

Stopping the bad while allowing the good

Since its inception, email has been regarded as one of the most important tools for businesses, revolutionizing communication and allowing global teams to become ever more connected. But while organizations rely heavily on email for their daily operations, threat actors have also recognized that the inbox is one of the easiest ways to establish an initial foothold on a network.

Today, not only are phishing campaigns and social engineering attacks becoming more prevalent, but their level of sophistication is also increasing with the help of generative AI tools that allow for the creation of hyper-realistic emails with minimal errors, effectively lowering the barrier to entry for threat actors. These diverse and stealthy attacks evade traditional email security tools based on rules and signatures, because they are less likely to contain the low-sophistication markers of a typical phishing attack.

In a situation where the sky is the limit for attackers and security teams are lean, how can teams equip themselves to tackle these threats? How can they accurately detect increasingly realistic malicious emails and neutralize these threats before it is too late? And importantly, how can email security block these threats while allowing legitimate emails to flow freely?

Instead of relying on past attack data, Darktrace’s Self-Learning AI detects the slightest deviation from a user’s pattern of life and responds autonomously to contain potential threats, stopping novel attacks in their tracks before damage is caused. It doesn’t define ‘good’ and ‘bad’ like traditional email tools; rather, it understands each user and what is normal for them – and what’s not.

This blog outlines how Darktrace/Email™ used its understanding of ‘normal’ to accurately detect and respond to a sustained phishing campaign targeting a real-life company.

Responding to a sustained phishing attack

Over the course of 24 hours, Darktrace detected multiple emails containing different subjects, all from different senders to different recipients in one organization. These emails were sent from different IP addresses, but all came from the same autonomous system number (ASN).

Figure 1: The sender freemail addresses and subject lines all followed a certain format. The subject lines followed the format of “<First name> <Last name>”, possibly to induce curiosity. The senders were all freemail accounts and contained first names, last names and some numbers, showing the attempts to make these email addresses appear legitimate.
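The shared ASN is what tied these emails together. A simplified sketch of grouping otherwise unrelated-looking emails by the ASN of their sending IPs is shown below; the IPs (drawn from reserved documentation ranges), the ASNs and the three-email threshold are all invented for illustration, and a real pipeline would resolve ASNs from a BGP or GeoIP database.

```python
# Sketch: correlate superficially unrelated emails by sender-IP ASN.
from collections import defaultdict

asn_of = {  # hypothetical, pre-resolved IP-to-ASN lookups
    "203.0.113.10": 64501, "203.0.113.77": 64501, "198.51.100.5": 64501,
    "192.0.2.44": 64999,
}

emails = [
    {"subject": "John Smith",       "sender_ip": "203.0.113.10"},
    {"subject": "Jane Doe",         "sender_ip": "203.0.113.77"},
    {"subject": "Alex Nguyen",      "sender_ip": "198.51.100.5"},
    {"subject": "Quarterly report", "sender_ip": "192.0.2.44"},
]

by_asn = defaultdict(list)
for email in emails:
    by_asn[asn_of[email["sender_ip"]]].append(email["subject"])

for asn, subjects in by_asn.items():
    if len(subjects) >= 3:  # illustrative campaign threshold
        print(f"Possible coordinated campaign from AS{asn}: {subjects}")
```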

The emails themselves had many suspicious indicators. None of the senders had any prior association with the recipients, and the emails generated a high general inducement score. This score is produced by structural and non-specific content analysis of the email; a high score indicates that the email is trying to induce the recipient into taking a particular action, which may lead to account compromise.

Additionally, each email contained a visually prominent link to a file storage service, hidden behind a shortened bit.ly link. The similarities across all these emails pointed to a sustained campaign targeting the organization by a single threat actor.

Figure 2: One of the emails is shown above. Like all the other emails, it contained a highly suspicious and shortened link.
Figure 3: In another one of the emails, the link observed had similar characteristics, but this email stands out from the rest: the sender’s name appears to be randomly generated, with its three letters sitting close to one another on the keyboard.
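Where a link is hidden behind a shortener, its true destination can often be recovered without visiting the page, by asking the shortening service for its redirect target. A minimal sketch follows; the URL shown is hypothetical.

```python
# Sketch: reveal a shortened link's destination via its redirect header,
# without following the redirect to the (potentially malicious) page.
import requests

def expand(url: str) -> str | None:
    """Return the Location header from the first redirect hop, if any."""
    resp = requests.head(url, allow_redirects=False, timeout=10)
    if resp.status_code in (301, 302, 307, 308):
        return resp.headers.get("Location")
    return None

print(expand("https://bit.ly/example-link"))  # hypothetical shortened link
```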

With all these suspicious indicators, many models were breached. This drove up the anomaly score, leading Darktrace/Email to hold the suspicious emails back from the recipients’ inboxes, safeguarding the recipients from potential account compromise and preventing the threats from taking hold in the network.

Imagining a phishing attack without Darktrace/Email

So what could have happened if Darktrace had not withheld these emails and the recipients had clicked on the links? File storage sites have a wide variety of uses, which allows attackers to be creative in their attack strategy. Had a user clicked the shortened link, the possible consequences are numerous: the link could have led to a login page designed to harvest the credentials of unsuspecting victims, or it could have hosted malware that downloaded automatically on click. With compromised credentials, threat actors could bypass MFA, change email rules, or gain privileged access to a network. Downloaded malware might act as a keylogger, enable cryptojacking, or open a back door for threat actors to return to at a later time.

Figure 4: Darktrace/Email highlights suspicious link characteristics and provides an option to preview the pages.
Figure 5: At the time of writing, both links could not be reached. This could be because they were one-time unique links created specifically for the user, which can no longer be accessed once the campaign has ceased.

The limits of traditional email security tools

Secure email gateways (SEGs) and static AI security tools may have found it challenging to detect this phishing campaign as malicious. While Darktrace was able to correlate these emails and determine that a sustained phishing campaign was taking place, the pattern among them is far too generic to capture with the specific rules set in traditional security tools. Take the freemail sender characteristic as an example: a rule blocking all emails from freemail accounts would withhold plenty of legitimate mail, since such addresses have a wide variety of uses.

With these factors in mind, these emails could have easily slipped through traditional security filters and led to a devastating impact on the organization.

Conclusion

As threat actors step up the sophistication of their attacks, prioritizing email security is more crucial than ever to preserving a safe digital environment. In response to these challenges, Darktrace/Email offers a set-and-forget solution that continuously learns and adapts to changes in the organization.

Through an evolving understanding of every environment in which it is deployed, its threat response becomes increasingly precise in neutralizing only the bad, while allowing the good – delivering email security that doesn’t come at the expense of business growth.

