The state of email security and phishing attacks
Employees send and receive hundreds of emails a day to keep businesses moving. Unfortunately, it just takes one employee to interact with an undetected phishing email to potentially put an entire organization at risk from cyber disruption. Attackers know this, which is why they continue to develop and improve email phishing attacks.
This increased sophistication makes it harder than ever for traditional cyber security solutions such as Secure Email Gateways (SEGs), firewalls, and spam filters to detect and mitigate novel email threats.
When there are tell-tale signs of a threat, these solutions can identify an incoming message as suspicious: an email from an unknown sender, an unusual amount of poor spelling and grammar, or a push to respond to an unexpected but supposedly urgent request.
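As a rough illustration (not any specific vendor's implementation), the short Python sketch below scores a message against exactly these kinds of signals; the keyword list, the known-sender check, and the scoring threshold are assumptions made for the example.

```python
import re
from email.message import EmailMessage

# Illustrative only: a toy rule-based score in the spirit of traditional heuristic
# filters. The keywords, checks, and threshold are assumptions made for this sketch.
URGENCY_KEYWORDS = {"urgent", "immediately", "verify your account", "act now"}

def heuristic_score(msg: EmailMessage, known_senders: set[str]) -> int:
    """Return a rough suspicion score based on classic tell-tale signs."""
    score = 0
    sender = (msg.get("From") or "").lower()
    body_part = msg.get_body(preferencelist=("plain",))
    raw_text = body_part.get_content() if body_part else ""
    text = raw_text.lower()

    if not any(s in sender for s in known_senders):
        score += 1  # message comes from an unknown sender
    if any(keyword in text for keyword in URGENCY_KEYWORDS):
        score += 1  # pressure to act on an unexpected, "urgent" request
    if re.search(r"[!?]{3,}", raw_text) or re.search(r"\b[A-Z]{6,}\b", raw_text):
        score += 1  # crude proxy for sloppy spelling and formatting
    return score  # a caller might treat a score of 2 or more as suspicious
```

Rules like these only fire when the tell-tale signs are actually present in the message.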
Increasingly, however, those hallmarks are absent. As phishing campaigns become more advanced, attackers are consistently bypassing traditional protections, and their messages are reaching victims' inboxes unblocked, where they can exploit victims.
Darktrace email threat reporting
In its End of Year Threat Report, Darktrace analyzed over 10 million phishing emails targeting customer environments between September 1 and December 31, 2023. Our findings signal that attackers are starting to take advantage of advancements in artificial intelligence (AI), including using Generative AI tools such as Large Language Models (LLMs) to create more convincing and sophisticated phishing messages – and at scale.
LLMs and Phishing
With the right AI prompts, attackers can use these LLMs to help write convincing email messages designed to target specific countries, companies or even individuals – all without the suspicious hallmarks which are traditionally associated with standard phishing attacks. The attackers don’t even need to speak the language of the individuals or groups they’re targeting. LLMs lower language barriers for attackers; using their native tongue, they can simply ask the Generative AI to write a message in the language of their choosing.
These techniques are designed to build trust and manipulate recipients into giving up sensitive information such as user credentials, intellectual property or bank details, or coerce them into downloading malicious payloads which can be used to launch further attacks on business infrastructure. With the appropriate research, attackers can tailor messages to increase their chances of success, for example by making them resemble a legitimate company email or request.
Social engineering phishing attacks
A year ago, Darktrace shared research which found a 135% increase in ‘novel social engineering attacks’ in the first two months of 2023, corresponding with the widespread adoption of ChatGPT. These novel phishing attacks showed a strong linguistic deviation from other phishing emails, suggesting that Generative AI was already providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale.
We’ve seen this trend continue: our End of Year Threat Report found that 38% of the phishing emails analyzed used novel social engineering techniques.
Attackers are also deploying another technique to make phishing emails look more convincing – they’re making the emails themselves longer and more sophisticated.
A potential victim might be suspicious of an ‘urgent’ email that prompts them to take action without explanation, but additional context in the message lends an aura of legitimacy that is harder to question.
And threat actors know this: 28% of the phishing emails analyzed by Darktrace over the period were identified as containing a “significant” amount of text, meaning more than 1,000 characters (equating to over 200 words).
It’s a sign that attackers are innovating and bolstering their efforts to craft sophisticated phishing campaigns, potentially leveraging Generative AI tools to automate social engineering activity by creating longer, more convincing phishing emails.
QR code phishing
But this is far from the only innovative method attackers are using to bypass traditional security defenses. Among the more than 10 million emails analyzed during the reporting period, Darktrace/Email detected over 639,000 malicious QR codes within the messages.
Malicious QR codes placed within emails have become an increasingly common form of phishing attack, especially as QR codes have become a more familiar way to share links to information or product purchase pages in recent years.
Attackers are deploying QR codes because they provide a way of directing unsuspecting victims to malicious websites or download links without placing a traditional phishing URL in the body of the email.
The advantage for attackers is that while traditional security solutions actively look for and mitigate phishing URLs, a URL hidden inside a QR code image is far more difficult for them to detect.
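To make the detection challenge concrete, the sketch below shows one way a defender might surface those hidden links: it walks an email's image attachments, decodes any QR codes it finds using the pyzbar and Pillow libraries, and returns the embedded URLs. This is a minimal sketch under those assumptions, not a description of how any particular product handles QR codes.

```python
import email
from email import policy
from io import BytesIO

from PIL import Image             # pip install Pillow
from pyzbar.pyzbar import decode  # pip install pyzbar (needs the zbar system library)

def extract_qr_urls(raw_message: bytes) -> list[str]:
    """Decode QR codes in an email's image attachments and return embedded URLs."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    urls = []
    for part in msg.walk():
        if not part.get_content_type().startswith("image/"):
            continue
        payload = part.get_payload(decode=True)
        if not payload:
            continue
        for symbol in decode(Image.open(BytesIO(payload))):
            data = symbol.data.decode("utf-8", errors="replace")
            if data.lower().startswith(("http://", "https://")):
                urls.append(data)
    return urls
```

Once recovered, the URLs can be fed into the same reputation, sandboxing, or link-analysis checks that would apply to a URL pasted directly into the message body.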
Applying AI to email security
Traditional security solutions which rely heavily on previously identified malicious emails and known bad senders are struggling to identify and defend against these novel and increasingly sophisticated email threats.
But by using AI that learns the unique digital environment and patterns of each business, Darktrace/Email can recognize subtle deviations from expected email activity to determine whether any given email could represent a threat. It is then able to make highly accurate decisions to mitigate and neutralize any email attack it faces, helping to keep your organization safe from cyber disruption.
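As a rough, hypothetical illustration of the general anomaly-detection idea (not a description of Darktrace's actual models), the sketch below trains an unsupervised detector on simple per-message features drawn from an organization's own mail flow and flags messages that deviate from that learned baseline; the feature set, library, and contamination value are all assumptions of this example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest  # pip install scikit-learn

# Hypothetical per-message features learned from an organization's own mail flow:
# [hour sent, recipient count, links in body, body length, new-sender flag]
historical = np.array([
    [9, 1, 0, 350, 0],
    [14, 2, 1, 600, 0],
    [11, 1, 0, 420, 0],
    # ... in practice, many thousands of rows drawn from the business's normal activity
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(historical)

# A message sent at 3 a.m. to 40 recipients with 5 links from a never-seen sender
incoming = np.array([[3, 40, 5, 1800, 1]])
if detector.predict(incoming)[0] == -1:  # scikit-learn returns -1 for outliers
    print("Deviates from this organization's learned baseline; escalate")
```

The specific model matters less than the approach: rather than matching messages against known-bad indicators, the detector asks whether a given email is normal for this particular business.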
It’s therefore imperative that, in the battle against ever-evolving and ever more sophisticated cyber threats, defenders also embrace AI to keep businesses safe. By applying AI effectively to cyber security challenges, defenders can take a proactive approach, staying one step ahead of malicious attackers with real-time detection of, and automated response to, both known and unknown threats looking to disrupt the business via the inbox.
Darktrace/Email was recently awarded a 2024 AI Excellence Award for Machine Learning by Business Intelligence Group.
Join Darktrace on 9 April for a virtual event to explore the latest innovations needed to get ahead of the rapidly evolving threat landscape. Register today to hear more about our latest innovations coming to Darktrace’s offerings.