For several weeks now, we’ve seen how cyber-criminals have used the ongoing global health crisis as a ‘fearware’ topic to mount and spread their attacks. But as more and more of the world’s population works from home, and as consumption of digital content subsequently increases, hackers are finding novel ways to exploit the full range of human emotions through sophisticated email attacks.
From attackers creating ‘digital fake’ campaigns that offer ‘advice’ for those self-isolating, to threat-actors masquerading behind trusted websites to launch malware, the last few weeks have demonstrated how quickly cyber-criminals can adapt their techniques in the email realm. This blog presents four ways hackers are changing their tactics in light of current trends and changing behaviors, and how security teams can defend against these developments.
Increased subscriptions
With a marked increase in digital subscriptions to entertainment sites and news sources, it should come as no surprise that spammers and hackers have doubled down on using fake newsletter subscriptions in their email attacks.
For security tools such as email gateways and built-in inbox defenses that look only at historic mail-flow, a new newsletter subscription email can look very much like any other – especially when it passes all existing security tests and verifications. A brand new campaign or domain may not have been identified as malicious yet, and is therefore allowed into the recipient’s inbox.
Analyzing an email within the broader business context gives a full understanding of the circumstances in which it was received. This requires looking beyond the inbox and considering the user’s ‘pattern of life’ across every touchpoint in the digital ecosystem. In the case of a benign subscription email, the user will have recently visited the sender’s domain and requested the newsletter: there is an action ahead of receiving the email, namely requesting it.
Drawing insights from both email traffic and the user’s wider ‘pattern of life’ across the digital business, AI can tell the difference between an email newsletter that has been requested and one that has not. This distinction alone helps security teams understand when a user has voluntarily signed up for a newsletter versus when they have been targeted by a malicious attack, enabling them to respond appropriately.
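To make the idea concrete, the sketch below shows one way such a correlation could be expressed in code: checking whether the recipient visited the sender’s domain in the period before the newsletter arrived. It is a minimal illustration only; the event streams, function name, and seven-day lookback window are assumptions for the example, not a description of any vendor’s actual data model or detection logic.

```python
from datetime import datetime, timedelta
from typing import Iterable, Tuple

def was_newsletter_requested(
    email_time: datetime,
    sender_domain: str,
    recent_visits: Iterable[Tuple[datetime, str]],  # (visit time, visited domain) from browsing logs
    lookback: timedelta = timedelta(days=7),
) -> bool:
    """Return True if the recipient visited the sender's domain in the window
    before the newsletter arrived, i.e. there was an action ahead of the email."""
    window_start = email_time - lookback
    for visit_time, visited_domain in recent_visits:
        if window_start <= visit_time <= email_time and visited_domain.endswith(sender_domain):
            return True
    return False

# Example: a subscription confirmation arriving a day after the user
# browsed the publisher's site is treated as requested.
visits = [(datetime(2020, 4, 1, 9, 30), "news.example.com")]
print(was_newsletter_requested(datetime(2020, 4, 2, 8, 0), "example.com", visits))  # True
```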
Rapid adoption of remote presentation sites
As remote working sees a rapid rise, there has been a sharp increase in the number of people using presentation creation sites. Darktrace has recently picked up on a large number of attacks in which these trusted sites have been exploited to openly host malicious links. Malicious payloads are embedded within presentations, which are then shared in emails that go undetected by gateway tools.
Figure 1: Canva and Infogram, two presentation sites leveraged in this latest string of attacks
Several indicators suggest that this activity originates from a single, well-organized threat-actor or group: the rotation across presentation sites (Canva, Infogram, Axel, Piktochart, and Sway), the concentrated timeframe of the campaign (all activity took place within the space of two weeks), and the consistency of the emails themselves. The emails, observed across a large number of deployments, used a strikingly similar fake eFax notification format.
Worryingly, the emails display none of the hallmarks typically seen in phishing emails, such as spoofed or impersonated email addresses or suspicious link strings. As a result, they go undetected by products such as Microsoft’s spam and phishing tools, and are currently being delivered to recipients’ inboxes without any alteration or added safety warnings.
This activity appears to represent a significant and currently unrecognized external threat. While the novelty of the campaign let it bypass legacy tools with ease, a more nuanced understanding of the human behind the email address enabled Darktrace’s AI to identify this series of emails as highly threatening. The technology recognized that the links and domains were highly unusual, not only in the context of each recipient’s normal behavior, but also against the ‘pattern of life’ of their peer group and the organization at large.
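As an illustration of this kind of contextual check, the following sketch flags links hosted on trusted presentation sites that neither the recipient nor their peer group normally interacts with. The domain list, data structures, and heuristic are hypothetical and deliberately simplified; they are not the detection logic used in any real product.

```python
import re
from typing import List, Set

# A subset of the trusted hosting sites named above -- illustrative only.
ABUSED_HOSTS = {"canva.com", "infogram.com", "piktochart.com"}

URL_RE = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def unusual_hosted_links(email_body: str,
                         recipient_domains: Set[str],
                         peer_domains: Set[str]) -> List[str]:
    """Flag links on trusted content-hosting sites that neither the recipient
    nor their peer group has a history of interacting with."""
    flagged = []
    for match in URL_RE.finditer(email_body):
        host = match.group(1).lower()
        base = ".".join(host.split(".")[-2:])  # e.g. 'www.canva.com' -> 'canva.com'
        if base in ABUSED_HOSTS and base not in recipient_domains and base not in peer_domains:
            flagged.append(match.group(0))  # the matched link host
    return flagged

# Example: a fake eFax-style message linking to a presentation site the user
# and their peers have no history with would be surfaced for review.
body = "Your fax is ready: https://www.canva.com/design/abc123/view"
print(unusual_hosted_links(body, {"github.com"}, {"office.com"}))  # ['https://www.canva.com']
```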
An unprecedented convergence of personal and professional
While IT and compliance teams are finding ways to keep digital environments secure under remote working conditions, users are also changing their own behavior, not only in the devices and tools they access, but also in the content and files they consume and interact with. This convergence of the personal and the professional, and the resulting expansion of the attack surface, presents a new set of challenges to security teams: compromised email credentials and hijacked accounts become even harder to spot.
Securing these environments requires technology that can adapt to the new way of working without security teams having to explicitly reconfigure or rewrite rules. Digital activity has changed overnight, and will only continue to change; security tools that cannot adapt and grow with that change will quickly become obsolete. By continuously learning and evolving its understanding of every user and device, AI is being relied upon to protect workers, especially as behavior shifts towards cloud-based communication and collaboration tools.
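The sketch below illustrates the general idea of a profile that continuously absorbs new behavior rather than relying on static rules. It is a toy example, a decayed frequency count with an arbitrary decay factor, standing in for the far richer behavioral models used in practice.

```python
from collections import defaultdict

class AdaptiveProfile:
    """Toy 'pattern of life': an exponentially decayed frequency count per
    observed item (tool, domain, device). Illustrative only."""

    def __init__(self, decay: float = 0.99):
        self.decay = decay
        self.weights = defaultdict(float)

    def rarity(self, item: str) -> float:
        """Score in [0, 1]: 1.0 for never-seen items, approaching 0 for routine ones."""
        total = sum(self.weights.values())
        if total == 0:
            return 1.0
        return 1.0 - self.weights[item] / total

    def observe(self, item: str) -> None:
        """Update the profile after every event, so new working habits are
        absorbed without anyone rewriting rules."""
        for key in self.weights:
            self.weights[key] *= self.decay
        self.weights[item] += 1.0

# Example: as a user's habits shift to new collaboration tools, the profile
# follows, and genuinely novel activity still stands out.
profile = AdaptiveProfile()
for tool in ["outlook", "sharepoint", "outlook"]:
    profile.observe(tool)
print(profile.rarity("zoom"))     # 1.0 -- never seen, worth a closer look
print(profile.rarity("outlook"))  # much lower -- routine activity
```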
Adaptive AI-powered attacks
A recent Forrester report found that over half of security professionals expect AI-augmented cyber-attacks to become publicly evident within the next twelve months. One way this is likely to manifest is in the automation of well-crafted spear phishing campaigns.
As attackers use AI to understand the type of content each user interacts with, along with the emotions that drive them, malware or malicious links can be masked in content tailored to specific individuals. People who are actively seeking information on particular topics, or who are more likely to share and forward light-hearted, humorous content, may be targeted more frequently or more aggressively.
Using AI to study the target, hackers can leverage these insights at a speed and scale never seen before. With sophisticated domain spoofing, near-indistinguishable writing styles, and carefully hidden malicious links, human analysts and traditional security tools alike will stand little chance.
To prepare for this next wave of attacks, security teams themselves are relying on AI that analyzes emails in light of behaviors across email platforms and the organization at large. Rather than analyzing emails in isolation and at a single point in time, Cyber AI correlates insights over time, and continuously revisits emails many thousands of times as new evidence emerges.
While traditional defenses ask whether elements of an email have been observed in historical attacks, Antigena Email is the only solution that can reliably ask whether it would be unusual for a recipient to interact with a given email, in the context of their normal ‘pattern of life’ as well as that of their peers and the wider organization. This contextual knowledge allows the AI to make highly accurate decisions and neutralize the full range of email attacks, from ‘clean’ spoofed emails soliciting a fraudulent wire payment to sophisticated spear phishing attempts.
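As a rough illustration of revisiting emails as new evidence emerges, the sketch below re-scores previously delivered messages against the latest model of ‘normal’ and surfaces any that now look anomalous. The record structure, scoring function, and threshold are placeholders for the example, not a real product API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class EmailRecord:
    message_id: str
    features: Dict[str, float]  # e.g. link rarity, sender anomaly, peer-group deviation
    score: float = 0.0

def revisit_emails(history: List[EmailRecord],
                   score_fn: Callable[[Dict[str, float]], float],
                   threshold: float = 0.8) -> List[str]:
    """Re-score previously delivered emails with the latest anomaly model and
    return the IDs of any that now cross the alerting threshold."""
    newly_flagged = []
    for record in history:
        record.score = score_fn(record.features)
        if record.score >= threshold:
            newly_flagged.append(record.message_id)
    return newly_flagged

# Example: as the model of 'normal' updates, a message delivered yesterday
# can cross the threshold today and be actioned retroactively.
history = [EmailRecord("msg-001", {"link_rarity": 0.9, "sender_anomaly": 0.8})]
print(revisit_emails(history, lambda f: sum(f.values()) / len(f)))  # ['msg-001']
```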