Email communications in the era of generative AI
Today’s cyber security headlines are dominated by serious breaches that cripple critical national infrastructure providers, wreak geopolitical havoc through nation-state espionage, and impose paralyzing financial demands through extortion and ransomware. But how do these breaches occur in the first place, and why do they keep happening?
Social engineering, specifically malicious cyber campaigns delivered via email, remains the primary source of an organization’s vulnerability to attack. A recent Darktrace survey of 6,700 cross-industry professionals across the UK, US, Australia, France, Germany, and the Netherlands found that almost a third (30%) had fallen victim to a fraudulent email. And the risk keeps climbing: 70% of respondents have noticed an increase in the frequency of malicious emails in the last six months.
At the end of last year, I wrote about how widespread accessibility to generative AI has compounded the problem, creating likely efficiency gains for skilled threat actors launching email attacks. Just a few weeks ago, we published research showing that while the number of email attacks across our own customer base has remained steady since ChatGPT’s release, the share that rely on tricking victims into clicking malicious links has declined, and linguistic complexity (including text volume, punctuation, and sentence length) has increased. This indicates that cyber-criminals may be redirecting their focus toward crafting more sophisticated social engineering scams, delivered over email, that exploit user trust.
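To make those indicators concrete, here is a minimal sketch of how such linguistic-complexity signals could be computed from a plain-text email body. The specific metrics shown (character volume, punctuation density, average sentence length) are illustrative stand-ins for the indicators described above, not Darktrace’s actual feature set.

```python
import re
import string

def linguistic_complexity(body: str) -> dict:
    """Illustrative complexity indicators for a plain-text email body."""
    sentences = [s for s in re.split(r"[.!?]+", body) if s.strip()]
    words = body.split()
    punctuation_count = sum(1 for ch in body if ch in string.punctuation)
    return {
        "text_volume_chars": len(body),
        "word_count": len(words),
        "punctuation_density": punctuation_count / max(len(body), 1),
        "avg_sentence_length_words": len(words) / max(len(sentences), 1),
    }

# Compare a terse lure with a longer, more elaborate message.
print(linguistic_complexity("Click here to claim your prize!"))
print(linguistic_complexity(
    "Dear colleague, following our discussion last week, I have attached the "
    "revised invoice; please review the payment details, confirm the amounts, "
    "and let me know if anything needs correcting before Friday."
))
```

Tracking metrics like these over a large volume of inbound mail is one simple way to observe the shift toward longer, more polished messages described above.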
Our latest findings, from Darktrace Research, reveal a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email customers from January to February 2023, coinciding with the widespread adoption of ChatGPT. A novel social engineering phishing email is one that shows a strong linguistic deviation, semantically and syntactically, from other phishing emails. The trend suggests that generative AI, such as ChatGPT, is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale.
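As a rough illustration of what ‘linguistic deviation’ can mean in practice, the sketch below scores a candidate email by how dissimilar its wording is from a small reference corpus of previously seen phishing emails, using TF-IDF and cosine similarity. This is a deliberately simplified stand-in for the semantic and syntactic comparison described above; the corpus, scoring function, and example text are assumptions for illustration only.

```python
# Minimal sketch: flag an email as "novel" when its wording deviates strongly
# from previously seen phishing emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_phishing = [
    "Your account has been suspended. Click the link to verify your password.",
    "You have won a prize! Open the attachment to claim your reward now.",
    "Urgent: confirm your banking details immediately or lose access.",
]

def novelty_score(candidate: str, corpus: list[str]) -> float:
    """Return 1 - max cosine similarity to the corpus (higher = more novel)."""
    vectorizer = TfidfVectorizer().fit(corpus)
    corpus_matrix = vectorizer.transform(corpus)
    candidate_vector = vectorizer.transform([candidate])
    similarities = cosine_similarity(candidate_vector, corpus_matrix)[0]
    return 1.0 - float(similarities.max())

email = (
    "Hi Sam, as agreed on yesterday's call I've re-issued the supplier "
    "invoice with the updated account details; could you approve the "
    "transfer before the quarter closes?"
)
print(f"novelty score: {novelty_score(email, known_phishing):.2f}")
```

A convincingly conversational lure like the example scores as highly novel precisely because it shares so little surface language with the crude phishing templates that signature-based tools were trained to recognize.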
Organizations are already well aware of the danger novel threats pose: globally, 82% of respondents are concerned that hackers can use generative AI to create scam emails that are indistinguishable from genuine communication.
Navigating the trust deficit
Employees are being exploited continuously through their primary means of communication: email. Given that humans, and our psychology, are at the heart of the problem, organizations have scrambled to develop awareness and training programs, delivered routinely to employees, in an attempt to help them defend themselves and their organization. While a defense-in-depth approach is generally good practice, the value of security awareness training that teaches employees to spot social engineering attempts delivered in text is diminishing rapidly. The problem is especially evident now that we find ourselves in a world where ‘bad’ emails look all but indistinguishable from ‘good’ emails to the human eye.
Overemphasizing the responsibility of employees poses a business challenge as well as a security problem: a growing trust deficit. In times of economic uncertainty, business success is predicated on employee productivity, creativity, and efficiency. All three are stifled when employees treat the communication they receive, both internal and external, with forensic suspicion and mistrust.
As generative AI becomes more mainstream – across images, audio, video, and text – we anticipate that this erosion of trust in digital communication will only continue. What happens to the workplace if an employee is left questioning their own perception when communicating with their manager or a colleague, who they can see and hear over a video conference call, but who may be an entirely fictitious creation?
The way forward
The future lies in a partnership between artificial intelligence and human beings, where the algorithms are responsible for determining whether communication is malicious or benign, taking that burden of responsibility off the human. In turn, humans can pursue the activities that drive productivity and performance. Current training and awareness programs, coupled with traditional email security technologies that assume knowledge of past attacks will help predict future campaigns, are not sustainable in this rapidly evolving landscape.
Our research reveals that traditional email security tools that rely on knowledge of past threats take an average of thirteen days from an attack being launched on a victim to that attack being detected, leaving defenders vulnerable for almost two weeks if they rely solely on those tools. When this statistic is set against the 135% increase we have seen in novel social engineering campaigns, the inefficiency of ‘attack-centric’ email security becomes plain. If we cannot define entirely new and unique attacks in advance, how can we ever hope to recognize them based on past attacks?
A deep understanding of ‘you’ is needed in the face of these radical changes to the email threat landscape. Instead of trying to predict attacks, defenders need an understanding of their employees’ behavior, derived from their email activity, to build a pattern of life for every email user: their relationships, tone, sentiment, and hundreds of other data points. By leveraging AI to combat email security threats, we not only reduce risk but revitalize organizational trust and contribute to business outcomes.
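The sketch below gives a rough sense of what a per-user ‘pattern of life’ could look like, assuming access to historical message metadata such as sender and time of day. The two features used here (known correspondents and typical sending hours) and the weightings are illustrative assumptions; a real profile would span relationships, tone, sentiment, and many more signals.

```python
# Minimal sketch of a per-user behavioral baseline for inbound email.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    known_senders: Counter = field(default_factory=Counter)
    active_hours: Counter = field(default_factory=Counter)

    def observe(self, sender: str, hour: int) -> None:
        """Update the baseline with one historical message."""
        self.known_senders[sender] += 1
        self.active_hours[hour] += 1

    def anomaly_score(self, sender: str, hour: int) -> float:
        """Crude 0-1 score: how far a new message falls outside the baseline."""
        sender_total = sum(self.known_senders.values()) or 1
        hour_total = sum(self.active_hours.values()) or 1
        sender_familiarity = min(self.known_senders[sender] / sender_total * 10, 1.0)
        hour_familiarity = min(self.active_hours[hour] / hour_total * 10, 1.0)
        return 1.0 - (0.7 * sender_familiarity + 0.3 * hour_familiarity)

profile = UserProfile()
for sender, hour in [("alice@corp.example", 9), ("alice@corp.example", 10),
                     ("bob@corp.example", 14)] * 20:
    profile.observe(sender, hour)

print(profile.anomaly_score("alice@corp.example", 9))    # low: familiar sender and hour
print(profile.anomaly_score("payroll@evil.example", 3))  # high: unknown sender, odd hour
```

Because the score is anchored to what is normal for this particular user rather than to signatures of past attacks, an email can stand out even if nothing like it has ever been seen before.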
Ironically, generative AI may be worsening the social engineering challenge, but AI that knows you could be the parry.