The last 12 months have been a watershed moment in the public perception and adoption of AI. With the rise of generative AI systems like ChatGPT and Google Bard, AI is becoming more embedded in our everyday lives, and there is a lot of hype around what these tools can, or will, do.
In cyber security, AI is a double-edged sword. Its use by cyber-attackers is still in its infancy, but Darktrace expects that the mass availability of generative AI tools like ChatGPT will significantly enhance attackers’ capabilities by providing better tools to generate and automate human-like attacks. There are three areas where Darktrace sees potential for AI to significantly enhance the capabilities of attackers: increasing the sophistication of low-level threat actors, increasing the speed of attacks through automation and eroding trust among users.
We’ve already started to see some potential indicators of these shifts.
In April, Darktrace revealed a 135% increase in ‘novel social engineering attacks’ – email attacks that show a strong linguistic deviation from other phishing emails – from January to February 2023. The timing corresponds with the widespread adoption of ChatGPT and suggests the use of generative AI tools is providing an avenue for threat actors to craft more sophisticated and targeted attacks, at speed and scale.
Between May and July this year, our Cyber AI Research Centre observed that multistage payload attacks, in which a malicious email encourages the recipient to follow a series of steps before delivering a payload or attempting to harvest sensitive information, have increased by an average of 59% across Darktrace customers. Nearly 50,000 more of these attacks were detected by Darktrace in July than May, indicating potential use of automation, and the speed of these types of attacks will likely rise as greater automation and AI are adopted and applied by attackers.
In the same period, Darktrace has seen changes in attacks that abuse trust. While VIP impersonation – phishing emails that mimic senior executives – decreased 11%, email account takeover attempts increased by 52% and impersonation of the internal IT team increased by 19%. The changes suggest that as employees have become better attuned to the impersonation of senior executives, attackers are pivoting to impersonating IT teams to launch their attacks. While it’s common for attackers to pivot and adjust their techniques as efficacy declines, generative AI – particularly deepfakes – has the potential to disrupt this pattern in favor of attackers. Factors like increasing linguistic sophistication and highly realistic voice deepfakes could more easily be deployed to deceive employees.
These early indicators give us a glimpse of a new era of disruption and challenges for cyber security. An era where novel is the new normal.
Darktrace was built for this moment.
Darktrace began ten years ago as an AI Research Centre. We saw that AI could address an existential threat – defending people, businesses and nations from a world of constantly evolving threats. This threat is only poised to grow as AI is increasingly used by attackers. That’s why we became one of the first to apply AI to cyber security and built a completely AI native technology platform aimed at freeing the world of cyber disruption.
We built everything at Darktrace with the same philosophy of using the right AI and the right data for the job.
Most AI today is trained periodically in offline environments on huge amounts of historical training data. You give all that data to the AI, and after a few days or weeks you get a static model, which is pushed live to serve its role until the next version is ready. This is ideal for tasks like generating imagery or, in cyber security, checking against known attack patterns, but the model is static: it doesn’t learn or adapt until the next version is pushed live.
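The difference between the two training paradigms can be sketched in a few lines of code. This is a deliberately minimal, hypothetical illustration (not any vendor's actual implementation): a batch model whose baseline is frozen at training time, versus an online estimator that folds in every new observation as it arrives, here using Welford's algorithm for a running mean and variance.

```python
class BatchModel:
    """Trained offline on a data snapshot; frozen until the next release."""

    def __init__(self, training_data):
        # The baseline is computed once, at training time, and never changes.
        self.mean = sum(training_data) / len(training_data)

    def score(self, x):
        # Deviation from the baseline as it stood when the model was trained.
        return abs(x - self.mean)


class OnlineModel:
    """Updates its baseline with every observation (Welford's algorithm)."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def observe(self, x):
        # O(1) incremental update of the running mean and sum of squares.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def score(self, x):
        # Deviation from the *current* baseline, in standard deviations.
        if self.n < 2:
            return 0.0
        variance = self.m2 / (self.n - 1)
        return abs(x - self.mean) / (variance ** 0.5 or 1.0)
```

With the online model, an observation arriving five minutes after deployment already shifts the baseline; the batch model's baseline is fixed until its next retraining cycle.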
Darktrace takes a different approach from nearly everyone else in cyber security. Our distinction lies in the algorithms we use, the data we use AND, most importantly, in how the two interact.
Instead of taking your data to the AI, we take our AI to your data. Inside every single customer lies a Darktrace AI that is completely unique to them – their OWN data and AI pipeline – plugged into their enterprise and self-learning in real time from everything that happens in their digital world, including email, cloud environments, manufacturing and operational systems, and physical locations.
The pace of new threats and the sophistication of the technology, including the use of AI, now outpace any notion that a week-old view of historic cyber threats can fully protect a business – either from the new threats we’re seeing today from the sudden availability of generative AI tools, or from the threats of tomorrow: automated deepfakes where you can’t trust what you’re hearing or seeing, employees tricked into becoming inadvertent insiders, or self-evolving code designed to evade the best of those legacy defenses.
And because the increased use of AI in attacks means novel attacks will become the new normal, only Darktrace stands between those attacks succeeding and failing. We’ve seen this before, with our technology detecting, and protecting customers against, Log4j, supply chain attacks like SolarWinds, the novel phishing scams we saw during the Covid-19 lockdowns, zero-days like the Citrix NetScaler attack, novel ransomware worms such as WannaCry, and sophisticated nation-state attacks like APT35. We didn’t protect businesses because we were looking specifically for these threats; we found them because every threat, whether known or novel, accidental or malicious, human or AI driven, impacts the customer, its people and its data.
The right AI for the right job
Today we’re on our sixth generation of Darktrace AI and, as we’ve innovated and developed, we’ve built a platform of applied AI techniques and algorithms that use Darktrace’s live, tailored knowledge of a business to defend it alongside human security teams. Our focus has always been on using the right AI and the right data for the job, which is why our software uses:
- A wide range of our own self-learning methods to understand new information and decide whether something never seen before looks suspicious.
- Bayesian probabilistic methods to update and control models efficiently in real time.
- Generative and applied AI to run simulated phishing campaigns, tabletop exercises and realistic drills.
- Deep neural networks to replicate the thought processes of humans.
- Graph theory to understand the incredibly complex relationships between people, systems, organizations and supply chains.
- Offensive AI techniques, such as generative adversarial networks (GANs), to test and improve our ability to counter AI-driven attacks.
- Natural language processing and large language models to interpret and produce human-consumable output.
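To make the real-time Bayesian idea concrete, here is a toy sketch (purely illustrative, and in no way Darktrace's implementation): a Beta-Bernoulli model revises its belief about how often a binary event occurs with every single observation, with no offline retraining step.

```python
class BetaBernoulli:
    """Toy Bayesian belief about how often a binary event occurs.

    Each observation updates the posterior in O(1), so the model's
    belief is always current; no retraining cycle is needed.
    """

    def __init__(self, alpha=1.0, beta=1.0):
        # Beta(1, 1) is a uniform prior: no initial assumptions.
        self.alpha, self.beta = alpha, beta

    def update(self, event_occurred):
        # Conjugate update: just increment the matching count.
        if event_occurred:
            self.alpha += 1
        else:
            self.beta += 1

    def probability(self):
        # Posterior mean of the event rate.
        return self.alpha / (self.alpha + self.beta)
```

After observing 2 occurrences in 10 events from the uniform prior, the posterior mean is (1+2)/(1+2+1+8) = 0.25, and each further event shifts it immediately, which is the property that matters when attack patterns change faster than a retraining schedule.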
This complex platform of AI tools and techniques, all sitting within a business and focused on the customer’s data, brings a range of advantages in data privacy, explainability and data transfer costs. But its main achievement is the one we set out for ten years ago: protection that is always on and always learning, able to detect and stop the unusual, the suspicious and the novel – and, ultimately, to protect our customers from it. That’s what we’ve always done and that’s what we will continue to do, regardless of how the landscape shifts.
Based on the average change in email attacks between January and February 2023 detected across Darktrace/Email deployments, with control of outliers.
Based on the change in the average number of emails assigned this classification per 10,000 emails on each Darktrace/Email deployment in May versus July 2023 (significantly more than 1,000 deployments in total).