As people continue to be the weak link in most organizations’ cybersecurity practices, the growing use of generative AI tools in cyber-attacks makes email, their primary communications channel, a more compelling target than ever. The risk associated with Business Email Compromise (BEC) in particular continues to rise as generative AI tools equip attackers to build and launch social engineering and phishing campaigns with greater speed, scale, and sophistication.
What is BEC?
BEC is defined in different ways, but generally refers to cyber-attacks in which attackers abuse email — and users’ trust — to trick employees into transferring funds or divulging sensitive company data.
Unlike generic phishing emails, most BEC attacks do not rely on “spray and pray” dissemination or on users’ clicking bogus links or downloading malicious attachments. Instead, modern BEC campaigns use a technique called “pretexting.”
What is pretexting?
Pretexting is a more targeted form of phishing in which the attacker invents an urgent but false situation — the pretext — that appears to require the transfer of funds or the disclosure of confidential data.
This type of attack, and therefore BEC, is dominating the email threat landscape. As reported in Verizon’s 2024 Data Breach Investigations Report, there has recently been a “clear overtaking of pretexting as a more likely social action than phishing.” The data shows that pretexting “continues to be the leading cause of cybersecurity incidents (accounting for 73% of breaches)” and is one of “the most successful ways of monetizing a breach.”
Pretexting and BEC work so well because they exploit humans’ natural inclination to trust the people and companies they know. AI compounds the risk by making it easier for attackers to mimic known entities and harder for security tools and teams – let alone unsuspecting recipients of routine emails – to tell the difference.
BEC attacks now incorporate AI
With the growing use of AI by threat actors, trends point to BEC gaining momentum as a threat vector and becoming harder to detect. By adding ingenuity, machine speed, and scale, generative AI tools like OpenAI’s ChatGPT give threat actors the ability to create more personalized, targeted, and convincing emails at scale.
In 2023, Darktrace researchers observed a 135% rise in ‘novel social engineering attacks’ across Darktrace / EMAIL customers, corresponding with the widespread adoption of ChatGPT.
Large Language Models (LLMs) like ChatGPT can draft believable messages that feel like the emails target recipients expect to receive. For example, generative AI tools can be used to send fake invoices from vendors known to be involved with well-publicized construction projects. These messages also prove harder to detect because AI automatically:
- Avoids misspellings and grammatical errors
- Creates multiple variations of email text
- Translates messages into multiple languages that read naturally
- Enables additional, more targeted tactics
AI creates a force multiplier that allows primitive mass-mail campaigns to evolve into sophisticated automated attacks. Instead of spending weeks studying the target to craft an effective email, cybercriminals might only spend an hour or two and achieve a better result.
Challenges of detecting AI-powered BEC attacks
Rules-based detections miss unknown attacks
One major challenge is that rules built from known attacks provide no basis for denying new threats. While native email security tools defend against known attacks, many modern BEC attacks use entirely novel language and can omit payloads altogether. Instead, they rely on pure social engineering or bide their time until security tools recognize the new sender as a legitimate contact.
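The gap is easy to illustrate. Below is a minimal Python sketch (the phrase list and sample emails are invented for illustration, not drawn from any real product) showing how an exact-match rule catches a known template but misses an AI-generated rewording of the same request:

```python
# Hypothetical rules-based email filter: flags messages containing
# phrases lifted from previously observed attacks.
KNOWN_BAD_PHRASES = [
    "verify your account immediately",
    "click here to claim your prize",
]

def rules_based_check(body: str) -> bool:
    """Return True if the email body matches a known-bad phrase."""
    lowered = body.lower()
    return any(phrase in lowered for phrase in KNOWN_BAD_PHRASES)

# A recycled scam template is caught...
assert rules_based_check("Please verify your account immediately.")
# ...but a novel rewording of the same request sails through,
# because no rule exists for language never seen before.
assert not rules_based_check(
    "Hi Dana, finance flagged a hold on your profile. Could you "
    "re-confirm your details before 3pm so payroll isn't delayed?"
)
```

The second message makes the same ask as the first, but shares none of its wording, which is exactly the variation generative AI produces effortlessly.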
Most defensive AI can’t keep pace with attacker innovation
Security tools might focus on the meaning of an email’s text when trying to recognize a BEC attack, but defenders still end up in a rules-and-signatures arms race. Some newer Integrated Cloud Email Security (ICES) vendors attempt to use AI defensively to improve on the flawed approach of looking only for exact matches. Employing data augmentation to identify similar-looking emails helps to a point, but not enough to outpace novel attacks built with generative AI.
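A rough Python sketch (using the standard library's difflib; the attack samples below are invented) shows the limit of similarity matching: a near-duplicate of a known attack scores high, while a freshly generated pretext shares almost no surface text with anything on file:

```python
# Similarity-based detection sketch: score incoming text against
# known attack samples with fuzzy string matching. Real ICES
# products use far richer models; this only illustrates the idea.
from difflib import SequenceMatcher

KNOWN_ATTACK_SAMPLES = [
    "urgent wire transfer needed before end of day please confirm",
]

def similarity_score(text: str) -> float:
    """Highest similarity ratio against any known attack sample."""
    return max(
        SequenceMatcher(None, text.lower(), sample).ratio()
        for sample in KNOWN_ATTACK_SAMPLES
    )

# A near-duplicate of a known attack scores high...
close = similarity_score(
    "Urgent wire transfer needed before end of day, confirm"
)
# ...but a freshly generated pretext barely resembles the sample,
# so a similarity threshold tuned to catch "close" would miss it.
novel = similarity_score(
    "Hi, the vendor updated their bank details for invoice 118"
)
```

Augmenting the sample set widens the net around known attacks, but a novel pretext sits outside that net entirely.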
What tools can stop BEC?
A modern defense-in-depth strategy must use AI to counter the impact of AI in the hands of attackers. As found in our 2024 State of AI Cybersecurity Report, 96% of survey participants believe AI-driven security solutions are a must-have for countering AI-powered threats.
However, not all AI tools are the same. Since BEC attacks continue to change, defensive AI-powered tools should focus less on learning what attacks look like, and more on learning normal behavior for the business. By understanding expected behavior on the company’s side, the security solution will be able to recognize anomalous and therefore suspicious activity, regardless of the word choice or payload type.
To combat the speed and scale of new attacks, an AI-led BEC defense must be able to spot novel threats the first time they appear.
Darktrace / EMAIL™ can do that.
Self-Learning AI builds profiles for every email user, including their relationships, tone and sentiment, content, and link sharing patterns. Rich context helps in understanding how people communicate and identifying deviations from the normal routine to determine what does and does not belong in an individual’s inbox and outbox.
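As a toy illustration only (this is not Darktrace's actual model, and every field name is assumed), per-user baselining can be sketched in Python: the mailbox learns which senders are routine, so a first-time sender scores as anomalous no matter how plausible its message text reads:

```python
# Toy per-mailbox behavioral baseline: learn who a user normally
# hears from, then score senders by how far they deviate from
# that baseline. Illustrative only.
from collections import Counter

class MailboxBaseline:
    def __init__(self) -> None:
        self.sender_counts: Counter = Counter()

    def observe(self, sender: str) -> None:
        """Record one routine email during the learning period."""
        self.sender_counts[sender] += 1

    def anomaly_score(self, sender: str) -> float:
        """1.0 = never seen before; approaches 0.0 for frequent contacts."""
        total = sum(self.sender_counts.values())
        if total == 0:
            return 1.0
        return 1.0 - (self.sender_counts[sender] / total)

# Learning period: 50 routine emails from two known contacts.
baseline = MailboxBaseline()
for s in ["alice@vendor.com"] * 40 + ["bob@partner.com"] * 10:
    baseline.observe(s)

# A first-time sender is maximally anomalous, regardless of
# how convincing the email text is. (Lookalike domain invented.)
score = baseline.anomaly_score("ceo@vend0r-payments.com")
```

A real system layers many such baselines (tone, timing, link-sharing habits, internal vs. external flows), but the principle is the same: the anomaly is in the behavior, not the wording.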
Other email security vendors may claim to use behavioral AI and unsupervised machine learning in their products, but their AI is still pre-trained on historical data or signatures to recognize malicious activity, rather than demonstrating a true learning process. Darktrace’s Self-Learning AI truly learns from the organization in which it is installed, allowing it to detect unknown and novel vectors that other security tools have not yet been trained on.
Because Darktrace understands the humans behind email communications rather than relying on knowledge of past attacks, Darktrace / EMAIL can stop the most sophisticated and evolving email security risks. It enhances your native email security by applying business-centric behavioral anomaly detection across inbound, outbound, and lateral messages in both email and Teams.
This unique approach quickly identifies sophisticated threats like BEC, ransomware, phishing, and supply chain attacks without duplicating existing capabilities or relying on traditional rules, signatures, and payload analysis.
The power of Darktrace’s AI can be seen in its speed and adaptability: Darktrace / EMAIL blocks the most novel threats up to 13 days faster than traditional security tools.
Learn more about AI-led BEC threats, how these threats extend beyond the inbox, and how organizations can adopt defensive AI to outpace attacker innovation in the white paper “Beyond the Inbox: A Guide to Preventing Business Email Compromise.”