Business Email Compromise (BEC) in the Age of AI

September 30, 2024
Generative AI tools have increased the risk of BEC, and traditional cybersecurity defenses struggle to stay ahead of the growing speed, scale, and sophistication of attacks. Only multilayered, defense-in-depth strategies can counter the AI-powered BEC threat.

As people continue to be the weak link in most organizations’ cybersecurity practices, the growing use of generative AI tools in cyber-attacks makes email, their primary communications channel, a more compelling target than ever. The risk associated with Business Email Compromise (BEC) in particular continues to rise as generative AI tools equip attackers to build and launch social engineering and phishing campaigns with greater speed, scale, and sophistication.

What is BEC?

BEC is defined in different ways, but generally refers to cyber-attacks in which attackers abuse email — and users’ trust — to trick employees into transferring funds or divulging sensitive company data.

Unlike generic phishing emails, most BEC attacks do not rely on “spray and pray” dissemination or on users’ clicking bogus links or downloading malicious attachments. Instead, modern BEC campaigns use a technique called “pretexting.”

What is pretexting?

Pretexting is a more targeted form of phishing in which the attacker describes an urgent but false situation — the pretext — that requires the transfer of funds or the disclosure of confidential data.

This type of attack, and therefore BEC, is dominating the email threat landscape. As reported in Verizon’s 2024 Data Breach Investigations Report, there has recently been a “clear overtaking of pretexting as a more likely social action than phishing.” The data shows pretexting “continues to be the leading cause of cybersecurity incidents (accounting for 73% of breaches)” and remains one of “the most successful ways of monetizing a breach.”

Pretexting and BEC work so well because they exploit humans’ natural inclination to trust the people and companies they know. AI compounds the risk by making it easier for attackers to mimic known entities and harder for security tools and teams – let alone unsuspecting recipients of routine emails – to tell the difference.

BEC attacks now incorporate AI

With the growing use of AI by threat actors, trends point to BEC gaining momentum as a threat vector and becoming harder to detect. By adding ingenuity, machine speed, and scale, generative AI tools like OpenAI’s ChatGPT give threat actors the ability to create more personalized, targeted, and convincing emails at scale.

In 2023, Darktrace researchers observed a 135% rise in ‘novel social engineering attacks’ across Darktrace / EMAIL customers, corresponding with the widespread adoption of ChatGPT.

Large Language Models (LLMs) like ChatGPT can draft believable messages that feel like emails that target recipients expect to receive. For example, generative AI tools can be used to send fake invoices from vendors known to be involved with well-publicized construction projects. These messages also prove harder to detect as AI automatically:

  • Avoids misspellings and grammatical errors
  • Creates multiple variations of email text  
  • Translates messages so they read naturally in multiple languages
  • Enables additional, more targeted tactics

AI creates a force multiplier that allows primitive mass-mail campaigns to evolve into sophisticated automated attacks. Instead of spending weeks studying the target to craft an effective email, cybercriminals might only spend an hour or two and achieve a better result.  

Challenges of detecting AI-powered BEC attacks

Rules-based detections miss unknown attacks

One major challenge comes from the fact that rules based on known attacks have no basis to deny new threats. While native email security tools defend against known attacks, many modern BEC attacks use entirely novel language and can omit payloads altogether. Instead, they rely on pure social engineering or bide their time until security tools recognize the new sender as a legitimate contact.  

Most defensive AI can’t keep pace with attacker innovation

Security tools might focus on the meaning of an email’s text in trying to recognize a BEC attack, but defenders still end up in a rules-and-signatures rat race. Some newer Integrated Cloud Email Security (ICES) vendors attempt to use AI defensively to improve on the flawed approach of only looking for exact matches. Employing data augmentation to identify similar-looking emails helps to a point, but not enough to outpace novel attacks built with generative AI.
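To see why similarity matching struggles, consider a minimal sketch of the approach. It assumes a simple word-level Jaccard similarity against a list of hypothetical known lures; real ICES products use far richer models, but the limitation is the same: a genuinely novel, AI-written lure shares little surface text with anything previously seen.

```python
# Minimal sketch of similarity-based matching against known phishing text,
# assuming a simple Jaccard similarity over word tokens. Real products use
# far richer models, but the limitation is the same: a genuinely novel,
# AI-written lure shares little surface text with anything seen before.

def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

# Hypothetical examples of previously observed lure text.
KNOWN_LURES = [
    "your invoice is overdue please wire payment today",
    "urgent: update your payroll details before friday",
]

def looks_like_known_lure(email_body: str, threshold: float = 0.6) -> bool:
    return any(jaccard(email_body, lure) >= threshold for lure in KNOWN_LURES)

# A lightly reworded variant is still caught...
print(looks_like_known_lure("your invoice is overdue, please wire the payment today"))  # True
# ...but a novel lure with the same intent is not.
print(looks_like_known_lure("Hi Sam, the Riverside project vendor needs the retainer settled by 3pm"))  # False
```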

What tools can stop BEC?

A modern defense-in-depth strategy must use AI to counter the impact of AI in the hands of attackers. As found in our 2024 State of AI Cybersecurity Report, 96% of survey participants believe AI-driven security solutions are a must-have for countering AI-powered threats.

However, not all AI tools are the same. Since BEC attacks continue to change, defensive AI-powered tools should focus less on learning what attacks look like, and more on learning normal behavior for the business. By understanding expected behavior on the company’s side, the security solution will be able to recognize anomalous and therefore suspicious activity, regardless of the word choice or payload type.  
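As an illustration of that difference, the sketch below trains an unsupervised anomaly detector only on an organization’s own mail metadata and then scores a new message against that baseline. The features, library choice (scikit-learn’s IsolationForest), and numbers are all assumptions made for the example, not a description of Darktrace’s implementation.

```python
# Sketch of anomaly detection trained only on an organization's own mail flow,
# never on examples of attacks. The features (hour sent, recipient count,
# links per email, body length), numbers, and model choice are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic: business-hours mail, few links, modest length.
normal_mail = np.column_stack([
    rng.integers(8, 18, 2000),            # hour of day
    rng.integers(1, 5, 2000),             # recipient count
    rng.poisson(1, 2000),                 # links per email
    rng.normal(800, 300, 2000).clip(50),  # body length in characters
])

baseline = IsolationForest(contamination=0.01, random_state=0).fit(normal_mail)

# A 3 a.m. message with several links and a terse body is typically flagged
# as out of character even though no signature or known-bad match exists.
suspect = np.array([[3, 1, 4, 120]])
print(baseline.predict(suspect))        # -1 indicates an anomaly
print(baseline.score_samples(suspect))  # lower score = more anomalous
```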

To combat the speed and scale of new attacks, an AI-led BEC defense should spot novel threats.

Darktrace / EMAIL™ can do that.  

Self-Learning AI builds profiles for every email user, including their relationships, tone and sentiment, content, and link sharing patterns. Rich context helps in understanding how people communicate and identifying deviations from the normal routine to determine what does and does not belong in an individual’s inbox and outbox.  
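A toy illustration of what such a per-user profile might contain follows; the fields and checks are assumptions chosen to mirror the signals mentioned above (correspondents, timing, link-sharing), not Darktrace’s actual data model.

```python
# Illustrative per-user profile covering a few of the signals described above:
# who the user normally corresponds with, when they send, and which link
# domains they share. Field names are assumptions, not Darktrace's data model.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class MailboxProfile:
    correspondents: Counter = field(default_factory=Counter)
    send_hours: Counter = field(default_factory=Counter)
    link_domains: Counter = field(default_factory=Counter)

    def update(self, sender: str, hour: int, domains: list[str]) -> None:
        self.correspondents[sender] += 1
        self.send_hours[hour] += 1
        self.link_domains.update(domains)

    def unusual_signals(self, sender: str, hour: int, domains: list[str]) -> list[str]:
        signals = []
        if self.correspondents[sender] == 0:
            signals.append("new correspondent")
        if self.send_hours[hour] == 0:
            signals.append("unusual time of day")
        signals += [f"never-seen link domain: {d}" for d in domains if self.link_domains[d] == 0]
        return signals

profile = MailboxProfile()
profile.update("supplier@example.com", 10, ["example.com"])
print(profile.unusual_signals("supplier@example.com", 3, ["files-share.example.net"]))
```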

Other email security vendors may claim to use behavioral AI and unsupervised machine learning in their products, but their AI is still pre-trained with historical data or signatures to recognize malicious activity, rather than demonstrating a true learning process. Darktrace’s Self-Learning AI truly learns from the organization in which it is installed, allowing it to detect unknown and novel vectors that other security tools have not yet been trained on.

Because Darktrace understands the human behind email communications, rather than relying on knowledge of past attacks, Darktrace / EMAIL can stop the most sophisticated and evolving email security risks. It enhances your native email security by leveraging business-centric behavioral anomaly detection across inbound, outbound, and lateral messages in both email and Teams.

This unique approach quickly identifies sophisticated threats like BEC, ransomware, phishing, and supply chain attacks without duplicating existing capabilities or relying on traditional rules, signatures, and payload analysis.  

The power of Darktrace’s AI can be seen in its speed and adaptability: Darktrace / EMAIL blocks the most novel threats up to 13 days faster than traditional security tools.

Learn more about AI-led BEC threats, how these threats extend beyond the inbox, and how organizations can adopt defensive AI to outpace attacker innovation in the white paper “Beyond the Inbox: A Guide to Preventing Business Email Compromise.”

Author
Carlos Gray
Product Manager

Carlos Gonzalez Gray is a Product Marketing Manager at Darktrace, based in the Madrid Office. As an email security Subject Matter Expert he collaborates with the global product team to align each product with the company’s ethos and ensures Darktrace are continuously pushing the boundaries of innovation. His prior role at Darktrace was in Sales Engineering, leading the Iberian team and specializing in both the email and OT sectors. Additionally, his prior experience as a consultant to IBEX 35 companies in Spain has made him well-versed in compliance, auditing, and data privacy. Carlos holds an Honors BA in Political Science and a Masters in Cybersecurity from IE University.


How Darktrace won an email security trial by learning the business, not the breach

October 10, 2024

Recently, Darktrace ran a customer trial of our email security product for a leading European infrastructure operator looking to upgrade its email protection.

During this prospective customer trial, Darktrace encountered several security incidents that penetrated existing security layers. Two of these incidents were Business Email Compromise (BEC) attacks, which we’re going to take a closer look at here.  

Darktrace was deployed for a trial at the same time as two other email security vendors, which were also being evaluated by the prospective customer. Darktrace’s superior detection of threats in this trial laid the groundwork for the company to choose our product.

Let’s dig into some of the elements of this Darktrace tech win and how they came to light during this trial.

Why truly intelligent AI starts learning from scratch

Darktrace’s detection capabilities are powered by true unsupervised machine learning, which detects anomalous activity from its ever-evolving understanding of normal for every unique environment. Consequently, it learns every business from the beginning, training on an organization’s data to understand normal for its users, devices, assets and the millions of connections between them.  

This learning period takes around a week, during which the AI hones its understanding of the business to a precise degree. At this stage, the system may produce some noise or lack precision, but this is a testament to our unsupervised machine learning. Unlike solutions that promise faster results by relying on preset assumptions, our AI takes the necessary time to learn from scratch, ensuring a deeper understanding and increasingly accurate detection over time.
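A minimal sketch of how such a learning period could be enforced is shown below. The one-week window comes from the description above; the z-score baseline and the single-number event are assumptions made purely for illustration.

```python
# Sketch of a one-week learning period: the detector only observes at first,
# then scores new events against the baseline it built. The event here is a
# single number purely for illustration; real baselines are far richer.
import statistics
import time

LEARNING_SECONDS = 7 * 24 * 3600  # roughly the week described above

class WarmupDetector:
    def __init__(self, started_at: float | None = None):
        self.started_at = time.time() if started_at is None else started_at
        self.observed: list[float] = []
        self.mean: float | None = None
        self.stdev: float = 1.0

    def handle_event(self, value: float, now: float | None = None):
        now = time.time() if now is None else now
        if now - self.started_at < LEARNING_SECONDS:
            self.observed.append(value)   # learning only: no verdicts yet
            return None
        if self.mean is None:             # learning period has just ended
            self.mean = statistics.mean(self.observed)
            self.stdev = statistics.stdev(self.observed) or 1.0
        return abs(value - self.mean) / self.stdev  # simple anomaly score
```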

Real threats detected by Darktrace

Attack 1: Supply chain attack

BEC and supply chain attacks are notoriously difficult to detect, as they take advantage of established, trusted senders.  

This attack came from a legitimate server via a known supplier with which the prospective customer had active and ongoing communication. Using the compromised account, the attacker didn’t just send out randomized spam: they crafted four sophisticated social engineering emails aimed at soliciting users to click on a link, tapping directly into existing conversations. Darktrace / EMAIL was configured in passive mode during this trial; otherwise it would have held the emails before they arrived in the inbox. Luckily, in this instance one user reported the email to the CISO before any other users clicked the link. Upon investigation, the link was found to lead to ransomware with a timed detonation.

Darktrace was the only vendor that caught any of these four emails. Our unique behavioral AI approach enables Darktrace / EMAIL to protect customers from even the most sophisticated attacks that abuse prior trust and relationships.

How did Darktrace catch this attack that other vendors missed?

With traditional email security, security teams have been obliged to allowlist entire organizations in order to eliminate false positives, on the premise that it is easier to make a broad decision based on a known domain and accept the potential risk of a supply chain attack.

By contrast, Darktrace adopts a zero trust mentality, analyzing every email to understand whether communication that has previously been safe remains safe. That’s why Darktrace is uniquely positioned to detect BEC, based on its deep learning of internal and external users. Because it creates individual profiles for every account, group and business composed of multiple signals, it can detect deviations in their communication patterns based on the context and content of each message. We think of this as the ‘self-learning’ vs ‘learning the breach’ differentiator.
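The contrast can be sketched in a few lines. The allowlist shortcut and the per-message behavioral check below are illustrative only; the domain name and threshold are invented for the example.

```python
# Contrast sketch (illustrative only): a domain allowlist makes one broad
# decision per sender domain, while a zero-trust check scores every message
# against that specific sender's own history.
TRUSTED_DOMAINS = {"known-supplier.example"}  # hypothetical allowlist entry

def allowlist_verdict(sender_domain: str) -> str:
    # Traditional shortcut: known domain => deliver, accepting supply chain risk.
    return "deliver" if sender_domain in TRUSTED_DOMAINS else "inspect"

def zero_trust_verdict(per_sender_anomaly: float, threshold: float = 0.8) -> str:
    # Behavioral check: even a trusted supplier's email is held if this
    # particular message is out of character for that sender.
    return "hold" if per_sender_anomaly >= threshold else "deliver"

print(allowlist_verdict("known-supplier.example"))  # "deliver", no questions asked
print(zero_trust_verdict(0.97))                     # "hold", despite the trusted domain
```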

Fig 1: Darktrace’s analysis of one of the four malicious emails sent by the trusted supplier, assigning it an anomaly score of 100 despite the sender being a known correspondent with a known domain relationship and a moderate mailing history.

If set to autonomous mode, where it can apply actions, Darktrace / EMAIL would have quarantined all four emails. Using machine learning indicators such as ‘Inducement Shift’ and ‘General Behavioral Anomaly’, it deemed the four emails ‘Out of Character’. It also identified the link as highly likely to be phishing, based purely on its context. These indicators are critical because the link itself belonged to a widely used legitimate domain, leveraging that domain’s established internet reputation to appear safe.
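The indicator names above come from this incident; how such indicators might be combined into a verdict is sketched below, with weights, scores, and threshold invented purely to show the shape of the decision.

```python
# Illustrative combination of indicators into a verdict. The indicator names
# mirror the ones mentioned above; the weights, scores, and threshold are
# invented for the sketch.
INDICATOR_WEIGHTS = {
    "Inducement Shift": 0.4,
    "General Behavioral Anomaly": 0.4,
    "Out of Character": 0.2,
}

def overall_anomaly(indicator_scores: dict[str, float]) -> float:
    return sum(INDICATOR_WEIGHTS.get(name, 0.0) * score for name, score in indicator_scores.items())

def action(indicator_scores: dict[str, float], hold_threshold: float = 0.7) -> str:
    # In autonomous mode a high combined score would quarantine the email;
    # in passive mode the same verdict is only reported.
    return "quarantine" if overall_anomaly(indicator_scores) >= hold_threshold else "deliver"

print(action({"Inducement Shift": 1.0, "General Behavioral Anomaly": 0.9, "Out of Character": 1.0}))
```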

Around an hour later, the supplier regained control of the account and sent a legitimate email alerting a wide distribution list to the phishing emails. Darktrace was able to distinguish the earlier malicious emails from the new legitimate ones and allowed the latter through. Compared to other vendors, whose static understanding of what is malicious must be updated (in cases like this, once a supplier is de-compromised), Darktrace’s deep understanding of external entities enables far greater nuance and precision in separating good from bad.

Fig 2: Darktrace let through four emails (subject line: Virus E-Mail) from the supplier once they had regained control of the compromised account, with a limited anomaly score despite having held the previous malicious emails. If any actions had been taken a red icon would show on the right-hand side – in this instance Darktrace did not take action and let the emails through.

Attack 2: Microsoft 365 account takeover

As part of building behavioral profiles of every email user, Darktrace analyzes their wider account activity. Account activity, such as unusual login patterns and administrative activity, is a key variable to detect account compromise before malicious activity occurs, but it also feeds into Darktrace’s understanding of which emails should belong in every user’s inbox.  

When the customer experienced an account compromise on day two of the trial, Darktrace began an investigation and was able to provide the full breakdown and scope of the incident.

The account was compromised via an email, which Darktrace would have blocked if it had been deployed autonomously at the time. Once the account had been compromised, detection details included:

  • Unusual Login and Account Update
  • Multiple Unusual External Sources for SaaS Credential
  • Unusual Activity Block
  • Login From Rare Endpoint While User is Active
Fig 3: Darktrace flagged these indicators of compromise, which deviated from normal behavior for the user in question, signaling an account takeover.

With Darktrace / EMAIL, every user is analyzed for behavioral signals including authentication and configuration activity. Here, the unusual login, credential input, and rare endpoint were all clear signals of a compromised account, contextualized against what is normal for that employee. Darktrace isn’t looking at email security merely from the perspective of the inbox: it constantly re-evaluates the identity of each individual, group, and organization (as defined by their behavioral signals) to determine precisely what belongs in the inbox and what doesn’t.
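A simplified illustration of that kind of login check follows. The field names, baseline contents, and example addresses are assumptions for the sketch; a real system would weigh many more signals against a learned baseline.

```python
# Illustrative account-takeover check built from the kinds of login signals
# described above. Field names, baseline contents, and the example addresses
# are assumptions for the sketch.
def login_anomaly_signals(event: dict, baseline: dict) -> list[str]:
    signals = []
    if event["source_ip"] not in baseline["known_ips"]:
        signals.append("unusual external source for credential use")
    if event["endpoint"] not in baseline["known_endpoints"]:
        signals.append("login from rare endpoint")
    if event["hour"] not in baseline["active_hours"]:
        signals.append("login outside the user's normal active hours")
    return signals

baseline = {
    "known_ips": {"203.0.113.10"},      # documentation-range example IP
    "known_endpoints": {"LAPTOP-ANNA"},
    "active_hours": set(range(7, 19)),
}
event = {"source_ip": "198.51.100.77", "endpoint": "WIN-UNKNOWN", "hour": 2}
print(login_anomaly_signals(event, baseline))  # all three signals fire
```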

In this instance, Darktrace / EMAIL would have blocked the incident were it not deployed in passive mode. In the initial intrusion it would have blocked the compromising email. And once the account was compromised, it would have taken direct blocking actions on the account based on the anomalous activity it detected, providing an extra layer of defense beyond the inbox.  

Account takeover protection is always part of Darktrace / EMAIL, which can be extended to fully cover Microsoft 365 SaaS with Darktrace / IDENTITY. By bringing SaaS activity into scope, security teams also benefit from an extended set of use cases including compliance and resource management.

Why this customer committed to Darktrace / EMAIL

“Darktrace was the only AI vendor that showed learning,” – CISO, Trial Customer

Throughout this trial, Darktrace evolved its understanding of the trial customer’s business and its email users. It identified attacks that other vendors did not, while allowing safe emails through. Furthermore, the CISO explicitly cited Darktrace as the only technology that demonstrated autonomous learning. As well as catching threats that other vendors missed, the CISO saw maturity in areas such as how Darktrace dealt with non-productive mail and business-as-usual emails, without any user input. Because of the nature of unsupervised machine learning, Darktrace’s understanding of right and wrong will never be static or complete; it will continue to revise that understanding and adapt to the changing business and communications landscape.

This case study highlights a key tenet of Darktrace’s philosophy – that a rules and tuning-based approach will always be one step behind. Delivering benign emails while holding back malicious emails from the same domain demonstrates that safety is not defined in a straight line, or by historical precedent. Only by analyzing every email in-depth for its content and context can you guarantee that it belongs.  

While other solutions are making efforts to improve a static approach with AI, Darktrace’s AI remains truly unsupervised so it is dynamic enough to catch the most agile and evolving threats. This is what allows us to protect our customers by plugging a vital gap in their security stack that ensures they can meet the challenges of tomorrow's email attacks.

Interested in learning more about Darktrace / EMAIL? Check out our product hub.


From Call to Compromise: Darktrace’s Response to a Vishing-Induced Network Attack

October 4, 2024

What is vishing?

Vishing, or voice phishing, is a type of cyber-attack that uses telephone calls to deceive targets. Threat actors typically use social engineering tactics to convince targets that they can be trusted, for example by masquerading as a family member, their bank, or a trusted government entity. One method frequently used by vishing actors is to intimidate their targets, convincing them that they may face monetary fines or jail time if they do not provide sensitive information.

What makes vishing attacks dangerous to organizations?

Vishing attacks use social engineering tactics that exploit human psychology and emotion. Threat actors often impersonate trusted entities and can make it appear as though a call is coming from a reputable or known source. These actors often target organizations, specifically their employees, pressuring them into handing over sensitive corporate data, such as privileged credentials, by creating a sense of urgency, intimidation, or fear. Corporate credentials can then be used to gain unauthorized access to an organization’s network, often bypassing traditional security measures and human security teams.

Darktrace’s coverage of a vishing attack

On August 12, 2024, Darktrace / NETWORK identified malicious activity on the network of a customer in the hospitality sector. The customer later confirmed that a threat actor had gained unauthorized access through a vishing attack. The attacker successfully spoofed the IT support phone number and called a remote employee, eventually leading to the compromise.

Figure 1: Timeline of events in the kill chain of this attack.

Establishing a Foothold

During the call, the remote employee was asked to authenticate via multi-factor authentication (MFA). Believing the caller, who appeared under the legitimate caller ID, to be a member of their internal IT support team, the remote user followed the instructions and approved the MFA prompt, granting the attacker access to the customer’s network.

This authentication allowed the threat actor to log in to the customer’s environment by proxying through its Virtual Private Network (VPN) and gain a foothold in the network. Because remote users are assigned the same static IP address whenever they connect to the corporate environment, the malicious actor appeared on the network with the correct username and IP address. While this stealthy activity might have evaded traditional security tools and human security teams, Darktrace’s anomaly-based threat detection analyzed the NTLM requests coming from the static IP address and identified an unusual login from a different hostname, which it determined to be anomalous.

Observed Activity

  • On 2024-08-12 the static IP was observed using a credential belonging to the remote user to initiate an SMB session with an internal domain controller, where the authentication method NTLM was used
  • A different hostname from the usual hostname associated with this remote user was identified in the NTLM authentication request sent from a device with the static IP address to the domain controller
  • This device does not appear to have been seen on the network prior to this event.

Darktrace, therefore, recognized that this login was likely made by a malicious actor.
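The core of that detection can be illustrated in a few lines: the VPN gives the user a fixed IP, so an NTLM authentication from that IP carrying a hostname never before associated with the user stands out. The usernames, hostnames, and data structure below are invented for the sketch.

```python
# Sketch of the hostname check described above: the VPN assigns this user a
# static IP, so an NTLM authentication from that IP carrying a hostname never
# seen for the user stands out. Usernames and hostnames are invented.
KNOWN_HOSTNAMES_BY_USER = {"remote.user": {"LAPTOP-REMOTE01"}}

def ntlm_hostname_is_anomalous(username: str, hostname: str) -> bool:
    return hostname not in KNOWN_HOSTNAMES_BY_USER.get(username, set())

# Same credential and same static IP, but a brand-new workstation name:
print(ntlm_hostname_is_anomalous("remote.user", "DESKTOP-4F2K1"))  # True
```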

Internal Reconnaissance

Darktrace subsequently observed the malicious actor performing a series of reconnaissance activities, including LDAP reconnaissance, device hostname reconnaissance, and port scanning:

  • The affected device made a 53-second-long LDAP connection to another internal domain controller. During this connection, the device obtained data about internal Active Directory (AD) accounts, including the AD account of the remote user
  • The device made HTTP GET requests (e.g., HTTP GET requests with the Target URI ‘/nice ports,/Trinity.txt.bak’), indicative of Nmap usage
  • The device started making reverse DNS lookups for internal IP addresses.
Figure 2: Model alert showing the IP address from which the malicious actor connected and performed network scanning activities via port 9401.
Figure 3: Model Alert Event Log showing the affected device connecting to multiple internal locations via port 9401.
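For illustration, the three reconnaissance behaviours listed above could be checked roughly as follows; the thresholds are invented, and only the Nmap probe URI is taken from the incident itself.

```python
# Illustrative checks for the three reconnaissance behaviours listed above.
# The thresholds are invented; only the Nmap probe URI comes from the incident.
from collections import defaultdict

NMAP_PROBE_URI = "/nice ports,/Trinity.txt.bak"

def is_nmap_probe(http_uri: str) -> bool:
    return http_uri == NMAP_PROBE_URI

def looks_like_reverse_dns_sweep(ptr_lookups: list[str], threshold: int = 50) -> bool:
    # Many distinct reverse (PTR) lookups in a short window suggest a sweep.
    return len(set(ptr_lookups)) >= threshold

def looks_like_port_scan(connections: list[tuple[str, int]], threshold: int = 20) -> bool:
    # One device touching the same port (e.g. 9401) on many internal hosts.
    hosts_per_port = defaultdict(set)
    for host, port in connections:
        hosts_per_port[port].add(host)
    return any(len(hosts) >= threshold for hosts in hosts_per_port.values())
```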

Lateral Movement

The threat actor was also seen making numerous failed NTLM authentication requests using a generic default Windows credential, indicating an attempt to brute force and laterally move through the network. During this activity, Darktrace identified that the device was using a different hostname than the one typically used by the remote employee.
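A rough sketch of that brute-force signal: count failed NTLM authentications from a single source inside a sliding window and alert past a threshold. The window length and threshold below are illustrative assumptions.

```python
# Sketch of the brute-force signal: many failed NTLM authentications from one
# source inside a sliding window. The window length and threshold are invented.
from collections import deque

class NtlmBruteForceDetector:
    def __init__(self, window_seconds: int = 300, threshold: int = 15):
        self.window_seconds = window_seconds
        self.threshold = threshold
        self.failures: dict[str, deque] = {}

    def record_failure(self, source_ip: str, timestamp: float) -> bool:
        window = self.failures.setdefault(source_ip, deque())
        window.append(timestamp)
        while window and timestamp - window[0] > self.window_seconds:
            window.popleft()
        return len(window) >= self.threshold  # True => raise a lateral movement alert
```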

Cyber AI Analyst

In addition to the detection by Darktrace / NETWORK, Darktrace’s Cyber AI Analyst launched an autonomous investigation into the ongoing activity. The investigation was able to correlate the seemingly separate events together into a broader incident, continuously adding new suspicious linked activities as they occurred.

Figure 4: Cyber AI Analyst investigation showing the activity timeline, and the activities associated with the incident.

Upon completing the investigation, Cyber AI Analyst provided the customer with a comprehensive summary of the various attack phases detected by Darktrace and the associated incidents. This clear presentation enabled the customer to gain full visibility into the compromise and understand the activities that constituted the attack.

Figure 5: Cyber AI Analyst displaying the observed attack phases and associated model alerts.

Darktrace Autonomous Response

Despite the sophisticated techniques and social engineering tactics used by the attacker to bypass the customer’s human security team and existing security stack, Darktrace’s AI-driven approach prevented the malicious actor from continuing their activities and causing more harm.

Darktrace’s Autonomous Response technology is able to enforce a pattern of life based on what is ‘normal’ and learned for the environment. If activity is detected that represents a deviation from expected behavior, a model alert is triggered. When Autonomous Response is configured in autonomous mode, as was the case with this customer, it swiftly applies response actions to devices and users without the need for a system administrator or security analyst to intervene.

In this instance, Darktrace applied a number of mitigative actions on the remote user, containing most of the activity as soon as it was detected:

  • Block all outgoing traffic
  • Enforce pattern of life
  • Block all connections to port 445 (SMB)
  • Block all connections to port 9401
Figure 6: Darktrace’s Autonomous Response actions taken in response to the observed activity, including blocking all outgoing traffic and enforcing the pattern of life.
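Conceptually, this amounts to a policy that maps triggered model alerts to containment actions. The sketch below uses model names from the appendix of this post and the four actions listed above, but which alert drives which action is an assumption made for illustration.

```python
# Illustrative mapping from triggered model alerts to containment actions.
# Model names come from this post's appendix and the actions from the list
# above; which alert drives which action is an assumption for the sketch.
RESPONSE_POLICY = {
    "Device / Suspicious SMB Scanning Activity": ["block all connections to port 445 (SMB)"],
    "Device / Network Scan": ["block all connections to port 9401"],
    "Device / Multiple Lateral Movement Model Breaches (Enhanced Monitoring)": [
        "block all outgoing traffic",
        "enforce pattern of life",
    ],
}

def respond(triggered_alerts: list[str]) -> list[str]:
    actions: list[str] = []
    for alert in triggered_alerts:
        actions.extend(RESPONSE_POLICY.get(alert, ["enforce pattern of life"]))
    return sorted(set(actions))

print(respond([
    "Device / Suspicious SMB Scanning Activity",
    "Device / Network Scan",
    "Device / Multiple Lateral Movement Model Breaches (Enhanced Monitoring)",
]))
```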

Conclusion

This vishing attack underscores the significant risks remote employees face and the critical need for companies to address vishing threats to prevent network compromises. The remote employee in this instance was deceived by a malicious actor who spoofed the phone number of internal IT support and convinced the employee to approve an MFA request. This sophisticated social engineering tactic allowed the attacker to proxy through the customer’s VPN, making the malicious activity appear legitimate due to the use of static IP addresses.

Despite the stealthy attempts to perform malicious activities on the network, Darktrace’s focus on anomaly detection enabled it to swiftly identify and analyze the suspicious behavior. This led to the prompt determination of the activity as malicious and the subsequent blocking of the malicious actor to prevent further escalation.

While the exact motivation of the threat actor in this case remains unclear, the 2023 cyber-attack on MGM Resorts serves as a stark illustration of the potential consequences of such threats. MGM Resorts experienced significant disruptions and data breaches following a similar vishing attack, resulting in financial and reputational damage [1]. If the attack on the customer had not been detected, they too could have faced sensitive data loss and major business disruptions. This incident underscores the critical importance of robust security measures and vigilant monitoring to protect against sophisticated cyber threats.

Credit to Rajendra Rushanth (Cyber Security Analyst) and Ryan Traill (Threat Content Lead)

Appendices

Darktrace Model Detections

  • Device / Unusual LDAP Bind and Search Activity
  • Device / Attack and Recon Tools
  • Device / Network Range Scan
  • Device / Suspicious SMB Scanning Activity
  • Device / RDP Scan
  • Device / UDP Enumeration
  • Device / Large Number of Model Breaches
  • Device / Network Scan
  • Device / Multiple Lateral Movement Model Breaches (Enhanced Monitoring)
  • Device / Reverse DNS Sweep
  • Device / SMB Session Brute Force (Non-Admin)

List of Indicators of Compromise (IoCs)

IoC – Type – Description

/nice ports,/Trinity.txt.bak – URI – Unusual Nmap Usage

MITRE ATT&CK Mapping

Tactic – ID – Technique

INITIAL ACCESS – T1200 – Hardware Additions

DISCOVERY – T1046 – Network Service Scanning

DISCOVERY – T1482 – Domain Trust Discovery

RECONNAISSANCE – T1590.002 – DNS

RECONNAISSANCE – T1590.005 – IP Addresses

RECONNAISSANCE – T1592.004 – Client Configurations

RECONNAISSANCE – T1595.001 – Scanning IP Blocks

RECONNAISSANCE – T1595.002 – Vulnerability Scanning

References

[1] https://www.bleepingcomputer.com/news/security/securing-helpdesks-from-hackers-what-we-can-learn-from-the-mgm-breach/
