Cyber Security Threats - Email Compromise With Generative AI

01
Apr 2023
Discover how generative AI is impacting email attacks and what companies can do to prepare for more sophisticated and targeted attacker campaigns.

Email communications in the era of generative AI

Today’s headlines on cyber security attacks are dominated by serious breaches that cripple critical national infrastructure providers, wreak geopolitical havoc through nation state espionage, and create paralyzing financial demands through extortion and ransomware. But how do these breaches occur in the first place, and why do they keep happening?

Social engineering – specifically malicious cyber campaigns delivered via email – remains the primary source of an organization’s vulnerability to attack. A recent Darktrace survey of 6,700 cross-industry professionals across the UK, US, Australia, France, Germany, and the Netherlands found that almost a third (30%) have fallen victim to a fraudulent email. The risk continues to rise: 70% of respondents have noticed an increase in the frequency of malicious emails over the last six months.

At the end of last year, I wrote about how widespread accessibility to generative AI has compounded the problem, likely delivering efficiency gains for skilled threat actors launching email attacks. Just a few weeks ago, we published research showing that while the number of email attacks across our own customer base has remained steady since ChatGPT’s release, those that rely on tricking victims into clicking malicious links have declined, while linguistic complexity – including text volume, punctuation, and sentence length – has increased. This indicates that cyber-criminals may be redirecting their focus toward crafting more sophisticated social engineering scams, delivered over email, that exploit user trust.

The latest findings from Darktrace Research reveal a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email customers from January to February 2023, corresponding with the widespread adoption of ChatGPT. A novel social engineering phishing email is an email attack that shows a strong linguistic deviation – semantic and syntactic – from other phishing emails. The trend suggests that generative AI, such as ChatGPT, is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale.
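To illustrate what measuring linguistic deviation can look like in practice – this is a hypothetical sketch, not Darktrace’s methodology – an email’s wording can be compared against a corpus of previously observed phishing text, for example with TF-IDF vectors and cosine similarity. The corpus, threshold, and sample text below are placeholders.

```python
# Hypothetical sketch: score how strongly an email's wording deviates from
# previously seen phishing text (TF-IDF + cosine similarity).
# Illustrative only; not Darktrace's methodology.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_phishing = [
    "Your mailbox is full, click here to verify your account",
    "Invoice attached, please confirm payment details immediately",
]  # placeholder corpus of historical phishing bodies

def novelty_score(email_body: str, corpus: list[str]) -> float:
    """Return 1 - max cosine similarity to the corpus (higher = more novel)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(corpus + [email_body])
    sims = cosine_similarity(matrix[-1], matrix[:-1])
    return 1.0 - float(sims.max())

incoming = "Per our earlier discussion, I have shared the revised board figures for your review."
if novelty_score(incoming, known_phishing) > 0.8:  # threshold chosen arbitrarily
    print("Strong linguistic deviation from known phishing - review manually")
```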

Organizations are already well aware of the danger novel threats pose – globally, 82% of respondents are concerned that hackers can use generative AI to create scam emails that are indistinguishable from genuine communication.

Navigating the trust deficit 

Employees are being exploited continuously through their primary means of communication: email. Given that humans, and our psychology, are at the heart of the problem, organizations have scrambled to develop awareness and training programs delivered routinely to employees in an attempt to help them defend themselves and their organization. While a defense-in-depth approach is generally good practice, the value of security awareness training – teaching employees to spot attempts at social engineering delivered in text form – is diminishing rapidly. The problem is especially evident now that we find ourselves in a world where ‘bad’ emails look all but indistinguishable to the human eye from ‘good’ emails.

Overemphasizing the responsibility of employees poses a business challenge as well as a security problem: a growing trust deficit. In times of economic uncertainty, business success is predicated on employee productivity, creativity, and efficiency. All three are squashed when employees treat the communications they receive, both internal and external, with forensic suspicion and mistrust.

As generative AI becomes more mainstream – across images, audio, video, and text – we anticipate that this erosion of trust in digital communication will only continue. What happens to the workplace if an employee is left questioning their own perception when communicating with their manager or a colleague, who they can see and hear over a video conference call, but who may be an entirely fictitious creation?

The way forward

The future lies in a partnership between artificial intelligence and human beings where the algorithms are responsible for determining whether communication is malicious or benign, thereby taking the burden of responsibility off the human. In turn, humans can pursue activities that drive productivity and performance. Current training and awareness programs, coupled with traditional email security technologies which assume that knowledge of past attacks will help predict campaigns, are not sustainable in this rapidly evolving landscape. 

Our research reveals that traditional email security tools that rely on knowledge of past threats take an average of thirteen days from an attack being launched on a victim to that attack being detected. This leaves defenders vulnerable for almost two weeks if they rely solely on traditional tools. When this statistic is contextualized against the 135% increase we’ve seen in novel social engineering campaigns, the inefficiency of ‘attack-centric’ email security becomes plainly evident. If we don’t know how to define entirely new and unique attacks, how can we ever hope to recognize them based on past attacks?

A deep understanding of ‘you’ is needed in the face of these radical changes to the email threat landscape. Instead of trying to predict attacks, defenders need an understanding of their employees’ behavior, derived from each email inbox, to create a pattern of life for every email user: their relationships, tone and sentiment, and hundreds of other data points. By leveraging AI to combat email security threats, we not only reduce risk, but revitalize organizational trust and contribute to business outcomes.

Ironically, generative AI may be worsening the social engineering challenge, but AI that knows you could be the parry.
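As a simplified illustration of the ‘pattern of life’ idea described above – not Darktrace’s actual product logic – a per-user baseline could be built from historical email metadata and new messages scored by how far they deviate from it. The field names, weighting, and data below are hypothetical.

```python
# Hypothetical sketch of a per-user "pattern of life" built from email metadata.
# Not Darktrace's implementation; fields and weights are illustrative only.
from collections import Counter

class MailboxBaseline:
    def __init__(self, history):
        # history: list of dicts with hypothetical keys "sender" and "hour"
        self.sender_counts = Counter(m["sender"] for m in history)
        self.hour_counts = Counter(m["hour"] for m in history)
        self.total = max(len(history), 1)

    def anomaly_score(self, msg) -> float:
        """Crude score in [0, 2]: a rarer sender and rarer send-hour give a higher score."""
        sender_rarity = 1.0 - self.sender_counts.get(msg["sender"], 0) / self.total
        hour_rarity = 1.0 - self.hour_counts.get(msg["hour"], 0) / self.total
        return sender_rarity + hour_rarity

history = [{"sender": "manager@corp.example", "hour": 9}] * 40
baseline = MailboxBaseline(history)
# A never-seen sender emailing at 3am scores 2.0 (maximally unusual for this user)
print(baseline.anomaly_score({"sender": "ceo@corp-payments.example", "hour": 3}))
```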

INSIDE THE SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
About the author
Max Heinemeyer
Chief Product Officer

Max is a cyber security expert with over a decade of experience in the field, specializing in a wide range of areas such as Penetration Testing, Red-Teaming, SIEM and SOC consulting and hunting Advanced Persistent Threat (APT) groups. At Darktrace, Max is closely involved with Darktrace’s strategic customers & prospects. He works with the R&D team at Darktrace, shaping research into new AI innovations and their various defensive and offensive applications. Max’s insights are regularly featured in international media outlets such as the BBC, Forbes and WIRED. Max holds an MSc from the University of Duisburg-Essen and a BSc from the Cooperative State University Stuttgart in International Business Information Systems.



Inside the SOC

Disarming the WarmCookie Backdoor: Darktrace’s Oven-Ready Solution

26
Jul 2024

What is WarmCookie malware?

WarmCookie, also known as BadSpace [2], is a two-stage backdoor tool that provides functionality for threat actors to retrieve victim information and launch additional payloads. The malware is primarily distributed via phishing campaigns according to multiple open-source intelligence (OSINT) providers.

Backdoor malware: A backdoor tool is a piece of software used by attackers to gain and maintain unauthorized access to a system. It bypasses standard authentication and security mechanisms, allowing the attacker to control the system remotely.

Two-stage backdoor malware: This means the backdoor operates in two distinct phases:

1. Initial Stage: The first stage involves the initial infection and establishment of a foothold within the victim's system. This stage is often designed to be small and stealthy to avoid detection.

2. Secondary Stage: Once the initial stage has successfully compromised the system, it retrieves or activates the second stage payload. This stage provides more advanced functionalities for the attacker, such as extensive data exfiltration, deeper system control, or the deployment of additional malicious payloads.

How does WarmCookie malware work?

Reported attack patterns include emails attempting to impersonate recruitment firms such as PageGroup, Michael Page, and Hays. These emails likely represented social engineering tactics, with attackers attempting to manipulate jobseekers into engaging with the emails and following malicious links embedded within [3].

This backdoor tool also adopts stealth and evasion tactics to avoid the detection of traditional security tools. Reported evasion tactics included custom string decryption algorithms, as well as dynamic API loading to prevent researchers from analyzing and identifying the core functionalities of WarmCookie [1].

Before this backdoor makes an outbound network request, it is known to capture details from the target machine, which can be used for fingerprinting and identification [1]. These details include:

- Computer name

- Username

- DNS domain of the machine

- Volume serial number

WarmCookie samples investigated by external researchers were observed communicating over HTTP to a hardcoded IP address, using a combination of RC4 and Base64 to protect their network traffic [1]. Ultimately, threat actors could use this backdoor to deploy further malicious payloads, such as ransomware, on targeted networks.
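For analysts examining captured traffic of this kind, a Base64-wrapped, RC4-encrypted payload can be unwrapped with a short script once the key has been recovered. The key and sample payload below are placeholders for illustration, not real WarmCookie artifacts.

```python
# Decode a Base64-wrapped, RC4-encrypted payload captured from HTTP traffic.
# Key and sample are placeholders; real WarmCookie samples differ.
import base64

def rc4(key: bytes, data: bytes) -> bytes:
    """Standard RC4 stream cipher (encryption and decryption are identical)."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                          # pseudo-random generation algorithm
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"placeholder-key"                       # hypothetical recovered key
captured = base64.b64encode(rc4(key, b"hostname=DESKTOP-01&user=jsmith"))
print(rc4(key, base64.b64decode(captured)))    # round-trips back to the plaintext
```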

Darktrace Coverage of WarmCookie

Between April and June 2024, Darktrace’s Threat Research team investigated suspicious activity across multiple customer networks indicating that threat actors were utilizing the WarmCookie backdoor tool. Observed cases across customer environments all included the download of unusual executable (.exe) files and suspicious outbound connectivity.

Affected devices were all observed making external HTTP requests to the German-based external IP, 185.49.69[.]41, and the URI, /data/2849d40ade47af8edfd4e08352dd2cc8.

The first investigated instance occurred between April 23 and April 24, when Darktrace detected a series of unusual file downloads and outbound connections on a customer network, indicating successful WarmCookie exploitation. As noted by Elastic Security Labs, "The PowerShell script abuses the Background Intelligent Transfer Service (BITS) to download WarmCookie and run the DLL with the Start export" [1].

Less than a minute later, the same device was observed making HTTP requests for the URI /data/b834116823f01aeceed215e592dfcba7 to the rare external IP address 185.49.69[.]41, which had never previously been observed on the network. The device then proceeded to download a masqueraded executable file from this endpoint. Darktrace recognized that these connections to an unknown endpoint, coupled with the download of a masqueraded file, likely represented malicious activity.

Following this download, the device began beaconing back to the same IP, 185.49.69[.]41, with a large number of external connections observed over port 80. This beaconing-like behavior could further indicate malicious software communicating with command-and-control (C2) servers.
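Beaconing of this kind can often be surfaced from ordinary proxy or firewall logs by looking for a large number of near-regular connections to a single external destination. The sketch below is a minimal illustration with a hypothetical log format and thresholds, not Darktrace’s detection logic.

```python
# Flag possible beaconing: many connections to one destination at near-regular intervals.
# Log format and thresholds are hypothetical; tune for your own telemetry.
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, min_events=20, max_jitter=0.2):
    """timestamps: sorted epoch seconds of connections to a single destination IP."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    # low relative deviation between connection intervals suggests automation
    return avg > 0 and (pstdev(gaps) / avg) < max_jitter

# e.g. one connection roughly every 60 seconds to a single IP over port 80
ts = [i * 60 + offset for i, offset in enumerate([0, 2, 1, 3, 0, 2, 1, 2, 0, 1] * 3)]
print(looks_like_beaconing(sorted(ts)))   # True
```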

Darktrace’s model alert coverage included the following details:

[Model Alert: Device / Unusual BITS Activity]

- Associated device type: desktop

- Time of alert: 2024-04-23T14:10:23 UTC

- ASN: AS28753 Leaseweb Deutschland GmbH

- User agent: Microsoft BITS/7.8

[Model Alert: Anomalous File / EXE from Rare External Location]

[Model Alert: Anomalous File / Masqueraded File Transfer]

- Associated device type: desktop

- Time of alert: 2024-04-23T14:11:18 UTC

- Destination IP: 185.49.69[.]41

- Destination port: 80

- Protocol: TCP

- Application protocol: HTTP

- ASN: AS28753 Leaseweb Deutschland GmbH

- User agent: Mozilla / 4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;.NET CLR 1.0.3705)

- Event details: File: http[:]//185.49.69[.]41/data/b834116823f01aeceed215e592dfcba7, total seen size: 144384B, direction: Incoming

- SHA1 file hash: 4ddf0d9c750bfeaebdacc14152319e21305443ff

- MD5 file hash: b09beb0b584deee198ecd66976e96237

[Model Alert: Compromise / Beaconing Activity To External Rare]

- Associated device type: desktop

- Time of alert: 2024-04-23T14:15:24 UTC

- Destination IP: 185.49.69[.]41

- Destination port: 80

- Protocol: TCP

- Application protocol: HTTP

- ASN: AS28753 Leaseweb Deutschland GmbH  

- User agent: Mozilla / 4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;.NET CLR 1.0.3705)

Between May 7 and June 4, Darktrace identified a wide range of suspicious external connectivity on another customer’s environment. Darktrace’s Threat Research team further investigated this activity and assessed it was likely indicative of WarmCookie exploitation on customer devices.

Similar to the initial use case, BITS activity – which is utilized to download WarmCookie [1] – was observed on affected devices. This initial behavior was observed after the device triggered the model Device / Unusual BITS Activity on May 7.

Just moments later, the same device was observed making HTTP requests to the aforementioned German IP address, 185.49.69[.]41 using the same URI /data/2849d40ade47af8edfd4e08352dd2cc8, before downloading a suspicious executable file.

Just like the first use case, this device followed up this suspicious download with a series of beaconing connections to 185.49.69[.]41, again with a large number of connections via port 80.

Similar outgoing connections to 185.49.69[.]41 and model alerts were observed on additional devices during the same timeframe, indicating that numerous customer devices had been compromised.

Darktrace’s model alert coverage included the following details:

[Model Alert: Device / Unusual BITS Activity]

- Associated device type: desktop

- Time of alert: 2024-05-07T09:03:23 UTC

- ASN: AS28753 Leaseweb Deutschland GmbH

- User agent: Microsoft BITS/7.8

[Model Alert: Anomalous File / EXE from Rare External Location]

[Model Alert: Anomalous File / Masqueraded File Transfer]

- Associated device type: desktop

- Time of alert: 2024-05-07T09:03:35 UTC  

- Destination IP: 185.49.69[.]41

- Protocol: TCP

- ASN: AS28753 Leaseweb Deutschland GmbH

- Event details: File: http[:]//185.49.69[.]41/data/2849d40ade47af8edfd4e08352dd2cc8, total seen size: 72704B, direction: Incoming

- SHA1 file hash: 5b0a35c574ee40c4bccb9b0b942f9a9084216816

- MD5 file hash: aa9a73083184e1309431b3c7a3e44427  

[Model Alert: Anomalous Connection / New User Agent to IP Without Hostname]

- Associated device type: desktop

- Time of alert: 2024-05-07T09:04:14 UTC  

- Destination IP: 185.49.69[.]41  

- Application protocol: HTTP  

- URI: /data/2849d40ade47af8edfd4e08352dd2cc8

- User agent: Microsoft BITS/7.8  

[Model Alert: Compromise / HTTP Beaconing to New Endpoint]

- Associated device type: desktop

- Time of alert: 2024-05-07T09:08:47 UTC

- Destination IP: 185.49.69[.]41

- Protocol: TCP

- Application protocol: HTTP  

- ASN: AS28753 Leaseweb Deutschland GmbH  

- URI: /  

- User agent: Mozilla / 4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;.NET CLR 1.0.3705)

Figure 1: Cyber AI Analyst coverage details for the external destination 185.49.69[.]41.
Figure 2: External Sites Summary verifying the geographical location of the external IP 185.49.69[.]41.

Fortunately, this particular customer was subscribed to Darktrace’s Proactive Threat Notification (PTN) service and the Darktrace Security Operation Center (SOC) promptly investigated the activity and alerted the customer. This allowed their security team to address the activity and begin their own remediation process.

In this instance, Darktrace’s Autonomous Response capability was configured in Human Confirmation mode, meaning any mitigative actions required manual application by the customer’s security team.

Despite this, Darktrace recommended two actions to contain the activity: blocking connections to the suspicious IP address 185.49.69[.]41 and any IP addresses ending with '69[.]41', as well as the ‘Enforce Pattern of Life’ action. By enforcing a pattern of life, Darktrace can restrict a device (or devices) to its learned behavior, allowing it to continue regular business activities uninterrupted while blocking any deviations from expected activity.

Figure 3: Actions suggested by Darktrace to contain the emerging activity, including blocking connections to the suspicious endpoint and restricting the device to its ‘pattern of life’.
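Conceptually, enforcing a pattern of life amounts to restricting a device to the destinations and behaviors it has historically exhibited. The sketch below is a deliberately simplified illustration of that idea using made-up data; it is not Darktrace’s actual enforcement mechanism.

```python
# Simplified illustration of "enforce pattern of life": permit only destinations
# the device has historically contacted, block everything else.
# Hypothetical data; not Darktrace's actual enforcement mechanism.
learned_destinations = {           # built from the device's observed history
    ("203.0.113.25", 443),         # internal proxy (example address)
    ("198.51.100.10", 53),         # corporate DNS (example address)
}

def allow_connection(dst_ip: str, dst_port: int) -> bool:
    return (dst_ip, dst_port) in learned_destinations

for dst in [("203.0.113.25", 443), ("185.49.69.41", 80)]:
    verdict = "ALLOW" if allow_connection(*dst) else "BLOCK"
    print(f"{verdict} {dst[0]}:{dst[1]}")   # the unfamiliar C2-style endpoint is blocked
```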

Conclusion

Backdoor tools like WarmCookie enable threat actors to gather and leverage information from target systems to deploy additional malicious payloads, escalating their cyber attacks. Given that WarmCookie’s primary distribution method appears to be phishing campaigns masquerading as trusted recruitment firms, it has the potential to affect a large number of organizations.

In the face of such threats, Darktrace’s behavioral analysis provides organizations with full visibility of anomalous activity across their digital estates, even when a threat bypasses human security teams or email security tools. While threat actors seemingly managed to evade customers’ native email security and gain access to their networks in these cases, Darktrace identified the suspicious behavior associated with WarmCookie and swiftly notified customer security teams.

Had Darktrace’s Autonomous Response capability been fully enabled in these cases, it could have blocked suspicious connections and subsequent activity in real time, without the need for human intervention, effectively containing the attacks in the first instance.

Credit to Justin Torres, Cyber Security Analyst and Dylan Hinz, Senior Cyber Security Analyst

Appendices

Darktrace Model Detections

- Anomalous File / EXE from Rare External Location

- Anomalous File / Masqueraded File Transfer  

- Compromise / Beacon to Young Endpoint  

- Compromise / Beaconing Activity To External Rare  

- Compromise / HTTP Beaconing to New Endpoint  

- Compromise / HTTP Beaconing to Rare Destination

- Compromise / High Volume of Connections with Beacon Score

- Compromise / Large Number of Suspicious Successful Connections

- Compromise / Quick and Regular Windows HTTP Beaconing

- Compromise / SSL or HTTP Beacon

- Compromise / Slow Beaconing Activity To External Rare

- Compromise / Sustained SSL or HTTP Increase

- Compromise / Sustained TCP Beaconing Activity To Rare Endpoint

- Anomalous Connection / Multiple Failed Connections to Rare Endpoint

- Anomalous Connection / New User Agent to IP Without Hostname

AI Analyst Incident Coverage:

- Unusual Repeated Connections

- Possible SSL Command and Control to Multiple Endpoints

- Possible HTTP Command and Control

- Suspicious File Download

Darktrace RESPOND Model Detections:

- Antigena / Network / External Threat / Antigena Suspicious File Block

- Antigena / Network / External Threat / Antigena Suspicious File Pattern of Life Block

List of IoCs

IoC - Type - Description + Confidence

185.49.69[.]41 – IP Address – WarmCookie C2 Endpoint

/data/2849d40ade47af8edfd4e08352dd2cc8 – URI – Likely WarmCookie URI

/data/b834116823f01aeceed215e592dfcba7 – URI – Likely WarmCookie URI

4ddf0d9c750bfeaebdacc14152319e21305443ff  - SHA1 Hash  – Possible Malicious File

5b0a35c574ee40c4bccb9b0b942f9a9084216816 – SHA1 Hash – Possible Malicious File
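The indicators listed above can also be swept retrospectively against proxy or firewall logs. The following is a minimal sketch assuming a hypothetical CSV log format; adapt the field names to your own telemetry.

```python
# Sweep historical proxy logs for the WarmCookie indicators listed above.
# The log format is hypothetical; adapt field names to your own logging pipeline.
import csv

IOC_IPS = {"185.49.69.41"}
IOC_URIS = {
    "/data/2849d40ade47af8edfd4e08352dd2cc8",
    "/data/b834116823f01aeceed215e592dfcba7",
}

def sweep(log_path: str):
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):   # assumes columns: timestamp, src, dst_ip, uri
            if row["dst_ip"] in IOC_IPS or row["uri"] in IOC_URIS:
                yield row

# Example usage (hypothetical file):
# for hit in sweep("proxy_logs.csv"):
#     print(hit["timestamp"], hit["src"], "->", hit["dst_ip"], hit["uri"])
```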

MITRE ATT&CK Mapping

(Technique Name) – (Tactic) – (ID) – (Sub-Technique of)

Drive-by Compromise - INITIAL ACCESS - T1189

Ingress Tool Transfer - COMMAND AND CONTROL - T1105

Malware - RESOURCE DEVELOPMENT - T1588.001 - T1588

Lateral Tool Transfer - LATERAL MOVEMENT - T1570

Web Protocols - COMMAND AND CONTROL - T1071.001 - T1071

Web Services - RESOURCE DEVELOPMENT - T1583.006 - T1583

Browser Extensions - PERSISTENCE - T1176

Application Layer Protocol - COMMAND AND CONTROL - T1071

Fallback Channels - COMMAND AND CONTROL - T1008

Multi-Stage Channels - COMMAND AND CONTROL - T1104

Non-Standard Port - COMMAND AND CONTROL - T1571

One-Way Communication - COMMAND AND CONTROL - T1102.003 - T1102

Encrypted Channel - COMMAND AND CONTROL - T1573

External Proxy - COMMAND AND CONTROL - T1090.002 - T1090

Non-Application Layer Protocol - COMMAND AND CONTROL - T1095

References

[1] https://www.elastic.co/security-labs/dipping-into-danger

[2] https://www.gdatasoftware.com/blog/2024/06/37947-badspace-backdoor

[3] https://thehackernews.com/2024/06/new-phishing-campaign-deploys.html

About the author
Justin Torres
Cyber Analyst


Thought Leadership

The State of AI in Cybersecurity: Understanding AI Technologies

24
Jul 2024

About the State of AI Cybersecurity Report

Darktrace surveyed 1,800 CISOs, security leaders, administrators, and practitioners from industries around the globe. Our research was conducted to understand how the adoption of new AI-powered offensive and defensive cybersecurity technologies is being managed by organizations.

This blog continues the conversation from “The State of AI in Cybersecurity: Unveiling Global Insights from 1,800 Security Practitioners”, focusing on security professionals’ understanding of AI technologies in cybersecurity tools.

To download the full report, click here.

How familiar are security professionals with supervised machine learning?

Just 31% of security professionals report that they are “very familiar” with supervised machine learning.

Many participants admitted unfamiliarity with various AI types. Less than one-third felt "very familiar" with the technologies surveyed: only 31% with supervised machine learning and 28% with natural language processing (NLP).

Most participants were "somewhat" familiar, ranging from 46% for supervised machine learning to 36% for generative adversarial networks (GANs). Executives and those in larger organizations reported the highest familiarity.

Combining "very" and "somewhat" familiar responses, 77% reported some familiarity with supervised machine learning, 74% with generative AI, and 73% with NLP. With generative AI receiving so much media attention, and NLP being the broader area of AI that encompasses generative AI, these results may indicate that stakeholders are understanding the topic on the basis of buzz rather than hands-on work with the technologies.

If defenders hope to get ahead of attackers, they will need to go beyond supervised learning algorithms trained on known attack patterns and generative AI. Instead, they’ll need to adopt a comprehensive toolkit comprised of multiple, varied AI approaches—including unsupervised algorithms that continuously learn from an organization’s specific data rather than relying on big data generalizations.  

Different types of AI

Different types of AI have different strengths and use cases in cyber security. It’s important to choose the right technique for what you’re trying to achieve.  

Supervised machine learning: Applied more often than any other type of AI in cyber security. Trained on human attack patterns and historical threat intelligence.  

Large language models (LLMs): Applies deep learning models trained on extremely large data sets to understand, summarize, and generate new content. Used in generative AI tools.  

Natural language processing (NLP): Applies computational techniques to process and understand human language.  

Unsupervised machine learning: Continuously learns from raw, unstructured data to identify deviations that represent true anomalies.  
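As a toy example of the unsupervised approach described above, an isolation forest can be fitted to an organization’s own connection metadata and used to score new events without any labelled attack data. The features, values, and parameters below are illustrative only.

```python
# Toy unsupervised anomaly detection over connection metadata (no attack labels).
# Features and values are illustrative only.
from sklearn.ensemble import IsolationForest

# hypothetical features per connection: [bytes_out, duration_s, dst_port]
normal_traffic = [[1200, 0.4, 443], [900, 0.3, 443], [1500, 0.6, 443]] * 50
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

suspicious = [[72704, 12.0, 80]]    # e.g. a large transfer over plain HTTP
print(model.predict(suspicious))    # -1 indicates the event is flagged as anomalous
```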

What impact will generative AI have on the cybersecurity field?

More than half of security professionals (57%) believe that generative AI will have a bigger impact on their field over the next few years than other types of AI.

Figure 1: Chart from Darktrace's State of AI in Cybersecurity Report showing the types of AI expected to impact security the most.

Security stakeholders are highly aware of generative AI and LLMs, viewing them as pivotal to the field's future. Generative AI excels at abstracting information, automating tasks, and facilitating human-computer interaction. However, LLMs can "hallucinate" due to training data errors and are vulnerable to prompt injection attacks. Despite improvements in securing LLMs, the best cyber defenses use a mix of AI types for enhanced accuracy and capability.

AI education is crucial as industry expectations for generative AI grow. Leaders and practitioners need to understand where and how to use AI while managing risks. As they learn more, there will be a shift from generative AI to broader AI applications.

Do security professionals fully understand the different types of AI in security products?

Only 26% of security professionals report a full understanding of the different types of AI in use within security products.

Confusion is prevalent in today’s marketplace. Our survey found that only 26% of respondents fully understand the AI types in their security stack, while 31% are unsure or confused by vendor claims. Nearly 65% believe generative AI is mainly used in cybersecurity, though it’s only useful for identifying phishing emails. This highlights a gap between user expectations and vendor delivery, with too much focus on generative AI.

Key findings include:

  • Executives and managers report higher understanding than practitioners.
  • Larger organizations have better understanding due to greater specialization.

As AI evolves, vendors are rapidly introducing new solutions faster than practitioners can learn to use them. There's a strong need for greater vendor transparency and more education for users to maximize the technology's value.

To help ease confusion around AI technologies in cybersecurity, Darktrace has released the CISO’s Guide to Cyber AI, a comprehensive white paper that categorizes the different applications of AI in cybersecurity. Download the white paper here.

Do security professionals believe generative AI alone is enough to stop zero-day threats?

No! 86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.

This consensus spans all geographies, organization sizes, and roles, though executives are slightly less likely to agree. Asia-Pacific participants agree more, while U.S. participants agree less.

Despite expecting generative AI to have the most impact, respondents recognize its limited security use cases and its need to work alongside other AI types. This highlights the necessity for vendor transparency and varied AI approaches for effective security across threat prevention, detection, and response.

Stakeholders must understand how AI solutions work to ensure they offer advanced, rather than outdated, threat detection methods. The survey shows awareness that old methods are insufficient.

To access the full report, click here.

About the author
The Darktrace Community