Cyber attacks are relentless and ever-evolving. Learn how cyber-criminals are using AI to augment their attacks at every stage of the kill chain.
Overview
The mind of an experienced and dedicated cyber-criminal works like that of an entrepreneur: the relentless pursuit of profit guides every move they make. At each step of the journey towards their objective, the same questions are asked: How can I minimize my time and resources? How can I mitigate risk? What measures will return the best results?
Incorporating this ‘enterprise’ model into the cyber-criminal framework explains why attackers are turning to new technology to maximize efficiency, and why a report from Forrester earlier this year revealed that 88% of security leaders now consider the nefarious use of AI in cyber activity to be inevitable. Over half of the respondents to that same survey foresee AI-driven attacks manifesting themselves to the public within the next twelve months, or think they are already occurring.
AI has already achieved breakthroughs in fields such as healthcare, facial recognition, voice assistance and many others. In the current cat-and-mouse game of cyber security, defenders have started to accept that augmenting their defenses with AI is necessary, with over 3,500 organizations using machine learning to protect their digital environments. But we have to be ready for the moment attackers themselves use open-source AI technology available today to supercharge their attacks.
Enhancing the attack life cycle
To a cyber-criminal ring, the benefits of leveraging AI in their attacks are at least four-fold:
It gives them an understanding of context
It helps to scale up operations
It makes attribution and detection harder
It ultimately increases their profitability
To demonstrate how each of these factors surfaces, we can break down the life cycle of a typical data exfiltration attempt, telling the story of how AI can augment the attacker at every stage of the campaign.
Reconnaissance: CAPTCHA breaker
Intrusion: Shellphish and SNAP_R
C2 establishment: FirstOrder and unsupervised clustering algorithm
Privilege escalation: CeWL and neural network
Lateral movement: MITRE CALDERA
Mission accomplished: Yahoo NSFW
Figure 1: The ‘AI toolbox’ attackers use to augment their attacks
Stage 1: Reconnaissance
In seeking to garner trust and make inroads into an organization, automated chatbots would first interact with employees via social media, leveraging profile pictures of non-existent people created by AI instead of re-using actual human photos. Once the chatbots have gained the trust of the victims at the target organization, the human attackers can gain valuable intelligence about its employees, while CAPTCHA-breakers are used for automated reconnaissance on the organization’s public-facing web pages.
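To make the reconnaissance step more concrete, the sketch below shows the kind of trivially automatable harvesting that follows once a CAPTCHA-breaker has opened up public pages: extracting names, roles, and email addresses for later targeting. The HTML snippet, names, and addresses are invented for illustration; this is not any specific tool’s implementation.

```python
import re

# Invented snippet standing in for a scraped "Our team" page
html = """
<div class="team">
  <p>Jane Doe, Head of Finance - jane.doe@target-example.com</p>
  <p>Sam Lee, IT Administrator - sam.lee@target-example.com</p>
</div>
"""

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
NAME_ROLE_RE = re.compile(r"<p>([^,]+), ([^-<]+) -")

# Pair each (name, role) match with the email address found on the same line order
contacts = [
    {"name": name.strip(), "role": role.strip(), "email": email}
    for (name, role), email in zip(NAME_ROLE_RE.findall(html), EMAIL_RE.findall(html))
]
print(contacts)
```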
Forrester estimates that AI-enabled ‘deep fakes’ will cost businesses a quarter of a billion dollars in losses in 2020.
Stage 2: Intrusion
This intelligence would then be used to craft convincing spear phishing attacks, whilst an adapted version of SNAP_R can be leveraged to create realistic tweets at scale – targeting several key employees. The tweets either trick the user into downloading malicious documents, or contain links to servers which facilitate exploit-kit attacks.
An autonomous vulnerability fuzzing engine based on Shellphish would be constantly crawling the victim’s perimeter – internet-facing servers and websites – and trying to find new vulnerabilities for an initial foothold.
Stage 3: Command and control
A popular hacking framework, Empire, already allows attackers to ‘blend in’ with regular business operations by restricting command and control traffic to periods of peak activity. An agent that uses some form of automated decision-making engine might not even require command and control traffic to move laterally. Eliminating the need for command and control traffic drastically reduces the detection surface of existing malware.
Stage 4: Privilege escalation
At this stage, a password crawler like CeWL could collect target-specific keywords from internal websites and feed them into a pre-trained neural network, essentially creating hundreds of realistic permutations of contextualized passwords at machine speed. These can be entered automatically in periodic bursts so as not to alert the security team or trigger resets.
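As a rough illustration of the concept, the sketch below turns crawled keywords into contextualized password candidates using hand-written mutation rules. The keywords are hypothetical, and the neural-network approach described above would learn these patterns from leaked password corpora rather than hard-coding them.

```python
from itertools import product

# Keywords a CeWL-style crawl might pull from an intranet page (hypothetical examples)
keywords = ["acme", "falcon", "zurich"]
years = ["2023", "2024"]
suffixes = ["!", "#", "123"]

def mutate(word: str):
    """Yield simple case and leet-speak variations of a keyword."""
    yield word.lower()
    yield word.capitalize()
    yield word.capitalize().replace("a", "@").replace("o", "0").replace("i", "1")

# Combine every keyword variation with common year/suffix patterns
candidates = {
    f"{base}{year}{suffix}"
    for word in keywords
    for base in mutate(word)
    for year, suffix in product(years, suffixes)
}

print(f"{len(candidates)} candidate passwords, e.g. {sorted(candidates)[:3]}")
```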
Stage 5: Lateral movement
Moving laterally and harvesting accounts and credentials involves identifying the optimal paths to accomplish the mission while minimizing intrusion time. Parts of the attack planning can be accelerated with concepts such as those in the CALDERA framework, which uses automated planning methods from AI. This would greatly reduce the time required to reach the final destination.
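The planning problem itself can be reduced to a toy example: given a reachability graph of hosts and harvested credentials, find the fewest-hop route to the objective. The sketch below uses an invented network and plain breadth-first search; CALDERA’s planners reason over a much richer model of abilities and facts, but the idea of automatically searching for an optimal path is the same.

```python
from collections import deque

# Hypothetical reachability graph: host -> hosts reachable with harvested credentials
network = {
    "workstation-07": ["fileserver-02", "workstation-12"],
    "workstation-12": ["fileserver-02"],
    "fileserver-02": ["dc-01"],
    "dc-01": ["erp-db-01"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search for the fewest-hop path between two hosts."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path(network, "workstation-07", "erp-db-01"))
# ['workstation-07', 'fileserver-02', 'dc-01', 'erp-db-01']
```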
Stage 6: Data exfiltration
It is in this final stage that the role of offensive AI is most apparent. Instead of running a costly post-intrusion analysis operation and sifting through gigabytes of data, the attackers can leverage a neural network that pre-selects only relevant material for exfiltration. Because the network is pre-trained, it has a basic understanding of what constitutes valuable material and flags it for immediate exfiltration. Such a network could be based on something like Yahoo’s open-source project for content recognition.
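A minimal sketch of the pre-selection idea is shown below, using a keyword heuristic over invented documents; the approach described above would substitute a trained content-recognition model for the simple scoring function.

```python
# Minimal sketch of content-based pre-selection: score documents and keep only
# those that look valuable. Documents and terms are invented for illustration.
VALUABLE_TERMS = {"confidential", "salary", "merger", "source code", "passport"}

documents = {
    "meeting_notes.txt": "Weekly sync, nothing confidential discussed.",
    "q3_board_pack.txt": "Confidential: merger timeline and salary bands attached.",
    "lunch_menu.txt": "Soup of the day is tomato.",
}

def score(text: str) -> int:
    """Count how many valuable terms appear in a document."""
    text = text.lower()
    return sum(term in text for term in VALUABLE_TERMS)

selected = [name for name, text in documents.items() if score(text) >= 2]
print("Flagged for exfiltration review:", selected)
```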
Conclusion
Today’s attacks still require several humans behind the keyboard making guesses about the sorts of methods that will be most effective in their target network – it’s this human element that often allows defenders to neutralize attacks.
Offensive AI will make detecting and responding to attacks far more difficult. Open-source research and projects exist today that can be leveraged to augment every phase of the attack lifecycle, meaning the speed, scale, and contextualization of attacks will increase exponentially. Traditional security controls are already struggling to detect attacks that have never been seen before in the wild, be it malware without known signatures, new command and control domains, or individualized spear phishing emails. There is no chance traditional tools will cope with future attacks as offensive AI becomes the norm and easier than ever to put into practice.
To stay ahead of this next wave of attacks, AI is becoming a necessary part of the defender’s stack, as no matter how well-trained or how well-staffed, humans alone will no longer be able to keep up. Hundreds of organizations are already using Autonomous Response to fight back against new strains of ransomware, insider threats, previously unknown techniques, tools and procedures, and many other threats. Cyber AI technology allows human responders to take stock and strategize from behind the front line. A new age in cyber defense is just beginning, and the effect of AI on this battleground is already proving fundamental.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Author
Max Heinemeyer
Global Field CISO
Max is a cyber security expert with over a decade of experience in the field, specializing in a wide range of areas such as Penetration Testing, Red-Teaming, SIEM and SOC consulting and hunting Advanced Persistent Threat (APT) groups. At Darktrace, Max is closely involved with Darktrace’s strategic customers & prospects. He works with the R&D team at Darktrace, shaping research into new AI innovations and their various defensive and offensive applications. Max’s insights are regularly featured in international media outlets such as the BBC, Forbes and WIRED. Max holds an MSc from the University of Duisburg-Essen and a BSc from the Cooperative State University Stuttgart in International Business Information Systems.
Darktrace's Detection of State-Linked ShadowPad Malware
An integral part of cybersecurity is anomaly detection, which involves identifying unusual patterns or behaviors in network traffic that could indicate malicious activity, such as an intrusion. However, attribution remains one of the ever-present challenges in cybersecurity: the process of accurately identifying and tracing an attack back to a specific threat actor or group.
Given the complexity of digital networks and the sophistication of attackers, who often use proxy servers, botnets, false flags, and other techniques to disguise their origin, pinpointing the exact source of a cyberattack is an arduous task. Darktrace’s strategy is rooted in the belief that identifying behavioral anomalies is crucial for detecting both known and novel threat actor campaigns.
The ShadowPad cluster
Between July 2024 and November 2024, Darktrace observed a cluster of activity threads sharing notable similarities. The threads began with a malicious actor using compromised user credentials to log in to the target organization's Check Point Remote Access virtual private network (VPN) from an attacker-controlled remote device named 'DESKTOP-O82ILGG'. In one case, the IP from which the initial login was carried out was observed to be an ExpressVPN IP address, 194.5.83[.]25. After logging in, the actor gained access to service account credentials, likely via exploitation of an information disclosure vulnerability affecting Check Point Security Gateway devices. Recent reporting suggests this could represent exploitation of CVE-2024-24919 [27,28]. The actor then used these compromised service account credentials to move laterally over RDP and SMB, with files related to the modular backdoor, ShadowPad, being delivered to the ‘C:\PerfLogs\’ directory of targeted internal systems. ShadowPad was seen communicating with its command-and-control (C2) infrastructure, 158.247.199[.]185 (dscriy.chtq[.]net), via both HTTPS traffic and DNS tunneling, with subdomains of the domain ‘cybaq.chtq[.]net’ being used in the compromised devices’ TXT DNS queries.
Figure 1: Darktrace’s Advanced Search data showing the VPN-connected device initiating RDP connections to a domain controller (DC). The device subsequently distributes likely ShadowPad-related payloads and makes DRSGetNCChanges requests to a second DC.
Figure 2: Event Log data showing a DC making DNS queries for subdomains of ‘cbaq.chtq[.]net’ to 158.247.199[.]185 after receiving SMB and RDP connections from the VPN-connected device, DESKTOP-O82ILGG.
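Defenders do not need prior knowledge of this infrastructure to notice the pattern. The sketch below, which runs over invented DNS log records and placeholder domains, flags a device issuing repeated TXT queries with high-entropy subdomains under a single parent domain, one simple heuristic for the tunneling behavior described above. It is illustrative only and is not how Darktrace’s detection models work.

```python
import math
from collections import defaultdict

def entropy(label: str) -> float:
    """Shannon entropy of a DNS label, a rough proxy for encoded data."""
    counts = defaultdict(int)
    for ch in label:
        counts[ch] += 1
    return -sum((n / len(label)) * math.log2(n / len(label)) for n in counts.values())

# Invented (device, query_type, fqdn) records standing in for parsed DNS logs
queries = [
    ("dc-01", "TXT", "aGVsbG8d9f3k1.c2relay.example"),
    ("dc-01", "TXT", "c2VjcmV0x7q0z.c2relay.example"),
    ("dc-01", "TXT", "ZGF0YWV4Zmls9.c2relay.example"),
    ("ws-12", "A", "www.example.com"),
]

# Group entropy scores of TXT-query subdomains by (device, parent domain)
txt_by_parent = defaultdict(list)
for device, qtype, fqdn in queries:
    subdomain, _, parent = fqdn.partition(".")
    if qtype == "TXT":
        txt_by_parent[(device, parent)].append(entropy(subdomain))

for (device, parent), scores in txt_by_parent.items():
    if len(scores) >= 3 and sum(scores) / len(scores) > 3.0:
        print(f"Possible DNS tunneling: {device} issuing high-entropy TXT queries under {parent}")
```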
Additional cases of ShadowPad were observed across Darktrace’s customer base in 2024. In some cases, C2 infrastructure shared with the cluster discussed above was observed, with dscriy.chtq[.]net and cybaq.chtq[.]net both involved; however, no other common features were identified. These ShadowPad infections were observed between April and November 2024, affecting customers across multiple regions and sectors. Darktrace’s observations align with multiple other public reports that fit the timeframe of this campaign.
Darktrace has also observed other cases of ShadowPad without common infrastructure since September 2024, suggesting the use of this tool by additional threat actors.
The data theft thread
One of the Darktrace customers impacted by the ShadowPad cluster highlighted above was a European manufacturer. A distinct thread of activity occurred within this organization’s network several months after the ShadowPad intrusion, in October 2024.
The thread involved the internal distribution of highly masqueraded executable files via Server Message Block (SMB) and Windows Management Instrumentation (WMI), the targeted collection of sensitive information from an internal server, and the exfiltration of the collected information to a web of likely compromised sites. This thread of activity therefore consisted of three phases: lateral movement, collection, and exfiltration.
The lateral movement phase began when an internal user device used an administrative credential to distribute files named ‘ProgramData\Oracle\java.log’ and 'ProgramData\Oracle\duxwfnfo' to the c$ share on another internal system.
Figure 3: Darktrace model alert highlighting an SMB write of a file named ‘ProgramData\Oracle\java.log’ to the c$ share on another device.
Over the next few days, Darktrace detected several other internal systems using administrative credentials to upload files with the following names to the c$ share on internal systems:
ProgramData\Adobe\ARM\webservices.dll
ProgramData\Adobe\ARM\wksprt.exe
ProgramData\Oracle\Java\wksprt.exe
ProgramData\Oracle\Java\webservices.dll
ProgramData\Microsoft\DRM\wksprt.exe
ProgramData\Microsoft\DRM\webservices.dll
ProgramData\Abletech\Client\webservices.dll
ProgramData\Abletech\Client\client.exe
ProgramData\Adobe\ARM\rzrmxrwfvp
ProgramData\3Dconnexion\3DxWare\3DxWare.exe
ProgramData\3Dconnexion\3DxWare\webservices.dll
ProgramData\IDMComp\UltraCompare\updater.exe
ProgramData\IDMComp\UltraCompare\webservices.dll
ProgramData\IDMComp\UltraCompare\imtrqjsaqmm
Figure 4: Cyber AI Analyst highlighting an SMB write of a file named ‘ProgramData\Adobe\ARM\webservices.dll’ to the c$ share on an internal system.
The threat actor appears to have abused WMI over Microsoft RPC (MS-RPC) to execute the distributed payloads, as evidenced by the ExecMethod requests to the IWbemServices RPC interface that immediately followed the devices’ SMB uploads.
Figure 5: Cyber AI Analyst data highlighting a thread of activity starting with an SMB data upload followed by ExecMethod requests.
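The pairing of an SMB write of an executable with an ExecMethod call moments later is itself a useful detection signal. The sketch below correlates the two over invented, simplified event records; hostnames, timestamps, and file paths are placeholders, and the logic is only a minimal illustration of the correlation idea.

```python
from datetime import datetime, timedelta

# Invented, simplified event records: (time, source, destination, event_type, detail)
events = [
    (datetime(2024, 10, 3, 9, 15, 2), "ws-07", "srv-04", "smb_write", r"ProgramData\Adobe\ARM\wksprt.exe"),
    (datetime(2024, 10, 3, 9, 15, 9), "ws-07", "srv-04", "rpc_call", "IWbemServices::ExecMethod"),
    (datetime(2024, 10, 3, 11, 2, 0), "ws-11", "srv-09", "smb_write", r"Finance\report.xlsx"),
]

WINDOW = timedelta(minutes=5)

# Executable/DLL writes over SMB, and WMI ExecMethod calls over RPC
writes = [e for e in events if e[3] == "smb_write" and e[4].lower().endswith((".exe", ".dll"))]
execs = [e for e in events if e[3] == "rpc_call" and "ExecMethod" in e[4]]

# Flag a write that is followed shortly afterwards by ExecMethod on the same source/destination pair
for w_time, src, dst, _, path in writes:
    for e_time, e_src, e_dst, _, _ in execs:
        if src == e_src and dst == e_dst and timedelta(0) <= e_time - w_time <= WINDOW:
            print(f"Suspicious: {src} wrote {path} to {dst} then invoked ExecMethod within {WINDOW}")
```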
Several of the devices involved in these lateral movement activities, both on the source and destination side, were subsequently seen using administrative credentials to download tens of GBs of sensitive data over SMB from a specially selected server. The data gathering stage of the threat sequence indicates that the threat actor had a comprehensive understanding of the organization’s system architecture and had precise objectives for the information they sought to extract.
Immediately after collecting data from the targeted server, devices went on to exfiltrate the stolen data to multiple sites. Several other likely compromised sites appear to have been used as general C2 infrastructure for this intrusion. The sites used by the threat actor for C2 and data exfiltration purport to belong to companies offering a variety of services, ranging from consultancy to web design.
Figure 6: Screenshot of one of the likely compromised sites used in the intrusion.
At least 16 sites were identified as being likely data exfiltration or C2 sites used by this threat actor in their operation against this organization. The fact that the actor had such a wide web of compromised sites at their disposal suggests that they were well-resourced and highly prepared.
Figure 7: Darktrace model alert highlighting an internal device slowly exfiltrating data to the external endpoint, yasuconsulting[.]com.
Figure 8: Darktrace model alert highlighting an internal device downloading nearly 1 GB of data from an internal system just before uploading a similar volume of data to another suspicious endpoint, www.tunemmuhendislik[.]com
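The download-then-upload pattern highlighted in Figure 8 can be approximated with a simple volume correlation: for each device, compare the bytes read internally over SMB with the bytes sent externally shortly afterwards. The sketch below uses invented transfer records, placeholder endpoints, and arbitrary thresholds; it illustrates the heuristic, not Darktrace’s models.

```python
from datetime import datetime, timedelta

# Invented transfer records: (time, device, direction, peer, bytes)
transfers = [
    (datetime(2024, 10, 20, 2, 10), "ws-07", "internal_download", "srv-files-01", 950_000_000),
    (datetime(2024, 10, 20, 2, 55), "ws-07", "external_upload", "suspicious-site.example", 910_000_000),
    (datetime(2024, 10, 20, 9, 0), "ws-11", "external_upload", "cdn.example.com", 5_000_000),
]

WINDOW = timedelta(hours=2)
MIN_BYTES = 500_000_000   # ignore small internal transfers
SIMILARITY = 0.8          # upload must be at least 80% of the preceding download

downloads = [t for t in transfers if t[2] == "internal_download" and t[4] >= MIN_BYTES]
uploads = [t for t in transfers if t[2] == "external_upload"]

# Flag a large internal download followed soon after by a similarly sized external upload
for d_time, device, _, source, d_bytes in downloads:
    for u_time, u_device, _, dest, u_bytes in uploads:
        if (device == u_device and timedelta(0) <= u_time - d_time <= WINDOW
                and u_bytes >= SIMILARITY * d_bytes):
            print(f"Possible staged exfiltration: {device} pulled {d_bytes} bytes from "
                  f"{source} then sent {u_bytes} bytes to {dest}")
```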
Cyber AI Analyst spotlight
Figure 9: Cyber AI Analyst identifying and piecing together the various steps of a ShadowPad intrusion.
Figure 10: Cyber AI Analyst Incident identifying and piecing together the various steps of the data theft activity.
As shown in the figures above, Cyber AI Analyst’s ability to thread together the different steps of these attack chains is worth highlighting.
In the ShadowPad attack chains, Cyber AI Analyst was able to identify SMB writes from the VPN subnet to the DC, and the C2 connections from the DC. It was also able to weave together this activity into a single thread representing the attacker’s progression.
Similarly, in the data exfiltration attack chain, Cyber AI Analyst identified and connected multiple types of lateral movement over SMB and WMI and external C2 communication to various external endpoints, linking them in a single, connected incident.
These Cyber AI Analyst actions enabled a quicker understanding of the threat actor’s sequence of events and, in some cases, faster containment.
Attribution puzzle
Publicly shared research into ShadowPad indicates that it is predominantly used as a backdoor in People’s Republic of China (PRC)-sponsored espionage operations [5][6][7][8][9][10]. Most publicly reported intrusions involving ShadowPad are attributed to the China-based threat actor, APT41 [11][12]. Furthermore, Google Threat Intelligence Group (GTIG) recently shared their assessment that ShadowPad usage is restricted to clusters associated with APT41 [13]. Interestingly, however, there have also been public reports of ShadowPad usage in unattributed intrusions [5].
The data theft activity that later occurred in the same Darktrace customer network as one of these ShadowPad compromises appeared to be the targeted collection and exfiltration of sensitive data. Such an objective indicates the activity may have been part of a state-sponsored operation. The tactics, techniques, and procedures (TTPs), artifacts, and C2 infrastructure observed in the data theft thread appear to resemble activity seen in previous Democratic People’s Republic of Korea (DPRK)-linked intrusion activities [15] [16] [17] [18] [19].
The distribution of payloads to the following directory locations appears to be a relatively common behavior in DPRK-sponsored intrusions.
Observed examples:
C:\ProgramData\Oracle\Java\
C:\ProgramData\Adobe\ARM\
C:\ProgramData\Microsoft\DRM\
C:\ProgramData\Abletech\Client\
C:\ProgramData\IDMComp\UltraCompare\
C:\ProgramData\3Dconnexion\3DxWare\
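For defenders doing host-side triage, the directory list above translates into a very simple check: look for unexpected PE files in those paths. The sketch below is a minimal example of that idea, not a substitute for proper EDR coverage; legitimate software also writes to some of these locations, so any hits are leads to review rather than verdicts.

```python
import os

# Directories reportedly used for payload staging in the intrusions described above
STAGING_DIRS = [
    r"C:\ProgramData\Oracle\Java",
    r"C:\ProgramData\Adobe\ARM",
    r"C:\ProgramData\Microsoft\DRM",
    r"C:\ProgramData\Abletech\Client",
    r"C:\ProgramData\IDMComp\UltraCompare",
    r"C:\ProgramData\3Dconnexion\3DxWare",
]

SUSPICIOUS_EXTENSIONS = (".exe", ".dll")

# List executables and DLLs found in each staging directory for manual review
for directory in STAGING_DIRS:
    if not os.path.isdir(directory):
        continue
    for name in os.listdir(directory):
        if name.lower().endswith(SUSPICIOUS_EXTENSIONS):
            print(f"Review: {os.path.join(directory, name)}")
```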
Additionally, the likely compromised websites observed in the data theft thread, along with some of the target URI patterns seen in the C2 communications to these sites, resemble those seen in previously reported DPRK-linked intrusion activities.
No clear evidence was found to link the ShadowPad compromise to the subsequent data theft activity that was observed on the network of the manufacturing customer. It should be noted, however, that no clear signs of initial access were found for the data theft thread – this could suggest the ShadowPad intrusion itself represents the initial point of entry that ultimately led to data exfiltration.
Motivation-wise, it seems plausible that the data theft thread was part of a DPRK-sponsored operation. The DPRK is known to pursue targets that could fulfil its national security goals and had been publicly reported as active in the months prior to this intrusion [21]. Furthermore, the timing of the data theft aligns with the ratification of the mutual defense treaty between the DPRK and Russia and the activities subsequently alleged [20].
Darktrace assesses with medium confidence that a nation-state actor, most likely the DPRK, was responsible. This assessment is based on our investigation, in which the threat actor applied considerable resources, patience, obfuscation, and evasiveness, combined with external reporting, collaboration with the wider cyber community, analysis of the attacker’s motivation against the geopolitical timeline, and undisclosed intelligence.
Conclusion
When state-linked cyber activity occurs within an organization’s environment, previously unseen C2 infrastructure and advanced evasion techniques will likely be used. State-linked cyber actors, through their resources and patience, are able to bypass most detection methods, leaving anomaly-based methods as a last line of defense.
Two threads of activity were observed within Darktrace’s customer base over the last year: The first operation involved the abuse of Check Point VPN credentials to log in remotely to organizations’ networks, followed by the distribution of ShadowPad to an internal domain controller. The second operation involved highly targeted data exfiltration from the network of one of the customers impacted by the previously mentioned ShadowPad activity.
Although definitive attribution remains unresolved, both the ShadowPad and data exfiltration activities were detected by Darktrace’s Self-Learning AI, with Cyber AI Analyst playing a significant role in identifying and piecing together the various steps of the intrusion activities.
Credit to Sam Lister (R&D Detection Analyst), Emma Foulger (Principal Cyber Analyst), Nathaniel Jones (VP), and the Darktrace Threat Research team.
Appendices
Darktrace / NETWORK model alerts
User / New Admin Credentials on Client
Anomalous Connection / Unusual Admin SMB Session
Compliance / SMB Drive Write
Device / Anomalous SMB Followed By Multiple Model Breaches
Survey findings: AI Cyber Threats Are a Reality, and People Are Acting Now
Artificial intelligence is changing the cybersecurity field as fast as any other, both on the offensive and defensive side. We surveyed over 1,500 cybersecurity professionals from around the world to uncover their attitudes, understanding, and priorities when it comes to AI cybersecurity in 2025. Our full report, unearthing some telling trends, is out now.
Nearly 74% of participants say AI-powered threats are a major challenge for their organization, and 90% expect these threats to have a significant impact over the next one to two years, a slight increase from last year. These statistics highlight that AI is not just an emerging risk but a present and evolving one.
As attackers harness AI to automate and scale their operations, security teams must adapt just as quickly. Organizations that fail to prioritize AI-specific security measures risk falling behind, making proactive defense strategies more critical than ever.
Some of the most pressing AI-driven cyber threats include:
AI-powered social engineering: Attackers are leveraging AI to craft highly personalized and convincing phishing emails, making them harder to detect and more likely to bypass traditional defenses.
More advanced attacks at speed and scale: AI lowers the barrier for less skilled threat actors, allowing them to launch sophisticated attacks with minimal effort.
Attacks targeting AI systems: Cybercriminals are increasingly going after AI itself, compromising machine learning models, tampering with training data, and exploiting vulnerabilities in AI-driven applications and APIs.
Safe and secure use of AI
AI is having an effect on the cyber-threat landscape, but it is also starting to impact every aspect of a business, from marketing to HR to operations. The accessibility of AI tools improves employees’ workflows, but it also poses risks such as data privacy violations, shadow AI, and violations of industry regulations.
How are security practitioners accommodating this uptick in AI use across the business?
Among survey participants, 45% of security practitioners say they have already established a policy on the safe and secure use of AI, and around 50% are in discussions to do so.
While almost all participants acknowledge that this is a topic that needs to be addressed, the gap between discussion and execution could underscore a need for greater insight, stronger leadership commitment, and adaptable security frameworks to keep pace with AI advancements in the workplace. The most popular actions taken are:
Implemented security controls to prevent unwanted exposure of corporate data when using AI technology (67%)
Implemented security controls to protect against other threats/risks associated with using AI technology (62%)
This year specifically, we see further action being taken with the implementation of security controls, training, and oversight.
For a more detailed breakdown that includes results based on industry and organizational size, download the full report here.
AI threats are rising, but security teams still face major challenges
78% of CISOs say AI-powered cyber-threats are already having a significant impact on their organization, a 5% increase from last year.
While cyber professionals feel more prepared for AI-powered threats than they did 12 months ago, 45% still say their organization is not adequately prepared, down from 60% last year.
Despite this optimism, key challenges remain, including:
A shortage of personnel to manage tools and alerts
Gaps in knowledge and skills related to AI-driven countermeasures
Confidence in traditional security tools vs. new AI-based tools
This year, 73% of survey participants expressed confidence in their security team’s proficiency in using AI within their tool stack, marking an increase from the previous year.
However, only 50% of participants have confidence in traditional cybersecurity tools to detect and block AI-powered threats. In contrast, 75% of participants are confident in AI-powered security solutions for detecting and blocking such threats and attacks.
As leading organizations continue to implement and optimize their use of AI, they are incorporating it into an increasing number of workflows. This growing familiarity with AI is likely to boost the confidence levels of practitioners even further.
The data indicates a clear trend towards greater reliance on AI-powered security solutions over traditional tools. As organizations become more adept at integrating AI into their operations, their confidence in these advanced technologies grows.
This shift underscores the importance of staying current with AI advancements and ensuring that security teams are well-trained in utilizing these tools effectively. The increasing confidence in AI-driven solutions reflects their potential to enhance cybersecurity measures and better protect against sophisticated threats.
The full report for Darktrace’s State of AI Cybersecurity is out now. Download the paper to dig deeper into these trends, and see how results differ by industry, region, organization size, and job title.