October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks. Read how Darktrace products detect and prevent these threats!

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch of ChatGPT was the culmination of development ongoing since 2018; it represented the latest innovation in the ongoing generative AI boom and made generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools themselves. One such error is the AI hallucination.

What is an AI hallucination?

AI “hallucination” is a term that refers to instances when the predictive elements of a generative AI or large language model (LLM) produce an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular and intended behavior of an AI model, which should provide a response grounded in the data it was trained upon.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations are far more common than one might expect, as the AI models used in LLMs are merely predictive and focus on the most probable text or outcome rather than on factual accuracy.
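To make the underlying mechanic concrete, the toy sketch below (purely illustrative, and not a representation of any specific LLM) shows a single next-token step: candidate tokens are scored, the scores are converted to probabilities, and the most probable candidate is emitted. Nothing in this step checks whether the chosen token corresponds to something real, which is why a plausible-sounding but non-existent package name can be produced. The vocabulary and scores are invented for the example.

```python
import numpy as np

# Toy next-token step. The "vocabulary" and logits are invented for illustration;
# a real model would score tens of thousands of tokens learned from training data.
vocab = ["requests", "numpy", "fastapi-utils-pro", "pandas"]  # "fastapi-utils-pro" is a made-up name
logits = np.array([2.1, 1.4, 2.3, 0.7])                       # hypothetical model scores

probs = np.exp(logits) / np.exp(logits).sum()  # softmax: scores -> probabilities
next_token = vocab[int(np.argmax(probs))]      # pick the most probable candidate

print({v: round(float(p), 3) for v, p in zip(vocab, probs)})
print("model suggests:", next_token)  # most probable, not necessarily real or correct
```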

Given the widespread use of generative AI tools in the workplace, employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. When developers ask an LLM for recommendations on a specific piece of code or how to solve a specific problem, the response will often point them to a third-party package. Packages recommended by generative AI tools, however, can themselves be hallucinations: the packages may not have been published, or, more accurately, may not have been published prior to the date at which the model’s training data halts. If such a non-existent package is suggested consistently, and a developer copies the generated code snippet wholesale, the resulting software may be left vulnerable to attack.

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. At least one unpublished package was included in 20% of the responses pertaining to Node.js, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucinated, unpublished package, they can create a package with the same name, include a malicious payload, and publish it. This is known as a malicious package.
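The check at the heart of this technique, for researchers, attackers, and defenders alike, is simply asking the package registry whether a suggested name has ever been published. As a minimal sketch, the snippet below queries PyPI’s public JSON API for a package name and reports whether it exists; the names passed in are placeholders, and a name that is unpublished today could of course be registered by an attacker tomorrow, which is exactly the risk described above.

```python
import requests

def exists_on_pypi(package_name: str) -> bool:
    """Return True if the name is published on PyPI (HTTP 200), False if it is not (HTTP 404)."""
    resp = requests.get(f"https://pypi.org/pypi/{package_name}/json", timeout=10)
    return resp.status_code == 200

# Placeholder names: one well-known package and one invented, hallucination-style name.
for name in ["requests", "totally-hallucinated-package-xyz"]:
    status = "published" if exists_on_pypi(name) else "NOT published -- verify before use"
    print(f"{name}: {status}")
```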

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments, and to autonomously perform inhibitive actions in response to such detections. These models trigger based on connections to endpoints associated with generative AI tools. As such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with the breaching of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The goal the malicious package was designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a wide variety of malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact-check responses provided to them by generative AI tools. All packages recommended by generative AI should also be verified against non-generated data from external third-party or internal sources. It is also good practice to exercise caution when downloading packages with very few downloads, as this could indicate the package is untrustworthy or malicious.
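One lightweight way to operationalize that advice is to compare any packages an AI assistant suggests against an internally vetted allowlist before anything is installed. The sketch below assumes an organization maintains such a list; the package names, including the unvetted one, are illustrative.

```python
# Hypothetical allowlist of dependencies that have already been reviewed internally.
APPROVED_PACKAGES = {"requests", "numpy", "pandas", "flask"}

def review_suggestions(suggested_packages: list[str]) -> None:
    """Print a simple verdict for each AI-suggested package before it reaches `pip install`."""
    for pkg in suggested_packages:
        if pkg.lower() in APPROVED_PACKAGES:
            print(f"{pkg}: on the approved list")
        else:
            # Unknown names go to human review: check the publisher, release history, and download counts.
            print(f"{pkg}: NOT vetted -- review before installing")

# Packages extracted from an AI-generated code snippet (illustrative names).
review_suggestions(["requests", "flask-auth-helper"])
```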

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst.

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Authors
Charlotte Thompson
Cyber Analyst
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst


March 18, 2025

Darktrace's Detection of State-Linked ShadowPad Malware


An integral part of cybersecurity is anomaly detection, which involves identifying unusual patterns or behaviors in network traffic that could indicate malicious activity, such as a cyber intrusion. However, attribution remains one of the ever-present challenges in cybersecurity. Attribution is the process of accurately identifying and tracing the source of an attack to a specific threat actor or actors.

Given the complexity of digital networks and the sophistication of attackers, who often use proxies or other methods to disguise their origin, pinpointing the exact source of a cyberattack is an arduous task. Threat actors can hide behind proxy servers, botnets, false flags, and other obfuscation techniques. Darktrace’s strategy is rooted in the belief that identifying behavioral anomalies is crucial for uncovering both known and novel threat actor campaigns.

The ShadowPad cluster

Between July 2024 and November 2024, Darktrace observed a cluster of activity threads sharing notable similarities. The threads began with a malicious actor using compromised user credentials to log in to the target organization's Check Point Remote Access virtual private network (VPN) from an attacker-controlled, remote device named 'DESKTOP-O82ILGG'. In one case, the IP from which the initial login was carried out was observed to be an ExpressVPN IP address, 194.5.83[.]25. After logging in, the actor gained access to service account credentials, likely via exploitation of an information disclosure vulnerability affecting Check Point Security Gateway devices. Recent reporting suggests this could represent exploitation of CVE-2024-24919 [27,28]. The actor then used these compromised service account credentials to move laterally over RDP and SMB, with files related to the modular backdoor ShadowPad being delivered to the ‘C:\PerfLogs\’ directory of targeted internal systems. ShadowPad was seen communicating with its command-and-control (C2) infrastructure, 158.247.199[.]185 (dscriy.chtq[.]net), via both HTTPS traffic and DNS tunneling, with subdomains of ‘cybaq.chtq[.]net’ appearing in the compromised devices’ TXT DNS queries.
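DNS tunneling of the kind described above tends to leave a recognizable statistical footprint: a single parent domain receiving a large number of unique, high-entropy subdomain labels in TXT queries. The rough, standalone heuristic below illustrates that idea; it is not Darktrace's detection logic, and the query log it runs on is generated for the example (loosely mirroring the observed cybaq.chtq[.]net pattern).

```python
import hashlib
import math
from collections import defaultdict

def shannon_entropy(label: str) -> float:
    """Character-level Shannon entropy of a DNS label (higher usually means encoded data)."""
    probabilities = [label.count(c) / len(label) for c in set(label)]
    return -sum(p * math.log2(p) for p in probabilities)

def flag_possible_dns_tunneling(txt_query_names, min_unique=50, min_entropy=3.0):
    """Group TXT query names by a naive parent domain and flag parents that receive
    many unique, high-entropy leftmost labels -- a common tunneling footprint."""
    labels_by_parent = defaultdict(set)
    for qname in txt_query_names:
        labels = qname.rstrip(".").split(".")
        if len(labels) > 2:
            parent = ".".join(labels[-2:])           # naive grouping, e.g. "chtq.net"
            labels_by_parent[parent].add(labels[0])  # leftmost label carries the encoded data
    flagged = []
    for parent, subs in labels_by_parent.items():
        avg_entropy = sum(shannon_entropy(s) for s in subs) / len(subs)
        if len(subs) >= min_unique and avg_entropy >= min_entropy:
            flagged.append((parent, len(subs), round(avg_entropy, 2)))
    return flagged

# Illustrative query log; a real deployment would read these names from DNS logs.
queries = [f"{hashlib.sha256(str(n).encode()).hexdigest()[:24]}.cybaq.chtq.net" for n in range(200)]
print(flag_possible_dns_tunneling(queries))
```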

Figure 1: Darktrace’s Advanced Search data showing the VPN-connected device initiating RDP connections to a domain controller (DC). The device subsequently distributes likely ShadowPad-related payloads and makes DRSGetNCChanges requests to a second DC.
Figure 2: Event Log data showing a DC making DNS queries for subdomains of ‘cybaq.chtq[.]net’ to 158.247.199[.]185 after receiving SMB and RDP connections from the VPN-connected device, DESKTOP-O82ILGG.

Darktrace observed these ShadowPad activity threads within the networks of European-based customers in the manufacturing and financial sectors.  One of these intrusions was followed a few months later by likely state-sponsored espionage activity, as detailed in the investigation of the year in Darktrace’s Annual Threat Report 2024.

Related ShadowPad activity

Additional cases of ShadowPad were observed across Darktrace’s customer base in 2024. In some cases, common C2 infrastructure with the cluster discussed above was observed, with dscriy.chtq[.]net and cybaq.chtq[.]net both involved; however, no other common features were identified. These ShadowPad infections were observed between April and November 2024, with customers across multiple regions and sectors affected.  Darktrace’s observations align with multiple other public reports that fit the timeframe of this campaign.

Darktrace has also observed other cases of ShadowPad without common infrastructure since September 2024, suggesting the use of this tool by additional threat actors.

The data theft thread

One of the Darktrace customers impacted by the ShadowPad cluster highlighted above was a European manufacturer. A distinct thread of activity occurred within this organization’s network several months after the ShadowPad intrusion, in October 2024.

The thread involved the internal distribution of highly masqueraded executable files via Server Message Block (SMB) and Windows Management Instrumentation (WMI), the targeted collection of sensitive information from an internal server, and the exfiltration of the collected information to a web of likely compromised sites. This observed thread of activity, therefore, consisted of three phases: lateral movement, collection, and exfiltration.

The lateral movement phase began when an internal user device used an administrative credential to distribute files named ‘ProgramData\Oracle\java.log’ and 'ProgramData\Oracle\duxwfnfo' to the c$ share on another internal system.  

Figure 3: Darktrace model alert highlighting an SMB write of a file named ‘ProgramData\Oracle\java.log’ to the c$ share on another device.

Over the next few days, Darktrace detected several other internal systems using administrative credentials to upload files with the following names to the c$ share on internal systems:

ProgramData\Adobe\ARM\webservices.dll

ProgramData\Adobe\ARM\wksprt.exe

ProgramData\Oracle\Java\wksprt.exe

ProgramData\Oracle\Java\webservices.dll

ProgramData\Microsoft\DRM\wksprt.exe

ProgramData\Microsoft\DRM\webservices.dll

ProgramData\Abletech\Client\webservices.dll

ProgramData\Abletech\Client\client.exe

ProgramData\Adobe\ARM\rzrmxrwfvp

ProgramData\3Dconnexion\3DxWare\3DxWare.exe

ProgramData\3Dconnexion\3DxWare\webservices.dll

ProgramData\IDMComp\UltraCompare\updater.exe

ProgramData\IDMComp\UltraCompare\webservices.dll

ProgramData\IDMComp\UltraCompare\imtrqjsaqmm

Figure 4: Cyber AI Analyst highlighting an SMB write of a file named ‘ProgramData\Adobe\ARM\webservices.dll’ to the c$ share on an internal system.

The threat actor appears to have abused Windows Management Instrumentation (WMI) over Microsoft RPC (MS-RPC) to execute the distributed payloads, as evidenced by the ExecMethod requests to the IWbemServices RPC interface which immediately followed devices’ SMB uploads.

Figure 5: Cyber AI Analyst data highlighting a thread of activity starting with an SMB data upload followed by ExecMethod requests.
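The pairing described above, an SMB write of an executable-looking file to an administrative share followed shortly by a WMI ExecMethod request between the same two hosts, is itself a useful correlation point. The sketch below shows one simplified way to express that correlation over pre-parsed event records; it is not Darktrace's model logic, and the timestamps, hosts, and file paths are illustrative.

```python
from datetime import datetime, timedelta

# Illustrative, pre-parsed events; a real pipeline would derive these from SMB and DCE/RPC logs.
smb_writes = [
    {"ts": datetime(2024, 10, 7, 9, 15), "src": "10.0.1.20", "dst": "10.0.2.5",
     "path": r"c$\ProgramData\Adobe\ARM\wksprt.exe"},
]
wmi_requests = [
    {"ts": datetime(2024, 10, 7, 9, 16), "src": "10.0.1.20", "dst": "10.0.2.5",
     "operation": "ExecMethod"},
]

def correlate_smb_then_wmi(smb_writes, wmi_requests, window=timedelta(minutes=10)):
    """Flag source hosts that write an executable or DLL to an admin share and then
    issue a WMI ExecMethod request to the same destination within a short window."""
    alerts = []
    for write in smb_writes:
        if not write["path"].lower().endswith((".exe", ".dll")):
            continue
        for req in wmi_requests:
            same_pair = (write["src"], write["dst"]) == (req["src"], req["dst"])
            in_window = timedelta(0) <= req["ts"] - write["ts"] <= window
            if req["operation"] == "ExecMethod" and same_pair and in_window:
                alerts.append((write["src"], write["dst"], write["path"], req["ts"].isoformat()))
    return alerts

print(correlate_smb_then_wmi(smb_writes, wmi_requests))
```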

Several of the devices involved in these lateral movement activities, both on the source and destination side, were subsequently seen using administrative credentials to download tens of GBs of sensitive data over SMB from a specially selected server.  The data gathering stage of the threat sequence indicates that the threat actor had a comprehensive understanding of the organization’s system architecture and had precise objectives for the information they sought to extract.

Immediately after collecting data from the targeted server, devices went on to exfiltrate the stolen data to multiple sites. Several other likely compromised sites appear to have been used as general C2 infrastructure for this intrusion activity. The sites used by the threat actor for C2 and data exfiltration purport to belong to companies offering a variety of services, ranging from consultancy to web design.

Figure 6: Screenshot of one of the likely compromised sites used in the intrusion.

At least 16 sites were identified as being likely data exfiltration or C2 sites used by this threat actor in their operation against this organization. The fact that the actor had such a wide web of compromised sites at their disposal suggests that they were well-resourced and highly prepared.  

Figure 7: Darktrace model alert highlighting an internal device slowly exfiltrating data to the external endpoint, yasuconsulting[.]com.
Figure 8: Darktrace model alert highlighting an internal device downloading nearly 1 GB of data from an internal system just before uploading a similar volume of data to another suspicious endpoint, www.tunemmuhendislik[.]com  
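Exfiltration of this kind often surfaces as a simple volumetric asymmetry: a device pulls a large amount of data from an internal server and, shortly afterwards, pushes a comparable amount to an external endpoint it has rarely or never contacted. The rough sketch below captures that idea over illustrative flow summaries; it is not Darktrace's model logic, and the device names, endpoints, and thresholds are invented for the example.

```python
# Illustrative per-device flow summaries for a single time window (bytes).
flows = [
    {"device": "ws-042", "direction": "internal_download", "peer": "fileserver-01", "bytes": 950_000_000},
    {"device": "ws-042", "direction": "external_upload", "peer": "rare-consultancy-site.example", "bytes": 910_000_000},
    {"device": "ws-113", "direction": "external_upload", "peer": "cdn.example", "bytes": 5_000_000},
]

# Hypothetical history of external endpoints each device has contacted before.
KNOWN_EXTERNAL_PEERS = {"ws-042": {"update.vendor.example"}, "ws-113": {"cdn.example"}}

def flag_possible_exfiltration(flows, ratio=0.5, min_bytes=100_000_000):
    """Flag large uploads to previously unseen external endpoints that roughly match
    the volume of data the same device just pulled from internal systems."""
    internal_downloads, external_uploads = {}, []
    for flow in flows:
        if flow["direction"] == "internal_download":
            internal_downloads[flow["device"]] = internal_downloads.get(flow["device"], 0) + flow["bytes"]
        elif flow["direction"] == "external_upload":
            external_uploads.append(flow)
    alerts = []
    for upload in external_uploads:
        new_peer = upload["peer"] not in KNOWN_EXTERNAL_PEERS.get(upload["device"], set())
        pulled = internal_downloads.get(upload["device"], 0)
        if new_peer and upload["bytes"] >= min_bytes and upload["bytes"] >= ratio * pulled:
            alerts.append((upload["device"], upload["peer"], upload["bytes"]))
    return alerts

print(flag_possible_exfiltration(flows))  # expected to flag ws-042 only
```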

Cyber AI Analyst spotlight

Figure 9: Cyber AI Analyst identifying and piecing together the various steps of a ShadowPad intrusion.  
Figure 10: Cyber AI Analyst Incident identifying and piecing together the various steps of the data theft activity.

As shown in the above figures, Cyber AI Analyst’s ability to thread together the different steps of these attack chains is worth highlighting.

In the ShadowPad attack chains, Cyber AI Analyst was able to identify SMB writes from the VPN subnet to the DC, and the C2 connections from the DC. It was also able to weave together this activity into a single thread representing the attacker’s progression.

Similarly, in the data exfiltration attack chain, Cyber AI Analyst identified and connected multiple types of lateral movement over SMB and WMI and external C2 communication to various external endpoints, linking them in a single, connected incident.

These Cyber AI Analyst actions enabled a quicker understanding of the threat actor’s sequence of events and, in some cases, faster containment.

Attribution puzzle

Publicly shared research into ShadowPad indicates that it is predominantly used as a backdoor in People’s Republic of China (PRC)-sponsored espionage operations [5][6][7][8][9][10]. Most publicly reported intrusions involving ShadowPad  are attributed to the China-based threat actor, APT41 [11][12]. Furthermore, Google Threat Intelligence Group (GTIG) recently shared their assessment that ShadowPad usage is restricted to clusters associated with APT41 [13]. Interestingly, however, there have also been public reports of ShadowPad usage in unattributed intrusions [5].

The data theft activity that later occurred in the same Darktrace customer network as one of these ShadowPad compromises appeared to be the targeted collection and exfiltration of sensitive data. Such an objective indicates the activity may have been part of a state-sponsored operation. The tactics, techniques, and procedures (TTPs), artifacts, and C2 infrastructure observed in the data theft thread appear to resemble activity seen in previous Democratic People’s Republic of Korea (DPRK)-linked intrusion activities [15] [16] [17] [18] [19].

The distribution of payloads to the following directory locations appears to be a relatively common behavior in DPRK-sponsored intrusions.

Observed examples:

C:\ProgramData\Oracle\Java\  

C:\ProgramData\Adobe\ARM\  

C:\ProgramData\Microsoft\DRM\  

C:\ProgramData\Abletech\Client\  

C:\ProgramData\IDMComp\UltraCompare\  

C:\ProgramData\3Dconnexion\3DxWare\

Additionally, the likely compromised websites observed in the data theft thread, along with some of the target URI patterns seen in the C2 communications to these sites, resemble those seen in previously reported DPRK-linked intrusion activities.

No clear evidence was found to link the ShadowPad compromise to the subsequent data theft activity that was observed on the network of the manufacturing customer. It should be noted, however, that no clear signs of initial access were found for the data theft thread – this could suggest the ShadowPad intrusion itself represents the initial point of entry that ultimately led to data exfiltration.

Motivation-wise, it seems plausible that the data theft thread was part of a DPRK-sponsored operation. The DPRK is known to pursue targets that could potentially fulfil its national security goals and had been publicly reported as active in the months prior to this intrusion [21]. Furthermore, the timing of the data theft aligns with the ratification of the mutual defense treaty between the DPRK and Russia and the activities subsequently attributed to that partnership [20].

Based on our investigation, including the resources, patience, obfuscation, and evasiveness displayed by the threat actor, combined with external reporting, collaboration with the cyber community, an assessment of the attacker’s motivation against the geopolitical timeline, and undisclosed intelligence, Darktrace assesses with medium confidence that a nation state, likely the DPRK, was responsible.

Conclusion

When state-linked cyber activity occurs within an organization’s environment, previously unseen C2 infrastructure and advanced evasion techniques will likely be used. State-linked cyber actors, through their resources and patience, are able to bypass most detection methods, leaving anomaly-based methods as a last line of defense.

Two threads of activity were observed within Darktrace’s customer base over the last year: The first operation involved the abuse of Check Point VPN credentials to log in remotely to organizations’ networks, followed by the distribution of ShadowPad to an internal domain controller. The second operation involved highly targeted data exfiltration from the network of one of the customers impacted by the previously mentioned ShadowPad activity.

Despite definitive attribution remaining unresolved, both the ShadowPad and data exfiltration activities were detected by Darktrace’s Self-Learning AI, with Cyber AI Analyst playing a significant role in identifying and piecing together the various steps of the intrusion activities.  

Credit to Sam Lister (R&D Detection Analyst), Emma Foulger (Principal Cyber Analyst), Nathaniel Jones (VP), and the Darktrace Threat Research team.

Appendices

Darktrace / NETWORK model alerts

User / New Admin Credentials on Client

Anomalous Connection / Unusual Admin SMB Session

Compliance / SMB Drive Write  

Device / Anomalous SMB Followed By Multiple Model Breaches

Anomalous File / Internal / Unusual SMB Script Write

User / New Admin Credentials on Client  

Anomalous Connection / Unusual Admin SMB Session

Compliance / SMB Drive Write

Device / Anomalous SMB Followed By Multiple Model Breaches

Anomalous File / Internal / Unusual SMB Script Write

Device / New or Uncommon WMI Activity

Unusual Activity / Internal Data Transfer

Anomalous Connection / Download and Upload

Anomalous Server Activity / Rare External from Server

Compromise / Beacon to Young Endpoint

Compromise / Agent Beacon (Short Period)

Anomalous Server Activity / Anomalous External Activity from Critical Network Device

Anomalous Connection / POST to PHP on New External Host

Compromise / Sustained SSL or HTTP Increase

Compromise / Sustained TCP Beaconing Activity To Rare Endpoint

Anomalous Connection / Multiple Failed Connections to Rare Endpoint

Device / Multiple C2 Model Alerts

Anomalous Connection / Data Sent to Rare Domain

Anomalous Connection / Download and Upload

Unusual Activity / Unusual External Data Transfer

Anomalous Connection / Low and Slow Exfiltration

Anomalous Connection / Uncommon 1 GiB Outbound  

MITRE ATT&CK mapping

(Tactic - Technique name (Technique ID))

ShadowPad malware threads

Initial Access - Valid Accounts: Domain Accounts (T1078.002)

Initial Access - External Remote Services (T1133)

Privilege Escalation - Exploitation for Privilege Escalation (T1068)

Privilege Escalation - Valid Accounts: Default Accounts (T1078.001)

Defense Evasion - Masquerading: Match Legitimate Name or Location (T1036.005)

Lateral Movement - Remote Services: Remote Desktop Protocol (T1021.001)

Lateral Movement - Remote Services: SMB/Windows Admin Shares (T1021.002)

Command and Control - Proxy: Internal Proxy (T1090.001)

Command and Control - Application Layer Protocol: Web Protocols (T1071.001)

Command and Control - Encrypted Channel: Asymmetric Cryptography (T1573.002)

Command and Control - Application Layer Protocol: DNS (T1071.004)

Data theft thread

Resource Development - Compromise Infrastructure: Domains (T1584.001)

Privilege Escalation - Valid Accounts: Default Accounts (T1078.001)

Privilege Escalation - Valid Accounts: Domain Accounts (T1078.002)

Execution - Windows Management Instrumentation (T1047)

Defense Evasion - Masquerading: Match Legitimate Name or Location (T1036.005)

Defense Evasion - Obfuscated Files or Information (T1027)

Lateral Movement - Remote Services: SMB/Windows Admin Shares (T1021.002)

Collection - Data from Network Shared Drive (T1039)

Command and Control - Application Layer Protocol: Web Protocols (T1071.001)

Command and Control - Encrypted Channel: Asymmetric Cryptography (T1573.002)

Command and Control - Proxy: External Proxy (T1090.002)

Exfiltration - Exfiltration Over C2 Channel (T1041)

Exfiltration - Data Transfer Size Limits (T1030)

List of indicators of compromise (IoCs)

IP addresses and/or domain names (Mid-high confidence):

ShadowPad thread

- dscriy.chtq[.]net / 158.247.199[.]185 (endpoint of C2 comms)

- cybaq.chtq[.]net (domain name used for DNS tunneling)  

Data theft thread

- yasuconsulting[.]com (45.158.12[.]7)

- hobivan[.]net (94.73.151[.]72)

- mediostresbarbas.com[.]ar (75.102.23[.]3)

- mnmathleague[.]org (185.148.129[.]24)

- goldenborek[.]com (94.138.200[.]40)

- tunemmuhendislik[.]com (94.199.206[.]45)

- anvil.org[.]ph (67.209.121[.]137)

- partnerls[.]pl (5.187.53[.]50)

- angoramedikal[.]com (89.19.29[.]128)

- awork-designs[.]dk (78.46.20[.]225)

- digitweco[.]com (38.54.95[.]190)

- duepunti-studio[.]it (89.46.106[.]61)

- scgestor.com[.]br (108.181.92[.]71)

- lacapannadelsilenzio[.]it (86.107.36[.]15)

- lovetamagotchith[.]com (203.170.190[.]137)

- lieta[.]it (78.46.146[.]147)

File names (Mid-high confidence):

ShadowPad thread:

- perflogs\1.txt

- perflogs\AppLaunch.exe

- perflogs\F4A3E8BE.tmp

- perflogs\mscoree.dll

Data theft thread

- ProgramData\Oracle\java.log

- ProgramData\Oracle\duxwfnfo

- ProgramData\Adobe\ARM\webservices.dll

- ProgramData\Adobe\ARM\wksprt.exe

- ProgramData\Oracle\Java\wksprt.exe

- ProgramData\Oracle\Java\webservices.dll

- ProgramData\Microsoft\DRM\wksprt.exe

- ProgramData\Microsoft\DRM\webservices.dll

- ProgramData\Abletech\Client\webservices.dll

- ProgramData\Abletech\Client\client.exe

- ProgramData\Adobe\ARM\rzrmxrwfvp

- ProgramData\3Dconnexion\3DxWare\3DxWare.exe

- ProgramData\3Dconnexion\3DxWare\webservices.dll

- ProgramData\IDMComp\UltraCompare\updater.exe

- ProgramData\IDMComp\UltraCompare\webservices.dll

- ProgramData\IDMComp\UltraCompare\imtrqjsaqmm

- temp\HousecallLauncher64.exe

Attacker-controlled device hostname (Mid-high confidence)

- DESKTOP-O82ILGG

References  

[1] https://www.kaspersky.com/about/press-releases/shadowpad-how-attackers-hide-backdoor-in-software-used-by-hundreds-of-large-companies-around-the-world  

[2] https://media.kasperskycontenthub.com/wp-content/uploads/sites/43/2017/08/07172148/ShadowPad_technical_description_PDF.pdf

[3] https://blog.avast.com/new-investigations-in-ccleaner-incident-point-to-a-possible-third-stage-that-had-keylogger-capacities

[4] https://securelist.com/operation-shadowhammer-a-high-profile-supply-chain-attack/90380/

[5] https://assets.sentinelone.com/c/Shadowpad?x=P42eqA

[6] https://www.cyfirma.com/research/the-origins-of-apt-41-and-shadowpad-lineage/

[7] https://www.csoonline.com/article/572061/shadowpad-has-become-the-rat-of-choice-for-several-state-sponsored-chinese-apts.html

[8] https://global.ptsecurity.com/analytics/pt-esc-threat-intelligence/shadowpad-new-activity-from-the-winnti-group

[9] https://cymulate.com/threats/shadowpad-privately-sold-malware-espionage-tool/

[10] https://www.secureworks.com/research/shadowpad-malware-analysis

[11] https://blog.talosintelligence.com/chinese-hacking-group-apt41-compromised-taiwanese-government-affiliated-research-institute-with-shadowpad-and-cobaltstrike-2/

[12] https://hackerseye.net/all-blog-items/tails-from-the-shadow-apt-41-injecting-shadowpad-with-sideloading/

[13] https://cloud.google.com/blog/topics/threat-intelligence/scatterbrain-unmasking-poisonplug-obfuscator

[14] https://www.domaintools.com/wp-content/uploads/conceptualizing-a-continuum-of-cyber-threat-attribution.pdf

[15] https://www.nccgroup.com/es/research-blog/north-korea-s-lazarus-their-initial-access-trade-craft-using-social-media-and-social-engineering/  

[16] https://www.microsoft.com/en-us/security/blog/2021/01/28/zinc-attacks-against-security-researchers/

[17] https://www.microsoft.com/en-us/security/blog/2022/09/29/zinc-weaponizing-open-source-software/  

[18] https://www.welivesecurity.com/en/eset-research/lazarus-luring-employees-trojanized-coding-challenges-case-spanish-aerospace-company/  

[19] https://blogs.jpcert.or.jp/en/2021/01/Lazarus_malware2.html  

[20] https://usun.usmission.gov/joint-statement-on-the-unlawful-arms-transfer-by-the-democratic-peoples-republic-of-korea-to-russia/

[21] https://media.defense.gov/2024/Jul/25/2003510137/-1/-1/1/Joint-CSA-North-Korea-Cyber-Espionage-Advance-Military-Nuclear-Programs.PDF  

[22] https://kyivindependent.com/first-north-korean-troops-deployed-to-front-line-in-kursk-oblast-ukraines-military-intelligence-says/

[23] https://www.microsoft.com/en-us/security/blog/2024/12/04/frequent-freeloader-part-i-secret-blizzard-compromising-storm-0156-infrastructure-for-espionage/  

[24] https://www.microsoft.com/en-us/security/blog/2024/12/11/frequent-freeloader-part-ii-russian-actor-secret-blizzard-using-tools-of-other-groups-to-attack-ukraine/  

[25] https://www.sentinelone.com/labs/chamelgang-attacking-critical-infrastructure-with-ransomware/    

[26] https://thehackernews.com/2022/06/state-backed-hackers-using-ransomware.html/  

[27] https://blog.checkpoint.com/security/check-point-research-explains-shadow-pad-nailaolocker-and-its-protection/

[28] https://www.orangecyberdefense.com/global/blog/cert-news/meet-nailaolocker-a-ransomware-distributed-in-europe-by-shadowpad-and-plugx-backdoors

About the author
Sam Lister
SOC Analyst

March 11, 2025

Survey findings: AI Cyber Threats are a Reality, the People are Acting Now


Artificial intelligence is changing the cybersecurity field as fast as any other, both on the offensive and defensive side. We surveyed over 1,500 cybersecurity professionals from around the world to uncover their attitudes, understanding, and priorities when it comes to AI cybersecurity in 2025. Our full report, unearthing some telling trends, is out now.

Download the full report to explore these findings in depth

How is AI impacting the threat landscape?

Figure: State of AI Cybersecurity report graphic showing AI-powered cyber threats having an impact on organizations.

Nearly 74% of participants say AI-powered threats are a major challenge for their organization and 90% expect these threats to have a significant impact over the next one to two years, a slight increase from last year. These statistics highlight that AI is not just an emerging risk but a present and evolving one.

As attackers harness AI to automate and scale their operations, security teams must adapt just as quickly. Organizations that fail to prioritize AI-specific security measures risk falling behind, making proactive defense strategies more critical than ever.

Some of the most pressing AI-driven cyber threats include:

  • AI-powered social engineering: Attackers are leveraging AI to craft highly personalized and convincing phishing emails, making them harder to detect and more likely to bypass traditional defenses.
  • More advanced attacks at speed and scale: AI lowers the barrier for less skilled threat actors, allowing them to launch sophisticated attacks with minimal effort.
  • Attacks targeting AI systems: Cybercriminals are increasingly going after AI itself, compromising machine learning models, tampering with training data, and exploiting vulnerabilities in AI-driven applications and APIs.

Safe and secure use of AI

AI is having an effect on the cyber-threat landscape, but it is also starting to impact every aspect of a business – from marketing to HR to operations. The accessibility of AI tools for employees improves workflows, but it also poses risks such as data privacy violations, shadow AI, and violations of industry regulations.

How are security practitioners accommodating this uptick in AI use across the business?

Among survey participants, 45% of security practitioners say they have already established a policy on the safe and secure use of AI, and around 50% are in discussions to do so.

While almost all participants acknowledge that this is a topic that needs to be addressed, the gap between discussion and execution could underscore a need for greater insight, stronger leadership commitment, and adaptable security frameworks to keep pace with AI advancements in the workplace. The most popular actions taken are:

  1. Implemented security controls to prevent unwanted exposure of corporate data when using AI technology (67%)
  2. Implemented security controls to protect against other threats/risks associated with using AI technology (62%)

This year specifically, we see further action being taken with the implementation of security controls, training, and oversight.

For a more detailed breakdown that includes results based on industry and organizational size, download the full report here.

AI threats are rising, but security teams still face major challenges

78% of CISOs say AI-powered cyber-threats are already having a significant impact on their organization, a 5% increase from last year.

While cyber professionals feel more prepared for AI-powered threats than they did 12 months ago, 45% still say their organization is not adequately prepared, down from 60% last year.

Despite this optimism, key challenges remain, including:

  • A shortage of personnel to manage tools and alerts
  • Gaps in knowledge and skills related to AI-driven countermeasures

Confidence in traditional security tools vs. new AI based tools

This year, 73% of survey participants expressed confidence in their security team’s proficiency in using AI within their tool stack, marking an increase from the previous year.

However, only 50% of participants have confidence in traditional cybersecurity tools to detect and block AI-powered threats. In contrast, 75% of participants are confident in AI-powered security solutions for detecting and blocking such threats and attacks.

As leading organizations continue to implement and optimize their use of AI, they are incorporating it into an increasing number of workflows. This growing familiarity with AI is likely to boost the confidence levels of practitioners even further.

The data indicates a clear trend towards greater reliance on AI-powered security solutions over traditional tools. As organizations become more adept at integrating AI into their operations, their confidence in these advanced technologies grows.

This shift underscores the importance of staying current with AI advancements and ensuring that security teams are well-trained in utilizing these tools effectively. The increasing confidence in AI-driven solutions reflects their potential to enhance cybersecurity measures and better protect against sophisticated threats.

State of AI report

Download the full report to explore these findings in depth

The full report for Darktrace’s State of AI Cybersecurity is out now. Download the paper to dig deeper into these trends, and see how results differ by industry, region, organization size, and job title.  

About the author
The Darktrace Community