October 30, 2023

Exploring AI Threats: Package Hallucination Attacks

Learn how malicious actors exploit errors in generative AI tools to launch package hallucination attacks. Read how Darktrace products detect and prevent these threats!
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Charlotte Thompson
Cyber Analyst
Written by
Tiana Kelly
Deputy Team Lead, London & Cyber Analyst

AI tools open doors for threat actors

On November 30, 2022, OpenAI, an artificial intelligence (AI) research and development company, launched ChatGPT, a free conversational language generation model. The launch was the culmination of development ongoing since 2018 and represented the latest innovation in the generative AI boom, making generative AI tools accessible to the general population for the first time.

ChatGPT is estimated to currently have at least 100 million users, and in August 2023 the site reached 1.43 billion visits [1]. Darktrace data indicated that, as of March 2023, 74% of active customer environments have employees using generative AI tools in the workplace [2].

However, with new tools come new opportunities for threat actors to exploit and use them maliciously, expanding their arsenal.

Much consideration has been given to mitigating the impacts of the increased linguistic complexity in social engineering and phishing attacks resulting from generative AI tool use, with Darktrace observing a 135% increase in ‘novel social engineering attacks’ across thousands of active Darktrace/Email™ customers from January to February 2023, corresponding with the widespread adoption of ChatGPT and its peers [3].

Less overall consideration, however, has been given to impacts stemming from errors intrinsic to generative AI tools. One of these errors is AI hallucinations.

What is an AI hallucination?

AI “hallucination” is a term which refers to the predictive elements of generative AI and LLMs: it describes an instance in which an AI model gives an unexpected or factually incorrect response that does not align with its machine learning training data [4]. This differs from the regular, intended behavior of an AI model, which should provide a response grounded in the data it was trained upon.

Why are AI hallucinations a problem?

Despite the term suggesting a rare phenomenon, hallucinations occur far more often than users might expect, as the AI models underpinning LLMs are merely predictive and focus on the most probable text or outcome rather than factual accuracy.

Given the widespread use of generative AI tools in the workplace, employees are becoming significantly more likely to encounter an AI hallucination. Furthermore, if these fabricated responses are taken at face value, they could cause significant issues for an organization.

Use of generative AI in software development

Software developers may use generative AI for recommendations on how to optimize their scripts or code, or to find packages to import into their code for various uses. When developers ask an LLM for help with a specific piece of code or a specific problem, the answer will often point to a third-party package. Some of these recommendations may themselves be AI hallucinations: the suggested package may never have been published, or, more accurately, may not have been published prior to the cut-off date of the model's training data. If a non-existent package is suggested frequently and developers copy the accompanying code snippet wholesale, their projects and organizations may be left vulnerable to attack.
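As a simple precaution, a developer could verify that every package an LLM suggests actually exists on the relevant registry before attempting to install it. The sketch below is a minimal, hypothetical example for Python packages, assuming PyPI's public JSON endpoint (https://pypi.org/pypi/<name>/json) and an illustrative list of suggested names; a missing package is a strong hint the suggestion was hallucinated, while an existing one still needs vetting.

```python
import requests

def package_exists_on_pypi(name: str) -> bool:
    """Return True if a package with this name is published on PyPI."""
    # PyPI's JSON API returns HTTP 200 for published packages and 404 otherwise.
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

# Hypothetical list of package names suggested by a generative AI tool.
suggested = ["requests", "fastapi-jwt-sessionizer"]

for pkg in suggested:
    if package_exists_on_pypi(pkg):
        print(f"{pkg}: published on PyPI -- still review its maintainers and history")
    else:
        print(f"{pkg}: not on PyPI -- possibly hallucinated; an attacker could register this name")
```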

Research conducted by Vulcan revealed the prevalence of AI hallucinations when ChatGPT is asked questions related to coding. After sourcing a sample of commonly asked coding questions from Stack Overflow, a question-and-answer website for programmers, researchers queried ChatGPT (in the context of Node.js and Python) and reviewed its responses. In 20% of the responses provided by ChatGPT pertaining to Node.js at least one un-published package was included, whilst the figure sat at around 35% for Python [4].

Hallucinations can be unpredictable, but would-be attackers are able to find packages to create by asking generative AI tools generic questions and checking whether the suggested packages exist already. As such, attacks using this vector are unlikely to target specific organizations, instead posing more of a widespread threat to users of generative AI tools.

Malicious packages as attack vectors

Although AI hallucinations can be unpredictable, and responses given by generative AI tools may not always be consistent, malicious actors are able to discover AI hallucinations by adopting the approach used by Vulcan. This allows hallucinated packages to be used as attack vectors. Once a malicious actor has discovered a hallucination of an un-published package, they are able to create a package with the same name and include a malicious payload, before publishing it. This is known as a malicious package.

Malicious packages could also be recommended by generative AI tools in the form of pre-existing packages. A user may be recommended a package that had previously been confirmed to contain malicious content, or a package that is no longer maintained and, therefore, is more vulnerable to hijack by malicious actors.

In such scenarios it is not necessary to manipulate the training data (data poisoning) to achieve the desired outcome for the malicious actor, thus a complex and time-consuming attack phase can easily be bypassed.

An unsuspecting software developer may incorporate a malicious package into their code, rendering it harmful. Deployment of this code could then result in compromise and escalation into a full-blown cyber-attack.

Figure 1: Flow diagram depicting the initial stages of an AI Package Hallucination Attack.

For providers of Software-as-a-Service (SaaS) products, this attack vector may represent an even greater risk. Such organizations may have a higher proportion of employed software developers than other organizations of comparable size. A threat actor, therefore, could utilize this attack vector as part of a supply chain attack, whereby a malicious payload becomes incorporated into trusted software and is then distributed to multiple customers. This type of attack could have severe consequences including data loss, the downtime of critical systems, and reputational damage.

How could Darktrace detect an AI Package Hallucination Attack?

In June 2023, Darktrace introduced a range of DETECT™ and RESPOND™ models designed to identify the use of generative AI tools within customer environments and to autonomously perform inhibitive actions in response to such detections. These models trigger based on connections to endpoints associated with generative AI tools; as such, Darktrace’s detection of an AI Package Hallucination Attack would likely begin with a breach of one of the following DETECT models:

  • Compliance / Anomalous Upload to Generative AI
  • Compliance / Beaconing to Rare Generative AI and Generative AI
  • Compliance / Generative AI

Should generative AI tool use not be permitted by an organization, the Darktrace RESPOND model ‘Antigena / Network / Compliance / Antigena Generative AI Block’ can be activated to autonomously block connections to endpoints associated with generative AI, thus preventing an AI Package Hallucination attack before it can take hold.

Once a malicious package has been recommended, it may be downloaded from GitHub, a platform and cloud-based service used to store and manage code. Darktrace DETECT is able to identify when a device has performed a download from an open-source repository such as GitHub using the following models:

  • Device / Anomalous GitHub Download
  • Device / Anomalous Script Download Followed By Additional Packages

The goal the malicious package was designed to fulfil will determine the next stages of the attack. Due to their highly flexible nature, AI package hallucinations could be used as an attack vector to deliver a large variety of different malware types.

As GitHub is a commonly used service by software developers and IT professionals alike, traditional security tools may not alert customer security teams to such GitHub downloads, meaning malicious downloads may go undetected. Darktrace’s anomaly-based approach to threat detection, however, enables it to recognize subtle deviations in a device’s pre-established pattern of life which may be indicative of an emerging attack.

Subsequent anomalous activity representing the possible progression of the kill chain as part of an AI Package Hallucination Attack could then trigger an Enhanced Monitoring model. Enhanced Monitoring models are high-fidelity indicators of potential malicious activity that are investigated by the Darktrace analyst team as part of the Proactive Threat Notification (PTN) service offered by the Darktrace Security Operation Center (SOC).

Conclusion

Employees are often considered the first line of defense in cyber security; this is particularly true in the face of an AI Package Hallucination Attack.

As the use of generative AI becomes more accessible and an increasingly prevalent tool in an attacker’s toolbox, organizations will benefit from implementing company-wide policies to define expectations surrounding the use of such tools. It is simple, yet critical, for example, for employees to fact check responses provided to them by generative AI tools. All packages recommended by generative AI should also be checked by reviewing non-generated data from either external third-party or internal sources. It is also good practice to adopt caution when downloading packages with very few downloads as it could indicate the package is untrustworthy or malicious.
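One way to apply this advice in practice is to review a recommended package's registry metadata before installing it. The following sketch is a minimal example, assuming Python packages and PyPI's JSON API; it flags names whose first release is very recent or whose release history is minimal, and the thresholds are illustrative rather than prescriptive.

```python
import requests
from datetime import datetime, timezone

def release_timestamps(name: str) -> list:
    """Return sorted upload timestamps for all files of a PyPI package."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    resp.raise_for_status()
    stamps = []
    for files in resp.json().get("releases", {}).values():
        for f in files:
            # upload_time_iso_8601 looks like "2023-06-01T12:34:56.000000Z"
            iso = f["upload_time_iso_8601"].replace("Z", "+00:00")
            stamps.append(datetime.fromisoformat(iso))
    return sorted(stamps)

def looks_risky(name: str, min_age_days: int = 90, min_files: int = 2) -> bool:
    """Flag packages that are brand new or have almost no release history."""
    stamps = release_timestamps(name)
    if not stamps:
        return True  # nothing has ever been uploaded under this name
    age_days = (datetime.now(timezone.utc) - stamps[0]).days
    return age_days < min_age_days or len(stamps) < min_files

print(looks_risky("requests"))  # a long-established package -> False
```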

As of September 2023, ChatGPT Plus and Enterprise users were able to use the tool to browse the internet, expanding the data ChatGPT can access beyond the previous training data cut-off of September 2021 [5]. This feature will be expanded to all users soon [6]. ChatGPT providing up-to-date responses could prompt the evolution of this attack vector, allowing attackers to publish malicious packages which could subsequently be recommended by ChatGPT.

It is inevitable that a greater embrace of AI tools in the workplace will be seen in the coming years as the AI technology advances and existing tools become less novel and more familiar. By fighting fire with fire, using AI technology to identify AI usage, Darktrace is uniquely placed to detect and take preventative action against malicious actors capitalizing on the AI boom.

Credit to Charlotte Thompson, Cyber Analyst, and Tiana Kelly, Deputy Team Lead, London & Cyber Analyst.

References

[1] https://seo.ai/blog/chatgpt-user-statistics-facts

[2] https://darktrace.com/news/darktrace-addresses-generative-ai-concerns

[3] https://darktrace.com/news/darktrace-email-defends-organizations-against-evolving-cyber-threat-landscape

[4] https://vulcan.io/blog/ai-hallucinations-package-risk?nab=1&utm_referrer=https%3A%2F%2Fwww.google.com%2F

[5] https://twitter.com/OpenAI/status/1707077710047216095

[6] https://www.reuters.com/technology/openai-says-chatgpt-can-now-browse-internet-2023-09-27/


May 6, 2025

Combatting the Top Three Sources of Risk in the Cloud


With cloud computing, organizations are storing data like intellectual property, trade secrets, Personally Identifiable Information (PII), proprietary code and statistics, and other sensitive information in the cloud. If this data were to be accessed by malicious actors, it could incur financial loss, reputational damage, legal liabilities, and business disruption.

Last year, data breaches involving solely public cloud deployments were the most expensive type of data breach, costing an average of $5.17 million USD, a 13.1% increase from the year before.

So, as cloud usage continues to grow, the teams in charge of protecting these deployments must understand the associated cybersecurity risks.

What are cloud risks?

Cloud threats come in many forms, and one of the key categories is cloud risk. These risks arise from challenges in implementing and maintaining cloud infrastructure, which can expose the organization to potential damage, loss, and attacks.

There are three major types of cloud risks:

1. Misconfigurations

As organizations struggle with complex cloud environments, misconfiguration is one of the leading causes of cloud security incidents. These risks occur when cloud settings leave gaps between cloud security solutions and expose data and services to unauthorized access. If discovered by a threat actor, a misconfiguration can be exploited to allow infiltration, lateral movement, escalation, and damage.

With the scale and dynamism of cloud infrastructure and the complexity of hybrid and multi-cloud deployments, security teams face a major challenge in exerting the required visibility and control to identify misconfigurations before they are exploited.

Common causes of misconfiguration include skill shortages, outdated practices, and manual workflows. For example, potential misconfigurations can occur around firewall zones, isolated file systems, and mount systems, which all require specialized skill to set up and diligent monitoring to maintain.
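As an illustration of what an automated misconfiguration check can look like, the sketch below uses the AWS boto3 SDK to flag security group rules that are open to the entire internet. It assumes read-only AWS credentials are already configured and is a minimal example under those assumptions, not a complete cloud audit.

```python
import boto3

# Minimal audit sketch: flag security group rules exposing ports to 0.0.0.0/0.
ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                # FromPort/ToPort are absent when the rule covers all protocols.
                print(
                    f"{sg['GroupId']} ({sg.get('GroupName', '')}) allows "
                    f"{rule.get('IpProtocol')} {rule.get('FromPort', 'all')}-{rule.get('ToPort', 'all')} "
                    "from 0.0.0.0/0"
                )
```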

2. Identity and Access Management (IAM) failures

IAM has only increased in importance with the rise of cloud computing and remote working. It allows security teams to control which users can and cannot access sensitive data, applications, and other resources.

Cybersecurity professionals ranked IAM skills as the second most important security skill to have, just behind general cloud and application security.

There are four parts to IAM: authentication, authorization, administration, and auditing and reporting. Within these, there are a lot of subcomponents as well, including but not limited to Single Sign-On (SSO), Two-Factor Authentication (2FA), Multi-Factor Authentication (MFA), and Role-Based Access Control (RBAC).

Security teams are faced with the challenge of allowing enough access for employees, contractors, vendors, and partners to complete their jobs while restricting enough to maintain security. They may struggle to track what users are doing across the cloud, apps, and on-premises servers.

When IAM is misconfigured, it increases the attack surface and can leave accounts with access to resources they do not need to perform their intended roles. This type of risk creates the possibility for threat actors or compromised accounts to gain access to sensitive company data and escalate privileges in cloud environments. It can also allow malicious insiders and users who accidentally violate data protection regulations to cause greater damage.
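To make this risk concrete, the following sketch, again using boto3 and assuming suitable read-only credentials, lists customer-managed IAM policies that allow every action on every resource. Such wildcard grants are a common source of excessive privilege; the check shown is a simplified example rather than a full IAM review.

```python
import boto3

iam = boto3.client("iam")

def policy_has_wildcards(document: dict) -> bool:
    """True if any Allow statement grants Action '*' on Resource '*'."""
    statements = document.get("Statement", [])
    if isinstance(statements, dict):
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions and "*" in resources:
            return True
    return False

# Review customer-managed policies for over-broad grants.
for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
    for pol in page["Policies"]:
        document = iam.get_policy_version(
            PolicyArn=pol["Arn"], VersionId=pol["DefaultVersionId"]
        )["PolicyVersion"]["Document"]
        if policy_has_wildcards(document):
            print(f"Over-permissive policy: {pol['PolicyName']} ({pol['Arn']})")
```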

3. Cross-domain threats

The complexity of hybrid and cloud environments can be exploited by attacks that cross multiple domains, such as traditional network environments, identity systems, SaaS platforms, and cloud environments. These attacks are difficult to detect and mitigate, especially when a security posture is siloed or fragmented.  

Some attack types inherently involve multiple domains, like lateral movement and supply chain attacks, which target both on-premises and cloud networks.  

Challenges in securing against cross-domain threats often come from a lack of unified visibility. If a security team does not have unified visibility across the organization’s domains, gaps between various infrastructures and the teams that manage them can leave organizations vulnerable.

Adopting AI cybersecurity tools to reduce cloud risk

For security teams to defend against misconfigurations, IAM failures, and cross-domain threats, they require a combination of enhanced visibility into cloud assets and architectures, better automation, and more advanced analytics. These capabilities can be achieved with AI-powered cybersecurity tools.

Such tools use AI and automation to help teams maintain a clear view of all their assets and activities and consistently enforce security policies.

Darktrace / CLOUD is a Cloud Detection and Response (CDR) solution that makes cloud security accessible to all security teams and SOCs by using AI to identify and correct misconfigurations and other cloud risks in public, hybrid, and multi-cloud environments.

It provides real-time, dynamic architectural modeling, which gives SecOps and DevOps teams a unified view of cloud infrastructures to enhance collaboration and reveal possible misconfigurations and other cloud risks. It continuously evaluates architecture changes and monitors real-time activity, providing audit-ready traceability and proactive risk management.

Figure 1: Real-time visibility into cloud assets and architectures built from network, configuration, and identity and access roles. In this unified view, Darktrace / CLOUD reveals possible misconfigurations and risk paths.

Darktrace / CLOUD also offers attack path modeling for the cloud. It can identify exposed assets and highlight internal attack paths, giving a dynamic view of the riskiest paths across cloud environments, network environments, and the routes between them – enabling security teams to prioritize based on unique business risk and address gaps to prevent future attacks.

Darktrace’s Self-Learning AI ensures continuous cloud resilience, helping teams move from reactive to proactive defense.


About the author
Pallavi Singh
Product Marketing Manager, OT Security & Compliance

May 2, 2025

SocGholish: From loader and C2 activity to RansomHub deployment


Over the past year, a clear pattern has emerged across the threat landscape: ransomware operations are increasingly relying on compartmentalized affiliate models. In these models, initial access brokers (IABs) [6], malware loaders, and post-exploitation operators work together.

This role specialization has given rise to a new generation of loader campaigns. Threat actors increasingly employ loader operators to quietly establish footholds on the target network; these operators then hand off access to ransomware affiliates. One loader that continues to feature prominently in such campaigns is SocGholish.

What is SocGholish?

SocGholish is a loader malware that has been utilized since at least 2017 [7].  It has long been associated with fake browser updates and JavaScript-based delivery methods on infected websites.

Threat actors often target outdated or poorly secured CMS-based websites like WordPress. Through unpatched plugins, or even remote code execution flaws, they inject malicious JavaScript into the site’s HTML, templates, or external JS resources [8]. Historically, SocGholish has functioned as a first-stage malware loader, ultimately leading to the deployment of Cobalt Strike beacons [9] and facilitating persistent access to corporate environments. More recently, multiple security vendors have reported that infections involving SocGholish frequently lead to the deployment of RansomHub ransomware [3] [5].

This blog explores multiple instances within Darktrace's customer base where SocGholish deployment led to subsequent network compromises. Investigations revealed indicators of compromise (IoCs) similar to those identified by external security researchers, along with variations in attacker behavior post-deployment. Key innovations in post-compromise activities include credential access tactics targeting authentication mechanisms, particularly through the abuse of legacy protocols like WebDAV and SCF file interactions over SMB.

Initial access and execution

Since January 2025, Darktrace’s Threat Research team observed multiple cases in which threat actors leveraged the SocGholish loader for initial access. Malicious actors commonly deliver SocGholish by compromising legitimate websites and injecting malicious scripts into the HTML of the affected site. When a visitor lands on an infected site, they are typically redirected to a fake browser update page, tricking them into downloading a ZIP file containing a JavaScript-based loader [1] [2]. In one case, a targeted user appears to have visited the compromised website garagebevents[.]com (IP: 35.203.175[.]30), from which around 10 MB of data was downloaded.

Figure 1: Device Event Log showing connections to the compromised website, followed by connections to the identified Keitaro TDS instances.

Within milliseconds of the connection being established, the user’s device initiated several HTTPS sessions over destination port 443 to the external endpoint 176.53.147[.]97, linked to the following Keitaro TDS domains:

  • packedbrick[.]com
  • rednosehorse[.]com
  • blackshelter[.]org
  • blacksaltys[.]com

To evade detection, SocGholish uses highly obfuscated code and relies on traffic distribution systems (TDS) [3].  TDS is a tool used in digital and affiliate marketing to manage and distribute incoming web traffic based on predefined rules. More specifically, Keitaro is a premium self-hosted TDS frequently utilized by attackers as a payload repository for malicious scripts following redirects from compromised sites. In the previously noted example, it appears that the device connected to the compromised website, which then retrieved JavaScript code from the aforementioned Keitaro TDS domains. The script served by those instances led to connections to the endpoint virtual.urban-orthodontics[.]com (IP: 185.76.79[.]50), successfully completing SocGholish’s distribution.

Figure 2: Advanced Search showing connections to the compromised website, followed by those to the identified Keitaro TDS instances.

Persistence

During some investigations, Darktrace researchers observed compromised devices initiating HTTPS connections to the endpoint files.pythonhosted[.]org (IP: 151.101.1[.]223), suggesting Python package downloads. External researchers have previously noted how attackers use Python-based backdoors to maintain access on compromised endpoints following initial access via SocGholish [5].

Credential access and lateral movement

Credential access – external

Darktrace researchers observed some variation in kill chain activity following initial access and foothold establishment, particularly in credential access techniques. In one such case, an affected device attempted to contact the rare external endpoint 161.35.56[.]33 using the Web Distributed Authoring and Versioning (WebDAV) protocol. WebDAV is an extension of the HTTP protocol that allows users to collaboratively edit and manage files on remote web servers. WebDAV enables remote shares to be mounted over HTTP or HTTPS, similar to how SMB operates, but using web-based protocols. Windows supports WebDAV natively, which means a UNC path pointing to an HTTP or HTTPS resource can trigger system-level behavior such as authentication.

In this specific case, the system initiated outbound connections using the ‘Microsoft-WebDAV-MiniRedir/10.0.19045’ user agent, targeting the URI path /s on the external endpoint 161.35.56[.]33. During these requests, the host attempted to initiate NTLM authentication and even SMB sessions over the web, both of which failed. Despite the session failures, these attempts indicate a form of forced authentication. Forced authentication exploits a default behavior in Windows whereby, upon encountering a UNC path, the system will automatically try to authenticate to the resource using NTLM – often without any user interaction. Although no files were directly retrieved, the WebDAV server was still likely able to retrieve the user’s NTLM hash during the session establishment requests, which the adversary can later use to crack the password offline.
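Defenders can hunt for this behavior retrospectively. The sketch below is a minimal example, assuming proxy or web logs have been exported to CSV with illustrative column names ('src_ip', 'dst_ip', 'user_agent'); it flags outbound requests from the Windows WebDAV mini-redirector user agent to internet-facing hosts.

```python
import csv
import ipaddress

WEBDAV_UA = "Microsoft-WebDAV-MiniRedir"

def is_external(ip: str) -> bool:
    """True for publicly routable addresses (excludes RFC1918, loopback, etc.)."""
    return ipaddress.ip_address(ip).is_global

with open("proxy_logs.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        if WEBDAV_UA in row["user_agent"] and is_external(row["dst_ip"]):
            # The Windows WebDAV mini-redirector reaching arbitrary internet hosts
            # is rare and may indicate a forced NTLM authentication attempt.
            print(f"Possible forced-auth attempt: {row['src_ip']} -> {row['dst_ip']}")
```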

Credential access – internal

In another investigated incident, Darktrace observed a related technique used for credential access and lateral movement. This time, the infected host uploaded a file named ‘Thumbs.scf’ to multiple internal SMB network shares. Shell Command File (SCF) is a legacy Windows file format used primarily for Windows Explorer shortcuts. These files contain instructions for rendering icons or triggering shell commands, and they can be executed implicitly when a user simply opens a folder containing the file – no clicks required.

The ‘Thumbs.scf’ file dropped by the attacker was crafted to exploit this behavior. Its contents included a [Shell] section with the Command=2 directive and an IconFile path pointing to a remote UNC resource on the same external endpoint, 161.35.56[.]33, seen in the previously described case – specifically, ‘\\161.35.56[.]33\share\icon.ico’. When a user on the internal network navigates to the folder containing the SCF file, their system automatically attempts to load the icon. In doing so, the system issues a request to the specified UNC path, which again prompts Windows to initiate NTLM authentication.

This pattern of activity implies that the attacker leveraged passive internal exposure: users who simply browsed a compromised share would unknowingly send their NTLM hashes to an external attacker-controlled host. Unlike the WebDAV approach, which required initiating outbound communication from the infected host, this SCF method relies on internal users interacting with poisoned folders.

Figure 3: Contents of the file 'Thumbs.scf' showing the UNC resource hosted on the external endpoint.
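Security teams can also sweep their own file shares for poisoned SCF files. The sketch below is illustrative only: it assumes the share is mounted at a hypothetical local path and flags any .scf file whose IconFile entry points at a UNC path on an IP address.

```python
import re
from pathlib import Path

# Hypothetical mount point for an SMB share to be swept.
SHARE_ROOT = Path("/mnt/corp_share")

# Match lines like: IconFile=\\161.35.56.33\share\icon.ico
UNC_ICON = re.compile(r"IconFile\s*=\s*\\\\(\d{1,3}(?:\.\d{1,3}){3})\\", re.IGNORECASE)

for scf in SHARE_ROOT.rglob("*.scf"):
    text = scf.read_text(errors="ignore")
    match = UNC_ICON.search(text)
    if match:
        # An icon path on a remote IP forces Windows Explorer to attempt
        # NTLM authentication to that host when the folder is opened.
        print(f"Suspicious SCF: {scf} -> IconFile host {match.group(1)}")
```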

Command-and-control

Following initial compromise, affected devices would then attempt outbound connections using the TLS/SSL protocol over port 443 to different sets of command-and-control (C2) infrastructure associated with SocGholish. The malware frequently uses obfuscated JavaScript loaders to initiate its infection chain, and once dropped, it communicates back to its infrastructure over standard web protocols, typically HTTPS over port 443. However, this set of connections would precede a second set of outbound connections, this time to infrastructure linked to RansomHub affiliates, possibly facilitated by the deployed Python-based backdoor.

Connectivity to RansomHub infrastructure relied on defense evasion tactics such as port-hopping. The idea behind port-hopping is to disguise C2 traffic by avoiding consistent patterns that might be caught by firewalls and intrusion detection systems. By cycling through ephemeral ports, the malware increases its chances of slipping past basic egress filtering or network monitoring rules that only scrutinize common web traffic ports like 443 or 80. Darktrace analysts identified systems connecting to destination ports such as 2308, 2311, 2313 and more – all on the same destination IP address associated with the RansomHub C2 environment.

Figure 4: Advanced Search connection logs showing connections over destination ports that change rapidly.
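A simple way to surface this kind of port-hopping in exported connection logs is to count the distinct destination ports each internal host uses against a single external IP. The sketch below assumes a CSV export with illustrative column names ('src_ip', 'dst_ip', 'dst_port') and uses an arbitrary threshold.

```python
import csv
from collections import defaultdict

# Count distinct destination ports per (source, destination) pair.
ports_seen = defaultdict(set)

with open("connections.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        ports_seen[(row["src_ip"], row["dst_ip"])].add(int(row["dst_port"]))

# Flag pairs that cycle through an unusually large number of ports (threshold is arbitrary).
for (src, dst), ports in sorted(ports_seen.items(), key=lambda kv: -len(kv[1])):
    if len(ports) >= 5:
        print(f"{src} -> {dst}: {len(ports)} distinct destination ports, e.g. {sorted(ports)[:10]}")
```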

Conclusion

Since the beginning of 2025, Darktrace analysts have identified a campaign whereby ransomware affiliates leveraged SocGholish to establish network access in victim environments. This access enabled multiple distinct sets of post-exploitation activity. Credential access played a key role, with affiliates abusing WebDAV and NTLM over SMB to trigger authentication attempts. The attackers were also able to plant SCF files internally to expose NTLM hashes from users browsing shared folders. These techniques point to deliberate efforts at early lateral movement and foothold expansion before deploying ransomware. As ransomware groups continue to refine their playbooks and work more closely with sophisticated loaders, it becomes critical to track not just who is involved, but how access is being established, expanded, and weaponized.

Credit to Christina Kreza (Cyber Analyst) and Adam Potter (Senior Cyber Analyst)

Appendices

Darktrace / NETWORK model alerts

  • Anomalous Connection / SMB Enumeration
  • Anomalous Connection / Multiple Connections to New External TCP Port
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Anomalous Connection / New User Agent to IP Without Hostname
  • Compliance / External Windows Communication
  • Compliance / SMB Drive Write
  • Compromise / Large DNS Volume for Suspicious Domain
  • Compromise / Large Number of Suspicious Failed Connections
  • Device / Anonymous NTLM Logins
  • Device / External Network Scan
  • Device / New or Uncommon SMB Named Pipe
  • Device / SMB Lateral Movement
  • Device / Suspicious SMB Activity
  • Unusual Activity / Unusual External Activity
  • User / Kerberos Username Brute Force

MITRE ATT&CK mapping

  • Credential Access – T1187 Forced Authentication
  • Credential Access – T1110 Brute Force
  • Command and Control – T1071.001 Web Protocols
  • Command and Control – T1571 Non-Standard Port
  • Discovery – T1083 File and Directory Discovery
  • Discovery – T1018 Remote System Discovery
  • Discovery – T1046 Network Service Discovery
  • Discovery – T1135 Network Share Discovery
  • Execution – T1059.007 JavaScript
  • Lateral Movement – T1021.002 SMB/Windows Admin Shares
  • Resource Development – T1608.004 Drive-by Target

List of indicators of compromise (IoCs)

  • garagebevents[.]com – 35.203.175[.]30 – Possibly compromised website
  • packedbrick[.]com – 176.53.147[.]97 – Keitaro TDS domain used for SocGholish delivery
  • rednosehorse[.]com – 176.53.147[.]97 – Keitaro TDS domain used for SocGholish delivery
  • blackshelter[.]org – 176.53.147[.]97 – Keitaro TDS domain used for SocGholish delivery
  • blacksaltys[.]com – 176.53.147[.]97 – Keitaro TDS domain used for SocGholish delivery
  • virtual.urban-orthodontics[.]com – 185.76.79[.]50
  • msbdz.crm.bestintownpro[.]com – 166.88.182[.]126 – SocGholish C2
  • 185.174.101[.]240 – RansomHub Python C2
  • 185.174.101[.]69 – RansomHub Python C2
  • 108.181.182[.]143 – RansomHub Python C2

References

[1] https://www.checkpoint.com/cyber-hub/threat-prevention/what-is-malware/socgholish-malware/

[2] https://intel471.com/blog/threat-hunting-case-study-socgholish

[3] https://www.trendmicro.com/en_us/research/25/c/socgholishs-intrusion-techniques-facilitate-distribution-of-rans.html

[4] https://www.proofpoint.com/us/blog/threat-insight/update-fake-updates-two-new-actors-and-new-mac-malware

[5] https://www.guidepointsecurity.com/blog/ransomhub-affiliate-leverage-python-based-backdoor/

[6] https://www.cybereason.com/blog/how-do-initial-access-brokers-enable-ransomware-attacks

[7] https://attack.mitre.org/software/S1124/

[8] https://expel.com/blog/incident-report-spotting-socgholish-wordpress-injection/

[9] https://www.esentire.com/blog/socgholish-to-cobalt-strike-in-10-minutes

About the author
Christina Kreza
Cyber Analyst