How Darktrace Antigena Email Caught A Fearware Email Attack

11 Mar 2020
Darktrace effectively detects and neutralizes fearware attacks that evade gateway security tools. Learn more about how Antigena Email outsmarts cyber-criminals.

The cyber-criminals behind email attacks are well-researched and highly responsive to human behaviors and emotions, often seeking to evoke a specific reaction by leveraging topical information and current news. It’s therefore no surprise that attackers have attempted to latch onto COVID-19 in their latest effort to convince users to open their emails and click on seemingly benign links.

The latest email trend involves attackers who claim to be from the Centers for Disease Control and Prevention, purporting to have emergency information about COVID-19. This is typical of a recent trend we’re calling ‘fearware’: cyber-criminals exploit a collective sense of fear and urgency, and coax users into clicking a malicious attachment or link. While the tactic is common, each campaign contains terms and content that are unique. There are a few patterns in the emails we’ve seen, but none reliably predictable enough to create hard and fast rules that will stop emails with new wording without causing false positives.

For example, looking for the presence of “CDC” in the email sender would easily fail once the emails begin to use new wording, like “WHO”. We’ve also seen mismatches between links and their display text – with display text that reads “https://cdc.gov/[random-path]” while the actual link is a completely arbitrary URL. Looking for a pattern match on this would likely lead to false positives and would serve as a weak indicator at best.
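To make that display-text mismatch concrete, below is a minimal sketch of how such links could be flagged, assuming the HTML body of an email is available as a string. The class and function names are illustrative only and are not part of any Darktrace product; as noted above, a mismatch like this is a weak signal on its own.

```python
# Illustrative sketch: flag links whose visible text looks like a URL but
# whose href points at a different domain. Not a Darktrace API; assumes the
# HTML body of the email has already been extracted as a string.
import re
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collect (href, display_text) pairs from anchor tags."""

    def __init__(self):
        super().__init__()
        self._current_href = None
        self._current_text = []
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._current_href = dict(attrs).get("href", "")
            self._current_text = []

    def handle_data(self, data):
        if self._current_href is not None:
            self._current_text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._current_href is not None:
            self.findings.append((self._current_href, "".join(self._current_text).strip()))
            self._current_href = None


def registered_domain(url_like: str) -> str:
    """Very rough domain extraction; a real system would use a public-suffix library."""
    host = urlparse(url_like if "://" in url_like else "http://" + url_like).hostname or ""
    return ".".join(host.split(".")[-2:])


def mismatched_links(html_body: str):
    """Return links whose display text names one domain but whose href names another."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    url_pattern = re.compile(r"https?://\S+|(?:[\w-]+\.)+[a-z]{2,}", re.I)
    return [(text, href)
            for href, text in auditor.findings
            if url_pattern.search(text) and registered_domain(text) != registered_domain(href)]


if __name__ == "__main__":
    body = '<p>Update: <a href="http://arbitrary-redirect.example">https://cdc.gov/coronavirus</a></p>'
    print(mismatched_links(body))
    # [('https://cdc.gov/coronavirus', 'http://arbitrary-redirect.example')]
```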

The majority of these emails, especially the early ones, passed most of our customers’ existing defenses including Mimecast, Proofpoint, and Microsoft’s ATP, and were approved to be delivered directly to the end user’s inbox. Fortunately, these emails were immediately identified and actioned by Antigena Email, Darktrace’s Autonomous Response technology for the inbox.

Gateways: The Current Approach

Most organizations employ Secure Email Gateways (SEGs), like Mimecast or Proofpoint, which serve as an inline middleman between the email sender and the recipient’s email provider. SEGs have largely become spam-detection engines, as spam is obvious to spot when seen at scale. They can identify low-hanging fruit (i.e. emails easily detectable as malicious), but they fail to detect and respond when attacks become personalized or deviate even slightly from previously seen attacks.

Figure 1: A high-level diagram depicting a Secure Email Gateway’s inline position.

SEGs tend to use lists of ‘known-bad’ IPs, domains, and file hashes to determine an email’s threat level – inherently failing to stop novel attacks when they use IPs, domains, or files which are new and have not yet been triaged or reported as malicious.

When advanced detection methods are used in gateway technologies, such as anomaly detection or machine learning, these are performed after the emails have been delivered, and require significant volumes of near-identical emails to trigger. The end result is very often to take an element from one of these emails and simply deny-list it.

When a SEG can’t make a determination on these factors, it may resort to a technique known as sandboxing, which creates an isolated environment for testing links and attachments seen in emails. Alternatively, it may turn to basic levels of anomaly detection that are inadequate due to their lack of context beyond email data. As for sandboxing, most advanced threats now employ evasion techniques such as an activation time that waits until a certain date before executing: when detonated in the sandbox, the file appears harmless, and the sandbox fails to recognize the sleeping attack waiting within.

Figure 2: This domain was registered only two hours prior to an email we processed.

Taking a sample COVID-19 email seen in a Darktrace customer’s environment, we saw a mix of domains used in what appears to be an attempt to avoid pattern detection. It is improbable that the domains used would have appeared on any list of ‘known-bad’ domains at the time of the first email, as it was received a mere two hours after the domain was registered.
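Domain age is one weak signal that can be computed at processing time. The sketch below uses the third-party python-whois package purely for illustration (an assumption on our part; any WHOIS or RDAP client would do) and is not how Antigena Email works internally.

```python
# Illustrative sketch: treat very recent domain registration as one weak risk
# signal among many. Assumes the third-party "python-whois" package.
from datetime import datetime, timezone

import whois  # pip install python-whois


def domain_age_hours(domain: str):
    record = whois.whois(domain)
    created = record.creation_date
    if isinstance(created, list):          # some registries return several dates
        created = min(created)
    if created is None:
        return None
    if created.tzinfo is None:
        created = created.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - created).total_seconds() / 3600


age = domain_age_hours("example.com")
if age is not None and age < 48:
    print(f"Domain is only {age:.1f} hours old - a weak but useful risk signal")
```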

Figure 3: While other defenses failed to block these emails, Antigena Email immediately marked them as 100% unusual and held them back from delivery.

Antigena Email sits behind all other defenses, meaning we only see emails when those defenses fail to block a malicious email or deem an email safe for delivery. In the above COVID-19 case, the first five emails were marked by Microsoft ATP with a spam confidence level (SCL) of 1, indicating Microsoft had scanned the email and determined it to be clean – so Microsoft took no action whatsoever.
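For reference, Microsoft stamps this verdict into the message headers, so it can be read back off a delivered email. A minimal sketch, assuming delivered messages are available as raw .eml files:

```python
# Illustrative sketch: read Microsoft's spam confidence level (SCL) from a
# delivered message. An SCL of 0 or 1 means the filter judged the mail clean.
from email import policy
from email.parser import BytesParser


def spam_confidence_level(raw_message: bytes):
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    value = msg.get("X-MS-Exchange-Organization-SCL")
    return int(value) if value is not None else None


with open("sample.eml", "rb") as fh:
    scl = spam_confidence_level(fh.read())
print(scl)  # e.g. 1 -> scanned and considered clean, so no action was taken
```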

The Cat and Mouse Game

Cyber-criminals are constantly in flux, quickly moving to outsmart security teams and bypass current defenses. Recognizing email as the easiest entry point into an organization, they are capitalizing on the inadequate detection of existing tools by mass-producing personalized emails through factory-style systems that machine-research, draft, and send with minimal human interaction.

Domains are cheap, proxies are cheap, and morphing a file slightly to change its entire fingerprint is easy – rendering any list of ‘known-bads’ outdated within seconds.

Cyber AI: The New Approach

A new approach is required that relies on business context and an inside-out understanding of a corporation, rather than analyzing emails in isolation.

An Immune System Approach

Darktrace’s core technology uses AI to detect unusual patterns of behavior in the enterprise. The AI is able to do this successfully by following the human immune system’s core principles: develop an innate sense of ‘self’, and use that understanding to detect abnormal activity indicative of a threat.

In order to identify threats across the entire enterprise, the AI is able to understand normal patterns of behavior beyond just the network. This is crucial when working towards a goal of full business understanding. There’s a clear connection between activity in, for example, a SaaS application and a corresponding network event, or an event in the cloud and a corresponding event elsewhere within the business.

There’s an explicit relationship between what people do on their computers and the emails they send and receive. Having the context that a user has just visited a website before receiving an email from the same domain lends credibility to that email: it’s very common to visit a website, subscribe to a mailing list, and then receive an email within a few minutes. Conversely, receiving an email from a brand-new sender, containing a link that nobody in the organization has ever visited, suggests that the link may be malicious and that the email should perhaps be removed from the user’s inbox.
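The sketch below illustrates that idea in the simplest possible terms, combining recent web activity and prior correspondence into a coarse judgement about an inbound sender domain. It is an illustration of the reasoning only, not Darktrace’s implementation, and the data structures are assumptions.

```python
# Illustrative sketch (not Darktrace's implementation): combine two pieces of
# business context - recent web activity and prior correspondence - into a
# coarse judgement about an inbound sender domain.
from datetime import datetime, timedelta


def sender_context(sender_domain: str,
                   received_at: datetime,
                   recent_visits: dict,
                   known_senders: set) -> str:
    """recent_visits maps domains browsed on the network to the last visit time."""
    visited_at = recent_visits.get(sender_domain)
    if visited_at and received_at - visited_at < timedelta(minutes=30):
        return "expected"          # e.g. the user just subscribed to a mailing list
    if sender_domain in known_senders:
        return "established"       # prior correspondence with this domain
    return "no-context"            # brand-new sender, never visited: scrutinise links


now = datetime.utcnow()
print(sender_context("newsletter.example", now,
                     recent_visits={"newsletter.example": now - timedelta(minutes=5)},
                     known_senders=set()))   # -> "expected"
```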

Enterprise-Wide Context

Darktrace’s Antigena Email extends this interplay of data sources to the inbox, providing unique detection capabilities by leveraging full business context to inform email decisions.

The design of Antigena Email provides a fundamental shift in email security – from where the tool sits to how it understands and processes data. Unlike SEGs, which sit inline and process emails only as they first pass through and never again, Antigena Email sits passively, ingesting data that is journaled to it. The technology doesn’t need to wait until a domain is fingerprinted or sandboxed, or until it is associated with a campaign that has a famous name and all the buzz.

Because it does not sit inline, Antigena Email can also re-assess emails continuously, processing them millions of times instead of just once and enabling actions to be taken well after delivery. A seemingly benign email with popular links may become more interesting over time if an event within the enterprise is determined to have originated via an email – for instance, when a trusted site becomes compromised. While Antigena Network mitigates the new threat on the network, Antigena Email neutralizes the emails that contain links associated with those found in the original email.

Figure 4: Antigena Email sits passively off email providers, continuously re-assessing and issuing updated actions as new data is introduced.
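The pattern in Figure 4 can be expressed as a simple loop. The sketch below is hypothetical – none of the function names are real Antigena Email APIs – but it captures the idea of continuously re-scoring already-delivered mail as new enterprise evidence arrives.

```python
# Hypothetical sketch of the re-assessment pattern described above; none of
# these function names are real Antigena Email APIs.
import time


def reassess_forever(delivered_messages, score_message, new_evidence, retract):
    """Keep re-scoring already-delivered mail as new enterprise evidence arrives."""
    while True:
        evidence = new_evidence()            # e.g. a network event tied to a link
        for message in delivered_messages():
            score = score_message(message, evidence)
            if score >= 0.9:                 # threshold is purely illustrative
                retract(message)             # pull the email back out of the inbox
        time.sleep(60)
```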

When an email first arrives, Antigena Email extracts its raw metadata, processes it multiple times at machine speed, and then many millions of times subsequently as new evidence is introduced (typically based on events seen throughout the business). The system corroborates what it is seeing with what it has previously understood to be normal throughout the corporate environment. For example, when domains are extracted from envelope information or links in the email body, they’re compared against the popularity of the domain on the company’s network.

Figure 5: The link above was determined to be 100% rare for the enterprise.

Dissecting the above COVID-19-linked email, we can extract some of the data made available in the Antigena Email user interface to see why Darktrace thought the email was so unusual. The domain in the ‘From’ address is rare – supplemental contextual information derived from data across the customer’s entire digital environment, including network data as well as email. The email’s KCE, KCD, and RCE scores indicate that this was the first time the sender had been seen in any email: there had been no correspondence with the sender in any way, and the email address had never been seen in the body of any email.

Figure 6: KCE, KCD, and RCE scores indicate no sender history with the organization.
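KCE, KCD, and RCE are Darktrace’s own metrics, so the snippet below is only a rough, illustrative analogue: computing ‘first contact’ features from a simple mail history. It is not how the real scores are derived.

```python
# Rough, illustrative analogues of "first time this sender has been seen"
# features; the real KCE/KCD/RCE metrics are Darktrace's own and are not
# computed like this.
def first_contact_features(sender: str, history: list) -> dict:
    """history: prior emails as {'from': ..., 'to': [...], 'body': ...} dicts."""
    domain = sender.split("@")[-1]
    return {
        "known_correspondent": any(m["from"] == sender for m in history),
        "known_domain": any(m["from"].endswith("@" + domain) for m in history),
        "seen_in_any_body": any(sender in m.get("body", "") for m in history),
    }


# Hypothetical sender address, shown against an empty history: every feature is False.
print(first_contact_features("alerts@cdc-gov-info.example", history=[]))
# {'known_correspondent': False, 'known_domain': False, 'seen_in_any_body': False}
```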

Correlating the above, Antigena Email deemed these emails 100% anomalous to the business and immediately removed them from the recipients’ inboxes. The platform did this for the very first email, and every email thereafter – not a single COVID-19-based email got by Antigena Email.

Conclusion

Cyber AI does not distinguish ‘good’ from ‘bad’; rather, it determines whether an event is likely to belong or not. The technology compares data with the learnt patterns of activity in the environment, incorporating the new email (alongside its own scoring of that email) into its understanding of day-to-day context for the organization.

By asking questions like “Does this email appear to belong?” or “Is there an existing relationship between the sender and recipient?”, the AI can accurately discern the threat posed by a given email, and incorporate these findings into future modelling. A model should not be led to think that, just because the corporation has received a higher volume of emails from a specific sender, those emails are all of a sudden normal for the environment. Cyber AI avoids this assumption by weighing human interaction with the emails and domains when deciding what to reincorporate into its modelling: a sender is only treated as normal when there is legitimate correspondence from within the corporation back out to that sender.
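That reasoning can be reduced to a few lines. The sketch below is purely illustrative of the logic just described, not of any real model:

```python
# Illustrative sketch of the reasoning above: inbound volume alone never makes
# a sender "normal"; a legitimate outbound reply from inside the business does,
# and human reports of the mail keep it under scrutiny.
def treat_as_normal(inbound_count: int, outbound_replies: int, user_reported: bool) -> bool:
    if user_reported:
        return False               # humans flagged it - keep scrutinising
    return outbound_replies > 0    # volume of inbound mail is deliberately ignored


print(treat_as_normal(inbound_count=500, outbound_replies=0, user_reported=False))  # False
```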

The inbox has traditionally been the easiest point of entry into an organization. But the fundamental differences in approach offered by Cyber AI drastically increase Antigena Email’s detection capability when compared with gateway tools. Customers with and without email gateways in place have therefore seen a noticeable curbing of their email problem. In the continuous cat-and-mouse game with their adversaries, security teams augmenting their defenses with Cyber AI are finally regaining the advantage.

About the author
Dan Fein
VP, Product

Based in New York, Dan joined Darktrace’s technical team in 2015, helping customers quickly achieve a complete and granular understanding of Darktrace’s product suite. Dan has a particular focus on Darktrace/Email, ensuring that it is effectively deployed in complex digital environments, and works closely with the development, marketing, sales, and technical teams. Dan holds a Bachelor’s degree in Computer Science from New York University.

The State of AI in Cybersecurity: Understanding AI Technologies

24 Jul 2024

About the State of AI Cybersecurity Report

Darktrace surveyed 1,800 CISOs, security leaders, administrators, and practitioners from industries around the globe. Our research was conducted to understand how the adoption of new AI-powered offensive and defensive cybersecurity technologies is being managed by organizations.

This blog continues the conversation from “The State of AI in Cybersecurity: Unveiling Global Insights from 1,800 Security Practitioners”, focusing on security professionals’ understanding of AI technologies in cybersecurity tools.

To download the full report, click here.

How familiar are security professionals with supervised machine learning?

Just 31% of security professionals report that they are “very familiar” with supervised machine learning.

Many participants admitted unfamiliarity with various AI types. Less than one-third felt "very familiar" with the technologies surveyed: only 31% with supervised machine learning and 28% with natural language processing (NLP).

Most participants were "somewhat" familiar, ranging from 46% for supervised machine learning to 36% for generative adversarial networks (GANs). Executives and those in larger organizations reported the highest familiarity.

Combining "very" and "somewhat" familiar responses, 77% had familiarity with supervised machine learning, 74% with generative AI, and 73% with NLP. With generative AI getting so much media attention, and NLP being the broader area of AI that encompasses generative AI, these results may indicate that stakeholders understand the topic on the basis of buzz rather than hands-on work with the technologies.

If defenders hope to get ahead of attackers, they will need to go beyond supervised learning algorithms trained on known attack patterns and generative AI. Instead, they’ll need to adopt a comprehensive toolkit comprised of multiple, varied AI approaches—including unsupervised algorithms that continuously learn from an organization’s specific data rather than relying on big data generalizations.  

Different types of AI

Different types of AI have different strengths and use cases in cyber security. It’s important to choose the right technique for what you’re trying to achieve; a short sketch contrasting two of these approaches follows the list below.

Supervised machine learning: Applied more often than any other type of AI in cyber security. Trained on human attack patterns and historical threat intelligence.  

Large language models (LLMs): Applies deep learning models trained on extremely large data sets to understand, summarize, and generate new content. Used in generative AI tools.  

Natural language processing (NLP): Applies computational techniques to process and understand human language.  

Unsupervised machine learning: Continuously learns from raw, unstructured data to identify deviations that represent true anomalies.  
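As a minimal contrast between the first and last entries above, the sketch below uses scikit-learn purely for illustration (an assumption; the report does not name any tooling). A supervised classifier can only recognise patterns resembling the labelled attacks it was trained on, while an unsupervised detector learns a baseline from the organisation’s own data and flags deviations from it.

```python
# Minimal contrast between supervised and unsupervised approaches, using
# scikit-learn purely for illustration; no vendor's implementation is implied.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(500, 4))   # stand-in features
known_attacks = rng.normal(loc=4.0, scale=1.0, size=(50, 4))

# Supervised: needs labelled historical attacks, so it can only recognise
# patterns that resemble what it was trained on.
clf = RandomForestClassifier(random_state=0)
clf.fit(np.vstack([normal_traffic, known_attacks]),
        np.array([0] * 500 + [1] * 50))

# Unsupervised: trained only on the organisation's own (unlabelled) data,
# it flags anything sufficiently unlike that baseline - including novel activity.
detector = IsolationForest(random_state=0).fit(normal_traffic)

novel_event = rng.normal(loc=-5.0, scale=1.0, size=(1, 4))       # unlike either class
print(clf.predict(novel_event))        # may still report "benign" (0)
print(detector.predict(novel_event))   # -1 -> flagged as an anomaly
```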

What impact will generative AI have on the cybersecurity field?

More than half of security professionals (57%) believe that generative AI will have a bigger impact on their field over the next few years than other types of AI.

Figure 1: Chart from Darktrace’s State of AI in Cybersecurity Report, showing the types of AI expected to impact security the most.

Security stakeholders are highly aware of generative AI and LLMs, viewing them as pivotal to the field's future. Generative AI excels at abstracting information, automating tasks, and facilitating human-computer interaction. However, LLMs can "hallucinate" due to training data errors and are vulnerable to prompt injection attacks. Despite improvements in securing LLMs, the best cyber defenses use a mix of AI types for enhanced accuracy and capability.

AI education is crucial as industry expectations for generative AI grow. Leaders and practitioners need to understand where and how to use AI while managing risks. As they learn more, there will be a shift from generative AI to broader AI applications.

Do security professionals fully understand the different types of AI in security products?

Only 26% of security professionals report a full understanding of the different types of AI in use within security products.

Confusion is prevalent in today’s marketplace. Our survey found that only 26% of respondents fully understand the AI types in their security stack, while 31% are unsure or confused by vendor claims. Nearly 65% believe generative AI is the main type of AI used in cybersecurity, even though its security use cases are limited, such as identifying phishing emails. This highlights a gap between user expectations and vendor delivery, with too much focus on generative AI.

Key findings include:

  • Executives and managers report higher understanding than practitioners.
  • Larger organizations have better understanding due to greater specialization.

As AI evolves, vendors are rapidly introducing new solutions faster than practitioners can learn to use them. There's a strong need for greater vendor transparency and more education for users to maximize the technology's value.

To help ease confusion around AI technologies in cybersecurity, Darktrace has released the CISO’s Guide to Cyber AI, a comprehensive white paper that categorizes the different applications of AI in cybersecurity. Download the white paper here.

Do security professionals believe generative AI alone is enough to stop zero-day threats?

No! 86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.

This consensus spans all geographies, organization sizes, and roles, though executives are slightly less likely to agree. Asia-Pacific participants agree more, while U.S. participants agree less.

Despite expecting generative AI to have the most impact, respondents recognize its limited security use cases and its need to work alongside other AI types. This highlights the necessity for vendor transparency and varied AI approaches for effective security across threat prevention, detection, and response.

Stakeholders must understand how AI solutions work to ensure they offer advanced, rather than outdated, threat detection methods. The survey shows awareness that old methods are insufficient.

To access the full report, click here.

About the author
The Darktrace Community


Jupyter Ascending: Darktrace’s Investigation of the Adaptive Jupyter Information Stealer

18 Jul 2024

What is Malware as a Service (MaaS)?

Malware as a Service (MaaS) is a model where cybercriminals develop and sell or lease malware to other attackers.

This approach allows individuals or groups with limited technical skills to launch sophisticated cyberattacks by purchasing or renting malware tools and services. MaaS is often provided through online marketplaces on the dark web, where sellers offer various types of malware, including ransomware, spyware, and trojans, along with support services such as updates and customer support.

The Growing MaaS Marketplace

The Malware-as-a-Service (MaaS) marketplace is rapidly expanding, with new strains of malware regularly introduced and attracting waves of new and returning attackers. The low barrier to entry, combined with subscription-like accessibility and a lucrative business model, has made MaaS a prevalent tool for cybercriminals. As a result, MaaS has become a significant concern for organizations and their security teams, necessitating heightened vigilance and advanced defense strategies.

Examples of Malware as a Service

  • Ransomware as a Service (RaaS): Providers offer ransomware kits that allow users to launch ransomware attacks and share the ransom payments with the service provider.
  • Phishing as a Service: Services that provide phishing kits, including templates and email lists, to facilitate phishing campaigns.
  • Botnet as a Service: Renting out botnets to perform distributed denial-of-service (DDoS) attacks or other malicious activities.
  • Information Stealer: Information stealers are a type of malware specifically designed to collect sensitive data from infected systems, such as login credentials, credit card numbers, personal identification information, and other valuable data.

How does information stealer malware work?

Information stealers are an often-discussed type of MaaS tool used to harvest personal and proprietary information such as administrative credentials, banking information, and cryptocurrency wallet details. This information is then exfiltrated from target networks via command-and-control (C2) communication, allowing threat actors to monetize the data. Information stealers have also increasingly been used as an initial access vector for high-impact breaches, including ransomware attacks employing both double and triple extortion tactics.

After investigating several prominent information stealers in recent years, the Darktrace Threat Research team launched an investigation into indicators of compromise (IoCs) associated with another variant in late 2023, namely the Jupyter information stealer.

What is Jupyter information stealer and how does it work?

The Jupyter information stealer (also known as Yellow Cockatoo, SolarMarker, and Polazert) was first observed in the wild in late 2020. Multiple variants have since become part of the wider threat landscape; however, towards the end of 2023 a new variant was observed. This latest variant achieved greater stealth and updated its delivery method, targeting browsers such as Edge, Firefox, and Chrome via search engine optimization (SEO) poisoning and malvertising. Users are redirected to download malicious files that typically impersonate legitimate software, which finally initiates the infection and the attack chain for Jupyter [3][4]. In recently noted cases, users download malicious executables for Jupyter via installer packages created using InnoSetup – an open-source compiler used to create installation packages in the Windows OS.

The latest release of Jupyter reportedly takes advantage of signed digital certificates to add credibility to downloaded executables, further supplementing its existing tactics, techniques and procedures (TTPs) for detection evasion and sophistication [4]. Jupyter does this while still maintaining features observed in other iterations, such as dropping files into the %TEMP% folder of a system and using PowerShell to decrypt and load content into memory [4]. Other reported features include backdoor functionality such as:

  • C2 infrastructure
  • Ability to download and execute malware
  • Execution of PowerShell scripts and commands
  • Injecting shellcode into legitimate Windows applications

Darktrace Coverage of Jupyter information stealer

In September 2023, Darktrace’s Threat Research team first investigated Jupyter and discovered multiple IoCs and TTPs associated with the info-stealer across the customer base. Across most investigated networks during this time, Darktrace observed the following activity:

  • HTTP POST requests over destination port 80 to rare external IP addresses (some of these connections were also made via port 8089 and 8090 with no prior hostname lookup).
  • HTTP POST requests specifically to the root directory of a rare external endpoint.
  • Data streams being sent to unusual external endpoints
  • Anomalous PowerShell execution was observed on numerous affected networks.

Taking a further look at the activity patterns detected, Darktrace identified a series of HTTP POST requests within one customer’s environment on December 7, 2023. The HTTP POST requests were made to the root directory of an external IP address, namely 146.70.71[.]135, which had never previously been observed on the network. This IP address was later reported to be malicious and associated with Jupyter (SolarMarker) by open-source intelligence (OSINT) [5].

Figure 1: Device Event Log indicating several connections from the source device to the rare external IP address 146.70.71[.]135 over port 80.
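For readers who want to hunt for the same pattern in their own telemetry, the sketch below walks Zeek http.log entries (assuming JSON-formatted logs are available) looking for POST requests whose host field is a bare IP address rather than a hostname. This is a hedged, illustrative hunt query, not a description of how Darktrace’s models work internally.

```python
# Illustrative threat-hunting sketch over Zeek http.log (JSON lines), looking
# for HTTP POSTs where the "host" field is a bare IP rather than a hostname.
# This is not how Darktrace's models are implemented.
import ipaddress
import json
from collections import Counter


def posts_to_bare_ips(log_path: str) -> Counter:
    hits = Counter()
    with open(log_path) as fh:
        for line in fh:
            entry = json.loads(line)
            if entry.get("method") != "POST":
                continue
            host = entry.get("host", "")
            try:
                ipaddress.ip_address(host)      # raises ValueError for hostnames
            except ValueError:
                continue
            hits[(entry.get("id.orig_h"), host, entry.get("id.resp_p"))] += 1
    return hits


for (src, dst, port), count in posts_to_bare_ips("http.log").most_common(10):
    print(f"{src} -> {dst}:{port}  POSTs={count}")
```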

This activity triggered the Darktrace / NETWORK model ‘Anomalous Connection / Posting HTTP to IP Without Hostname’, which alerts on devices that have been seen posting data out of the network to rare external endpoints without a hostname. Further investigation into the offending device revealed a significant increase in external data transfers around the time Darktrace alerted on the activity.

Figure 2: This External Data Transfer graph demonstrates a spike in external data transfer from the internal device indicated at the top of the graph on December 7, 2023, with a time lapse shown of one week prior.

Packet capture (PCAP) analysis of this activity also demonstrates possible external data transfer, with the device observed making a POST request to the root directory of the malicious endpoint, 146.70.71[.]135.

Figure 3: PCAP of a HTTP POST request showing streams of data being sent to the endpoint, 146.70.71[.]135.

In other cases investigated by the Darktrace Threat Research team, connections to the rare external endpoint 67.43.235[.]218 were detected on port 8089 and 8090. This endpoint was also linked to Jupyter information stealer by OSINT sources [6].

Darktrace recognized that such suspicious connections represented unusual activity and raised several model alerts on multiple customer environments, including ‘Compromise / Large Number of Suspicious Successful Connections’ and ‘Anomalous Connection / Multiple Connections to New External TCP Port’.

In one instance, a device that was observed performing many suspicious connections to 67.43.235[.]218 was later observed making suspicious HTTP POST connections to other malicious IP addresses. This included 2.58.14[.]246, 91.206.178[.]109, and 78.135.73[.]176, all of which had been linked to Jupyter information stealer by OSINT sources [7] [8] [9].

Darktrace further observed activity likely indicative of data streams being exfiltrated to Jupyter information stealer C2 endpoints.

Figure 4: Graph displaying the significant increase in the number of HTTP POST requests with No Get made by an affected device, likely indicative of Jupyter information stealer C2 activity.

In several cases, Darktrace was able to leverage customer integrations with other security vendors to add additional context to its own model alerts. For example, numerous customers who had integrated Darktrace with Microsoft Defender received security integration alerts that enriched Darktrace’s model alerts with additional intelligence, linking suspicious activity to Jupyter information stealer actors.

Figure 5: The security integration model alerts ‘Security Integration / Low Severity Integration Detection’ and (right image) ‘Security Integration / High Severity Integration Detection’, linking suspicious activity observed by Darktrace with Jupyter information stealer (SolarMarker).
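The enrichment described above can be thought of as a simple join between alert streams. The sketch below is hypothetical – the data shapes and function are invented for illustration and are not Darktrace or Microsoft APIs.

```python
# Hypothetical sketch of alert enrichment; the data shapes and function names
# are invented for illustration and are not Darktrace or Microsoft APIs.
from datetime import timedelta


def enrich(darktrace_alerts, defender_alerts, window=timedelta(minutes=30)):
    """Attach third-party detections that hit the same device around the same time."""
    enriched = []
    for dt_alert in darktrace_alerts:
        context = [d for d in defender_alerts
                   if d["device"] == dt_alert["device"]
                   and abs(d["time"] - dt_alert["time"]) <= window]
        enriched.append({**dt_alert, "integration_context": context})
    return enriched
```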

Conclusion

The MaaS ecosystem continues to dominate the current threat landscape, and the increasing sophistication of MaaS variants, featuring advanced defense evasion techniques, poses significant risks to organizations once these tools are deployed on target networks.

Leveraging anomaly-based detections is crucial for staying ahead of evolving MaaS threats like the Jupyter information stealer. By adopting AI-driven security tools like Darktrace / NETWORK, organizations can identify, detect, and respond to potential threats more quickly and effectively as soon as they emerge. This is especially crucial given the rise of stealthy information-stealing malware strains like Jupyter, which can not only harvest and steal sensitive data but also serve as a gateway to potentially disruptive ransomware attacks.

Credit to Nahisha Nobregas (Senior Cyber Analyst), Vivek Rajan (Cyber Analyst)

References

1. https://www.paloaltonetworks.com/cyberpedia/what-is-multi-extortion-ransomware
2. https://flashpoint.io/blog/evolution-stealer-malware/
3. https://blogs.vmware.com/security/2023/11/jupyter-rising-an-update-on-jupyter-infostealer.html
4. https://www.morphisec.com/hubfs/eBooks_and_Whitepapers/Jupyter%20Infostealer%20WEB.pdf
5. https://www.virustotal.com/gui/ip-address/146.70.71.135
6. https://www.virustotal.com/gui/ip-address/67.43.235.218/community
7. https://www.virustotal.com/gui/ip-address/2.58.14.246/community
8. https://www.virustotal.com/gui/ip-address/91.206.178.109/community
9. https://www.virustotal.com/gui/ip-address/78.135.73.176/community

Appendices

Darktrace Model Detections

  • Anomalous Connection / Posting HTTP to IP Without Hostname
  • Compromise / HTTP Beaconing to Rare Destination
  • Unusual Activity / Unusual External Data to New Endpoints
  • Compromise / Slow Beaconing Activity To External Rare
  • Compromise / Large Number of Suspicious Successful Connections
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Compromise / Excessive Posts to Root
  • Compromise / Sustained SSL or HTTP Increase
  • Security Integration / High Severity Integration Detection
  • Security Integration / Low Severity Integration Detection
  • Anomalous Connection / Multiple Connections to New External TCP Port
  • Unusual Activity / Unusual External Data Transfer

AI Analyst Incidents:

  • Unusual Repeated Connections
  • Possible HTTP Command and Control to Multiple Endpoints
  • Possible HTTP Command and Control

List of IoCs

Indicator – Type – Description

146.70.71[.]135 – IP Address – Jupyter info-stealer C2 Endpoint
91.206.178[.]109 – IP Address – Jupyter info-stealer C2 Endpoint
146.70.92[.]153 – IP Address – Jupyter info-stealer C2 Endpoint
2.58.14[.]246 – IP Address – Jupyter info-stealer C2 Endpoint
78.135.73[.]176 – IP Address – Jupyter info-stealer C2 Endpoint
217.138.215[.]105 – IP Address – Jupyter info-stealer C2 Endpoint
185.243.115[.]88 – IP Address – Jupyter info-stealer C2 Endpoint
146.70.80[.]66 – IP Address – Jupyter info-stealer C2 Endpoint
23.29.115[.]186 – IP Address – Jupyter info-stealer C2 Endpoint
67.43.235[.]218 – IP Address – Jupyter info-stealer C2 Endpoint
217.138.215[.]85 – IP Address – Jupyter info-stealer C2 Endpoint
193.29.104[.]25 – IP Address – Jupyter info-stealer C2 Endpoint

About the author
Nahisha Nobregas
SOC Analyst