Blog

Inside the SOC

Multi-Account Compromise in Office 365

25 May 2022
Learn how internal phishing can swiftly compromise multiple accounts, and how Darktrace/Apps can prevent similar attacks in the future.

In February 2022, Darktrace detected the compromise of three SaaS accounts within a customer’s Office 365 environment. The incident is an effective use case for how Darktrace/Apps and Darktrace/Email work together to alert on unusual logins, app permission changes, new email rules and outbound spam. It also shows how Darktrace RESPOND/Apps, had it been set to autonomous mode, could have stopped the additional compromises.

Account Compromise Timeline

February 9 2022

Account A was logged into from a rare IP address located in Nigeria, using the BAV2ROPC user agent that is commonly associated with SaaS account attacks. BAV2ROPC stands for ‘Basic Authentication Version 2 Resource Owner Password Credential’ and is typically presented by older email clients such as iOS Mail. It is often seen in SaaS and email account compromises where ‘legacy authentication’ is still enabled: even if multi-factor authentication (MFA) is activated, legacy protocols like IMAP and POP3 do not support MFA challenges, so no MFA prompt is sent.[1][2]
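
As a rough illustration of how such logins can be hunted for outside of Darktrace, the Python sketch below screens exported sign-in records for legacy-authentication user agents such as BAV2ROPC originating from unexpected countries. The log format, the field names (userAgent, location, countryOrRegion) and the load_signins helper are assumptions for illustration only.

```python
import json

# User agents associated with legacy (basic) authentication, which bypasses MFA prompts.
# BAV2ROPC is the value observed in this incident; add others as needed.
LEGACY_AUTH_AGENTS = {"BAV2ROPC"}

# Hypothetical: countries the organization normally logs in from.
EXPECTED_COUNTRIES = {"GB", "US"}

def load_signins(path):
    """Load sign-in events from a JSON-lines export (one event per line). Format assumed."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def flag_legacy_auth_signins(events):
    """Return events that used a legacy-auth user agent from an unexpected country."""
    flagged = []
    for event in events:
        agent = event.get("userAgent", "")
        country = event.get("location", {}).get("countryOrRegion", "")
        if agent in LEGACY_AUTH_AGENTS and country not in EXPECTED_COUNTRIES:
            flagged.append(event)
    return flagged

if __name__ == "__main__":
    for e in flag_legacy_auth_signins(load_signins("signins.jsonl")):
        print(e.get("userPrincipalName"), e.get("ipAddress"), e.get("userAgent"))
```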

Account A then created a new email rule named simply ‘.’ (a single full stop). Attackers commonly create new email rules to maintain persistent access, for example by forwarding certain emails to external accounts they control. Even if the account’s password is changed or MFA is turned on, the attacker keeps receiving the forwarded emails for as long as the rule remains in place. In this case, the attacker configured the new email rule using the following fields and features (a sketch of how such rule creations can be audited follows the list):

  • AlwaysDeleteOutlookRulesBlob – hides warning messages when Outlook on the web or PowerShell is used to edit inbox rules. The attacker likely had a set list of commands to run and did not want to be slowed down by confirmation prompts while exploiting the account.
  • Force – hides warning or confirmation messages.
  • MoveToFolder – moves emails to a folder. This is often used to move bounced emails away from the inbox in order to hide the fact the account is being used to send emails by the attacker.
  • Name – specifies the name of the rule, in this case a single full stop.
  • SubjectOrBodyContainsWords – emails with key words are actioned.
  • StopProcessingRules – determines whether subsequent rules are processed if the conditions of this rule are met. It is likely in this case the attacker set this to false so that any subsequent rules would still be processed to avoid raising suspicion.
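
To make these fields easier to hunt for, here is a minimal, illustrative Python sketch that scans inbox-rule creation events, such as the ‘New-InboxRule’ operations recorded in the Office 365 unified audit log, for the traits described above: a single-character rule name and actions that move, mark as read or delete mail. The JSON-lines export, the field layout and the iter_audit_events helper are assumptions for illustration, not Darktrace functionality.

```python
import json

# Rule actions commonly abused to hide mail from the mailbox owner (see the list above).
HIDING_ACTIONS = {"MoveToFolder", "MarkAsRead", "DeleteMessage"}

def iter_audit_events(path):
    """Yield audit events from a JSON-lines export. The layout is an assumption."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

def is_suspicious_rule(event):
    """Apply simple heuristics for attacker-style inbox rules: very short or single-character
    rule names (such as '.') and actions that move, mark as read, or delete messages."""
    if event.get("Operation") != "New-InboxRule":
        return False
    params = {p.get("Name"): p.get("Value") for p in event.get("Parameters", [])}
    rule_name = (params.get("Name") or "").strip()
    return len(rule_name) <= 1 or bool(HIDING_ACTIONS & params.keys())

if __name__ == "__main__":
    for ev in iter_audit_events("o365_audit.jsonl"):
        if is_suspicious_rule(ev):
            print(ev.get("CreationTime"), ev.get("UserId"), "suspicious inbox rule created")
```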

Account A was then observed granting the email management app Spike permissions, likely to enable rapid, automated exploitation of the compromised account. Attackers want to minimize the time between account compromise and malicious use of the account, reducing the window in which security teams can respond.

Figure 1: Screenshot from SaaS console showing the timeline of giving consent to the email management application Spike and the creation of the new inbox rule

The account was then observed sending 794 emails over a 15-minute period to both internal and external recipients. These emails shared similar characteristics, including the same subject line and related phishing links. This mass mailing likely reflects the attacker’s aim of compromising as many accounts and credentials as possible in the shortest possible time. The domain of the link in the emails was spikenow[.]com, hidden behind the text ‘View Shared Link’, suggesting the attacker used Spike to send the emails and host the phishing link.
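
To illustrate how starkly such a burst stands out, the sketch below counts outbound messages per sender in a sliding 15-minute window and flags any sender exceeding a fixed threshold. The message-log format and the threshold value are assumptions; Darktrace’s own detection is anomaly-based rather than a static threshold like this.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)
THRESHOLD = 100  # illustrative; far below the 794 messages seen in this incident

def flag_mail_bursts(messages):
    """messages: iterable of (sender, sent_at) tuples. Returns senders that exceed
    THRESHOLD outbound messages within any sliding 15-minute window, with the peak count."""
    by_sender = defaultdict(list)
    for sender, sent_at in messages:
        by_sender[sender].append(sent_at)

    flagged = {}
    for sender, times in by_sender.items():
        times.sort()
        start = 0
        for end, t in enumerate(times):
            while t - times[start] > WINDOW:
                start += 1
            count = end - start + 1
            if count >= THRESHOLD:
                flagged[sender] = max(flagged.get(sender, 0), count)
    return flagged

if __name__ == "__main__":
    base = datetime(2022, 2, 9, 15, 0)
    demo = [("account.a@example.com", base + timedelta(seconds=i)) for i in range(794)]
    print(flag_mail_bursts(demo))  # {'account.a@example.com': 794}
```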

Figure 2: Screenshot of AGE UI showing the spike in outbound messages from the compromised account – the messages all appear to be the same format
Figure 3: Screenshot from Darktrace/Email of the link and text that masked the link: ‘View Shared File’

Within 15 minutes of this large volume of outbound email from Account A, Account B was accessed from the same rare IP located in Nigeria. Account B also created a new email rule named a single full stop. In addition to the fields seen in Account A’s rule, the following were observed:

  • From – specifies that emails from certain addresses will be processed by the rule.
  • MarkAsRead – specifies that emails are to be marked as read.

Due to the short timeframe between the phishing emails and the anomalous behavior from Account B, it is possible that Account B was an initial phishing victim.

Figure 4: Screenshot of the SaaS console showing Account B login failures, then successful login and inbox rule creation from the rare Nigerian IP

February 10 2022

The next day, a third account (Account C) was also accessed from the same rare IP. This occurred on two occasions, once with the user agent Mozilla/5.0 and once with BAV2ROPC. After the login at 13:08 with BAV2ROPC, the account gave the same permission as Account A to the email management app Spike. It then created what appears to be the same email rule, named a single full stop. As with Account B, it is possible that this account was compromised by one of the phishing emails sent by Account A.

Figure 5: Timeline of key incidents with Darktrace/Apps actions

Whilst the motive of the threat actor was unclear, this may have been the result of:

  • Credential harvesting for future use against the organization or to sell to a third party.
  • Possible impersonation of compromised users on professional websites (LinkedIn, Indeed) to phish further company accounts: fake accounts of one user were discovered on LinkedIn, and emails registering this same user for Indeed were seen during the compromise.

How did the attack bypass the rest of the security stack?

  • Compromised Office 365 credentials, combined with the use of the BAV2ROPC user agent, meant MFA could not stop the suspicious login.
  • RESPOND was in Human Confirmation Mode and therefore did not take action autonomously, only surfacing the detections. Disabling Account A would likely have prevented the phishing emails and the subsequent compromise of Accounts B and C.
  • The organization was not signed up to Darktrace Proactive Threat Notifications or Ask The Expert services, which could have allowed further triage by Darktrace SOC analysts.

Cyber AI Analyst Investigates

Darktrace’s Cyber AI Analyst automates investigations at speed and scale, prioritizing relevant incidents and creating actionable insights, allowing security teams to rapidly understand and act against a threat.

In this case, AI Analyst automatically investigated all three account compromises, saving time for the customer’s security team and allowing them to quickly investigate the incident in more detail themselves. The technology also highlighted some of the files viewed by the compromised accounts, which was not immediately obvious from the model breaches alone.

Figure 6: Screenshot of AI Analyst for Account A
Figure 7: Screenshot of AI Analyst for Account B
Figure 8: Screenshot of AI Analyst for Account C

Darktrace RESPOND (Antigena) actions

The organization in question did not have RESPOND/Apps configured in Active Mode, and so it did not take any action in this case. The tables below show the critical defensive actions RESPOND would have taken, and when, had the technology been enabled.[3]

The tables illustrate that all three users would have been disabled during the incident had RESPOND been active. The highlighted row shows that Account A would have been disabled when the internal phishing emails were sent, potentially preventing the cascade of compromised email accounts (B and C).

Conclusion

SaaS accounts greatly increase a company’s attack surface. Not only is exploitation of compromised accounts quick, but a single compromised account can easily lead to further compromises via an internal phishing campaign. This reinforces the ongoing need for autonomous and proactive security to complement existing IT teams and reduce threats at the point of compromise. Whilst disabling ‘legacy authentication’ for all accounts and enforcing MFA would provide some extra protection, Darktrace/Apps has the ability to block further compromise.

Credit to: Adam Stevens and Anthony Wong for their contributions.

Appendix

List of Darktrace Model Detections

User A – February 9 2022

  • 04:55:51 UTC | SaaS / Access / Suspicious Login User-Agent
  • 04:55:51 UTC | SaaS / Access / Unusual External Source for SaaS Credential Use
  • 04:55:52 UTC | Antigena / SaaS / Antigena Suspicious SaaS and Email Activity Block
  • 04:55:52 UTC | Antigena / SaaS / Antigena Suspicious SaaS Activity Block
  • 14:16:48 UTC | SaaS / Compliance / New Email Rule
  • 14:16:48 UTC | SaaS / Compromise / Unusual Login and New Email Rule
  • 14:16:49 UTC | Antigena / SaaS / Antigena Significant Compliance Activity Block
  • 14:16:49 UTC | Antigena / SaaS / Antigena Suspicious SaaS Activity Block
  • 14:45:06 UTC | IaaS / Admin / Azure Application Administration Activities
  • 14:45:07 UTC | SaaS / Admin / OAuth Permission Grant
  • 14:45:07 UTC | Device / Multiple Model Breaches
  • 14:45:08 UTC | SaaS / Compliance / Multiple Unusual SaaS Activities
  • 15:03:25 UTC | SaaS / Email Nexus / Possible Outbound Email Spam
  • 15:03:25 UTC | SaaS / Compromise / Unusual Login and Outbound Email Spam

User B – February 9 2022

  • 15:18:21 UTC | SaaS / Compliance / New Email Rule
  • 15:18:21 UTC | SaaS / Compromise / Unusual Login and New Email Rule
  • 15:18:22 UTC | Antigena / SaaS / Antigena Significant Compliance Activity Block
  • 15:18:22 UTC | Antigena / SaaS / Antigena Suspicious SaaS Activity Block

User C – February 10 2022

  • 14:25:20 UTC | SaaS / Admin / OAuth Permission Grant
  • 14:38:09 UTC | SaaS / Compliance / New Email Rule
  • 14:38:09 UTC | SaaS / Compromise / Unusual Login and New Email Rule
  • 14:38:10 UTC | Antigena / SaaS / Antigena Significant Compliance Activity Block
  • 14:38:10 UTC | Antigena / SaaS / Antigena Suspicious SaaS Activity Block

References

1. https://www.ncsc.gov.uk/guidance/phishing#section_3

2. https://www.bleepingcomputer.com/news/security/microsoft-scammers-bypass-office-365-mfa-in-bec-attacks/

3. https://customerportal.darktrace.com/product-guides/main/antigena-saas-inhibitors

INSIDE THE SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
ABOUT THE AUTHOR
Laura Leyland
Cyber Analyst

Blog

Thought Leadership

The State of AI in Cybersecurity: Understanding AI Technologies

24 Jul 2024

About the State of AI Cybersecurity Report

Darktrace surveyed 1,800 CISOs, security leaders, administrators, and practitioners from industries around the globe. Our research was conducted to understand how organizations are managing the adoption of new AI-powered offensive and defensive cybersecurity technologies.

This blog continues the conversation from “The State of AI in Cybersecurity: Unveiling Global Insights from 1,800 Security Practitioners” and focuses on security professionals’ understanding of AI technologies in cybersecurity tools.

To download the full report, click here.

How familiar are security professionals with supervised machine learning?

Just 31% of security professionals report that they are “very familiar” with supervised machine learning.

Many participants admitted unfamiliarity with various AI types. Less than one-third felt "very familiar" with the technologies surveyed: only 31% with supervised machine learning and 28% with natural language processing (NLP).

Most participants were "somewhat" familiar, ranging from 46% for supervised machine learning to 36% for generative adversarial networks (GANs). Executives and those in larger organizations reported the highest familiarity.

Combining "very" and "somewhat" familiar responses, 77% had familiarity with supervised machine learning, 74% generative AI, and 73% NLP. With generative AI getting so much media attention, and NLP being the broader area of AI that encompasses generative AI, these results may indicate that stakeholders are understanding the topic on the basis of buzz, not hands-on work with the technologies.  

If defenders hope to get ahead of attackers, they will need to go beyond supervised learning algorithms trained on known attack patterns and generative AI. Instead, they’ll need to adopt a comprehensive toolkit comprised of multiple, varied AI approaches—including unsupervised algorithms that continuously learn from an organization’s specific data rather than relying on big data generalizations.  

Different types of AI

Different types of AI have different strengths and use cases in cybersecurity, so it is important to choose the right technique for what you are trying to achieve. A minimal code sketch contrasting the supervised and unsupervised approaches follows the descriptions below.

Supervised machine learning: Applied more often than any other type of AI in cyber security. Trained on human attack patterns and historical threat intelligence.  

Large language models (LLMs): Applies deep learning models trained on extremely large data sets to understand, summarize, and generate new content. Used in generative AI tools.  

Natural language processing (NLP): Applies computational techniques to process and understand human language.  

Unsupervised machine learning: Continuously learns from raw, unstructured data to identify deviations that represent true anomalies.  
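
To make the distinction concrete, here is a minimal, illustrative contrast (using scikit-learn purely as an example; the toy features and data are invented) between a supervised classifier that must be shown labeled examples of past attacks and an unsupervised anomaly detector that learns only what is normal for a given environment.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Toy traffic features for one environment, e.g. [bytes out (MB), connections per hour].
normal = rng.normal(loc=[100, 10], scale=[10, 2], size=(500, 2))

# Supervised: needs labeled examples of known-bad activity to learn from.
known_bad = rng.normal(loc=[400, 50], scale=[20, 5], size=(50, 2))
X = np.vstack([normal, known_bad])
y = np.array([0] * len(normal) + [1] * len(known_bad))
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Unsupervised: learns this environment's own 'normal' with no labels at all.
iso = IsolationForest(random_state=0).fit(normal)

# A novel pattern: anomalous for this environment, but unlike the labeled attack examples.
novel = np.array([[5.0, 1.0]])
print("supervised flags it as attack? ", bool(clf.predict(novel)[0]))     # False: no similar label
print("unsupervised flags it as anomaly?", iso.predict(novel)[0] == -1)   # True: deviates from normal
```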

What impact will generative AI have on the cybersecurity field?

More than half of security professionals (57%) believe that generative AI will have a bigger impact on their field over the next few years than other types of AI.

Figure 1: Chart from Darktrace’s State of AI in Cybersecurity Report showing the types of AI expected to impact security the most

Security stakeholders are highly aware of generative AI and LLMs, viewing them as pivotal to the field's future. Generative AI excels at abstracting information, automating tasks, and facilitating human-computer interaction. However, LLMs can "hallucinate" due to training data errors and are vulnerable to prompt injection attacks. Despite improvements in securing LLMs, the best cyber defenses use a mix of AI types for enhanced accuracy and capability.

AI education is crucial as industry expectations for generative AI grow. Leaders and practitioners need to understand where and how to use AI while managing risks. As they learn more, there will be a shift from generative AI to broader AI applications.

Do security professionals fully understand the different types of AI in security products?

Only 26% of security professionals report a full understanding of the different types of AI in use within security products.

Confusion is prevalent in today’s marketplace. Our survey found that only 26% of respondents fully understand the AI types in their security stack, while 31% are unsure or confused by vendor claims. Nearly 65% believe generative AI is mainly used in cybersecurity, though it’s only useful for identifying phishing emails. This highlights a gap between user expectations and vendor delivery, with too much focus on generative AI.

Key findings include:

  • Executives and managers report higher understanding than practitioners.
  • Larger organizations have better understanding due to greater specialization.

As AI evolves, vendors are rapidly introducing new solutions faster than practitioners can learn to use them. There's a strong need for greater vendor transparency and more education for users to maximize the technology's value.

To help ease confusion around AI technologies in cybersecurity, Darktrace has released the CISO’s Guide to Cyber AI, a comprehensive white paper that categorizes the different applications of AI in cybersecurity. Download the white paper here.

Do security professionals believe generative AI alone is enough to stop zero-day threats?

No! 86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.

This consensus spans all geographies, organization sizes, and roles, though executives are slightly less likely to agree. Asia-Pacific participants agree more, while U.S. participants agree less.

Despite expecting generative AI to have the most impact, respondents recognize its limited security use cases and its need to work alongside other AI types. This highlights the necessity for vendor transparency and varied AI approaches for effective security across threat prevention, detection, and response.

Stakeholders must understand how AI solutions work to ensure they offer advanced, rather than outdated, threat detection methods. The survey shows awareness that old methods are insufficient.

To access the full report, click here.

About the author
The Darktrace Community

Blog

Inside the SOC

Jupyter Ascending: Darktrace’s Investigation of the Adaptive Jupyter Information Stealer

18 Jul 2024

What is Malware as a Service (MaaS)?

Malware as a Service (MaaS) is a model where cybercriminals develop and sell or lease malware to other attackers.

This approach allows individuals or groups with limited technical skills to launch sophisticated cyberattacks by purchasing or renting malware tools and services. MaaS is often provided through online marketplaces on the dark web, where sellers offer various types of malware, including ransomware, spyware, and trojans, along with support services such as updates and customer support.

The Growing MaaS Marketplace

The Malware-as-a-Service (MaaS) marketplace is rapidly expanding, with new strains of malware regularly introduced and attracting waves of new and returning attackers. The low barrier to entry, combined with the subscription-like accessibility and lucrative business model, has made MaaS a prevalent tool for cybercriminals. As a result, MaaS has become a significant concern for organizations and their security teams, necessitating heightened vigilance and advanced defense strategies.

Examples of Malware as a Service

  • Ransomware as a Service (RaaS): Providers offer ransomware kits that allow users to launch ransomware attacks and share the ransom payments with the service provider.
  • Phishing as a Service: Services that provide phishing kits, including templates and email lists, to facilitate phishing campaigns.
  • Botnet as a Service: Renting out botnets to perform distributed denial-of-service (DDoS) attacks or other malicious activities.
  • Information Stealer: Information stealers are a type of malware specifically designed to collect sensitive data from infected systems, such as login credentials, credit card numbers, personal identification information, and other valuable data.

How does information stealer malware work?

Information stealers are an often-discussed type of MaaS tool used to harvest personal and proprietary information such as administrative credentials, banking information, and cryptocurrency wallet details. This information is then exfiltrated from target networks via command-and-control (C2) communication, allowing threat actors to monetize the data. Information stealers have also increasingly been used as an initial access vector for high impact breaches including ransomware attacks, employing both double and triple extortion tactics.

After investigating several prominent information stealers in recent years, the Darktrace Threat Research team launched an investigation into indicators of compromise (IoCs) associated with another variant in late 2023, namely the Jupyter information stealer.

What is Jupyter information stealer and how does it work?

The Jupyter information stealer (also known as Yellow Cockatoo, SolarMarker, and Polazert) was first observed in the wild in late 2020. Multiple variants have since become part of the wider threat landscape; however, towards the end of 2023 a new variant was observed. This latest variant achieved greater stealth and updated its delivery method, targeting browsers such as Edge, Firefox, and Chrome via search engine optimization (SEO) poisoning and malvertising. Users are redirected to download malicious files that typically impersonate legitimate software, initiating the infection and the attack chain for Jupyter [3][4]. In recently noted cases, users downloaded malicious executables for Jupyter via installer packages created using InnoSetup, an open-source compiler used to create installation packages for Windows.

The latest release of Jupyter reportedly takes advantage of signed digital certificates to add credibility to downloaded executables, further supplementing its already existing tactics, techniques and procedures (TTPs) for detection evasion and sophistication [4]. Jupyter does this while still maintaining features observed in other iterations, such as dropping files into the %TEMP% folder of a system and using PowerShell to decrypt and load content into memory [4]. Another reported feature includes backdoor functionality such as:

  • C2 infrastructure
  • Ability to download and execute malware
  • Execution of PowerShell scripts and commands
  • Injecting shellcode into legitimate Windows applications

Darktrace Coverage of Jupyter information stealer

In September 2023, Darktrace’s Threat Research team first investigated Jupyter and discovered multiple IoCs and TTPs associated with the info-stealer across the customer base. Across most investigated networks during this time, Darktrace observed the following activity:

  • HTTP POST requests over destination port 80 to rare external IP addresses (some of these connections were also made via ports 8089 and 8090 with no prior hostname lookup).
  • HTTP POST requests specifically to the root directory of a rare external endpoint.
  • Data streams being sent to unusual external endpoints
  • Anomalous PowerShell execution was observed on numerous affected networks.

Taking a further look at the activity patterns detected, Darktrace identified a series of HTTP POST requests within one customer’s environment on December 7, 2023. The HTTP POST requests were made to the root directory of an external IP address, namely 146.70.71[.]135, which had never previously been observed on the network. This IP address was later reported to be malicious and associated with Jupyter (SolarMarker) by open-source intelligence (OSINT) [5].

Figure 1: Device Event Log indicating several connections from the source device to the rare external IP address 146.70.71[.]135 over port 80.

This activity triggered the Darktrace / NETWORK model ‘Anomalous Connection / Posting HTTP to IP Without Hostname’, which alerts on devices seen posting data out of the network to rare external endpoints without a hostname. Further investigation into the offending device revealed a significant increase in external data transfers around the time of the alert.
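
As a rough, non-Darktrace illustration of the same idea, the sketch below scans HTTP records for POST requests in which the Host value is a bare IP address rather than a hostname, and surfaces those aimed at a root URI as seen in this incident. The field names are loosely modeled on a Zeek-style http.log JSON export, which is an assumption here.

```python
import ipaddress
import json

def is_bare_ip(host):
    """Return True if the HTTP Host value is an IP literal rather than a hostname."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

def flag_posts_to_bare_ips(path):
    """Yield HTTP records that POST data to a bare IP, e.g. repeated POSTs to '/' on an address
    like 146.70.71.135 with no hostname lookup beforehand."""
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            if rec.get("method") != "POST":
                continue
            host = rec.get("host", "")
            if host and is_bare_ip(host):
                yield {
                    "src": rec.get("id.orig_h"),           # source device
                    "dst": host,                           # destination IP used in place of a hostname
                    "uri": rec.get("uri"),                 # '/' in the activity described above
                    "bytes_out": rec.get("request_body_len", 0),
                }

if __name__ == "__main__":
    for hit in flag_posts_to_bare_ips("http.jsonl"):
        print(hit)
```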

Figure 2: This External Data Transfer graph demonstrates a spike in external data transfer from the internal device indicated at the top of the graph on December 7, 2023, with a time lapse shown of one week prior.

Packet capture (PCAP) analysis of this activity also demonstrates possible external data transfer, with the device observed making a POST request to the root directory of the malicious endpoint, 146.70.71[.]135.

Figure 3: PCAP of a HTTP POST request showing streams of data being sent to the endpoint, 146.70.71[.]135.

In other cases investigated by the Darktrace Threat Research team, connections to the rare external endpoint 67.43.235[.]218 were detected on ports 8089 and 8090. This endpoint was also linked to the Jupyter information stealer by OSINT sources [6].

Darktrace recognized that such suspicious connections represented unusual activity and raised several model alerts on multiple customer environments, including ‘Compromise / Large Number of Suspicious Successful Connections’ and ‘Anomalous Connection / Multiple Connections to New External TCP Port’.

In one instance, a device that was observed performing many suspicious connections to 67.43.235[.]218 was later observed making suspicious HTTP POST connections to other malicious IP addresses. This included 2.58.14[.]246, 91.206.178[.]109, and 78.135.73[.]176, all of which had been linked to Jupyter information stealer by OSINT sources [7] [8] [9].

Darktrace further observed activity likely indicative of data streams being exfiltrated to Jupyter information stealer C2 endpoints.

Figure 4: Graph displaying the significant increase in the number of HTTP POST requests with No Get made by an affected device, likely indicative of Jupyter information stealer C2 activity.

In several cases, Darktrace was able to leverage customer integrations with other security vendors to add additional context to its own model alerts. For example, numerous customers who had integrated Darktrace with Microsoft Defender received security integration alerts that enriched Darktrace’s model alerts with additional intelligence, linking suspicious activity to Jupyter information stealer actors.

Figure 5: The security integration model alerts ‘Security Integration / Low Severity Integration Detection’ and (right image) ‘Security Integration / High Severity Integration Detection’, linking suspicious activity observed by Darktrace with Jupyter information stealer (SolarMarker).

Conclusion

The MaaS ecosystem continues to dominate the current threat landscape, and the increasing sophistication of MaaS variants, featuring advanced defense evasion techniques, poses significant risks once they are deployed on target networks.

Leveraging anomaly-based detections is crucial for staying ahead of evolving MaaS threats like the Jupyter information stealer. By adopting AI-driven security tools like Darktrace / NETWORK, organizations can identify and respond to potential threats as soon as they emerge. This is especially important given the rise of stealthy information-stealing malware strains like Jupyter, which can not only harvest and steal sensitive data but also serve as a gateway to potentially disruptive ransomware attacks.

Credit to Nahisha Nobregas (Senior Cyber Analyst), Vivek Rajan (Cyber Analyst)

References

1. https://www.paloaltonetworks.com/cyberpedia/what-is-multi-extortion-ransomware

2. https://flashpoint.io/blog/evolution-stealer-malware/

3. https://blogs.vmware.com/security/2023/11/jupyter-rising-an-update-on-jupyter-infostealer.html

4. https://www.morphisec.com/hubfs/eBooks_and_Whitepapers/Jupyter%20Infostealer%20WEB.pdf

5. https://www.virustotal.com/gui/ip-address/146.70.71.135

6. https://www.virustotal.com/gui/ip-address/67.43.235.218/community

7. https://www.virustotal.com/gui/ip-address/2.58.14.246/community

8. https://www.virustotal.com/gui/ip-address/91.206.178.109/community

9. https://www.virustotal.com/gui/ip-address/78.135.73.176/community

Appendices

Darktrace Model Detections

  • Anomalous Connection / Posting HTTP to IP Without Hostname
  • Compromise / HTTP Beaconing to Rare Destination
  • Unusual Activity / Unusual External Data to New Endpoints
  • Compromise / Slow Beaconing Activity To External Rare
  • Compromise / Large Number of Suspicious Successful Connections
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Compromise / Excessive Posts to Root
  • Compromise / Sustained SSL or HTTP Increase
  • Security Integration / High Severity Integration Detection
  • Security Integration / Low Severity Integration Detection
  • Anomalous Connection / Multiple Connections to New External TCP Port
  • Unusual Activity / Unusual External Data Transfer

AI Analyst Incidents:

  • Unusual Repeated Connections
  • Possible HTTP Command and Control to Multiple Endpoints
  • Possible HTTP Command and Control

List of IoCs

All of the following indicators are IP addresses associated with Jupyter info-stealer C2 endpoints:

  • 146.70.71[.]135
  • 91.206.178[.]109
  • 146.70.92[.]153
  • 2.58.14[.]246
  • 78.135.73[.]176
  • 217.138.215[.]105
  • 185.243.115[.]88
  • 146.70.80[.]66
  • 23.29.115[.]186
  • 67.43.235[.]218
  • 217.138.215[.]85
  • 193.29.104[.]25
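
Teams wanting to sweep their own logs for these indicators could use something like the following sketch, which refangs the defanged addresses above and matches them against connection records. The connection-log format and the 'dest_ip' field are assumptions; indicator matching of this kind complements, rather than replaces, the anomaly-based detections described above.

```python
import json

# Defanged Jupyter C2 indicators from the list above ("[.]" keeps them non-clickable).
DEFANGED_IOCS = [
    "146.70.71[.]135", "91.206.178[.]109", "146.70.92[.]153", "2.58.14[.]246",
    "78.135.73[.]176", "217.138.215[.]105", "185.243.115[.]88", "146.70.80[.]66",
    "23.29.115[.]186", "67.43.235[.]218", "217.138.215[.]85", "193.29.104[.]25",
]

# Refang the indicators so they can be compared against raw log values.
IOC_IPS = {ioc.replace("[.]", ".") for ioc in DEFANGED_IOCS}

def match_connections(path):
    """Yield connection records whose destination matches a known Jupyter C2 indicator.
    The JSON-lines format and the 'dest_ip' field are assumptions for illustration."""
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            if rec.get("dest_ip") in IOC_IPS:
                yield rec

if __name__ == "__main__":
    for hit in match_connections("connections.jsonl"):
        print(hit)
```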

About the author
Nahisha Nobregas
SOC Analyst