Protecting Healthcare Organizations from Maze Ransomware

October 21, 2020

How Darktrace detected a Maze ransomware attack targeting a healthcare organization, and how Autonomous Response could have contained it.

Ransomware continues to cause chaos and disruption to organizations globally, with increasingly severe consequences and increasingly high-stakes targets. Earlier this year saw a surge in a strain of ransomware known as ‘Maze’, which shut down operations at leading optical products provider Canon and wreaked havoc at Fortune 500 companies like Cognizant.

Ransomware targeting healthcare

Just last month, news of a woman in Germany dying after a ransomware attack on the Dusseldorf University Hospital hit the headlines, confirming that the threat to people is no longer theoretical.

Ransomware affects all industries but 2020 has seen cyber-criminals increasingly hit essential services like healthcare, local government and critical infrastructure – intentionally or as collateral damage. As the stakes rise, so too does the need to understand how to prevent these devastating and pervasive attacks.

Once deployed, ransomware can spread laterally through an organization’s digital infrastructure in seconds, taking entire systems offline in minutes. Attackers often strike at night or at weekends, when they know security teams’ response times will be slower. Machine-speed attacks require machine-speed defenses that can detect, respond to, and autonomously block the threat without human guidance.

This blog explains how AI detects and stops ransomware by learning ‘normal’ across the digital estate – from email and SaaS applications to the network, cloud, IoT and industrial control systems – using an example of a Maze ransomware attack caught by Darktrace in a customer’s environment.

Darktrace’s Immune System detected the threat as soon as it emerged, but because the Autonomous Response capability was configured in passive mode, neutralizing the threat still required human action. As a result, the attackers were able to move laterally across the organization at speed and begin encrypting files before the security team stepped in. In active mode, Antigena Network would have contained the activity in its earliest stages.

How does Darktrace detect ransomware like Maze?

As soon as Darktrace is deployed – whether virtually or on-premises – the AI begins to learn the ‘pattern of life’ for every user and device across the organization. This enables the technology to detect anomalous activity indicative of a cyber-threat. It does this without relying on hard-coded rules and signatures – an approach that requires a ‘Patient Zero’ before those lists can be updated and subsequent identical threats contained. When it comes to a novel strain of ransomware spreading across an organization and infecting hundreds of devices in seconds, such an approach becomes useless.
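To make the contrast with signature-based detection concrete, the sketch below shows a toy version of this idea: a per-device baseline of external destinations is learned during normal operation, and a connection to a destination the device has rarely or never contacted scores as highly anomalous. The class, names, and scoring are illustrative only and are not Darktrace’s implementation.

```python
# Illustrative sketch only: a toy "pattern of life" baseline that scores how rare
# an external destination is for a given device. All names and values are hypothetical.
from collections import defaultdict

class DeviceBaseline:
    def __init__(self):
        # device -> {domain: connection count} learned during normal operation
        self.history = defaultdict(lambda: defaultdict(int))

    def observe(self, device: str, domain: str) -> None:
        """Record a connection seen during the learning period."""
        self.history[device][domain] += 1

    def rarity_score(self, device: str, domain: str) -> float:
        """Return 0.0 for frequently seen destinations, 1.0 for never-seen ones."""
        seen = self.history[device]
        total = sum(seen.values())
        if total == 0 or domain not in seen:
            return 1.0
        return 1.0 - (seen[domain] / total)

baseline = DeviceBaseline()
for _ in range(200):
    baseline.observe("laptop-42", "updates.vendor.example")
baseline.observe("laptop-42", "intranet.example")

# A connection to a never-before-seen domain scores as maximally rare -
# the kind of signal a signature-based tool has no prior rule for.
print(baseline.rarity_score("laptop-42", "mazedecrypt.top"))          # 1.0
print(baseline.rarity_score("laptop-42", "updates.vendor.example"))   # close to 0.0
```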

With an understanding of the organization’s ‘pattern of life’, Darktrace’s AI recognizes unusual activity in real time. Such activity might include:

Activity – Darktrace detections

  • Unusual downloads from C2 servers – EXE from Rare Destination / Masqueraded File Transfer
  • Brute forcing publicly accessible RDP servers – Incoming RDP brute force models
  • Brute forcing access to web portal user accounts with weak passwords or lacking MFA – Various brute force models
  • C2 via Cobalt Strike / Empire PowerShell – SSL Beaconing to Rare Endpoint / Empire PowerShell and Cobalt Strike models
  • Network scanning for reconnaissance & EternalBlue exploit – Suspicious Network Scan model (known to download Advanced IP Scanner after successful exploitation)
  • Mimikatz usage for privilege escalation – Unusual Admin SMB Session / Unusual RDP Admin Session (Procdump, PingCastle, and Bloodhound)
  • PsExec / ‘living off the land’ for lateral movement – Unusual Remote Command Execution / Unusual PSexec / Unusual DCE RPC
  • Data exfiltration to C2 servers – Data Sent to Rare Domain / Unusual Internal Download / Unusual External Upload
  • Encryption – Suspicious SMB Activity / Additional File Extensions Appended
  • Exfiltration of passwords through various cloud storage services – Data Sent to New External Domain
  • RDP tunnels using Ngrok – Outbound RDP / Various beaconing models

In addition, Darktrace is able to identify attempts to brute force access on Internet-facing servers. It can also detect specific searches for passwords stored in plain text as well as various password manager databases.
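As a rough illustration of the brute-force case, the hypothetical sketch below counts failed authentications per source IP inside a sliding window and flags sources that exceed a threshold. The log format and threshold are invented for the example and do not reflect Darktrace’s models.

```python
# Hedged illustration: flag sources that generate many failed logins against an
# Internet-facing service (e.g. RDP or a web portal) within a short window.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 20  # failed attempts within the window before alerting (illustrative)

def brute_force_sources(failed_logins):
    """failed_logins: iterable of (timestamp, source_ip) for failed authentications."""
    attempts = defaultdict(list)
    alerts = set()
    for ts, src in sorted(failed_logins):
        attempts[src].append(ts)
        # drop attempts that have aged out of the window
        attempts[src] = [t for t in attempts[src] if ts - t <= WINDOW]
        if len(attempts[src]) >= THRESHOLD:
            alerts.add(src)
    return alerts

now = datetime(2020, 10, 21, 3, 0)
noisy = [(now + timedelta(seconds=i), "203.0.113.7") for i in range(25)]
quiet = [(now + timedelta(minutes=i), "198.51.100.9") for i in range(3)]
print(brute_force_sources(noisy + quiet))  # {'203.0.113.7'}
```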

Maze ransomware analysis

Figure 1: A timeline of the attack

Most recently, Darktrace’s AI detected a case of Maze ransomware targeting a healthcare organization. Darktrace’s Immune System spotted every stage of the attack lifecycle within seconds, and the Cyber AI Analyst immediately launched an automated investigation of the full incident, surfacing a natural-language, actionable summary for the security team.

The initial infection vector was spear phishing. Maze is frequently delivered to healthcare organizations using pandemic-themed phishing emails. Darktrace also offers AI-powered email security that understands normal behavior for every Microsoft 365 user and spots anomalies that are indicative of phishing, but in the absence of this protection, the emails were waved through by traditional gateways.

The attacker began network scanning and enumeration to escalate access within the Research and Development subnet. Darktrace’s AI detected a successful compromise of admin-level credentials, unusual RDP activity, and multiple Kerberos authentication attempts.
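For a concrete picture of what scanning detection can look like in principle, here is a minimal, hypothetical sketch that flags devices contacting an unusually large number of distinct internal hosts and ports within a short window. It is a simplification for illustration, not the logic of Darktrace’s models.

```python
# Minimal sketch: internal reconnaissance often shows up as one device suddenly
# contacting far more distinct internal hosts/ports than it ever has before.
from collections import defaultdict

def scan_suspects(connections, max_normal_fanout=30):
    """connections: iterable of (source_device, dest_ip, dest_port) seen in a short window."""
    fanout = defaultdict(set)
    for src, dst_ip, dst_port in connections:
        fanout[src].add((dst_ip, dst_port))
    return {src for src, targets in fanout.items() if len(targets) > max_normal_fanout}

# Hypothetical window of connection records from the R&D subnet
window = [("rnd-workstation-7", f"10.20.1.{i}", 445) for i in range(1, 120)]
window += [("file-server-2", "10.20.1.5", 445), ("file-server-2", "10.20.1.9", 139)]
print(scan_suspects(window))  # {'rnd-workstation-7'}
```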

Darktrace detected the attacker uploading an executable to a domain controller, before batch files used in the encryption process were written to multiple file shares.

An infected device then connected to a suspicious domain associated with Maze, mazedecrypt[.]top, and the Tor browser bundle was downloaded, likely for C2 purposes. A large volume of sensitive data from the R&D subnet was then uploaded to a rare domain. This is typical of Maze ransomware, which is seen as a ‘double threat’: it not only encrypts critical files but also sends a copy of them back to the attacker.

This form of attack, also known as doxware, gives the attacker leverage in the event that the organization refuses to pay the ransom: they can sell the data on the Dark Web, or threaten to leak intellectual property to competitors, for instance.
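To illustrate the exfiltration side of this ‘double threat’ in principle, the sketch below flags an outbound transfer that is both unusually large relative to a device’s normal upload volume and directed at a destination the device has never used. The figures, field names, and thresholds are invented for the example.

```python
# Illustrative only: compare an upload against a device's historical outbound profile.
def upload_is_anomalous(bytes_sent, destination, device_profile,
                        min_bytes=500_000_000, rarity_factor=10):
    """
    device_profile: dict describing the device's typical behaviour, e.g.
        {"median_daily_upload_bytes": 20_000_000, "known_destinations": {...}}
    Flags transfers that are large in absolute terms, far above the device's
    normal upload volume, and headed to a destination it has never used.
    """
    new_destination = destination not in device_profile["known_destinations"]
    unusually_large = (bytes_sent >= min_bytes and
                       bytes_sent > rarity_factor * device_profile["median_daily_upload_bytes"])
    return new_destination and unusually_large

profile = {"median_daily_upload_bytes": 20_000_000,
           "known_destinations": {"backup.corp.example", "crm.vendor.example"}}
print(upload_is_anomalous(3_200_000_000, "files.attacker-drop.example", profile))  # True
print(upload_is_anomalous(5_000_000, "crm.vendor.example", profile))               # False
```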

Real-time automated investigations with Cyber AI Analyst

Throughout the attack lifecycle, Darktrace’s AI generated multiple high-fidelity alerts, prompting the Cyber AI Analyst to automatically launch an investigation in the background. It stitched the different events together into a single, comprehensive security incident, which it then displayed for human review on a single screen.

Figure 2: The data exfiltration to a rare external domain

Figure 3: Darktrace’s user interface highlighting the unusual activity and model breaches on a domain controller directly linked with the ransomware attack
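As a rough mental model of how alerts like those above are stitched into one incident, the toy sketch below groups alerts that share an entity and fall within the same time window. It is a simplification for illustration only, not the Cyber AI Analyst’s actual algorithm.

```python
# Toy sketch: group related alerts into incidents by shared entity and time proximity.
from datetime import datetime, timedelta

def build_incidents(alerts, window=timedelta(hours=6)):
    """alerts: list of dicts with 'time', 'entity' and 'summary' keys."""
    incidents = []
    for alert in sorted(alerts, key=lambda a: a["time"]):
        for incident in incidents:
            shares_entity = alert["entity"] in incident["entities"]
            recent = alert["time"] - incident["last_seen"] <= window
            if shares_entity and recent:
                incident["alerts"].append(alert["summary"])
                incident["entities"].add(alert["entity"])
                incident["last_seen"] = alert["time"]
                break
        else:
            # no existing incident matched, so start a new one
            incidents.append({"alerts": [alert["summary"]],
                              "entities": {alert["entity"]},
                              "last_seen": alert["time"]})
    return incidents

t0 = datetime(2020, 10, 18, 2, 0)
alerts = [
    {"time": t0, "entity": "rnd-workstation-7", "summary": "Suspicious network scan"},
    {"time": t0 + timedelta(minutes=40), "entity": "rnd-workstation-7", "summary": "Unusual RDP admin session"},
    {"time": t0 + timedelta(hours=2), "entity": "rnd-workstation-7", "summary": "Data sent to rare domain"},
]
print(len(build_incidents(alerts)))  # 1 consolidated incident
```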

Targeted, double-threat attacks like Maze ransomware are on the rise, extremely dangerous, and increasingly aimed at high-stakes environments. Thousands of organizations are turning to AI, not only to detect and investigate ransomware intrusions as demonstrated above, but also to respond autonomously to events as they occur. Ransomware attacks like these show organizations why Autonomous Response in active mode is not just a nice-to-have but a necessity: fast-moving threats demand machine-speed responses.

In a previous blog, we looked at a novel zero-day ransomware attack that slipped through legacy security tools – but there, Antigena Network was configured in active mode and autonomously stopped the threat in its tracks. This unique capability is becoming crucial for organizations in every industry that find themselves targeted by increasingly sophisticated attack methods.

Thanks to Darktrace analyst Adam Stevens for his insights on the above threat find.


Darktrace model detections

  • Device / Suspicious Network Scan Activity
  • Device / Network Scan
  • Device / ICMP Address Scan
  • Unusual Activity / Unusual Internal Connections
  • Device / Multiple Lateral Movement Model Breaches
  • Experimental / Executable Uploaded to DC
  • Compromise / Ransomware::Suspicious SMB Activity
  • Compromise / Ransomware::Ransom or Offensive Words Written to SMB
  • Compliance / SMB Drive Write
  • Compliance / High Priority Compliance Model Breach
  • Anomalous Connection / SMB Enumeration
  • Device / Suspicious File Writes to Multiple Hidden SMB Shares
  • Device / New or Unusual Remote Command Execution
  • Anomalous Connection / New or Uncommon Service Control
  • Experimental / Possible RPC Execution
  • Anomalous Connection / High Volume of New or Uncommon Service Control
  • Experimental / Possible Ransom Note
  • Anomalous File / Internal::Additional Extension Appended to SMB File
  • Compliance / Tor Package Download
  • Device / Suspicious Domain
  • Device / Long Agent Connection to New Endpoint
  • Anomalous Connection / Data Sent to Rare Domain

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Author
Max Heinemeyer
Global Field CISO

Max is a cyber security expert with over a decade of experience in the field, specializing in a wide range of areas such as Penetration Testing, Red-Teaming, SIEM and SOC consulting and hunting Advanced Persistent Threat (APT) groups. At Darktrace, Max is closely involved with Darktrace’s strategic customers & prospects. He works with the R&D team at Darktrace, shaping research into new AI innovations and their various defensive and offensive applications. Max’s insights are regularly featured in international media outlets such as the BBC, Forbes and WIRED. Max holds an MSc from the University of Duisburg-Essen and a BSc from the Cooperative State University Stuttgart in International Business Information Systems.


Reimagining Your SOC: Overcoming Alert Fatigue with AI-Led Investigations

January 30, 2025

The efficiency of a Security Operations Center (SOC) hinges on its ability to detect, analyze and respond to threats effectively. With advancements in AI and automation, key early SOC team metrics such as Mean Time to Detect (MTTD) have seen significant improvements:

  • 96% of defenders believe that AI-powered solutions significantly boost the speed and efficiency of prevention, detection, response, and recovery.
  • Organizations leveraging AI and automation can shorten their breach lifecycle by an average of 108 days compared to those without these technologies.

While tooling advances have improved performance and effectiveness in the detection phase, the benefits have not carried over to the next step of the process, where initial alerts are investigated further to determine their relevance and how they relate to other activity. This step is often measured with the metric Mean Time to Analysis (MTTA), although some SOC teams operate a two-level process: one team performs initial triage to filter out obviously uninteresting alerts, and another performs more detailed analysis of the remainder. SOC teams continue to grapple with alert fatigue, overwhelmed analysts, and inefficient triage processes, preventing them from achieving the operational efficiency necessary for a high-performing SOC.
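Since MTTD and MTTA come up repeatedly below, here is a small worked example with invented incident timestamps: MTTD is computed as the mean gap between the start of malicious activity and detection, and MTTA as the mean gap between detection and completed analysis.

```python
# Worked example with invented data: computing MTTD and MTTA from incident timestamps.
from datetime import datetime

incidents = [
    # (activity_start, detected_at, analysis_complete_at)
    (datetime(2025, 1, 3, 1, 0), datetime(2025, 1, 3, 1, 5), datetime(2025, 1, 3, 4, 5)),
    (datetime(2025, 1, 9, 22, 0), datetime(2025, 1, 9, 22, 2), datetime(2025, 1, 10, 1, 2)),
]

def mean_minutes(deltas):
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

mttd = mean_minutes([detected - start for start, detected, _ in incidents])
mtta = mean_minutes([analyzed - detected for _, detected, analyzed in incidents])
print(f"MTTD: {mttd:.1f} min, MTTA: {mtta:.1f} min")  # MTTD: 3.5 min, MTTA: 180.0 min
```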

Addressing this core inefficiency requires extending AI's capabilities beyond detection, to streamline and optimize the investigative workflows that underpin effective analysis.

Challenges with SOC alert investigation

Detecting cyber threats is only the beginning of a much broader challenge of SOC efficiency. The real bottleneck often lies in the investigation process.

Detection tools and techniques have evolved significantly with the use of machine learning methods, improving early threat detection. However, after a detection pops up, human analysts still typically step in to evaluate the alert, gather context, and determine whether it’s a true threat or a false alarm and why. If it is a threat, further investigation must be performed to understand the full scope of what may be a much larger problem. This phase, measured by the mean time to analysis, is critical for swift incident response.

Challenges with manual alert investigation:

  • Too many alerts
  • Alerts lack context
  • Cognitive load sits with analysts
  • Insufficient talent in the industry
  • Fierce competition for experienced analysts

For many organizations, investigation is where the struggle of efficiency intensifies. Analysts face overwhelming volumes of alerts, a lack of consolidated context, and the mental strain of juggling multiple systems. With a worldwide shortage of 4 million experienced level two and three SOC analysts, the cognitive burden placed on teams is immense, often leading to alert fatigue and missed threats.

Even with advanced systems in place, not all potential detections are investigated. In many cases, only a quarter of initial alerts are triaged (or analyzed). However, the issue runs deeper. Triage happens after detection engineering and alert tuning, which often disable alerts that could reveal true threats but are not accurate enough to justify the security team's time and effort. This means some potential threats slip through unnoticed.

Understanding alerts in the SOC: Stopping cyber incidents is hard

Let’s take a look at the cyber-attack lifecycle and the steps involved in detecting and stopping an attack:

First we need a trace of an attack…

The attack will produce some sort of digital trace. Novel attacks, insider threats, and techniques such as living off the land can make attacker activity extremely hard to distinguish from normal behavior.

A detection is created…

Then we have to detect the trace – for example, beaconing to a rare domain. The time until an initial detection alert is raised underpins MTTD (Mean Time to Detect). Reducing this initial undetected period is where modern threat detection tools have delivered significant improvement.

When it comes to threat detection, the possibilities are vast. Your initial lead could come from anything: an alert about unusual network activity, a detection of known malware, or an odd email. Once that lead comes in, it’s up to your security team to investigate further and determine whether it is a legitimate threat or a false alarm, and what the context behind the alert is.

Investigation begins…

It doesn’t just stop at a detection. Typically, humans also need to look at the alert, investigate, understand, analyze, and conclude whether this is a genuine threat that needs a response. This is normally measured as MTTA (Mean Time to Analysis).

Conducting the investigation effectively requires a high degree of skill and efficiency, as every second counts in mitigating potential damage. Security teams must analyze the available data, correlate it across multiple sources, and piece together the timeline of events to understand the full scope of the incident. This process involves navigating through vast amounts of information, identifying patterns, and discerning relevant details. All while managing the pressure of minimizing downtime and preventing further escalation.

Containment begins…

Once we confirm something as a threat, and the human team determines that a response is required and understands the scope, we need to contain the incident. This is normally measured as MTTC (Mean Time to Contain), which can be further split into immediate and more permanent measures.

For more about how AI-led solutions can help in the containment stage read here: Autonomous Response: Streamlining Cybersecurity and Business Operations

The challenge is not only in 1) detecting threats quickly, but also 2) triaging and investigating them rapidly and with precision, and 3) prioritizing the most critical findings to avoid missed opportunities. Effective investigation demands a combination of advanced tools, robust workflows, and the expertise to interpret and act on the insights they generate. Without these, organizations risk delaying critical containment and response efforts, leaving them vulnerable to greater impacts.

While there are further steps (remediation, and of course complete recovery) here we will focus on investigation.

Developing an AI analyst: How Darktrace replicates human investigation

Darktrace has been working on understanding the investigative process of a skilled analyst since 2017. Through internal research involving Darktrace’s expert SOC analysts and machine learning engineers, we developed a formalized understanding of that process. This understanding formed the basis of a multi-layered AI system that systematically investigates data, taking advantage of the speed and breadth afforded by machine systems.

With this research we found that the investigative process often revolves around iterating three key steps: hypothesis creation, data collection, and results evaluation.

All of these details are crucial for an analyst to determine the nature of a potential threat. They are also integral components of our Cyber AI Analyst, which itself works across our product suite. By encoding them, Darktrace has been able to replicate the human-driven approach to investigating alerts at machine speed and scale.

Here’s how it works:

  • When an initial or third-party alert is triggered, the Cyber AI Analyst initiates a forensic investigation by building multiple hypotheses and gathering relevant data to confirm or refute the nature of suspicious activity, iterating as necessary, and continuously refining the original hypothesis as new data emerges throughout the investigation.
  • Using a combination of machine learning techniques – including supervised and unsupervised methods, NLP, and graph theory – this investigation engine conducts a deep analysis of the activity, raising incidents to the human team only when the behavior is deemed sufficiently concerning.
  • After classification, the incident information is organized and processed to generate the analysis summary, including the most important descriptive details, and priority classification, ensuring that critical alerts are prioritized for further action by the human-analyst team.
  • If the alert is deemed unimportant, the complete analysis process is made available to the human team so that they can see what investigation was performed and why this conclusion was drawn.
Figure: The Cyber AI Analyst investigation workflow.

To illustrate this with an example: if a laptop is beaconing to a rare domain, the Cyber AI Analyst would create hypotheses including whether this could be command-and-control traffic, data exfiltration, or something else entirely. It then collects data, analyzes it, makes decisions, iterates, and ultimately raises a new high-level incident alert detailing its findings for human analysts to review and follow up.
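A deliberately simplified sketch of that hypothesis, data collection, and evaluation loop, applied to the beaconing-laptop example, might look like the following. The hypotheses, evidence types, and scoring are illustrative stand-ins rather than Darktrace's implementation.

```python
# Simplified sketch of a hypothesis -> data collection -> evaluation loop.
def investigate_beaconing(device, domain, telemetry):
    """telemetry: callable(device, question) -> bool, standing in for data collection."""
    hypotheses = {
        "command-and-control": ["regular connection intervals", "rare domain", "small payloads"],
        "data exfiltration": ["rare domain", "large outbound volume"],
        "benign software update": ["well-known domain", "signed installer downloaded"],
    }
    scores = {}
    for hypothesis, expected_evidence in hypotheses.items():
        # collect the evidence each hypothesis predicts, then score how much was observed
        observed = [e for e in expected_evidence if telemetry(device, e)]
        scores[hypothesis] = len(observed) / len(expected_evidence)
    best = max(scores, key=scores.get)
    return best, scores

# Fake telemetry for the example: the laptop beacons regularly to a rare domain.
facts = {"regular connection intervals", "rare domain", "small payloads"}
conclusion, evidence = investigate_beaconing("laptop-42", "rare-domain.example",
                                             lambda device, q: q in facts)
print(conclusion)  # command-and-control
print(evidence)
```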

For more information on Darktrace’s Cyber AI Analyst click here!

Unlocking an efficient SOC

To create a mature and proactive SOC, addressing the inefficiencies in the alert investigation process is essential. By extending AI's capabilities beyond detection, SOC teams can streamline and optimize investigative workflows, reducing alert fatigue and enhancing analyst efficiency.

This holistic approach not only improves Mean Time to Analysis (MTTA) but also ensures that SOCs are well-equipped to handle the evolving threat landscape. Embracing AI augmentation and automation in every phase of threat management will pave the way for a more resilient and proactive security posture, ultimately leading to a high-performing SOC that can effectively safeguard organizational assets.

Every relevant alert is investigated

The Cyber AI Analyst is not a generative AI system, nor an XDR or SIEM aggregator that simply prompts you on what to do next. It uses a multi-layered combination of many different specialized AI methods to investigate every relevant alert from across your enterprise – native, third-party, and manual triggers – operating at machine speed and scale. This also positively affects detection engineering and alert tuning, because it does not suffer from fatigue when presented with low-accuracy but potentially valuable alerts.

Retain and improve analyst skills

Transferring most of the analysis process to AI systems can erode team skills if analysts no longer maintain or build them, and if the AI does not explain its reasoning. This reduces the team's ability to challenge or build on the AI's results, and causes problems if the AI is ever unavailable. By revealing its investigation process, the data it gathered, and the decisions it made, the Cyber AI Analyst helps preserve and improve these skills. Its deep understanding of cyber incidents can also be used for training and incident response practice, by simulating incidents for security teams to handle.

Create time for cyber risk reduction

Human cybersecurity professionals excel in areas that require critical thinking, strategic planning, and nuanced decision-making. With alert fatigue minimized and investigations streamlined, your analysts can avoid the tedious data collection and analysis stages and instead focus on critical decision-making tasks such as implementing recovery actions and performing threat hunting.

Stay tuned for part 3/3

Part 3/3 in the Reimagine your SOC series explores the preventative security solutions market and effective risk management strategies.

Coming soon!

About the author
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface

Bytesize Security: Insider Threats in Google Workspace

January 29, 2025 / Inside the SOC

What is an insider threat?

An insider threat is a cyber risk originating from within an organization. These threats can involve actions such as an employee inadvertently clicking on a malicious link (e.g., a phishing email) or an employee with malicious intent conducting data exfiltration for corporate sabotage.

Insiders often exploit their knowledge and access to legitimate corporate tools, presenting a continuous risk to organizations. Defenders must protect their digital estate against threats from both within and outside the organization.

For example, in the summer of 2024, Darktrace / IDENTITY successfully detected a user in a customer environment attempting to steal sensitive data from a trusted Google Workspace service. Despite the use of a legitimate and compliant corporate tool, Darktrace identified anomalies in the user’s behavior that indicated malicious intent.

Attack overview: Insider threat

In June 2024, Darktrace detected unusual activity involving the Software-as-a-Service (SaaS) account of a former employee from a customer organization. This individual, who had recently left the company, was observed downloading a significant amount of data in the form of a “.INDD” file (an Adobe InDesign document typically used to create page layouts [1]) from Google Drive.

While the use of Google Drive and other Google Workspace platforms was not unexpected for this employee, Darktrace identified that the user had logged in from an unfamiliar and suspicious IPv6 address before initiating the download. This anomaly triggered a model alert in Darktrace / IDENTITY, flagging the activity as potentially malicious.

Figure 1: A Model Alert in Darktrace / IDENTITY showing the unusual “.INDD” file being downloaded from Google Workspace.

Following this detection, the customer reached out to Darktrace’s Security Operations Center (SOC) team via the Security Operations Support service for assistance in triaging and investigating the incident further. Darktrace’s SOC team conducted an in-depth investigation, enabling the customer to identify the exact moment of the file download, as well as the contents of the stolen documents. The customer later confirmed that the downloaded files contained sensitive corporate data, including customer details and payment information, likely intended for reuse or sharing with a new employer.

In this particular instance, Darktrace’s Autonomous Response capability was not active, allowing the malicious insider to successfully exfiltrate the files. If Autonomous Response had been enabled, Darktrace would have immediately acted upon detecting the login from an unusual (in this case 100% rare) location by logging out and disabling the SaaS user. This would have provided the customer with the necessary time to review the activity and verify whether the user was authorized to access their SaaS environments.
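As a hedged illustration of that logic, the sketch below scores how rare a login source is against an organization's login history and, when an autonomous response mode is enabled, takes placeholder actions standing in for "log out and disable the SaaS user". The functions, fields, and thresholds are hypothetical, not Darktrace's implementation.

```python
# Hypothetical sketch: flag a SaaS login from a source the organization has
# essentially never seen, and optionally take a containment action.
def location_rarity(source_prefix, org_login_history):
    """Fraction of historical logins NOT from this prefix; 1.0 means 100% rare."""
    total = sum(org_login_history.values())
    if total == 0:
        return 1.0
    return 1.0 - org_login_history.get(source_prefix, 0) / total

def handle_saas_login(user, source_prefix, history, respond_autonomously):
    rarity = location_rarity(source_prefix, history)
    if rarity >= 0.99:  # effectively never seen before
        alert = f"Unusual login for {user} from {source_prefix} (rarity {rarity:.2f})"
        if respond_autonomously:
            # Placeholder standing in for "log out and disable the SaaS user".
            return alert + " -> session terminated, account disabled pending review"
        return alert + " -> alert raised for human review"
    return None

history = {"2001:db8:corp::/48": 4980, "198.51.100.0/24": 20}
print(handle_saas_login("former.employee", "2001:db8:ffff::/48", history,
                        respond_autonomously=False))
```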

Conclusion

Insider threats pose a significant challenge for traditional security tools as they involve internal users who are expected to access SaaS platforms. These insiders have preexisting knowledge of the environment, sensitive data, and how to make their activities appear normal, as seen in this case with the use of Google Workspace. This familiarity allows them to avoid having to use more easily detectable intrusion methods like phishing campaigns.

Darktrace’s anomaly detection capabilities, which focus on identifying unusual activity rather than relying on specific rules and signatures, enable it to effectively detect deviations from a user’s expected behavior. For instance, an unusual login from a new location, as in this example, can be flagged even if the subsequent malicious activity appears innocuous due to the use of a trusted application like Google Drive.

Credit to Vivek Rajan (Cyber Analyst) and Ryan Traill (Analyst Content Lead)

Appendices

Darktrace Model Detections

SaaS / Resource::Unusual Download Of Externally Shared Google Workspace File

References

[1] https://www.adobe.com/creativecloud/file-types/image/vector/indd-file.html

MITRE ATT&CK Mapping

Technique – Tactic – ID

Data from Cloud Storage Object – Collection – T1530

About the author
Vivek Rajan
Cyber Analyst