Blog
/
Proactive Security
/
June 25, 2024

Let the Dominos Fall! SOC and IR Metrics for ROI

Vendors are scrambling to compare MTTD metrics laid out in the latest MITRE Engenuity ATT&CK® Evaluations. But this analysis is reductive, ignoring the fact that in cybersecurity, there are far more metrics that matter.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
John Bradshaw
Sr. Director, Technical Marketing

One of the most enjoyable discussions (and debates) I engage in is the topic of Security Operations Center (SOC) and Incident Response (IR) metrics to measure and validate an organization’s Return on Investment (ROI). The debate part comes in when I hear vendor experts talking about “the only” SOC metrics that matter, and only list the two most well-known, while completely ignoring metrics that have a direct causal relationship.

In this blog, I will discuss what I believe are the SOC/IR metrics that matter, how each one has a direct impact on the others, and why organizations should ensure they are working towards the goal of why these metrics are measured in the first place: Reduction of Risk and Costs.

Reduction of Risk and Costs

Every security solution and process an organization puts in place should reduce the organization’s risk of a breach, exposure by an insider threat, or loss of productivity. How an organization realizes net benefits can be in several ways:

  • Improved efficiencies can result in SOC/IR staff focusing on other areas such as advanced threat hunting rather than churning through alerts on their security consoles. It may also help organizations dealing with the lack of skilled security staff by using Artificial Intelligence (AI) and automated processes.
  • A well-oiled SOC/IR team that has greatly reduced or even eliminated mundane tasks attracts, motivates, and retains talent resulting in reduced hiring and training costs.
  • The direct impact of a breach such as a ransomware attack can be devastating. According to the 2024 Data Breach Investigations Report by Verizon, MGM Resorts International reported the ALPHV ransomware cost the company approximately $100 million[1].
  • Failure to take appropriate steps to protect the organization can result in regulatory fines; and if an organization has, or is considering, Cyber Insurance, it can result in declined coverage or increased premiums.

How does an organization demonstrate it is taking proactive measures to prevent breaches? That is where it is important to understand the nine (yes, nine) key metrics and how each one directly influences the others.

Metrics in the Incident Response Timeline

Let’s start with a review of the key steps in the Incident Response Timeline.

Seven of the nine key metrics are in the IR timeline, while two of the metrics occur before you ever have an incident. They occur in the Pre-Detection Stage.

Pre-Detection stage metrics are:

  • Preventions Per Intrusion Attempt (PPIA)
  • False Positive Reduction Rate (FPRR)

Next is the Detect and Investigate stage, where there are three metrics to consider:

  • Mean Time to Detection (MTTD)
  • Mean Time to Triage (MTTT)
  • Mean Time to Understanding (MTTU)

This is followed by the Remediation stage, which has two metrics:

  • Mean Time to Containment (MTTC)
  • Mean Time to Remediation / Recovery (MTTR)

Finally, there is the Risk Reduction stage, with two metrics:

  • Mean Time to Advice (MTTA)
  • Mean Time to Implementation (MTTI)
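As a sketch of how the mean-time metrics above can be tracked, the snippet below computes them from per-incident milestone timestamps. The record fields and values are hypothetical; in practice they would come from a ticketing or SOAR system.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: each maps IR-timeline milestones to a
# timestamp. Field names and values are illustrative, not from any product.
incidents = [
    {
        "intrusion":  datetime(2024, 3, 1, 9, 0),    # earliest detectable point
        "detected":   datetime(2024, 3, 1, 11, 30),
        "triaged":    datetime(2024, 3, 1, 12, 0),
        "understood": datetime(2024, 3, 1, 15, 0),
        "contained":  datetime(2024, 3, 1, 16, 0),
        "remediated": datetime(2024, 3, 2, 10, 0),
    },
    {
        "intrusion":  datetime(2024, 3, 5, 8, 0),
        "detected":   datetime(2024, 3, 5, 8, 30),
        "triaged":    datetime(2024, 3, 5, 9, 0),
        "understood": datetime(2024, 3, 5, 10, 0),
        "contained":  datetime(2024, 3, 5, 10, 30),
        "remediated": datetime(2024, 3, 5, 14, 30),
    },
]

def mean_hours(records, start_key, end_key):
    """Average elapsed hours between two IR milestones across incidents."""
    return mean((r[end_key] - r[start_key]).total_seconds() / 3600 for r in records)

mttd = mean_hours(incidents, "intrusion", "detected")    # Mean Time to Detection
mttt = mean_hours(incidents, "detected", "triaged")      # Mean Time to Triage
mttu = mean_hours(incidents, "triaged", "understood")    # Mean Time to Understanding
mttc = mean_hours(incidents, "understood", "contained")  # Mean Time to Containment
mttr = mean_hours(incidents, "contained", "remediated")  # Mean Time to Remediation / Recovery

print(f"MTTD={mttd}h MTTT={mttt}h MTTU={mttu}h MTTC={mttc}h MTTR={mttr}h")
```

The same pattern extends to MTTA and MTTI by adding post-incident milestones (advice delivered, recommendations implemented) to each record.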

Pre-Detection Stage

Preventions Per Intrusion Attempt

PPIA measures how often an intrusion attempt is stopped at the earliest possible stage. Your network Intrusion Prevention System (IPS) blocks vulnerability exploits, your e-mail security solution intercepts and removes messages with malicious attachments or links, your egress firewall blocks unauthorized login attempts, etc. The adversary doesn’t get beyond Step 1 in the attack life cycle.

This metric is the first domino. Every organization should strive to improve on this metric every day. Why? For every intrusion attempt you stop right out of the gate, you eliminate the actions for every other metric. There is no incident to detect, triage, investigate, remediate, or analyze post-incident for ways to improve your security posture.
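One simple way to express PPIA is as a ratio of attempts stopped at the perimeter to total attempts observed. The counts below are hypothetical, purely to illustrate the calculation:

```python
# PPIA as a ratio: intrusion attempts stopped at the earliest stage
# divided by total attempts observed. Counts are hypothetical.
blocked_at_entry = 9_850   # IPS blocks + email interceptions + firewall denials
total_attempts = 10_000    # blocked_at_entry + attempts that got past Step 1

ppia = blocked_at_entry / total_attempts
print(f"PPIA: {ppia:.1%}")
print(f"Attempts remaining to detect and investigate: {total_attempts - blocked_at_entry}")
```

The second print line is the point of the domino analogy: every attempt counted in the numerator never generates work for any downstream metric.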

When I think about PPIA, I always remember back to a discussion with a former mentor, Tim Crothers, who discussed the benefits of focusing on Prevention Failure Detection.

The concept is that as you layer your security defenses, your PPIA moves ever closer to 100% (no one has ever reached 100%). This narrows the field of fire for adversaries to breach your organization. This is where novel, unknown, and permuted threats live and breathe. This is where solutions utilizing Unsupervised Machine Learning excel in raising anomalous alerts – indications of potential compromise involving one of these threats. Unsupervised ML also raises alerts on anomalous activity generated by known threats and can raise detections before many signature-based solutions. Most organizations struggle to find strong permutations of known threats, insider threats, supply chain attacks, and attacks utilizing n-day and 0-day exploits.

Moving PPIA ever closer to 100% also frees your team up to conduct threat hunting activities – utilizing components of your SOC that collect and store telemetry to query for potential compromises based on hypotheses the team raises. It also significantly reduces the alerts your team must triage and investigate – solving many of the issues outlined at the start of this paper.

False Positive Reduction Rate

Before we discuss FPRR, I should clarify how I define False Positives (FPs). Many define an FP as an alert that is in error (e.g., your EDR alerts on malware that turns out to be AV signature files). While that is an FP, I extend the definition to include any alert that did not require triage / investigation yet distracted the SOC/IR team (meaning they conducted some level of triage / investigation).

This metric is the second domino. Why is this metric important? Every alert your team exerts time and effort on that is a non-issue distracts them from alerts that matter. One of the major issues that has resonated in the security industry for decades is that SOCs are inundated with alerts and cannot clear the backlog. When it comes to PPIA + FPRR, I have seen analysts spend time investigating alerts that were blocked out of the gate while their screen continued to fill up with more. You must focus on Prevention Failure Detection to get ahead of the backlog.
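Under the extended FP definition above, FPRR can be computed by comparing, across two reporting periods, the count of alerts that consumed analyst time but required no action. The figures here are hypothetical:

```python
# FPRR across two reporting periods, using the extended FP definition:
# any alert that consumed triage/investigation time but required no action.
fps_last_period = 1_200   # alerts closed as "no action" last month
fps_this_period = 780     # same measure this month, after tuning and prevention work

fprr = (fps_last_period - fps_this_period) / fps_last_period
print(f"False Positive Reduction Rate: {fprr:.0%}")
```

A positive FPRR means the team spent less time on non-issues than in the prior period; a negative value is an early warning that a new tool or rule set is flooding the queue.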

Detect and Investigate Stage

Mean Time to Detection

MTTD, or “Dwell Time”, has decreased dramatically over the past 12 years, from well over a year to 16 days in 2023[2]. MTTD is measured from the earliest possible point you could detect the intrusion to the moment you actually detect it.

This third domino is important because the longer an adversary remains undetected, the more the odds increase they will complete their mission objective. It also makes the tasks of triage and investigation more difficult as analysts must piece together more activity and adversaries may be erasing evidence along the way – or your storage retention does not cover the breach timeline.

Many solutions focusing solely on MTTD can actually create the very problem SOCs are looking to solve.  That is, they generate so much alerting that they flood the console, email, or text messaging app causing an unmanageable queue of alerts (this is the problem XDR solutions were designed to resolve by focusing on incidents rather than alerts).

Mean Time to Triage

MTTT applies to SOCs that utilize Level 1 (aka Triage) analysts and measures how long it takes them to render an accurate “escalate / do not escalate” verdict on an alert. Accuracy is important because Triage Analysts are typically staff new to cyber security (recent graduates or certification holders) and may over-escalate (afraid to miss something important) or under-escalate (not recognizing signs of a successful breach). Because of this, a small MTTT does not always equate to successful handling of incidents.

This metric is important because keeping your senior staff focused on progressing incidents in a timely manner (and not expending time on false positives) should reduce stress and required headcount.

Mean Time to Understanding

MTTU deals with understanding the complete nature of the incident being investigated. This is different from MTTT, which only deals with whether the issue merits escalation to senior analysts. It is then up to the senior analysts to determine the scope of the incident, and if you are a follower of my UPSET Investigation Framework, you know understanding the full scope involves:

U = All compromised accounts

P = Persistence Mechanisms used

S = All systems involved (organization, adversary, and intermediaries)

E = Endgame (or mission objective)

T = Tactics, Techniques, and Procedures (TTPs) utilized by the adversary

MTTU is important because this information is critical before any containment or remediation actions are taken. Leave a stone unturned, and you alert the adversary that you are onto them and possibly fail to close an avenue of access.
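The UPSET dimensions lend themselves to a simple investigation-tracking structure. The sketch below is a hypothetical record for enforcing the rule above: don't move to containment until every dimension has been assessed. Field contents are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class UpsetScope:
    """Tracks the five UPSET dimensions during an investigation."""
    users: set = field(default_factory=set)          # U: compromised accounts
    persistence: list = field(default_factory=list)  # P: persistence mechanisms used
    systems: set = field(default_factory=set)        # S: all systems involved
    endgame: str = ""                                # E: assessed mission objective
    ttps: list = field(default_factory=list)         # T: observed TTPs (e.g. ATT&CK IDs)

    def ready_for_containment(self) -> bool:
        """True only once every dimension has at least one finding."""
        return all([self.users, self.persistence, self.systems, self.endgame, self.ttps])

scope = UpsetScope()
scope.users.add("jdoe")
scope.persistence.append("scheduled task")
scope.systems.update({"ws-042", "203.0.113.7"})
scope.endgame = "data exfiltration"
scope.ttps.append("T1053.005")
print(scope.ready_for_containment())
```

A gate like `ready_for_containment()` in an IR playbook is one way to operationalize "leave no stone unturned" before tipping your hand to the adversary.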

Remediation Stage

Mean Time to Containment

MTTC deals with neutralizing the threat. You may not have kicked the adversary out, but you have halted their progress to their mission objective and ability to inflict further damage. This may be through use of isolation capabilities, termination of malicious processes, or firewall blocks.

MTTC is important, especially with ransomware attacks where every second counts. Faster containment responses can result in reduced / eliminated disruption to business operations or loss of data.

Mean Time to Remediation / Recovery

The full scope of the incident is understood, the adversary has been halted in their tracks, no malicious processes are running on any systems in your organization. Now is the time to put things back to right. MTTR deals with the time involved in restoring business operations to pre-incident stage. It means all remnants of changes made by the adversary (persistence, account alterations, programs installed, etc.) are removed; all disrupted systems are restored to operations (i.e.: ransomware encrypted systems are recovered from backups / snapshots), compromised user accounts are reset, etc.

MTTR is important because it informs senior management of how fast the organization can recover from an incident. Disaster Recovery and Business Continuity plans play a major role in improving this score.

Risk Reduction Stage

Mean Time to Advice

After the dust has settled from the incident, the job is not done. MTTA deals with identifying and assessing the specific areas (vulnerabilities, misconfigurations, lack of security controls) that permitted the adversary to advance to the point where detection occurred (and any actions beyond). The SOC and IR teams should then compile a list of recommendations to present to management to improve the security posture of the organization so the same attack path cannot be used.

Mean Time to Implementation

Once recommendations are delivered to management, how long does it take to implement them? MTTI tracks this timeline because none of it matters if you don’t fix the holes that led to the breach.

Nine Dominos

These are the nine dominos of SOC / IR metrics I recommend to help organizations know whether they are on the right track to reduce risk and costs and improve morale and retention of their security teams. You may not wish to track all nine, but understanding how each metric impacts the others can provide visibility into why you are not seeing expected improvements when you implement a new security solution or change processes.

Improving prevention and reducing false positives can make huge positive impacts on your incident response timeline. Utilizing solutions that get you to resolution quicker allows the team to focus on recommendations and risk reduction strategies.

Whichever metrics you choose to track, just be sure the dominos fall in your favor.

References

[1] 2024 Verizon Data Breach Investigations Report, p83

[2] Mandiant M-Trends 2023


Blog
/
May 7, 2025

Anomaly-based threat hunting: Darktrace's approach in action


What is threat hunting?

Threat hunting in cybersecurity involves proactively and iteratively searching through networks and datasets to detect threats that evade existing automated security solutions. It is an important component of a strong cybersecurity posture.

There are several frameworks that Darktrace analysts use to guide how threat hunting is carried out, some of which are:

  • MITRE ATT&CK – Tactics, Techniques, Procedures (TTPs)
  • Diamond Model for Intrusion Analysis – Adversary, Infrastructure, Victim, Capability
  • Threat Hunt Model – Six Steps: Purpose, Scope, Equip, Plan, Execute, Feedback
  • Pyramid of Pain

These frameworks are important in baselining how to run a threat hunt. There is also a combination of different methods that give defenders diversity, regardless of whether it is a proactive or reactive threat hunt. Some of these are:

  • Hypothesis-based threat hunting
  • Analytics-driven threat hunting
  • Automated/machine learning hunting
  • Indicator of Compromise (IoC) hunting
  • Victim-based threat hunting
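As a toy illustration of the hypothesis-based method, the sketch below tests the hypothesis "accounts are authenticating from source IPs rarely seen in our environment" against a handful of login events. The event format, names, and rarity threshold are hypothetical; a real hunt would query stored telemetry at scale.

```python
from collections import Counter

# Hypothetical login telemetry. A real hunt would pull this from a SIEM
# or data lake rather than an in-memory list.
events = [
    {"user": "alice", "src_ip": "10.0.0.5"},
    {"user": "alice", "src_ip": "10.0.0.5"},
    {"user": "bob",   "src_ip": "10.0.0.9"},
    {"user": "bob",   "src_ip": "10.0.0.9"},
    {"user": "bob",   "src_ip": "198.51.100.23"},  # rare external source
]

# Hypothesis: logins from IPs seen only once merit investigation.
ip_counts = Counter(e["src_ip"] for e in events)
rare_logins = [e for e in events if ip_counts[e["src_ip"]] == 1]

for e in rare_logins:
    print(f"Investigate: {e['user']} from rare IP {e['src_ip']}")
```

The same skeleton (state a hypothesis, translate it into a query, review the survivors) applies whether the rarity signal comes from a frequency count like this or from a learned model of normal behavior.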

Threat hunting with Darktrace

At its core, Darktrace is an anomaly-based detection tool. It combines various types of machine learning that allow it to characterize what constitutes ‘normal’, based on the analysis of many different measures of a device or actor’s behavior. Those types of learning are then curated into what are called models.

Darktrace models leverage anomaly detection and integrate outputs from Darktrace Deep Packet Inspection, telemetry inputs, and additional modules, creating tailored activity detection.

This dynamic understanding allows Darktrace to identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. On top of machine learning models for detection, there is also the ability to change and create models, showcasing the tool’s diversity. The Model Editor allows security teams to specify the values, priorities, thresholds, and actions they want to detect. That means a team can create custom detection models based on specific use cases or business requirements. Teams can also increase the priority of existing detections based on their own risk assessments of their environment.

This level of dexterity is particularly useful when conducting a threat hunt. As described above, and in previous ‘Inside the SOC’ blogs, such a threat hunt can focus on a specific threat actor or sector, or be a hypothesis-based hunt combined with ‘experimenting’ with some of Darktrace’s models.

Conducting a threat hunt in the energy sector with experimental models

In Darktrace’s recent Threat Research report, “AI & Cybersecurity: The state of cyber in UK and US energy sectors”, Darktrace’s Threat Research team crafted hypothesis-driven threat hunts, building experimental models and investigating existing models to test them and detect malicious activity across Darktrace customers in the energy sector.

For one of the hunts, which hypothesized utilization of PerfectData software and multi-factor authentication (MFA) bypass to compromise user accounts and destroy data, an experimental model was created to detect a Software-as-a-Service (SaaS) user performing activity relating to ‘PerfectData Software’, which is known to allow a threat actor to exfiltrate whole mailboxes as a PST file. Experimental model alerts caused by this anomalous activity were analyzed in conjunction with existing SaaS and email-related models that would indicate a multi-stage attack in line with the hypothesis.

While hunting, Darktrace researchers found multiple alerts for this experimental model associated with PerfectData software usage within energy sector customers, including an oil and gas investment company, as well as in other sectors. Upon further investigation, it was also found that in June 2024, a malicious actor had targeted a renewable energy infrastructure provider via a PerfectData Software attack and demonstrated intent to conduct an Operational Technology (OT) attack.

The actor logged into Azure AD from a rare US IP address. They then granted consent to ‘eM Client’ from the same IP. Shortly after, the actor granted ‘AddServicePrincipal’ via Azure to PerfectData Software. Two days later, the actor created a new email rule from a London IP to move emails to an RSS Feed Folder, stop processing rules, and mark emails as read. They then accessed mail items in the “\Sent” folder from a malicious IP belonging to the anonymization network Private Internet Access Virtual Private Network (PIA VPN). The actor then conducted mass email deletions, deleting multiple instances of emails with the subject “[Name] shared "[Company Name] Proposal" With You” from the “\Sent” folder. The subject line suggests the emails likely contained a link to file storage used for phishing. The mass deletion likely represented an attempt to obfuscate a potential outbound phishing email campaign.

Figure 1: The Darktrace Model Alert that triggered for the mass deletes of the likely phishing email containing a file storage link.
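One detection idea from this incident can be sketched as a simple burst check over mailbox audit events: flag any account that hard-deletes several emails within a short window. The event format, action names, and thresholds below are hypothetical, not Darktrace's implementation.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)  # burst window (hypothetical tuning)
THRESHOLD = 3                   # deletions within WINDOW considered a "mass delete"

# Hypothetical mailbox audit events.
audit_log = [
    {"user": "victim@example.com", "action": "HardDelete",   "ts": datetime(2024, 6, 3, 14, 0)},
    {"user": "victim@example.com", "action": "HardDelete",   "ts": datetime(2024, 6, 3, 14, 2)},
    {"user": "victim@example.com", "action": "HardDelete",   "ts": datetime(2024, 6, 3, 14, 4)},
    {"user": "victim@example.com", "action": "MoveToFolder", "ts": datetime(2024, 6, 3, 14, 5)},
]

# Group deletion timestamps per account.
deletes = defaultdict(list)
for event in audit_log:
    if event["action"] == "HardDelete":
        deletes[event["user"]].append(event["ts"])

# Slide a window over each account's sorted deletions; alert on a burst.
alerts = []
for user, stamps in deletes.items():
    stamps.sort()
    for i in range(len(stamps) - THRESHOLD + 1):
        if stamps[i + THRESHOLD - 1] - stamps[i] <= WINDOW:
            alerts.append(user)
            break

print(alerts)
```

A fixed threshold like this is brittle on its own; the anomaly-based approach described above instead compares the burst against what is normal for that specific account.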

A month later, the same user was observed mass downloading mLog CSV files related to proprietary and Operational Technology information. In September, three months after the initial attack, the actor performed another mass download of operational files, pertaining to operating instructions and measurements. The observed patience and targeted file downloads seemingly demonstrated an intent to conduct or research possible OT attack vectors. An attack on OT could have significant impacts, including operational downtime, reputational damage, and harm to everyday operations. Darktrace alerted the impacted customer once findings were verified, and the internal security team took subsequent actions to prevent further malicious activity.

Conclusion

Harnessing the power of different tools in a security stack is a key element of cyber defense. The hypothesis-based threat hunt and custom experimental model creation above demonstrate different threat hunting approaches and how Darktrace’s approach can be operationalized. Proactive threat hunting is a valuable complement to traditional security controls and is essential for organizations facing increasingly complex threat landscapes.

Credit to Nathaniel Jones (VP, Security & AI Strategy, Field CISO at Darktrace) and Zoe Tilsiter (EMEA Consultancy Lead)

About the author
Nathaniel Jones
VP, Security & AI Strategy, Field CISO

Blog
/
May 6, 2025

Combatting the Top Three Sources of Risk in the Cloud


With cloud computing, organizations are storing data like intellectual property, trade secrets, Personally Identifiable Information (PII), proprietary code and statistics, and other sensitive information in the cloud. If this data were to be accessed by malicious actors, it could incur financial loss, reputational damage, legal liabilities, and business disruption.

Last year, data breaches involving solely public cloud deployments were the most expensive type of data breach, costing an average of $5.17 million USD, a 13.1% increase from the year before.

So, as cloud usage continues to grow, the teams in charge of protecting these deployments must understand the associated cybersecurity risks.

What are cloud risks?

Cloud threats come in many forms, and one of the key categories is cloud risks. These arise from challenges in implementing and maintaining cloud infrastructure, which can expose the organization to potential damage, loss, and attacks.

There are three major types of cloud risks:

1. Misconfigurations

As organizations struggle with complex cloud environments, misconfiguration is one of the leading causes of cloud security incidents. These risks occur when cloud settings leave gaps between cloud security solutions and expose data and services to unauthorized access. If discovered by a threat actor, a misconfiguration can be exploited to allow infiltration, lateral movement, escalation, and damage.

With the scale and dynamism of cloud infrastructure and the complexity of hybrid and multi-cloud deployments, security teams face a major challenge in exerting the required visibility and control to identify misconfigurations before they are exploited.

Common causes of misconfiguration include skill shortages, outdated practices, and manual workflows. For example, potential misconfigurations can occur around firewall zones, isolated file systems, and mount systems, which all require specialized skill to set up and diligent monitoring to maintain.
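A minimal example of the kind of check that catches such gaps: scan firewall or security-group rules for sensitive ports exposed to the whole internet. The rule format below is a simplified, hypothetical representation, not any provider's actual API output.

```python
# Ports that should never be open to the internet (hypothetical policy).
SENSITIVE_PORTS = {22, 3389, 3306}

# Simplified, hypothetical ingress rules.
rules = [
    {"name": "web-ingress", "port": 443,  "source": "0.0.0.0/0"},
    {"name": "ssh-admin",   "port": 22,   "source": "0.0.0.0/0"},   # risky
    {"name": "db-internal", "port": 3306, "source": "10.0.0.0/8"},  # internal only
]

findings = [
    r for r in rules
    if r["source"] == "0.0.0.0/0" and r["port"] in SENSITIVE_PORTS
]

for f in findings:
    print(f"Misconfiguration: rule '{f['name']}' exposes port {f['port']} to the internet")
```

Static checks like this catch one rule at a time; the harder problem, as noted above, is maintaining this visibility continuously as dynamic cloud infrastructure changes.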

2. Identity and Access Management (IAM) failures

IAM has only increased in importance with the rise of cloud computing and remote working. It allows security teams to control which users can and cannot access sensitive data, applications, and other resources.

Cybersecurity professionals ranked IAM skills as the second most important security skill to have, just behind general cloud and application security.

There are four parts to IAM: authentication, authorization, administration, and auditing and reporting. Within these, there are a lot of subcomponents as well, including but not limited to Single Sign-On (SSO), Two-Factor Authentication (2FA), Multi-Factor Authentication (MFA), and Role-Based Access Control (RBAC).

Security teams are faced with the challenge of allowing enough access for employees, contractors, vendors, and partners to complete their jobs while restricting enough to maintain security. They may struggle to track what users are doing across the cloud, apps, and on-premises servers.

When IAM is misconfigured, it increases the attack surface and can leave accounts with access to resources they do not need to perform their intended roles. This type of risk creates the possibility for threat actors or compromised accounts to gain access to sensitive company data and escalate privileges in cloud environments. It can also allow malicious insiders and users who accidentally violate data protection regulations to cause greater damage.
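One concrete way to surface this over-provisioning is a least-privilege audit: compare the permissions granted to each role against the permissions actually exercised (e.g., reconstructed from audit logs). Role and permission names below are hypothetical.

```python
# Hypothetical RBAC grants per role.
granted = {
    "analyst": {"read:alerts", "read:logs", "write:reports", "admin:users"},
    "intern":  {"read:alerts"},
}

# Permissions actually exercised, e.g. derived from 90 days of audit logs.
used_last_90_days = {
    "analyst": {"read:alerts", "read:logs", "write:reports"},
    "intern":  {"read:alerts"},
}

# Unused grants are candidates for removal under least privilege.
unused_by_role = {
    role: perms - used_last_90_days.get(role, set())
    for role, perms in granted.items()
}

for role, unused in unused_by_role.items():
    if unused:
        print(f"Role '{role}' has unused permissions to review: {sorted(unused)}")
```

Trimming grants that go unexercised shrinks the blast radius of a compromised account without impeding legitimate work.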

3. Cross-domain threats

The complexity of hybrid and cloud environments can be exploited by attacks that cross multiple domains, such as traditional network environments, identity systems, SaaS platforms, and cloud environments. These attacks are difficult to detect and mitigate, especially when a security posture is siloed or fragmented.  

Some attack types inherently involve multiple domains, like lateral movement and supply chain attacks, which target both on-premises and cloud networks.  

Challenges in securing against cross-domain threats often come from a lack of unified visibility. If a security team does not have unified visibility across the organization’s domains, gaps between various infrastructures and the teams that manage them can leave organizations vulnerable.

Adopting AI cybersecurity tools to reduce cloud risk

For security teams to defend against misconfigurations, IAM failures, and cross-domain threats, they require a combination of enhanced visibility into cloud assets and architectures, better automation, and more advanced analytics. These capabilities can be achieved with AI-powered cybersecurity tools.

Such tools use AI and automation to help teams maintain a clear view of all their assets and activities and consistently enforce security policies.

Darktrace / CLOUD is a Cloud Detection and Response (CDR) solution that makes cloud security accessible to all security teams and SOCs by using AI to identify and correct misconfigurations and other cloud risks in public, hybrid, and multi-cloud environments.

It provides real-time, dynamic architectural modeling, which gives SecOps and DevOps teams a unified view of cloud infrastructures to enhance collaboration and reveal possible misconfigurations and other cloud risks. It continuously evaluates architecture changes and monitors real-time activity, providing audit-ready traceability and proactive risk management.

Figure 1: Real-time visibility into cloud assets and architectures built from network, configuration, and identity and access roles. In this unified view, Darktrace / CLOUD reveals possible misconfigurations and risk paths.

Darktrace / CLOUD also offers attack path modeling for the cloud. It can identify exposed assets and highlight internal attack paths to provide a dynamic view of the riskiest paths across cloud environments, network environments, and between them, enabling security teams to prioritize based on unique business risk and address gaps to prevent future attacks.

Darktrace’s Self-Learning AI ensures continuous cloud resilience, helping teams move from reactive to proactive defense.


About the author
Pallavi Singh
Product Marketing Manager, OT Security & Compliance