How to Protect your Organization Against Microsoft Teams Phishing Attacks

May 2024
In recent months, we’ve seen a dramatic rise in the number of attacks using Microsoft Teams as a threat vector. This blog will explore why Teams is becoming such a popular entry point, how built-in and market security offerings fail to address sophisticated Teams threats, and why behavioral AI is the solution to early detection of Teams-based social engineering and account compromise.

The problem: Microsoft Teams phishing attacks are on the rise

Around 83% of Fortune 500 companies rely on Microsoft Office products and services [1], with Microsoft Teams and Microsoft SharePoint in particular emerging as platforms critical to the business operations of the everyday workplace. Researchers across the threat landscape have begun to observe these legitimate services being leveraged more and more by malicious actors as an initial access method.

As Teams becomes a more prominent feature of the workplace, many employees rely on it for daily internal and external communication, even surpassing email usage in some organizations. As Microsoft [2] states, "Teams changes your relationship with email. When your whole group is working in Teams, it means you'll all get fewer emails. And you'll spend less time in your inbox, because you'll use Teams for more of your conversations."

However, Teams can be exploited to send targeted phishing messages to individuals either internally or externally, while appearing legitimate and safe. Users might receive an external message request from a Teams account claiming to be an IT support service or otherwise affiliated with the organization. Once a user has accepted, the threat actor can launch a social engineering campaign or deliver a malicious payload. Because Teams is a primarily internal tool, there is naturally less training and security awareness around it; the channel is assumed to be a trusted source, meaning that social engineering is already one step ahead.

Figure 1: Screenshot of a Microsoft Teams message request from a Midnight Blizzard-controlled account (courtesy of Microsoft)

Microsoft Teams Phishing Examples

Microsoft has identified several major phishing attacks using Teams within the past year.

In July 2023, Microsoft announced that the threat actor known as Midnight Blizzard – identified by the United States as a Russian state-sponsored group – had launched a series of phishing campaigns via Teams with the aim of stealing user credentials. These attacks used previously compromised Microsoft 365 accounts and set up new domain names that impersonated legitimate IT support organizations. The threat actors then used social engineering tactics to trick targeted users into sharing their credentials via Teams, enabling them to access sensitive data.  

At a similar time, the threat actor Storm-0324 was observed sending phishing lures via Teams containing links to malicious SharePoint-hosted files. The group targeted organizations that allow Teams users to interact and share files externally. Storm-0324’s goal is to gain initial access and hand it over to other threat actors, who then pursue more dangerous follow-on attacks like ransomware.

For a more in-depth look at how Darktrace stops Microsoft Teams phishing, read our blog: Don’t Take the Bait: How Darktrace Keeps Microsoft Teams Phishing Attacks at Bay

The market: Existing Microsoft Teams security solutions are insufficient

Microsoft’s native Teams security focuses on payloads, namely links and attachments, as the principal malicious component of any phishing attempt. These payloads are relatively straightforward to detect given Microsoft’s experience in anti-virus, sandboxing, and IoCs. However, this approach cannot intervene before the stage at which payloads are delivered, before the user even gets the chance to accept or deny an external message request. At the same time, it risks missing more subtle threats that don’t include attachments or links, such as early-stage phishing that is pure social engineering, or completely new payloads.

Equally, the market offering for Teams security is limited. Security solutions available on the market are invariably payload-focused, rather than taking into account the content and context in which a link or attachment is sent, answering questions like:

  • Does it make sense for these two accounts to speak to each other?
  • Are there any linguistic indicators of inducement?

Furthermore, they do not correlate with email to track threats across multiple communication environments, which could signal a wider campaign. Effectively, other market solutions aren’t adding extra value – they are protecting against the same types of threats that Microsoft already covers by default.

The other aspect of Teams security that native and market solutions fail to address is the account itself. As well as focusing on Teams threats, it’s important to analyze messages to understand the normal mode of communication for a user, and spot when a user’s Teams activity might signal account takeover.

The solution: How Darktrace protects Microsoft Teams against sophisticated threats

With its biggest update to Darktrace/Email ever, Darktrace now offers support for Microsoft Teams. With that, we are bringing the same AI philosophy that protects your email and accounts to your messaging environment.  

Our Self-Learning AI looks at content and context for every communication, whether that’s sent in an email or Teams message. It looks at actual user behavior, including language patterns, relationship history of sender and recipient, tone and payloads, to understand if a message poses a threat. This approach allows Darktrace to detect threats such as social engineering and payloadless attacks using visibility and forensic capabilities that Microsoft security doesn’t currently offer, as well as early symptoms of account compromise.  

Unlike market solutions, Darktrace doesn’t offer a siloed approach to Teams security. Data and signals from Teams are shared across email to inform detection, and also with the wider Darktrace ActiveAI security platform. By correlating information from email and Teams with network and apps security, Darktrace is able to better identify suspicious Teams activity and vice versa.  

Interested in the other ways Darktrace/Email augments threat detection? Read our latest blog on how improving the quality of end-user reporting can decrease the burden on the SOC. To find out more about Darktrace's enduring partnership with Microsoft, click here.


[1] Essential Microsoft Office Statistics in 2024

[2] Microsoft blog, Microsoft Teams and email, living in harmony, 2024

Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Carlos Gray
Product Manager

Carlos Gonzalez Gray is a Product Marketing Manager at Darktrace. Based in the Madrid office, Carlos engages with the global product team to ensure each product supports the company’s overall strategy and goals throughout its entire lifecycle. Prior to his position on the product team, Carlos worked as a Cyber Technology Specialist, specializing in the OT sector and protecting critical infrastructure. His background as a consultant to IBEX 35 companies in Spain also made him well versed in matters of compliance, auditing and data privacy. Carlos holds an Honors BA in Political Science and a Masters in Cybersecurity from IE University.



Inside the SOC

Following up on our Conversation: Detecting & Containing a LinkedIn Phishing Attack with Darktrace

Jun 2024

Note: Real organization, domain and user names have been modified and replaced with fictitious names to maintain anonymity.  

Social media cyber-attacks

Social media is a known breeding ground for cyber criminals to easily connect with a near limitless number of people and leverage the wealth of personal information shared on these platforms to defraud the general public.  Analysis suggests even the most tech savvy ‘digital natives’ are vulnerable to impersonation scams over social media, as criminals weaponize brands and trends, using the promise of greater returns to induce sensitive information sharing or fraudulent payments [1].

LinkedIn phishing

As the usage of a particular social media platform increases, cyber criminals will find ways to exploit the growing user base, a trend observed in the rise of LinkedIn scams in recent years [2]. LinkedIn is the dominant professional networking site, with a forecasted 84.1 million users by 2027 [3]. The platform is data-driven, so users are encouraged to share information publicly, including personal life updates, to boost visibility and increase job prospects [4] [5]. While this helps legitimate recruiters gain a good understanding of the user, an attacker can leverage the same personal content to increase the sophistication and success of their social engineering attempts.

Darktrace detection of LinkedIn phishing

Darktrace detected a Software-as-a-Service (SaaS) compromise affecting a construction company, where the attack vector originated from LinkedIn (outside the monitoring of corporate security tools), but then pivoted to corporate email where a credential harvesting payload was delivered, providing the attacker with credentials to access a corporate file storage platform.  

Because LinkedIn accounts are typically linked to an individual’s personal email and are most commonly accessed via the mobile application [6] on personal devices that are not monitored by security teams, the platform can represent an effective initial access point for attackers looking to establish an initial relationship with their target. Moreover, the habit of ignoring unsolicited emails from new or unknown contacts is less frequently carried over to platforms like LinkedIn, where interactions with ‘weak ties’ as opposed to ‘strong ties’ are a better predictor of job mobility [7]. Had this attack been allowed to continue, the threat actor could have leveraged access to further information from the compromised business cloud account to compromise other high-value accounts, exfiltrate sensitive data, or defraud the organization.

LinkedIn phishing attack details


The initial reconnaissance and social engineering occurred on LinkedIn and was thus outside the purview of corporate security tools, Darktrace included.

However, the email domain “hausconstruction[.]com” used by the attacker in subsequent communications appears to be a spoofed domain impersonating a legitimate construction company “haus[.]com”, suggesting the attacker may have also impersonated an employee of this construction company on LinkedIn.  In addition to spoofing the domain, the attacker seemingly went further to register “” on a commercial web hosting platform.  This is a technique used frequently not just to increase apparent legitimacy, but also to bypass traditional security tools since newly registered domains will have no prior threat intelligence, making them more likely to evade signature and rules-based detections [8].  In this instance, open-source intelligence (OSINT) sources report that the domain was created several months earlier, suggesting this may have been part of a targeted attack on construction companies.  

Initial Intrusion

It was likely that during the correspondence over LinkedIn, the target user was solicited into following up over email regarding a prospective construction project, using their corporate email account.  In a probable attempt to establish a precedent of bi-directional correspondence so that subsequent malicious emails would not be flagged by traditional security tools, the attacker did not initially include suspicious links, attachments or use solicitous or inducive language within their initial emails.

Figure 1: Example of bi-directional email correspondence between the target and the attacker impersonating a legitimate employee of the construction company
Figure 2: Cyber AI Analyst investigation into one of the initial emails the target received from the attacker.  

To accomplish the next stage of their attack, the attacker shared a link to a malicious file hosted on the Hightail cloud storage service, hidden behind the inducing text “VIEW ALL FILES”. This is a common method employed by attackers to evade detection, as it does not involve attachments that can be scanned by traditional security tools, and legitimate cloud storage services are less likely to be blocked.

OSINT analysis of the malicious link shows the file hosted on Hightail was an HTML file with the associated message “Following up on our LinkedIn conversation”. Further analysis suggests the file contained obfuscated JavaScript that, once opened, would automatically redirect the user to a malicious domain impersonating a legitimate Microsoft login page for credential harvesting purposes.

Figure 3: The malicious HTML file containing obfuscated Javascript, where the highlighted string references the malicious credential harvesting domain.
Figure 4: Screenshot of fraudulent Microsoft Sign In page hosted on the malicious credential harvesting domain.

Although there was prior email correspondence with the attacker, this email was not automatically deemed safe by Darktrace and was further analyzed for unusual properties and unusual communications for the recipient and the recipient’s peer group.  

Darktrace determined that:

  • It was unusual for this file storage solution to be referenced in communications to the user and the wider network.
  • Textual properties of the email body suggested a high level of inducement from the sender, with a strong focus on the phishing link.
  • The full link contained suspicious properties suggesting it was high risk.
Figure 5: Darktrace’s analysis of the phishing email, presenting key information about the unusual characteristics of this email, information on highlighted content, and an overview of actions that were initially applied.  

Based on these anomalies, Darktrace initially moved the phishing email to the junk folder and locked the link, preventing the user from directly accessing the malicious file hosted on Hightail.  However, the customer’s security team released the email, likely upon end-user request, allowing the target user to access the file and ultimately enter their credentials into that credential harvesting domain.

Figure 6: Darktrace alerts triggered by the malicious phishing email and the corresponding Autonomous Response actions.

Lateral Movement

Correspondence between the attacker and target continued for two days after the credential harvesting payload was delivered.  Five days later, Darktrace detected an unusual login using multi-factor authentication (MFA) from a rare external IP and ASN that coincided with Darktrace/Email logs showing access to the credential harvesting link.

This attempt to bypass MFA, known as an Office365 Shell WCSS attack, was likely achieved by inducing the target to enter their credentials and legitimate MFA token into the fake Microsoft login page. This was then relayed to Microsoft by the attacker and used to obtain a legitimate session. The attacker then reused the legitimate token to log into Exchange Online from a different IP and registered the compromised device for MFA.

Figure 7: Screenshot within Darktrace/Email of the phishing email that was released by the security team, showing the recipient clicked the link to file storage where the malicious payload was stored.
Figure 8: Event Log showing a malicious login and MFA bypass at 17:57:16, shortly after the link was clicked.  Highlighted in green is activity from the legitimate user prior to the malicious login, using Edge. Highlighted in orange and red is the malicious activity using Chrome.

The IP addresses used by the attacker appear to be part of anonymization infrastructure, but are not associated with any known indicators of compromise (IoCs) that signature-based detections would identify [9] [10].

In addition to logins being observed within half an hour of each other from two geographically impossible locations (San Francisco and Phoenix), the unexpected use of the Chrome browser, compared to the Edge browser used previously, provided Darktrace with further evidence that this activity was unlikely to originate from the legitimate user. Although the user was a salesperson who frequently travelled for their role, Darktrace’s Self-Learning AI understood that the multiple logins from these locations were highly unusual at the user and group level and, coupled with the subsequent unexpected account modification, a likely indicator of account compromise.
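The "geographically impossible" reasoning above boils down to a simple speed check: if two logins on the same account imply a travel speed faster than any commercial flight, at least one of them is suspect. Below is a minimal sketch of such a check; the coordinates, timestamps, and speed threshold are illustrative assumptions, not Darktrace's actual model logic.

```python
import math
from datetime import datetime

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two coordinates, in kilometers
    r = 6371.0
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def is_impossible_travel(login_a, login_b, max_speed_kmh=900):
    """Flag a login pair whose implied travel speed exceeds a commercial flight."""
    dist = haversine_km(login_a["lat"], login_a["lon"],
                        login_b["lat"], login_b["lon"])
    hours = abs((login_b["time"] - login_a["time"]).total_seconds()) / 3600
    if hours == 0:
        return dist > 0  # simultaneous logins from different places
    return dist / hours > max_speed_kmh

# Two logins half an hour apart, San Francisco then Phoenix
sf  = {"lat": 37.77, "lon": -122.42, "time": datetime(2024, 1, 1, 17, 30)}
phx = {"lat": 33.45, "lon": -112.07, "time": datetime(2024, 1, 1, 18, 0)}
print(is_impossible_travel(sf, phx))  # True: roughly 1,000 km in 30 minutes
```

In practice a fixed-threshold rule like this is brittle (VPNs and cloud egress points trip it constantly), which is why the incident above leaned on learned per-user and per-group norms rather than a static check.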

Accomplish mission

Although the email had been manually released by the security team, allowing the attack to propagate, additional layers of defense were triggered as Darktrace's Autonomous Response initiated “Disable User” actions upon detection of the multiple unusual logins and the unauthorized registration of security information.  

However, the customer had configured Autonomous Response to require human confirmation, therefore no actions were taken until the security team manually approved them over two hours later. In that time, access to mail items and other SharePoint files from the unusual IP address was detected, suggesting a potential loss of confidentiality to business data.

Figure 9: Advanced Search query showing several FilePreviewed and MailItemsAccessed events from either the IPs used by the attacker, or using the software Chrome.  Note some of the activity originated from Microsoft IPs which may be whitelisted by traditional security tools.

However, it appears that the attacker was able to maintain access to the compromised account, as login and mail access events from 199.231.85[.]153 continued to be observed until the afternoon of the next day.  


This incident demonstrates the necessity of AI to security teams, with Darktrace’s ActiveAI Security Platform detecting a sophisticated phishing attack where human judgement fell short and initiating a real-time response when security teams could not physically respond as fast.

Security teams are very familiar with social engineering and impersonation attempts, but these attacks remain highly prevalent due to the widespread adoption of technologies that enable these techniques to be deployed with great sophistication and ease.  In particular, the popularity of information-rich platforms like LinkedIn that are geared towards connecting with unknown people make it an attractive initial access point for malicious attackers.

In the second half of 2023 alone, over 200,000 fake profiles were reported by members on LinkedIn [11]. Fake profiles can be highly sophisticated: they use professional images, contain compelling descriptions, reference legitimate company listings and present believable credentials.

It is unrealistic to expect end users to defend themselves against such sophisticated impersonation attempts. Moreover, it is extremely difficult for human defenders to recognize every fraudulent interaction amidst a sea of fake profiles. Instead, defenders should leverage AI, which can conduct autonomous investigations without human biases and limitations. AI-driven security can ensure successful detection of fraudulent or malicious activity by learning what real users and devices look like and identifying deviations from their learned behaviors that may indicate an emerging threat.


Darktrace Model Detections


SaaS / Compromise / SaaS Anomaly Following Anomalous Login

SaaS / Compromise / Unusual Login and Account Update

SaaS / Unusual Activity / Multiple Unusual External Sources For SaaS Credential

SaaS / Access / Unusual External Source for SaaS Credential Use

SaaS / Compliance / M365 Security Information Modified


Antigena / SaaS / Antigena Suspicious SaaS Activity Block

Antigena / SaaS / Antigena Unusual Activity Block


  • Link / High Risk Link + Low Sender Association
  • Link / New Correspondent Classified Link
  • Link / Watched Link Type
  • Antigena Anomaly
  • Association / Unknown Sender
  • History / New Sender
  • Link / Link to File Storage
  • Link / Link to File Storage + Unknown Sender
  • Link / Low Link Association

List of IoCs

  • 142.252.106[.]251 – IP – Possible malicious IP used by attacker during cloud account compromise
  • 199.231.85[.]153 – IP – Probable malicious IP used by attacker during cloud account compromise
  • vukoqo.hebakyon[.]com – Endpoint – Credential harvesting endpoint


MITRE ATT&CK Mapping

  • Resource Development – T1586 – Compromise Accounts
  • Resource Development – T1598.003 – Spearphishing Link
  • Persistence – T1078.004 – Cloud Accounts
  • Persistence – T1556.006 – Modify Authentication Process: Multi-Factor Authentication
  • Reconnaissance – T1593.001 – Social Media
  • Reconnaissance – T1598 – Phishing for Information
  • Reconnaissance – T1589.001 – Credentials
  • Reconnaissance – T1591.002 – Business Relationships
  • Collection – T1111 – Multifactor Authentication Interception
  • Collection – T1539 – Steal Web Session Cookie
  • Lateral Movement – T1021.007 – Cloud Services
  • Lateral Movement – T1213.002 – Sharepoint


[1] Jessica Barker, Hacked: The secrets behind cyber attacks (London: Kogan Page, 2024), pp. 130-146.
About the author
Nicole Wong
Cyber Security Analyst



Let the Dominos Fall! SOC and IR Metrics for ROI

Jun 2024

One of the most enjoyable discussions (and debates) I engage in is the topic of Security Operations Center (SOC) and Incident Response (IR) metrics to measure and validate an organization’s Return on Investment (ROI). The debate part comes in when I hear vendor experts talking about “the only” SOC metrics that matter, and only list the two most well-known, while completely ignoring metrics that have a direct causal relationship.

In this blog, I will discuss what I believe are the SOC/IR metrics that matter, how each one has a direct impact on the others, and why organizations should ensure they are working towards the goal of why these metrics are measured in the first place: Reduction of Risk and Costs.

Reduction of Risk and Costs

Every security solution and process an organization puts in place should reduce the organization’s risk of a breach, exposure by an insider threat, or loss of productivity. How an organization realizes net benefits can be in several ways:

  • Improved efficiencies can result in SOC/IR staff focusing on other areas such as advanced threat hunting rather than churning through alerts on their security consoles. It may also help organizations dealing with the lack of skilled security staff by using Artificial Intelligence (AI) and automated processes.
  • A well-oiled SOC/IR team that has greatly reduced or even eliminated mundane tasks attracts, motivates, and retains talent resulting in reduced hiring and training costs.
  • The direct impact of a breach such as a ransomware attack can be devastating. According to the 2024 Data Breach Investigations Report by Verizon, MGM Resorts International reported the ALPHV ransomware cost the company approximately $100 million[1].
  • Failure to take appropriate steps to protect the organization can result in regulatory fines; and if an organization has, or is considering, purchasing Cyber Insurance, can result in declined coverage or increased premiums.

How does an organization demonstrate it is taking proactive measures to prevent breaches? That is where the nine (yes, nine) key metrics, and the way each one directly influences the others, come into play.

Metrics in the Incident Response Timeline

Let’s start with a review of the key stages in the Incident Response Timeline:

Seven of the nine key metrics are in the IR timeline, while two of the metrics occur before you ever have an incident. They occur in the Pre-Detection Stage.

Pre-Detection stage metrics are:

  • Preventions Per Intrusion Attempt (PPIA)
  • False Positive Reduction Rate (FPRR)

Next is the Detect and Investigate stage, where there are three metrics to consider:

  • Mean Time to Detection (MTTD)
  • Mean Time to Triage (MTTT)
  • Mean Time to Understanding (MTTU)

This is followed by the Remediation stage, which has two metrics:

  • Mean Time to Containment (MTTC)
  • Mean Time to Remediation / Recovery (MTTR)

Finally, there is the Risk Reduction stage, with two metrics:

  • Mean Time to Advice (MTTA)
  • Mean Time to Implementation (MTTI)
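All seven timeline metrics reduce to mean elapsed times between incident milestones. The sketch below shows one way to compute them from timestamped incident records; the field names, the sample data, and the choice of anchor timestamps (for example, measuring containment from detection) are illustrative assumptions, since organizations anchor these measurements differently.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records with milestone timestamps
incidents = [
    {"intrusion": datetime(2024, 5, 1, 9, 0),   # earliest detectable activity
     "detected":  datetime(2024, 5, 1, 11, 0),
     "triaged":   datetime(2024, 5, 1, 11, 20),
     "contained": datetime(2024, 5, 1, 15, 0),
     "recovered": datetime(2024, 5, 2, 9, 0)},
    {"intrusion": datetime(2024, 5, 3, 8, 0),
     "detected":  datetime(2024, 5, 3, 9, 0),
     "triaged":   datetime(2024, 5, 3, 9, 30),
     "contained": datetime(2024, 5, 3, 11, 0),
     "recovered": datetime(2024, 5, 3, 17, 0)},
]

def mean_hours(incidents, start_field, end_field):
    """Mean elapsed time between two incident milestones, in hours."""
    return mean((i[end_field] - i[start_field]).total_seconds() / 3600
                for i in incidents)

mttd = mean_hours(incidents, "intrusion", "detected")   # Mean Time to Detection
mttt = mean_hours(incidents, "detected", "triaged")     # Mean Time to Triage
mttc = mean_hours(incidents, "detected", "contained")   # Mean Time to Containment
mttr = mean_hours(incidents, "contained", "recovered")  # Mean Time to Remediation
print(mttd, mttc, mttr)  # 1.5 3.0 12.0
```

The same helper covers MTTU, MTTA, and MTTI once the corresponding milestone timestamps are recorded; the hard part is capturing those timestamps consistently, not the arithmetic.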

Pre-Detection Stage

Preventions Per Intrusion Attempt

PPIA is defined as stopping any intrusion attempt at the earliest possible stage. Your network Intrusion Prevention System (IPS) blocks vulnerability exploits, your e-mail security solution intercepts and removes messages with malicious attachments or links, your egress firewall blocks unauthorized login attempts, etc. The adversary doesn’t get beyond Step 1 in the attack life cycle.

This metric is the first domino. Every organization should strive to improve on this metric every day. Why? For every intrusion attempt you stop right out of the gate, you eliminate the actions for every other metric. There is no incident to detect, triage, investigate, remediate, or analyze post-incident for ways to improve your security posture.

When I think about PPIA, I always remember back to a discussion with a former mentor, Tim Crothers, who discussed the benefits of focusing on Prevention Failure Detection.

The concept is that as you layer your security defenses, your PPIA moves ever closer to 100% (no one has ever reached 100%). This narrows the field of fire for adversaries to breach your organization. This is where novel, unknown, and permuted threats live and breathe, and where solutions utilizing Unsupervised Machine Learning excel in raising anomalous alerts – indications of potential compromise involving one of these threats.

Unsupervised ML also raises alerts on anomalous activity generated by known threats, and can raise detections before many signature-based solutions. Most organizations struggle to find strong permutations of known threats, insider threats, supply chain attacks, and attacks utilizing n-day and 0-day exploits.

Moving PPIA ever closer to 100% also frees your team up to conduct threat hunting activities – utilizing components of your SOC that collect and store telemetry to query for potential compromises based on hypotheses the team raises. It also significantly reduces the alerts your team must triage and investigate, solving many of the issues outlined at the start of this blog.

False Positive Reduction Rate

Before we discuss FPRR, I should clarify how I define False Positives (FPs). Many define an FP as an alert raised in error (e.g., your EDR alerting on AV signature files as malware). While that is an FP, I extend the definition to include any alert that did not require triage or investigation yet distracted the SOC/IR team (meaning they conducted some level of triage or investigation).

This metric is the second domino. Why is this metric important? Every alert your team exerts time and effort on that is a non-issue distracts them from alerts that matter. One of the major issues that has resonated in the security industry for decades is that SOCs are inundated with alerts and cannot clear the backlog. When it comes to PPIA + FPRR, I have seen analysts spend time investigating alerts that were blocked out of the gate while their screen continued to fill up with more. You must focus on Prevention Failure Detection to get ahead of the backlog.
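Neither PPIA nor FPRR has a standardized formula. As a hedged sketch, one reasonable reading of the definitions above treats PPIA as the fraction of intrusion attempts stopped outright and FPRR as the fraction of false-positive alerts eliminated after tuning or a tooling change; the sample numbers are invented for illustration.

```python
def ppia(prevented, total_attempts):
    """Fraction of intrusion attempts blocked at the earliest stage (higher is better)."""
    return prevented / total_attempts if total_attempts else 1.0

def fprr(fps_before, fps_after):
    """Fraction of false-positive alerts eliminated by a change (higher is better)."""
    return (fps_before - fps_after) / fps_before if fps_before else 0.0

# 970 of 1,000 monthly intrusion attempts blocked out of the gate
print(ppia(970, 1000))  # 0.97
# Tuning cuts monthly false positives from 400 to 60
print(fprr(400, 60))    # 0.85
```

The absolute values matter less than the trend: the dominos argument is that pushing PPIA toward 100% and FPRR upward shrinks the alert population over which every downstream mean-time metric is computed.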

Detect and Investigate Stages

Mean Time to Detection

MTTD, or “Dwell Time”, has decreased dramatically over the past 12 years: from well over a year to 16 days in 2023 [2]. MTTD is measured from the earliest possible point at which you could have detected the intrusion to the moment you actually detect it.

This third domino is important because the longer an adversary remains undetected, the more the odds increase they will complete their mission objective. It also makes the tasks of triage and investigation more difficult as analysts must piece together more activity and adversaries may be erasing evidence along the way – or your storage retention does not cover the breach timeline.

Many solutions focusing solely on MTTD can actually create the very problem SOCs are looking to solve.  That is, they generate so much alerting that they flood the console, email, or text messaging app causing an unmanageable queue of alerts (this is the problem XDR solutions were designed to resolve by focusing on incidents rather than alerts).

Mean Time to Triage

MTTT applies to SOCs that utilize Level 1 (aka Triage) analysts, and measures the time to render an accurate “escalate / do not escalate” verdict on an alert. Accuracy is important because Triage Analysts are typically staff new to cyber security (recent graduates or certification holders) and may over-escalate (afraid to miss something important) or under-escalate (failing to recognize signs of a successful breach). Because of this, a small MTTT does not always equate to successful handling of incidents.

This metric is important because keeping your senior staff focused on progressing incidents in a timely manner (and not expending time on false positives) should reduce stress and required headcount.

Mean Time to Understanding

MTTU deals with understanding the complete nature of the incident being investigated. This is different than MTTT which only deals with whether the issue merits escalation to senior analysts. It is then up to the senior analysts to determine the scope of the incident, and if you are a follower of my UPSET Investigation Framework, you know understanding the full scope involves:

U = All compromised accounts

P = Persistence Mechanisms used

S = All systems involved (organization, adversary, and intermediaries)

E = Endgame (or mission objective)

T = Tactics, Techniques, and Procedures (TTPs) utilized by the adversary

MTTU is important because this information is critical before any containment or remediation actions are taken. Leave a stone unturned, and you alert the adversary that you are onto them and possibly fail to close an avenue of access.

Remediation Stages

Mean Time to Containment

MTTC deals with neutralizing the threat. You may not have kicked the adversary out, but you have halted their progress to their mission objective and ability to inflict further damage. This may be through use of isolation capabilities, termination of malicious processes, or firewall blocks.

MTTC is important, especially with ransomware attacks where every second counts. Faster containment responses can result in reduced / eliminated disruption to business operations or loss of data.

Mean Time to Remediation / Recovery

The full scope of the incident is understood, the adversary has been halted in their tracks, no malicious processes are running on any systems in your organization. Now is the time to put things back to right. MTTR deals with the time involved in restoring business operations to pre-incident stage. It means all remnants of changes made by the adversary (persistence, account alterations, programs installed, etc.) are removed; all disrupted systems are restored to operations (i.e.: ransomware encrypted systems are recovered from backups / snapshots), compromised user accounts are reset, etc.

MTTR is important because it informs senior management of how fast the organization can recover from an incident. Disaster Recovery and Business Continuity plans play a major role in improving this score.

Risk Reduction Stages

Mean Time to Advice

After the dust has settled from the incident, the job is not done. MTTA deals with identifying and assessing the specific areas (vulnerabilities, misconfigurations, lack of security controls) that permitted the adversary to advance to the point where detection occurred (and any actions beyond). The SOC and IR teams should then compile a list of recommendations to present to management to improve the security posture of the organization so the same attack path cannot be used.

Mean Time to Implement

Once recommendations are delivered to management, how long does it take to implement them? MTTI tracks this timeline because none of it matters if you don’t fix the holes that led to the breach.

Nine Dominos

Those are the nine dominos of SOC/IR metrics I recommend to help organizations know whether they are on the right track to reducing risk and costs and improving morale and retention across their security teams. You may not wish to track all nine, but understanding how each metric impacts the others can provide visibility into why you are not seeing expected improvements when you implement a new security solution or change processes.

Improving prevention and reducing false positives can make huge positive impacts on your incident response timeline. Utilizing solutions that get you to resolution quicker allows the team to focus on recommendations and risk reduction strategies.

Whichever metrics you choose to track, just be sure the dominos fall in your favor.


[1] 2024 Verizon Data Breach Investigations Report, p83

[2] Mandiant M-Trends 2023

About the author
John Bradshaw
Sr. Director, Technical Marketing