Improve Security with Attack Path Modeling

Learn how to prioritize vulnerabilities effectively with attack path modeling, direct from Darktrace experts, and stay ahead of cyber threats.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO
Written by
Adam Stevens
Senior Director of Product, Cloud | Darktrace
09 Aug 2023

TLDR: There are too many technical vulnerabilities and there is too little organizational context for IT teams to patch effectively. Attack path modelling provides the organizational context, allowing security teams to prioritize vulnerabilities. The result is a system where CVEs can be parsed in, organizational context added, and attack paths considered, ultimately providing a prioritized list of vulnerabilities that need to be patched.

Figure 1: The Darktrace user interface presents risk-prioritized vulnerabilities


This blog post explains how Darktrace addresses the challenge of vulnerability prioritization. Most of the industry focuses on understanding the technical impact of vulnerabilities globally (‘How could this CVE generally be exploited? Is it difficult to exploit? Are there prerequisites to exploitation? …’), without taking the local context of a vulnerability into account. We’ll discuss here how we create that local context through attack path modelling and map it to technical vulnerability information. The result is a stunningly powerful way to prioritize vulnerabilities.

We will explore:

1)    The challenge and traditional approach to vulnerability prioritization
2)    Creating local context through machine learning and attack path modelling
3)    Examining the result – contextualized, vulnerability prioritization

The Challenge

Anyone dealing with Threat and Vulnerability Management (TVM) knows this situation:

You have a vulnerability scanning report with dozens or hundreds of pages. There is a long list of ‘critical’ vulnerabilities. How do you start prioritizing these vulnerabilities, assuming your goal is reducing the most risk?

Sometimes the challenge is even more specific – you might have 100 servers with the same critical vulnerability present (e.g. MOVEit). But which one should you patch first, when all of them have the same technical vulnerability priority (‘critical’)? Which one will achieve the biggest risk reduction (e.g. a critical asset)? Which one will be almost meaningless to patch (e.g. an asset with no business impact) and thus just a time-sink for the patch and IT teams?

There have been recent improvements upon flat CVE scoring for vulnerability prioritization by adding threat intelligence about the exploitability of vulnerabilities into the mix. This is a welcome step; examples of that additional information are the Exploit Prediction Scoring System (EPSS) and the Known Exploited Vulnerabilities (KEV) catalogue.

Figure 2: The idea behind EPSS – focus on actually exploited CVEs. (diagram taken from https://www.first.org/epss/model)

With CVE and CVSS scores we have the theoretical technical impact of vulnerabilities, and with EPSS and KEV we have information about the likelihood of exploitation of vulnerabilities. That’s a step forward, but still doesn’t give us any local context. Now we know even more about the global and generic technical risk of a vulnerability, but we still lack the local impact on the organization.
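To make the layering of these signals concrete, here is a minimal sketch in Python — with made-up CVE data and a made-up blending formula, not EPSS’s or any vendor’s actual model — of how CVSS severity, EPSS likelihood, and KEV membership might be folded into one technical priority:

```python
# Illustrative sketch only (not a real scoring model): blend CVSS
# severity with likelihood-of-exploitation signals (EPSS, KEV).

def technical_priority(cvss: float, epss: float, in_kev: bool) -> float:
    """cvss: 0-10 severity; epss: probability in [0, 1].

    KEV membership means the CVE is known to be exploited in the wild,
    so we floor the likelihood at 1.0 in that case.
    """
    likelihood = max(epss, 1.0 if in_kev else 0.0)
    return round((cvss / 10.0) * likelihood * 100, 1)  # 0-100 scale

# Hypothetical CVEs: note how a lower-CVSS but actively exploited CVE
# outranks a 'critical' CVE that is rarely exploited in practice.
cves = [
    {"id": "CVE-A", "cvss": 9.8, "epss": 0.02, "kev": False},
    {"id": "CVE-B", "cvss": 7.5, "epss": 0.90, "kev": True},
    {"id": "CVE-C", "cvss": 9.1, "epss": 0.65, "kev": False},
]
ranked = sorted(
    cves,
    key=lambda c: technical_priority(c["cvss"], c["epss"], c["kev"]),
    reverse=True,
)
```

Even this toy blend already reorders the list away from raw CVSS — but, as the next section argues, it still says nothing about what the vulnerable asset means to *your* organization.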

Let’s add that missing link via machine learning and attack path modelling.

Adding Attack Path Modelling for Local Context

To prioritize technical vulnerabilities, we need to know as much as we can about the asset on which the vulnerability is present in the context of the local organization. Is it a crown jewel? Is it a choke point? Does it sit on a critical attack path? Is it a dead end, never used and has no business relevance? Does it have organizational priority? Is the asset used by VIP users, as part of a core business or IT process? Does it share identities with elevated credentials? Is the human user on the device susceptible to social engineering?

Those are just a few typical questions when trying to establish the local context of an asset. Knowing more about the threat landscape, exploitability, or technical details of a CVE won’t help answer any of them. Gathering, evaluating, maintaining, and using this local context for vulnerability prioritization is the hard part. It often resides informally in the heads of TVM or IT team members, assembled over years at the organization by ‘knowing’ the systems, applications and identities in question and talking to asset and application owners when time permits. Unfortunately, this does not scale, is time-consuming, and is heavily dependent on individuals.

Understanding all attack paths for an organization provides this local context programmatically.

Darktrace PREVENT discovers these attack paths, which are bespoke to each organization, using the following (simplified) method:

1)    Build an adaptive model of the local business. Collect, combine, and analyze (using machine learning and non-machine learning techniques) data from various data domains:

a.     Network, Cloud, IT, and OT data (network-based attack paths, communication patterns, peer-groups, choke-points, …). Natively collected by Darktrace technology.

b.     Email data (social engineering attack paths, phishing susceptibility, external exposure, security awareness level, …). Natively collected by Darktrace technology.

c.     Identity data (account privileges, account groups, access levels, shared permissions, …). Collected via various integrations, e.g. Active Directory.

d.     Attack surface data (internet-facing exposure, high-impact vulnerabilities, …). Natively collected by Darktrace technology.

e.     SaaS information (further identity context). Natively collected by Darktrace technology.

f.      Vulnerability information (CVEs, CVSS, EPSS, KEV, …). Collected via integrations, e.g. Vulnerability Scanners or Endpoint products.

Figure 3: Darktrace PREVENT revealing each stage of an attack path

2)    Understand what ‘crown jewels’ are and how to get to them. Calculate entity importance (user, technical asset), exposure levels, potential damage levels (blast radius), weakness levels, and other scores to identify the most important entities and their relationships to each other (‘crown jewels’).

Various forms of machine learning and non-machine learning techniques are used to achieve this. Further details on some of the exact methods can be found here. The result is a holistic, adaptive and dynamic model of the organization that shows the most important entities and how to get to them across various data domains.
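As a rough sketch of the underlying idea — not Darktrace’s implementation, and with an invented topology and asset names — an organization can be treated as a graph and each asset checked for whether it sits on a path to a crown jewel:

```python
# Hedged sketch: model the organization as a directed graph where an
# edge "X -> Y" means "an attacker on X can move to Y", then use plain
# breadth-first search to see which assets can reach a crown jewel.
from collections import deque

# Hypothetical topology for illustration.
edges = {
    "laptop-1":  ["file-srv"],
    "laptop-2":  ["file-srv"],
    "file-srv":  ["ad-server"],    # choke point on both laptop paths
    "ad-server": ["customer-db"],  # crown jewel sits behind AD
    "kiosk":     [],               # dead end: no onward movement
}
crown_jewels = {"customer-db"}

def reaches_crown_jewel(start: str) -> bool:
    """BFS from `start`; True if any crown jewel is reachable."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node in crown_jewels:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False
```

In this toy graph, patching the kiosk buys nothing, while the file server — a choke point on every laptop-to-database path — is where risk concentrates.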

The combination of local context and technical context, around the severity and likelihood of exploitation, creates the Darktrace Vulnerability Score. This enables effective risk-based prioritization of CVE patching.
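The exact scoring formula is not public, so as a toy illustration only: once each asset carries a local damage-potential weight, the same critical CVE ranks very differently across assets.

```python
# Toy illustration — NOT the actual Darktrace Vulnerability Score.
# The point: local context reorders identical technical scores.

def contextual_score(technical: float, damage_potential: float) -> float:
    """technical: 0-100 CVE priority; damage_potential: 0-1 local weight."""
    return round(technical * damage_potential, 1)

# The same critical CVE (technical score 90) on three invented assets:
assets = {
    "payroll-db":  0.95,  # crown jewel
    "file-server": 0.60,  # choke point on an attack path
    "lab-kiosk":   0.05,  # dead end with no business impact
}
patch_order = sorted(
    assets, key=lambda a: contextual_score(90, assets[a]), reverse=True
)
```

This is exactly the 100-servers-one-CVE situation from earlier: the technical priority is identical, but the local weights break the tie.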

Figure 4: List of devices with the highest damage potential in the organization - local context

3)    Map the attack path model of the organization to common cyber domain knowledge. We can then combine things like MITRE ATT&CK techniques with those identified connectivity patterns and attack paths – making it easy to understand which tactics, techniques and procedures (TTPs) can be used to move through the organization, and how difficult each TTP is to exploit.
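As an illustrative sketch — the technique IDs below are real MITRE ATT&CK identifiers, but the difficulty values and the aggregation are invented for the example — per-TTP difficulty scores can be rolled up into an overall ease-of-traversal for a path:

```python
import math

# Hedged sketch: score an attack path from per-TTP difficulty values,
# assuming every step must succeed for the whole path to succeed.
# Difficulty is in [0, 1]; lower means easier to exploit.
path = [
    ("T1566", "Phishing",       0.2),  # easy initial access
    ("T1078", "Valid Accounts", 0.4),
    ("T1021", "Remote Services", 0.5),
]

def path_ease(steps) -> float:
    """Treat (1 - difficulty) as a per-step success chance; multiply."""
    return math.prod(1.0 - d for _, _, d in steps)
```

Under this assumption, a short path of easy TTPs scores higher (easier) than a long path of hard ones, which is one way difficulty scores per TTP become a comparable number per path.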

Figure 5: An example attack path with associated MITRE techniques and difficulty scores for each TTP

We can now easily start prioritizing CVE patching based on actual, organizational risk and local context.

Bringing It All Together

Finally, we overlay the attack paths calculated by Darktrace with the CVEs collected from a vulnerability scanner or EDR. This can either happen as a native integration in Darktrace PREVENT, if we are already ingesting CVE data from another solution, or via CSV upload.

Figure 6: Darktrace's global CVE prioritization in action.

You can also go further than the single CVE that, if patched, delivers the biggest risk reduction globally across your organization: look only at a certain group of vulnerabilities, or a sub-set of devices, to understand where to patch first within that reduced scope:

Figure 7: An example of the information Darktrace reveals around a CVE

This also gives the TVM team clear justification to show the patch and infrastructure teams why these vulnerabilities should be prioritized and what the positive impact on risk reduction will be.

Attack path modelling can be utilized for various other use cases, such as threat modelling and improving SOC efficiency. We’ll explore those in more depth at a later stage.

Want to explore more on using machine learning for vulnerability prioritization? Want to test it on your own data, for free? Arrange a demo today.



Blog / OT / November 20, 2025

Managing OT Remote Access with Zero Trust Control & AI Driven Detection


The shift toward IT-OT convergence

Recently, industrial environments have become more connected and dependent on external collaboration. As a result, truly air-gapped OT systems have become less of a reality, especially when working with OEM-managed assets, legacy equipment requiring remote diagnostics, or third-party integrators who routinely connect in.

This convergence, whether it’s driven by digital transformation mandates or operational efficiency goals, is making OT environments more connected, more automated, and more intertwined with IT systems. While this convergence opens new possibilities, it also exposes the environment to risks that traditional OT architectures were never designed to withstand.

The modernization gap and why visibility alone isn’t enough

The push toward modernization has introduced new technology into industrial environments, deepening the convergence between IT and OT and eroding visibility. Regaining that visibility is just a starting point, however. Visibility only tells you what is connected, not how access should be governed. And this is where the divide between IT and OT becomes unavoidable.

Security strategies that work well in IT often fall short in OT, where even small missteps can lead to environmental risk, safety incidents, or costly disruptions. Add in mounting regulatory pressure to secure access, enforce segmentation, and demonstrate accountability, and it becomes clear: visibility alone is no longer sufficient. What industrial environments need now is precision. They need control. And they need to implement both without interrupting operations. All this requires identity-based access controls, real-time session oversight, and continuous behavioral detection.

The risk of unmonitored remote access

This risk becomes most evident during critical moments, such as when an OEM needs urgent access to troubleshoot a malfunctioning asset.

Under that time pressure, access is often provisioned quickly with minimal verification, bypassing established processes. Once inside, there’s little to no real-time oversight of user actions, whether they’re executing commands, changing configurations, or moving laterally across the network. These actions typically go unlogged or unnoticed until something breaks. At that point, teams are stuck piecing together fragmented logs or post-incident forensics, with no clear line of accountability.

In environments where uptime is critical and safety is non-negotiable, this level of uncertainty simply isn’t sustainable.

The visibility gap: Who’s doing what, and when?

The fundamental issue we encounter is the disconnect between who has access and what they are doing with it.  

Traditional access management tools may validate credentials and restrict entry points, but they rarely provide real-time visibility into in-session activity. Even fewer can distinguish between expected vendor behavior and subtle signs of compromise, misuse or misconfiguration.  

As a result, OT and security teams are often left blind to the most critical part of the puzzle: intent and behavior.

Closing the gaps with zero trust controls and AI‑driven detection

Managing remote access in OT is no longer just about granting a connection; it’s about enforcing strict access parameters while continuously monitoring for abnormal behavior. This requires a two-pronged approach: precision access control, and intelligent, real-time detection.

Zero Trust access controls provide the foundation. By enforcing identity-based, just-in-time permissions, OT environments can ensure that vendors and remote users only access the systems they’re explicitly authorized to interact with, and only for the time they need. These controls should be granular enough to limit access down to specific devices, commands, or functions. By applying these principles consistently across the Purdue Model, organizations can eliminate reliance on catch-all VPN tunnels, jump servers, and brittle firewall exceptions that expose the environment to excess risk.
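A minimal sketch of such a just-in-time grant — with hypothetical identity, asset and command names — might look like this: access is scoped to one identity, one asset, an explicit command allow-list, and an expiry.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of a just-in-time OT access grant. Names and the
# API shape are invented for illustration; the idea is the zero trust
# principle described above: explicit scope, explicit expiry.
class AccessGrant:
    def __init__(self, identity, asset, commands, minutes):
        self.identity = identity
        self.asset = asset
        self.commands = set(commands)
        self.expires = datetime.now(timezone.utc) + timedelta(minutes=minutes)

    def permits(self, identity, asset, command, now=None):
        """Allow only the named identity, on the named asset, for an
        allow-listed command, before the grant expires."""
        now = now or datetime.now(timezone.utc)
        return (identity == self.identity
                and asset == self.asset
                and command in self.commands
                and now < self.expires)

# An OEM technician gets 60 minutes of read-only diagnostics on one PLC.
grant = AccessGrant("oem-tech@vendor", "plc-07", {"read_diagnostics"}, minutes=60)
```

Anything outside the scope — a different device, a write command, or a request after expiry — is denied by default rather than by exception, which is the inversion zero trust asks for.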

Access control is only one part of the equation

Darktrace / OT complements zero trust controls with continuous, AI-driven behavioral detection. Rather than relying on static rules or pre-defined signatures, Darktrace uses Self-Learning AI to build a live, evolving understanding of what’s “normal” in the environment, across every device, protocol, and user. This enables real-time detection of subtle misconfigurations, credential misuse, or lateral movement as they happen, not after the fact.

By correlating user identity and session activity with behavioral analytics, Darktrace gives organizations the full picture: who accessed which system, what actions they performed, how those actions compared to historical norms, and whether any deviations occurred. It eliminates guesswork around remote access sessions and replaces it with clear, contextual insight.

Importantly, Darktrace distinguishes between operational noise and true cyber-relevant anomalies. Unlike other tools that lump everything, from CVE alerts to routine activity, into a single stream, Darktrace separates legitimate remote access behavior from potential misuse or abuse. This means organizations can both audit access from a compliance standpoint and be confident that if a session is ever exploited, the misuse will be surfaced as a high-fidelity, cyber-relevant alert. This approach serves as a compensating control, ensuring that even if access is overextended or misused, the behavior is still visible and actionable.

If a session deviates from learned baselines, such as an unusual command sequence, new lateral movement path, or activity outside of scheduled hours, Darktrace can flag it immediately. These insights can be used to trigger manual investigation or automated enforcement actions, such as access revocation or session isolation, depending on policy.
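A heavily simplified sketch of that baseline comparison — in the product the baseline is learned by Self-Learning AI; here it is hard-coded for illustration:

```python
# Invented baseline for one remote-access profile: which commands are
# normal, and during which local hours activity is expected.
baseline = {
    "commands": {"read_status", "read_diagnostics"},
    "hours": range(8, 18),  # scheduled maintenance window
}

def session_anomalies(commands, hour):
    """Flag commands outside the learned set and out-of-hours activity."""
    flags = [f"unusual command: {c}" for c in commands
             if c not in baseline["commands"]]
    if hour not in baseline["hours"]:
        flags.append(f"activity outside scheduled hours ({hour}:00)")
    return flags
```

A routine daytime diagnostics session produces no flags, while a 3 a.m. session issuing a write command produces two — each of which could feed an alert or an automated enforcement action.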

This layered approach enables real-time decision-making, supports uninterrupted operations, and delivers complete accountability for all remote activity, without slowing down critical work or disrupting industrial workflows.

Where Zero Trust Access Meets AI‑Driven Oversight:

  • Granular Access Enforcement: Role-based, just-in-time access that aligns with Zero Trust principles and meets compliance expectations.
  • Context-Enriched Threat Detection: Self-Learning AI detects anomalous OT behavior in real time and ties threats to access events and user activity.
  • Automated Session Oversight: Behavioral anomalies can trigger alerting or automated controls, reducing time-to-contain while preserving uptime.
  • Full Visibility Across Purdue Layers: Correlated data connects remote access events with device-level behavior, spanning IT and OT layers.
  • Scalable, Passive Monitoring: Passive behavioral learning enables coverage across legacy systems and air-gapped environments, no signatures, agents, or intrusive scans required.

Complete security without compromise

We no longer have to choose between operational agility and security control, or between visibility and simplicity. A Zero Trust approach, reinforced by real-time AI detection, enables secure remote access that is both permission-aware and behavior-aware, tailored to the realities of industrial operations and scalable across diverse environments.

Because when it comes to protecting critical infrastructure, access without detection is a risk, and detection without access control is incomplete.

About the author
Pallavi Singh
Product Marketing Manager, OT Security & Compliance

Blog / November 20, 2025

Securing Generative AI: Managing Risk in Amazon Bedrock with Darktrace / CLOUD


Security risks and challenges of generative AI in the enterprise

Generative AI and managed foundation model platforms like Amazon Bedrock are transforming how organizations build and deploy intelligent applications. From chatbots to summarization tools, Bedrock enables rapid agent development by connecting foundation models to enterprise data and services. But with this flexibility comes a new set of security challenges, especially around visibility, access control, and unintended data exposure.

As organizations move quickly to operationalize generative AI, traditional security controls are struggling to keep up. Bedrock’s multi-layered architecture, spanning agents, models, guardrails, and underlying AWS services, creates new blind spots that standard posture management tools weren’t designed to handle. Visibility gaps make it difficult to know which datasets agents can access, or how model outputs might expose sensitive information. Meanwhile, developers often move faster than security teams can review IAM permissions or validate guardrails, leading to misconfigurations that expand risk. In shared-responsibility environments like AWS, this complexity can blur the lines of ownership, making it critical for security teams to have continuous, automated insight into how AI systems interact with enterprise data.

Darktrace / CLOUD provides comprehensive visibility and posture management for Bedrock environments, automatically detecting and proactively scanning agents and knowledge bases, helping teams secure their AI infrastructure without slowing down expansion and innovation.

A real-world scenario: When access goes too far

Consider a scenario where an organization deploys a Bedrock agent to help internal staff quickly answer business questions using company knowledge. The agent was connected to a knowledge base pointing at documents stored in Amazon S3 and given access to internal services via APIs.

To get the system running quickly, developers assigned the agent a broad execution role. This role granted access to multiple S3 buckets, including one containing sensitive customer records. The over-permissioning wasn’t malicious; it stemmed from the complexity of IAM policy creation and the difficulty of identifying which buckets held sensitive data.

The team assumed the agent would only use the intended documents. However, they did not fully consider how employees might interact with the agent or how it might act on the data it processed.  

When an employee asked a routine question about quarterly customer activity, the agent surfaced insights that included regulated data, revealing it to someone without the appropriate access.

This wasn’t a case of prompt injection or model manipulation. The agent simply followed instructions and used the resources it was allowed to access. The exposure was valid under IAM policy, but entirely unintended.
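The kind of over-permissioning in this scenario can be spotted mechanically. Here is a hedged sketch, using an invented IAM policy document, that flags Allow statements combining wildcard S3 actions with a wildcard resource:

```python
# Illustrative check against a hypothetical IAM policy document (plain
# dict in the standard IAM JSON shape). Real reviews consider far more,
# e.g. partial wildcards and condition keys.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::agent-docs/*"},   # intended scope
        {"Effect": "Allow", "Action": "s3:*",
         "Resource": "*"},                           # over-broad
    ]
}

def overly_broad(statement) -> bool:
    """True for Allow statements with wildcard S3 actions on all resources."""
    actions = statement["Action"]
    if isinstance(actions, str):
        actions = [actions]
    resources = statement["Resource"]
    if isinstance(resources, str):
        resources = [resources]
    wide_action = any(a in ("*", "s3:*") for a in actions)
    wide_resource = any(r == "*" for r in resources)
    return statement["Effect"] == "Allow" and wide_action and wide_resource

flagged = [s for s in policy["Statement"] if overly_broad(s)]
```

In the scenario above, the second statement is exactly the grant that let the agent read the sensitive bucket: valid under IAM, but far wider than the agent needed.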

How Darktrace / CLOUD prevents these risks

Darktrace / CLOUD helps organizations avoid scenarios like unintended data exposure by providing layered visibility and intelligent analysis across Bedrock and SageMaker environments. Here’s how each capability works in practice:

Configuration-level visibility

Bedrock deployments often involve multiple components: agents, guardrails, and foundation models, each with its own configuration. Darktrace / CLOUD indexes these configurations so teams can:

  1. Inspect deployed agents and confirm they are connected only to approved data sources.
  2. Track evaluation job setups and their links to Amazon S3 datasets, uncovering hidden data flows that could expose sensitive information.
  3. Maintain full awareness of all AI components, reducing the chance of overlooked assets introducing risk.

By unifying configuration data across Bedrock, SageMaker, and other AWS services, Darktrace / CLOUD provides a single source of truth for AI asset visibility. Teams can instantly see how each component is configured and whether it aligns with corporate security policies. This eliminates guesswork, accelerates audits, and helps prevent misaligned settings from creating data exposure risks.

Figure 1: Agents for Bedrock relationship views

Architectural awareness

Complex AI environments can make it difficult to understand how components interact. Darktrace / CLOUD generates real-time architectural diagrams that:

  1. Visualize relationships between agents, models, and datasets.
  2. Highlight unintended data access paths or risk propagation across interconnected services.

This clarity helps security teams spot vulnerabilities before they lead to exposure. By surfacing these relationships dynamically, Darktrace / CLOUD enables proactive risk management, helping teams identify architectural drift, redundant data connections, or unmonitored agents before attackers or accidental misuse can exploit them. This reduces investigation time and strengthens compliance confidence across AI workloads.

Figure 2: Full Bedrock agent architecture including Lambda and IAM permission mapping

Access & privilege analysis

IAM permissions apply to every AWS service, including Bedrock. When Bedrock agents assume IAM roles that were broadly defined for other workloads, they often inherit excessive privileges. Without strict least-privilege controls, the agent may have access to far more data and services than required, creating avoidable security exposure. Darktrace / CLOUD:

  1. Reviews execution roles and user permissions to identify excessive privileges.
  2. Flags anomalies that could enable privilege escalation or unauthorized API actions.

This ensures agents operate within the principle of least privilege, reducing attack surface. Beyond flagging risky roles, Darktrace / CLOUD continuously learns normal patterns of access to identify when permissions are abused or expanded in real time. Security teams gain context into why an action is anomalous and how it could affect connected assets, allowing them to take targeted remediation steps that preserve productivity while minimizing exposure.

Misconfiguration detection

Misconfigurations are a leading cause of cloud security incidents. Darktrace / CLOUD automatically detects:

  1. Publicly accessible S3 buckets that may contain sensitive training data.
  2. Missing guardrails in Bedrock deployments, which can allow inappropriate or sensitive outputs.
  3. Other issues such as lack of encryption, direct internet access, and root access to models.  

By surfacing these risks early, teams can remediate before they become exploitable. Darktrace / CLOUD turns what would otherwise be manual reviews into automated, continuous checks, reducing time to discovery and preventing small oversights from escalating into full-scale incidents. This automated assurance allows organizations to innovate confidently while keeping their AI systems compliant and secure by design.
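As a sketch of what such automated checks might look like — run against invented configuration snapshots rather than a live AWS account:

```python
# Hedged sketch: the config dicts below are hypothetical snapshots of
# an S3 bucket and a Bedrock agent, not output of any real AWS API.

def check_bucket(cfg):
    """Flag the two bucket misconfigurations named above."""
    issues = []
    if cfg.get("public_access"):
        issues.append("S3 bucket is publicly accessible")
    if not cfg.get("encrypted"):
        issues.append("bucket not encrypted at rest")
    return issues

def check_agent(cfg):
    """Flag a Bedrock agent deployed without any guardrails attached."""
    issues = []
    if not cfg.get("guardrails"):
        issues.append("Bedrock agent deployed without guardrails")
    return issues

findings = (
    check_bucket({"name": "training-data", "public_access": True,
                  "encrypted": False})
    + check_agent({"name": "support-bot", "guardrails": []})
)
```

Turning each review item into a function like this is what makes the checks continuous: they can run on every configuration change instead of at audit time.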

Figure 3: Configuration data for Anthropic foundation model

Behavioral anomaly detection

Even with correct configurations, behavior can signal emerging threats. Using AWS CloudTrail, Darktrace / CLOUD:

  1. Monitors for unusual data access patterns, such as agents querying unexpected datasets.
  2. Detects anomalous training job invocations that could indicate attempts to pollute models.

This real-time behavioral insight helps organizations respond quickly to suspicious activity. Because it learns the “normal” behavior of each Bedrock component over time, Darktrace / CLOUD can detect subtle shifts that indicate emerging risks, before formal indicators of compromise appear. The result is faster detection, reduced investigation effort, and continuous assurance that AI-driven workloads behave as intended.
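A toy version of this behavioral check, with an invented, hard-coded baseline standing in for the learned one:

```python
from collections import Counter

# Datasets this (hypothetical) agent has historically accessed. In the
# product this baseline is learned over time; here it is given.
baseline_datasets = {"s3://agent-docs", "s3://faq-exports"}

def unexpected_access(events):
    """events: iterable of (agent, dataset) pairs from audit logs.

    Returns datasets touched that fall outside the baseline, with
    their access counts.
    """
    touched = Counter(ds for _, ds in events)
    return {ds: n for ds, n in touched.items()
            if ds not in baseline_datasets}

# Simulated audit-log entries: the agent suddenly reads a dataset it
# has never accessed before.
events = [
    ("support-bot", "s3://agent-docs"),
    ("support-bot", "s3://customer-records"),
    ("support-bot", "s3://customer-records"),
]
```

The baseline access passes silently; only the never-before-seen dataset surfaces, which is the shift from "is this allowed?" to "is this normal?" that the paragraph describes.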

Conclusion

Generative AI introduces transformative capabilities but also complex risks that evolve alongside innovation. The flexibility of services like Amazon Bedrock enables new efficiencies and insights, yet even legitimate use can inadvertently expose sensitive data or bypass security controls. As organizations embrace AI at scale, the ability to monitor and secure these environments holistically, without slowing development, is becoming essential.

By combining deep configuration visibility, architectural insight, privilege and behavior analysis, and real-time threat detection, Darktrace gives security teams continuous assurance across AI tools like Bedrock and SageMaker. Organizations can innovate with confidence, knowing their AI systems are governed by adaptive, intelligent protection.


About the author
Adam Stevens
Senior Director of Product, Cloud | Darktrace