Implications of NIS2 on cybersecurity and AI

Explore the key aspects of the NIS2 Directive, the latest EU cyber security legislation coming into effect in 2024. Learn how it impacts AI and security teams.

The NIS2 Directive requires member states to adopt laws that will improve the cyber resilience of organizations within the EU. It impacts organizations that are “operators of essential services”. Under NIS1, EU member states could decide for themselves what this meant; to ensure more consistent application, NIS2 sets out its own definition. It eliminates NIS1’s distinction between operators of essential services and digital service providers, instead defining a new list of sectors:

  • Energy (electricity, district heating and cooling, gas, oil, hydrogen)
  • Transport (air, rail, water, road)
  • Banking (credit institutions)
  • Financial market infrastructures
  • Health (healthcare providers and pharma companies)
  • Drinking water (suppliers and distributors)
  • Digital infrastructure (DNS, TLD registries, telcos, data center providers, etc.)
  • ICT service providers (B2B): MSSPs and managed service providers
  • Public administration (central and regional government institutions, as defined per member state)
  • Space
  • Postal and courier services
  • Waste management
  • Chemicals
  • Food
  • Manufacturing of medical devices
  • Computers and electronics
  • Machinery and equipment
  • Motor vehicles, trailers and semi-trailers and other transport equipment
  • Digital providers (online market places, online search engines, and social networking service platforms) and research organizations.

With these updates, it becomes hard to find an industry segment that is not within scope. NIS2 represents legally binding cyber security requirements for a significant region and economy. The standout feature that has garnered the most attention is the tight timeline attached to notification requirements. Under NIS2, in-scope entities must submit an initial report or “early warning” to the competent national authority or computer security incident response team (CSIRT) within 24 hours of becoming aware of a significant incident. This is a new development from the first iteration of the Directive, which used the vaguer language of notifying authorities “without undue delay”.

Another aspect gaining attention is oversight and regulation: regulators will be empowered with significant investigation and supervision powers, including on-site inspections.

The stakes are now higher, with the prospect of fines of up to €10 million or 2% of an offending organization’s annual worldwide turnover, whichever is greater. Added to that, the NIS2 Directive explicitly provides that members of management bodies can be held personally liable for breaches of their duties to ensure compliance with NIS2 obligations.
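To illustrate the arithmetic behind that cap, here is a minimal sketch; the €10 million floor and 2% rate are the figures cited above, while the turnover value is invented for the example.

```python
def nis2_max_fine(annual_worldwide_turnover_eur: float) -> float:
    """Maximum fine: EUR 10M or 2% of annual worldwide turnover,
    whichever is greater (per the figures cited above)."""
    return max(10_000_000, 0.02 * annual_worldwide_turnover_eur)

# Hypothetical organization with EUR 2 billion in annual worldwide turnover:
# 2% of 2,000,000,000 = 40,000,000, which exceeds the 10,000,000 floor.
print(f"Maximum fine: EUR {nis2_max_fine(2_000_000_000):,.0f}")  # EUR 40,000,000
```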

The risk management measures introduced in the Directive are not altogether surprising – they reflect common best practices. Many organizations (especially those that are newly in scope for NIS2) may have to expand their cyber security capabilities, but there’s nothing controversial or alarming in the required measures.  For organizations in this situation, there are various tools, best practices, and frameworks they can leverage.  Darktrace in particular provides capabilities in the areas of visibility, incident handling, and reporting that can help.

NIS2 and Cyber AI

The use of AI is not an outright requirement within NIS2, which may be down to a lack of knowledge and expertise in the area and/or the immaturity of the sector. The clue might be in the timing: the provisional agreement on the NIS2 text was reached in May 2022, six months before ChatGPT and other generative AI tools propelled AI technology to the forefront of public consciousness. If the language were drafted today, it is not far-fetched to imagine AI being mentioned much more prominently and perhaps even becoming a requirement.

NIS2 does, however, very clearly recommend that “member states should encourage the use of any innovative technology, including artificial intelligence”[1].  Another section speaks directly to essential and important entities, saying that they should “evaluate their own cyber security capabilities, and where appropriate, pursue the integration of cyber security enhancing technologies, such as artificial intelligence or machine learning systems…”[2]

One of the recitals states that “member states should adopt policies on the promotion of active cyber protection”, where active cyber protection is defined as “the prevention, detection, monitoring, analysis and mitigation of network security breaches in an active manner”.[3]

From a Darktrace perspective, our self-learning Cyber AI is precisely what enables us to deliver active cyber protection, protecting organizations and uplifting security teams at every stage of the incident lifecycle: from proactively hardening defenses before an attack is launched, to real-time threat detection and response, through to recovering quickly to a state of good health.

The visibility provided by Darktrace is vital to understanding the effectiveness of policies and ensuring policy compliance. NIS2 also covers incident handling and business continuity, which Darktrace HEAL addresses through AI-enabled incident response, readiness reports, simulations, and secure collaborations.

Reporting is integral to NIS2 and organizations can leverage Darktrace’s incident reporting features to present the necessary technical details of an incident and provide a jump start to compiling a full report with business context and impact.  

What’s next for NIS2

We don’t yet know the details of how EU member states will transpose NIS2 into national law; they have until 17th October 2024 to work this out. The Commission also commits to reviewing the functioning of the Directive every three years. Given how quickly our understanding of both the dangers and the power of AI (perhaps even its necessity in the realm of cyber security) is changing, many member states may leverage the recitals’ references to AI to strongly encourage, if not require, essential and important organizations within their jurisdiction to adopt it.

Organizations are starting to prepare now to meet the forthcoming legislation related to NIS2. Download our CISO’s Guide to NIS2 Preparedness, which includes everything you need to know to get ahead of the directive.

[1] NIS2 Directive, recital (51), page 11
[2] NIS2 Directive, recital (89), page 17
[3] NIS2 Directive, recital (57), page 12

Author
John Allen
SVP, Field CISO

John Allen is SVP, Field CISO at Darktrace. He focuses on cyber risk management, governance and compliance, helping drive digital transformations and modernizations, aligning to business and enterprise objectives, navigating cross-functional projects and managing leadership and team building. Allen is credentialed with CRISC from ISACA. Prior to Darktrace, Allen was head of risk, IT for Cardinal Health. Allen earned an MBA and a BS in computer science and engineering from Ohio State University.



CNAPP Alone Isn’t Enough: Focusing on CDR for Real-Time Cross Domain Protection
February 3, 2025

Forecasts predict public cloud spending will soar to over $720 billion by 2025, with 90%[1] of organizations embracing a hybrid cloud approach by 2027. These figures could be eclipsed as more businesses discover the impact AI can have on their productivity. The pace of evolution is staggering, but one thing hasn’t changed: the cloud security market is a maze of complexity, filled with acronyms, overlapping capabilities, and endless use cases tailored to every buyer persona.

On top of this, organizations face a fragmented landscape of security tools, each designed to cover just one slice of the cloud security puzzle. Then there’s CNAPP (Cloud-Native Application Protection Platform) — a broad platform promising to do it all but often falling short, especially around providing runtime detection and response capabilities. It’s no wonder organizations struggle to cut through the noise and find the precision they require.

Looking more closely at what CNAPP has to offer, it can feel as if it is all you would ever need. But is that really the case?

Strengths and limitations of CNAPP

A CNAPP is undeniably a compelling solution. Originating in CSPM (Cloud Security Posture Management), it gave organizations a snapshot of their deployed cloud assets, highlighting whether they were as secure as intended. However, this often resulted in an overwhelming list of issues to fix, leaving organizations unsure where to focus their energy for maximum impact.

To address this, CNAPPs evolved, incorporating capabilities such as identifying software vulnerabilities, mapping attack paths, and understanding which identities could act within the cloud. The goal became clear: prioritize fixes to reduce the risk of compromise.

But what if we could avoid these problems altogether? Imagine deploying software securely from the start, preventing vulnerable packages from being merged and ensuring proper configurations in production environments by shifting left. This preventative approach is vital to any “secure by design” strategy, and CNAPPs have again evolved to add this functionality.
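To make the posture-checking idea concrete, here is a minimal sketch of the kind of rule a CSPM-style or shift-left check might apply: flag firewall or security-group rules that allow inbound traffic from anywhere on sensitive ports. The rule format and port list are illustrative assumptions, not the logic of any particular CNAPP or Darktrace product.

```python
# Minimal sketch of a posture check: flag rules that allow inbound traffic
# from anywhere (0.0.0.0/0) on sensitive ports. The rule schema below is a
# simplified, hypothetical representation of cloud configuration.

SENSITIVE_PORTS = {22, 3389, 3306, 5432}  # SSH, RDP, MySQL, PostgreSQL

def find_open_ingress(rules: list[dict]) -> list[dict]:
    findings = []
    for rule in rules:
        if (
            rule.get("direction") == "ingress"
            and rule.get("cidr") == "0.0.0.0/0"
            and rule.get("port") in SENSITIVE_PORTS
        ):
            findings.append(rule)
    return findings

example_rules = [
    {"id": "sg-1", "direction": "ingress", "port": 443, "cidr": "0.0.0.0/0"},
    {"id": "sg-2", "direction": "ingress", "port": 22, "cidr": "0.0.0.0/0"},   # risky
    {"id": "sg-3", "direction": "ingress", "port": 22, "cidr": "10.0.0.0/8"},  # internal only
]

for finding in find_open_ingress(example_rules):
    print(f"Open ingress on port {finding['port']} in {finding['id']}")
```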

However, as applications grow more complex, so do the variety and scope of potential issues. The responsibility for addressing these challenges often falls to engineers, who are left balancing the pressure to write code with the burden of fixing critical findings that may never even pose a real risk to the organization.

While CNAPP serves as an essential risk prevention tool — focusing on hygiene, compliance, and enabling organizations to deploy high-quality code on well-configured infrastructure — its role is largely limited to reducing the potential for issues. Once applications and infrastructure are live, the game changes. Security’s focus shifts to detecting unwanted activity and responding to real-time risks.

Limitations of CNAPP

Here’s where CNAPP shows its limitations:

1. Blind spots for on-premises workloads

Designed for cloud-native environments, CNAPP can leave blind spots for workloads that remain on-premises, a significant concern given that 90% of organizations are expected to adopt a hybrid cloud strategy by 2027. These blind spots can increase the risk of cross-domain attacks, underscoring the need for a solution that goes beyond prevention alone and adds real-time detection and response.

2. Detecting and mitigating cross-domain threats

Adversaries have evolved to exploit the complexity of hybrid and cloud environments through cross-domain attacks. These attacks span multiple domains, including traditional network environments, identity systems, SaaS platforms, and cloud environments, making them exceptionally difficult to detect and mitigate. Attackers are human and will naturally choose the path of least resistance: why spend time writing a detailed software exploit for a vulnerability if you can simply target an identity?

Imagine a scenario where an attacker compromises an organization via leaked credentials and then moves laterally, similar to the example outlined in this blog: The Price of Admission: Countering Stolen Credentials with Darktrace. If the attacker identifies cloud credentials and moves into the cloud control plane, they could access additional sensitive data. Without a detection platform that monitors these areas for unusual activity and consolidates findings into a unified timeline, detecting these types of attacks becomes incredibly challenging.

A CNAPP might point to a potential misconfiguration of an identity or of secret storage, for example, but it cannot detect when that misconfiguration has been exploited, let alone respond to it.
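That gap, knowing when a misconfiguration is actually being abused, is fundamentally a runtime, behavioral problem. One simplified way to frame it is sketched below: build a per-identity baseline of which control-plane actions are normal, then flag actions that fall outside it. The event format and baseline logic are illustrative assumptions, not a description of Darktrace's detection models.

```python
from collections import defaultdict

# Build a per-identity baseline of control-plane actions seen historically,
# then flag new (identity, action) pairs in recent activity. A deliberately
# simple stand-in for behavioral anomaly detection on cloud audit events.

def build_baseline(events: list[dict]) -> dict[str, set[str]]:
    baseline = defaultdict(set)
    for e in events:
        baseline[e["identity"]].add(e["action"])
    return baseline

def flag_anomalies(baseline: dict[str, set[str]], recent: list[dict]) -> list[dict]:
    return [e for e in recent if e["action"] not in baseline.get(e["identity"], set())]

history = [
    {"identity": "ci-deploy-role", "action": "PutObject"},
    {"identity": "ci-deploy-role", "action": "GetObject"},
]
recent = [
    {"identity": "ci-deploy-role", "action": "GetObject"},
    {"identity": "ci-deploy-role", "action": "ListSecrets"},  # never seen before
]

for e in flag_anomalies(build_baseline(history), recent):
    print(f"Unusual control-plane action: {e['identity']} -> {e['action']}")
```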

Identity + Network: Unlocking cross-domain threats

Identity is more than just a role or username; it is essentially an access point for attackers to leverage and move between different areas of a digital estate. Real-time monitoring of human and non-human identities is crucial for understanding intent, spotting anomalies, and preventing possible attacks before they spread.

Non-human roles, such as service accounts or automation tooling, often operate with broad trust and little oversight. In 2024, the Cybersecurity and Infrastructure Security Agency (CISA)[2] released a warning regarding new strategies employed by the SolarWinds attackers, aimed primarily at cloud infrastructure and non-human identities. The warning details how attackers leverage credentials and valid applications for malicious purposes.

With organizations opting for a hybrid approach, combining network, identity, cloud management, and cloud runtime activity is essential to detecting and mitigating cross-domain attacks. These are just some of the capabilities needed for effective detection and response:

  • AI-driven, automated, and unified investigation of events – given the volume of data and activity within businesses’ digital estates, leveraging AI is vital to help SOC teams understand what is happening and facilitate proportional, effective responses.
  • Real-time monitoring and auditing combined with anomaly detection for human and non-human identities.
  • A unified investigation platform that delivers a real-time understanding of identity, deployed cloud assets, runtime activity, and contextual findings, as well as coverage for remaining on-premises workloads (a simplified sketch of this unified view follows this list).
  • The ability to leverage threat intelligence automatically to detect potentially malicious activities quickly.
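As a simplified illustration of that unified view, the sketch below merges events from different domains (network, identity, and cloud control plane) into a single per-entity timeline so related activity can be read as one sequence. The event schema is hypothetical and exists only for illustration.

```python
from collections import defaultdict

# Merge events from different telemetry sources into one per-entity timeline.
# Each source tags its events with the user or device ("entity") involved;
# the schema here is made up purely for this example.

network_events = [
    {"ts": "2025-01-31T10:02:00Z", "entity": "alice", "detail": "Login from new country"},
]
identity_events = [
    {"ts": "2025-01-31T10:05:00Z", "entity": "alice", "detail": "MFA method changed"},
]
cloud_events = [
    {"ts": "2025-01-31T10:09:00Z", "entity": "alice", "detail": "New access key created"},
]

def unified_timeline(*sources):
    timeline = defaultdict(list)
    for source in sources:
        for event in source:
            timeline[event["entity"]].append(event)
    # Sort each entity's events chronologically (ISO 8601 strings sort lexically).
    return {entity: sorted(events, key=lambda e: e["ts"]) for entity, events in timeline.items()}

for entity, events in unified_timeline(network_events, identity_events, cloud_events).items():
    print(entity)
    for e in events:
        print(f"  {e['ts']}  {e['detail']}")
```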

The future of cloud security: Balancing risk management with real-time detection and response

Darktrace / CLOUD's CDR approach enhances CNAPP by providing the essential detection and native response needed to protect against cross-domain threats. Its agentless, default setup is both cost-effective and scalable, creating a runtime baseline that significantly boosts visibility for security teams. While proactive controls are crucial for cloud security, pairing them with Cloud Detection and Response solutions addresses a broader range of challenges.

With Darktrace / CLOUD, organizations benefit from continuous, real-time monitoring and advanced AI-driven behavioural detection, ensuring proactive detection and a robust cloud-native response. This integrated approach delivers comprehensive protection across the digital estate.

Unlock advanced cloud protection


Download the Darktrace / CLOUD solution brief to discover how autonomous, AI-driven defense can secure your environment in real-time.

  • Achieve 60% more accurate detection of unknown and novel cloud threats.
  • Respond instantly with autonomous threat response, cutting response time by 90%.
  • Streamline investigations with automated analysis, improving ROI by 85%.
  • Gain a 30% boost in cloud asset visibility with real-time architecture modeling.
References

[1] https://www.gartner.com/en/newsroom/press-releases/2024-11-19-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-total-723-billion-dollars-in-2025
[2] https://www.cisa.gov/news-events/cybersecurity-advisories/aa24-057a
Author
Adam Stevens
Director of Product, Cloud Security

Reimagining Your SOC: Overcoming Alert Fatigue with AI-Led Investigations
January 31, 2025

The efficiency of a Security Operations Center (SOC) hinges on its ability to detect, analyze, and respond to threats effectively. With advancements in AI and automation, key early SOC metrics such as Mean Time to Detect (MTTD) have seen significant improvements:

  • 96% of defenders believe AI-powered solutions significantly boost the speed and efficiency of prevention, detection, response, and recovery.
  • Organizations leveraging AI and automation can shorten their breach lifecycle by an average of 108 days compared to those without these technologies.

While tool advances have improved performance and effectiveness in the detection phase, the benefits have not carried over to the next step of the process, where initial alerts are investigated further to determine their relevance and how they relate to other activity. This step is often measured with the metric Mean Time to Analysis (MTTA), although some SOC teams operate a two-level process, with one team performing initial triage to filter out obviously uninteresting alerts and another performing more detailed analysis of the remainder. SOC teams continue to grapple with alert fatigue, overwhelmed analysts, and inefficient triage processes, preventing them from achieving the operational efficiency necessary for a high-performing SOC.

Addressing this core inefficiency requires extending AI's capabilities beyond detection to streamline and optimize the investigative workflows that underpin effective analysis.

Challenges with SOC alert investigation

Detecting cyber threats is only the beginning of the much broader challenge of SOC efficiency. The real bottleneck often lies in the investigation process.

Detection tools and techniques have evolved significantly with the use of machine learning methods, improving early threat detection. However, after a detection is raised, human analysts still typically step in to evaluate the alert, gather context, and determine whether it is a true threat or a false alarm, and why. If it is a threat, further investigation must be performed to understand the full scope of what may be a much larger problem. This phase, measured by the mean time to analysis, is critical for swift incident response.

Challenges with manual alert investigation:

  • Too many alerts
  • Alerts lack context
  • Cognitive load sits with analysts
  • Insufficient talent in the industry
  • Fierce competition for experienced analysts

For many organizations, investigation is where the struggle for efficiency intensifies. Analysts face overwhelming volumes of alerts, a lack of consolidated context, and the mental strain of juggling multiple systems. With a worldwide shortage of four million experienced level-two and level-three SOC analysts, the cognitive burden placed on teams is immense, often leading to alert fatigue and missed threats.

Even with advanced systems in place, not all potential detections are investigated. In many cases, only a quarter of initial alerts are triaged (or analyzed). The issue runs deeper, however: triaging occurs after detection engineering and alert tuning, which often disable alerts that could reveal true threats but are not accurate enough to justify the security team's time and effort. This means some potential threats slip through unnoticed.

Understanding alerts in the SOC: Stopping cyber incidents is hard

Let's take a look at the cyber-attack lifecycle and the steps involved in detecting and stopping an attack:

First, we need a trace of an attack…

The attack will produce some sort of digital trace. Novel attacks, insider threats, and attacker techniques such as living off the land can make attacker activities extremely hard to distinguish.

A detection is created…

Then we have to detect the trace, for example some beaconing to a rare domain. The time until an initial detection alert is raised underpins MTTD (mean time to detect). Reducing this initial unseen duration is where we have seen significant improvement with modern threat detection tools.
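As a rough sketch of what detecting such a trace can involve, the snippet below flags hosts that contact a domain seen by very few other devices at suspiciously regular intervals, a common beaconing heuristic. The log format and thresholds are invented for illustration and do not represent Darktrace's detection logic.

```python
from statistics import pstdev

# Toy beaconing heuristic: a domain contacted by few distinct hosts, at very
# regular intervals, is suspicious. Thresholds and log format are illustrative.

def is_beaconing(timestamps: list[float], max_jitter_s: float = 5.0) -> bool:
    if len(timestamps) < 4:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter_s  # near-constant interval between connections

# connections[domain][host] = list of connection times (seconds)
connections = {
    "rare-domain.example": {"laptop-42": [0, 60, 120, 180, 240]},
    "popular-cdn.example": {"laptop-42": [5, 33, 300], "desktop-7": [12, 80], "server-1": [7, 9]},
}

for domain, hosts in connections.items():
    rare = len(hosts) <= 2  # seen by very few devices across the estate
    for host, times in hosts.items():
        if rare and is_beaconing(sorted(times)):
            print(f"Possible beaconing: {host} -> {domain}")
```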

When it comes to threat detection, the possibilities are vast. Your initial lead could come from anything: an alert about unusual network activity, a potential known malware detection, or an odd email. Once that lead comes in, it's up to your security team to investigate further and determine whether this is a legitimate threat or a false alarm, and what the context behind the alert is.

Investigation begins…

It doesn't stop at a detection. Typically, humans also need to look at the alert, investigate, understand, analyze, and conclude whether this is a genuine threat that needs a response. We normally measure this as MTTA (mean time to analysis).

Conducting the investigation effectively requires a high degree of skill and efficiency, as every second counts in mitigating potential damage. Security teams must analyze the available data, correlate it across multiple sources, and piece together the timeline of events to understand the full scope of the incident. This process involves navigating vast amounts of information, identifying patterns, and discerning relevant details, all while managing the pressure of minimizing downtime and preventing further escalation.

Containment begins…

Once we confirm something is a threat, and the human team determines a response is required and understands the scope, we need to contain the incident. This is normally measured as MTTC (mean time to containment) and can be further split into immediate and more permanent measures.

For more about how AI-led solutions can help in the containment stage, read here: Autonomous Response: Streamlining Cybersecurity and Business Operations

The challenge is not only in 1) detecting threats quickly, but also 2) triaging and investigating them rapidly and with precision, and 3) prioritizing the most critical findings to avoid missed opportunities. Effective investigation demands a combination of advanced tools, robust workflows, and the expertise to interpret and act on the insights they generate. Without these, organizations risk delaying critical containment and response efforts, leaving them vulnerable to greater impact.
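To make those metrics concrete, here is a small sketch that derives MTTD, MTTA, and MTTC from per-incident timestamps; the field names and sample data are invented for illustration.

```python
from datetime import datetime
from statistics import mean

# Compute mean time to detect / analyze / contain from per-incident timestamps.
# Field names and sample data are hypothetical.

incidents = [
    {"start": "2025-01-10T09:00", "detected": "2025-01-10T10:30",
     "analyzed": "2025-01-10T13:00", "contained": "2025-01-10T17:00"},
    {"start": "2025-01-20T22:00", "detected": "2025-01-21T01:00",
     "analyzed": "2025-01-21T06:00", "contained": "2025-01-21T12:00"},
]

def hours_between(a: str, b: str) -> float:
    return (datetime.fromisoformat(b) - datetime.fromisoformat(a)).total_seconds() / 3600

mttd = mean(hours_between(i["start"], i["detected"]) for i in incidents)
mtta = mean(hours_between(i["detected"], i["analyzed"]) for i in incidents)
mttc = mean(hours_between(i["analyzed"], i["contained"]) for i in incidents)

print(f"MTTD: {mttd:.1f}h, MTTA: {mtta:.1f}h, MTTC: {mttc:.1f}h")
```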

While there are further steps (remediation and, of course, complete recovery), here we will focus on investigation.

Developing an AI analyst: How Darktrace replicates human investigation

Darktrace has been working on understanding the investigative process of a skilled analyst since 2017. Through internal research involving Darktrace's expert SOC analysts and machine learning engineers, we developed a formalized understanding of investigative processes. This understanding formed the basis of a multi-layered AI system that systematically investigates data, taking advantage of the speed and breadth afforded by machine systems.

Through this research, we found that the investigative process often revolves around iterating three key steps: hypothesis creation, data collection, and results evaluation.

These steps are crucial for an analyst to determine the nature of a potential threat, and they are equally central to our Cyber AI Analyst, an integral component across our product suite. By encoding them, Darktrace has been able to replicate the human-driven approach to investigating alerts at machine speed and scale.

Here's how it works:

  • When an initial or third-party alert is triggered, the Cyber AI Analyst initiates a forensic investigation by building multiple hypotheses and gathering relevant data to confirm or refute the nature of the suspicious activity, iterating as necessary and continuously refining the original hypothesis as new data emerges throughout the investigation.
  • Using a combination of machine learning techniques, including supervised and unsupervised methods, NLP, and graph theory, the investigation engine conducts a deep analysis and raises incidents to the human team only when the behavior is deemed sufficiently concerning.
  • After classification, the incident information is organized and processed to generate an analysis summary, including the most important descriptive details and a priority classification, ensuring that critical alerts are prioritized for further action by the human analyst team.
  • If the alert is deemed unimportant, the complete analysis is still made available to the human team so that they can see what investigation was performed and why that conclusion was drawn.
[Figure: Darktrace Cyber AI Analyst workflow]

To illustrate with an example: if a laptop is beaconing to a rare domain, the Cyber AI Analyst would create hypotheses covering whether this could be command-and-control traffic, data exfiltration, or something else. It then collects data, analyzes it, makes decisions, iterates, and ultimately raises a new high-level incident alert detailing its findings for human analysts to review and follow up on.
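The loop below is a highly simplified sketch of that hypothesis-driven pattern (create hypotheses, gather data, evaluate, keep what survives) applied to the beaconing example. The hypotheses, telemetry, and scoring are placeholders, not the Cyber AI Analyst's actual methods; a real system would also refine its hypotheses as new data arrives.

```python
# Simplified sketch of a hypothesis-driven investigation loop: propose
# competing hypotheses for a trigger, gather evidence for each, score them,
# and keep those above a confidence threshold. All data here is illustrative.

FAKE_TELEMETRY = {  # stand-in for querying network/identity/endpoint data
    "command-and-control": {"regular_intervals": True, "rare_domain": True},
    "data-exfiltration": {"large_outbound_volume": False},
    "benign-software-update": {"domain_on_allowlist": False},
}

def gather_evidence(hypothesis: str) -> dict:
    return FAKE_TELEMETRY.get(hypothesis, {})

def score(evidence: dict) -> float:
    # Fraction of evidence points that support the hypothesis.
    return sum(evidence.values()) / len(evidence) if evidence else 0.0

def investigate(trigger: str, threshold: float = 0.7) -> list[tuple[str, float]]:
    hypotheses = ["command-and-control", "data-exfiltration", "benign-software-update"]
    findings = []
    for hypothesis in hypotheses:  # iterate over competing explanations
        confidence = score(gather_evidence(hypothesis))
        if confidence >= threshold:
            findings.append((hypothesis, confidence))
    # In a real system, findings would be raised as a high-level incident
    # only when something is sufficiently concerning.
    return findings

print(investigate("laptop-42 beaconing to rare domain"))
# -> [('command-and-control', 1.0)]
```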

Learn more about Darktrace's Cyber AI Analyst

  • Cost savings: equivalent to adding 30 full-time Level 2 analysts without increasing headcount
  • Minimize business risk: takes on the busy work from human analysts and elevates a team's overall decision making
  • Improve security outcomes: identifies subtle, sophisticated threats through holistic investigations

Unlocking an efficient SOC

To create a mature and proactive SOC, addressing the inefficiencies in the alert investigation process is essential. By extending AI's capabilities beyond detection, SOC teams can streamline and optimize investigative workflows, reducing alert fatigue and enhancing analyst efficiency.

This holistic approach not only improves Mean Time to Analysis (MTTA) but also ensures that SOCs are well-equipped to handle the evolving threat landscape. Embracing AI augmentation and automation in every phase of threat management will pave the way for a more resilient and proactive security posture, ultimately leading to a high-performing SOC that can effectively safeguard organizational assets.

Every relevant alert is investigated

The Cyber AI Analyst is not a generative AI system or an XDR or SIEM aggregator that simply prompts you on what to do next. It uses a multi-layered combination of many different specialized AI methods to investigate every relevant alert from across your enterprise (native, third-party, and manual triggers), operating at machine speed and scale. This also positively affects detection engineering and alert tuning, because it does not suffer from fatigue when presented with low-accuracy but potentially valuable alerts.

Retain and improve analyst skills

Transferring most analysis processes to AI systems can erode team skills if analysts do not maintain or build them and if the AI does not explain its process. This can reduce the team's ability to challenge or build on the AI's results and cause issues if the AI is unavailable. The Cyber AI Analyst, by revealing its investigation process, data gathering, and decisions, promotes and improves these skills. Its deep understanding of cyber incidents can also be used for skills training and incident response practice by simulating incidents for security teams to handle.

Create time for cyber risk reduction

Human cybersecurity professionals excel in areas that require critical thinking, strategic planning, and nuanced decision-making. With alert fatigue minimized and investigations streamlined, your analysts can avoid the tedious data collection and analysis stages and instead focus on critical decision-making tasks such as implementing recovery actions and performing threat hunting.

Stay tuned for part 3/3

Part 3/3 in the Reimagine your SOC series explores the preventative security solutions market and effective risk management strategies.

Coming soon!

Author
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface