September 27, 2022

Understanding The Threat of Social Engineering

Learn why MFA and security awareness fall short and how Self-Learning AI can enhance your cyber defense strategy.

Attackers have leveraged social engineering in several high-profile hacks in recent months, with organizations like Uber, Rockstar Games, Cloudflare, Cisco, and LastPass among the most well-known targets.

Social engineering is the manipulation of a user, often through fear or doubt, to coax them into actions like revealing credentials or other sensitive information. The threat landscape is teeming with social engineering attempts across all forms of digital messaging, including email, Slack, and SMS. Moreover, spear-phishing, watering hole attacks, and spoofing are growing increasingly sophisticated.

Organizations are taking numerous defensive measures in response. This includes ramping up security education efforts, as well as configuring multi-factor authentication (MFA). But while MFA strengthens security, it can still be thwarted by hackers, and security awareness training programs often yield mixed or disappointing results. Now, organizations are increasingly turning to artificial intelligence to stop cyber-attacks carried out through social engineering. 

Application-based transportation companies face distinct risks across their complex digital infrastructure, and they require dynamic security solutions that adapt to evolving phishing techniques to ensure reliable service for their customers. To that end, the Bluebird Group, the largest taxi service in Indonesia, has been using Darktrace to protect its email and cloud-based messaging since 2021.

“While we’ve pivoted and shown flexibility in the face of change, so too have the attackers,” said Sigit Djokosoetono, CEO at PT Blue Bird Tbk, a subsidiary of The Bluebird Group. “We’ve seen an uptick in attacks targeting cloud and SaaS applications, for example. Phishing emails are becoming more realistic and more frequent.” 

Traditional email defenses lag behind contemporary social engineering threats because they rely on threat intelligence and on “deny-lists” of email domains and IP addresses already recognized as bad. But attackers can set up new domains for pennies and update their infrastructure too frequently for this method to be effective.

Darktrace’s unique approach to cyber security stops these attacks. Self-Learning AI learns the who, what, when, and where of every email user’s communication patterns. This evolving and multi-dimensional understanding allows the AI to spot subtle signs of a social engineering attack, regardless of whether it is known or novel and regardless of the tactics in place. 
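To make the contrast with static deny-lists concrete, below is a minimal illustrative sketch (in Python) of how a per-sender behavioural baseline might flag an email that deviates from learned communication patterns even when its domain appears on no deny-list. It is not Darktrace’s implementation; the profile structure, fields, and thresholds are assumptions for illustration only.

```python
# Illustrative sketch: per-sender behavioural baseline vs. a static deny-list.
# Field names, thresholds, and the profile structure are assumptions for
# illustration, not Darktrace's actual model.
from dataclasses import dataclass, field

DENY_LIST = {"known-bad.example"}  # static "known bad" domains

@dataclass
class SenderProfile:
    usual_recipient_domains: set = field(default_factory=set)
    usual_sending_hours: set = field(default_factory=set)  # hours of day seen before

    def learn(self, recipient_domain: str, hour: int) -> None:
        self.usual_recipient_domains.add(recipient_domain)
        self.usual_sending_hours.add(hour)

    def anomaly_score(self, recipient_domain: str, hour: int, has_link: bool) -> float:
        score = 0.0
        if recipient_domain not in self.usual_recipient_domains:
            score += 0.5          # never-before-seen correspondent
        if hour not in self.usual_sending_hours:
            score += 0.3          # unusual time of day for this sender
        if has_link:
            score += 0.2          # links add risk when combined with other anomalies
        return score

def assess(profile: SenderProfile, sender_domain: str,
           recipient_domain: str, hour: int, has_link: bool) -> str:
    if sender_domain in DENY_LIST:
        return "block (deny-list)"
    # A freshly registered domain passes the deny-list check, but behavioural
    # context can still flag the message.
    if profile.anomaly_score(recipient_domain, hour, has_link) >= 0.7:
        return "hold for review (behavioural anomaly)"
    return "deliver"

profile = SenderProfile()
profile.learn("partner.example", 10)   # normal history: daytime mail to a known partner
print(assess(profile, "new-domain.example", "finance.example", 3, has_link=True))
```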

If an employee’s credentials are used as part of a social engineering hack, Darktrace can identify the hacker’s malicious behavior. It then makes micro-decisions to neutralize the attack within seconds, stopping the offending message without disruption to the business.

“Darktrace’s AI-powered email security solution has reduced our email threats – such as spear phishing and spoofing – by 95% because it takes autonomous action to contain malicious emails before they reach a user. We can’t expect humans to spot the difference between a real and a fake anymore – it’s not sustainable,” said Djokosoetono. 

More recently, social engineering has expanded beyond email to other platforms like Slack and Microsoft Teams, which can be more difficult for security teams to manage. Darktrace takes a holistic approach to security and can be deployed anywhere an organization has data. The various coverage areas are united through the Self-Learning AI, which looks at every area of the digital estate to reveal the full scope of an attack, even as the attacker traverses multiple digital environments.

“For our employees, a weight is lifted from their shoulders,” said Djokosoetono. “When it comes to something like phishing emails, training on how to spot these is important but we simply cannot put the onus on humans to spot these well-researched, targeted email attacks. With AI in place, we’re stopping these threats before humans have to deal with them."

Darktrace’s AI is always-on and works at machine-speed to protect companies, so employees can focus on producing their best work without the constant fear of malicious messaging. 

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Author
Brianna Leddy
Director of Analyst Operations

Based in San Francisco, Brianna is Director of Analyst Operations at Darktrace. She joined the analyst team in 2016 and has since advised a wide range of enterprise customers on advanced threat hunting and leveraging Self-Learning AI for detection and response. Brianna works closely with the Darktrace SOC team to proactively alert customers to emerging threats and investigate unusual behavior in enterprise environments. Brianna holds a Bachelor’s degree in Chemical Engineering from Carnegie Mellon University.


February 11, 2025

NIS2 Compliance: Interpreting 'State-of-the-Art' for Organisations


NIS2 Background

17 October 2024 marked the deadline for European Union (EU) Member States to implement the NIS2 Directive into national law. The Directive aims to enhance the EU’s cybersecurity posture by establishing a high common level of cybersecurity for critical infrastructure and services. It builds on its predecessor, the 2018 NIS Directive, by expanding the number of sectors in scope, enforcing greater reporting requirements and encouraging Member States to ensure regulated organisations adopt ‘state-of-the-art' security measures to protect their networks, OT and IT systems.  

Figure 1: Timeline of NIS2

The challenge of NIS2 & 'state-of-the-art'

Two provisions in the Directive frame this expectation:

  • Preamble (51): “Member States should encourage the use of any innovative technology, including artificial intelligence, the use of which could improve the detection and prevention of cyberattacks, enabling resources to be diverted towards cyberattacks more effectively.”
  • Article 21: calls on Member States to ensure that essential and important entities “take appropriate and proportionate” cyber security measures, and that they do so by “taking into account the state-of-the-art and, where applicable, relevant European and international standards, as well as the cost of implementation.”

Regulatory expectations and ambiguity of NIS2

While organisations in scope can rely on technical guidance provided by ENISA [1], the EU’s agency for cybersecurity, or individual guidelines provided by Member States or Public-Private Partnerships where they have been published [2], the mention of ‘state-of-the-art’ remains open to interpretation in most Member States. The use of the phrase implies that cybersecurity measures must evolve continuously to keep pace with emerging threats and technological advancements, without specifying what ‘state-of-the-art’ actually means for a given context and risk [3].

This ambiguity makes it difficult for organisations to determine what constitutes compliance at any given time and could lead to potential inconsistencies in implementation and enforcement. Moreover, the rapid pace of technological change means that what is considered "state-of-the-art" today will become outdated, further complicating compliance efforts.

However, this is not unique to NIS regulation. As EU scholars have noted, while “state-of-the-art” is widely referred to in legal texts relating to technology, there is no standardised legal definition of what it actually constitutes [4].

Defining state-of-the-art cybersecurity

In this blog, we outline technical considerations for state-of-the-art cybersecurity. We draw from expertise within our own business and in academia as well as guidelines and security standards set by national agencies, such as Germany’s Federal Office for Information Security (BSI) or Spain’s National Security Framework (ENS), to put forward five criteria to define state-of-the-art cybersecurity.

The five core criteria include:

  • Continuous monitoring
  • Incident correlation
  • Detection of anomalous activity
  • Autonomous response
  • Proactive cyber resilience

These principles build on long-standing security considerations, such as business continuity, vulnerability management and basic security hygiene practices.  

Although these considerations are written in the context of the NIS2 Directive, they are likely to also be relevant for other jurisdictions. We hope these criteria help organisations understand how to best meet their responsibilities under the NIS2 Directive and assist Competent Authorities in defining compliance expectations for the organisations they regulate.  

Ultimately, adopting state-of-the-art cyber defences is crucial for ensuring that organisations are equipped with the best tools to combat new and fast-growing threats. Leading technical authorities, such as the UK National Cyber Security Centre (NCSC), recognise that the adoption of AI-powered cyber defences will help offset the increased volume and impact that AI brings to cyber threats [5].

State-of-the-art cybersecurity in the context of NIS2

1. Continuous monitoring

Continuous monitoring is required to protect an increasingly complex attack surface from attackers.

First, organisations’ attack surfaces have expanded following the widespread adoption of hybrid and cloud infrastructures and the growing number of connected Internet of Things (IoT) devices [6]. This growth creates a complex digital environment, making it difficult for security teams to track all internet-facing assets and identify potential vulnerabilities.

Second, with the significant increase in the speed and sophistication of cyber-attacks, organisations face a greater need to detect security threats and non-compliance issues in real-time.  

Continuous monitoring, defined by the U.S. National Institute of Standards and Technology (NIST) as the ability to maintain “ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions” [7], has therefore become a cornerstone of an effective cybersecurity strategy. By implementing continuous monitoring, organisations can maintain a real-time understanding of their attack surface and ensure that new external assets are promptly accounted for. For instance, Spain’s technical guidelines, set forth by the National Security Framework (Royal Decree 311/2022), highlight the importance of continuous monitoring to detect anomalous activities or behaviours and to ensure timely responses to potential threats (Article 10) [8].

This can be achieved through the following means:  

All assets that form part of an organisation’s estate, both known and unknown, must be identified and continuously monitored for current and emerging risks. Germany’s BSI mandates the continuous monitoring of all protocol and logging data in real time (requirement #110) [9]. This should be conducted alongside regular scans to detect unknown devices and shadow IT, that is, the use of unauthorised or unmanaged applications and devices within an organisation, which can expose internet-facing assets to unmonitored risks. Continuous monitoring therefore helps identify potential risks and high-impact vulnerabilities within an organisation’s digital estate and eliminate gaps and blind spots.
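As a simple illustration of the kind of check this implies, the sketch below reconciles assets discovered by a periodic scan against a maintained inventory and flags unknown devices as possible shadow IT. The inventory, asset names, and alerting function are hypothetical.

```python
# Illustrative sketch: reconcile discovered assets against a known inventory to
# surface shadow IT. The inventories and alert handling are hypothetical.
from datetime import datetime, timezone

known_inventory = {"web-01.corp.example", "db-01.corp.example", "vpn-gw.corp.example"}

def reconcile(discovered_assets: set[str]) -> dict[str, set[str]]:
    """Return unknown (possible shadow IT) and missing (possibly decommissioned) assets."""
    return {
        "unknown": discovered_assets - known_inventory,
        "missing": known_inventory - discovered_assets,
    }

def alert(finding: str, assets: set[str]) -> None:
    # In practice this would feed a SIEM/SOAR queue so responsible parties are warned promptly.
    if assets:
        print(f"[{datetime.now(timezone.utc).isoformat()}] {finding}: {sorted(assets)}")

# Example: output of a periodic external scan
scan_results = {"web-01.corp.example", "vpn-gw.corp.example", "test-box.corp.example"}
results = reconcile(scan_results)
alert("Unmanaged asset discovered (possible shadow IT)", results["unknown"])
alert("Inventoried asset not observed in scan", results["missing"])
```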

Organisations looking to implement more efficient continuous monitoring strategies may turn to automation but, as the BSI notes, it is important for responsible parties to be warned immediately if an alert is raised (reference 110) [10]. Following the BSI’s recommendations, the alert must be examined and, if necessary, contained within a period of time commensurate with the assessed risk.

Finally, risk scoring and vulnerability mapping are also essential parts of this process. Continuous monitoring helps identify potential risks and significant vulnerabilities across an organisation’s digital assets, fostering a dynamic understanding of risk [11]. Risk scoring and vulnerability mapping then allow organisations to prioritise the risks associated with their most critically exposed assets.
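A minimal sketch of the kind of risk scoring described here, combining exposure, asset criticality, and vulnerability severity to prioritise remediation. The weighting scheme and sample data are assumptions for illustration, not a prescribed formula.

```python
# Illustrative risk-scoring sketch: prioritise assets by exposure, criticality,
# and vulnerability severity. The scoring formula and data are assumptions.
assets = [
    {"name": "payments-api", "internet_facing": True,  "criticality": 5, "max_cvss": 9.8},
    {"name": "intranet-wiki", "internet_facing": False, "criticality": 2, "max_cvss": 7.5},
    {"name": "vpn-gateway",  "internet_facing": True,  "criticality": 4, "max_cvss": 6.1},
]

def risk_score(asset: dict) -> float:
    exposure = 1.5 if asset["internet_facing"] else 1.0   # exposed assets weigh more
    return exposure * asset["criticality"] * asset["max_cvss"]

# Highest-risk assets first, so remediation effort goes where it matters most
for asset in sorted(assets, key=risk_score, reverse=True):
    print(f"{asset['name']:>14}: risk {risk_score(asset):.1f}")
```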

2. Correlation of incidents across your entire environment

Viewing and correlating incident alerts when working with different platforms and tools poses significant challenges to SecOps teams. Security professionals often struggle to cross-reference alerts efficiently, which can lead to potential delays in identifying and responding to threats. The complexity of managing multiple sources of information can overwhelm teams, making it difficult to maintain a cohesive understanding of the security landscape.

This fragmentation underscores the need for a centralised approach that provides a "single pane of glass" view of all cybersecurity alerts. These systems streamline the process of monitoring and responding to incidents, enabling security teams to act more swiftly and effectively. By consolidating alerts into a unified interface, organisations can enhance their ability to detect and mitigate threats, ultimately improving their overall security posture.  

To achieve consolidation, organisations should consider the role automation can play when reviewing and correlating incidents. This is reflected in Spain’s technical guidelines for national security regulations, specifically in the requirements for the “recording of activity” (reinforcement R5) [12]. The guidelines state that:

"The system shall implement tools to analyses and review system activity and audit information, in search of possible or actual security compromises. An automatic system for collection of records, correlation of events and automatic response to them shall be available”.13  

Similarly, the German guidelines stress that automated central analysis is essential not only for recording all protocol and logging data generated within the system environment, but also for correlating that data so that security-relevant processes are visible (Article 115) [14].

Correlating disparate incidents and alerts is especially important when considering the increased connectivity between IT and OT environments driven by business and functional requirements. Indeed, organisations that believe they have air-gapped systems are now becoming aware of points of IT/OT convergence within their systems. It is therefore crucial for organisations managing both IT and OT environments to be able to visualise and secure devices across all IT and OT protocols in real-time to identify potential spillovers.  

By consolidating data into a centralised system, organisations can achieve a more resilient posture. This approach exposes and eliminates gaps between people, processes, and technology before they can be exploited by malicious actors. As seen in the German and Spanish guidelines, a unified view of security alerts not only enhances the efficacy of threat detection and response but also ensures comprehensive visibility and control over the organisation's cybersecurity posture.
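To illustrate the kind of automated correlation the Spanish and German guidelines call for, the sketch below groups alerts from different tools (email gateway, EDR, network and OT monitoring) into incidents by shared entity and time proximity. The alert format and correlation window are assumptions.

```python
# Illustrative sketch: correlate alerts from multiple tools into single incidents
# by shared entity and time proximity. Alert fields and the window are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta

alerts = [
    {"source": "email-gw", "entity": "host-42", "time": datetime(2025, 2, 11, 9, 1), "detail": "phishing link clicked"},
    {"source": "edr",      "entity": "host-42", "time": datetime(2025, 2, 11, 9, 4), "detail": "unusual child process"},
    {"source": "ot-ids",   "entity": "plc-07",  "time": datetime(2025, 2, 11, 9, 5), "detail": "new write to controller"},
    {"source": "ndr",      "entity": "host-42", "time": datetime(2025, 2, 11, 9, 7), "detail": "beaconing to rare domain"},
]

WINDOW = timedelta(minutes=15)

def correlate(alerts: list[dict]) -> dict[str, list[list[dict]]]:
    """Group alerts per entity, splitting into separate incidents when gaps exceed WINDOW."""
    by_entity = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a["time"]):
        by_entity[alert["entity"]].append(alert)

    incidents = defaultdict(list)
    for entity, items in by_entity.items():
        current = [items[0]]
        for alert in items[1:]:
            if alert["time"] - current[-1]["time"] <= WINDOW:
                current.append(alert)
            else:
                incidents[entity].append(current)
                current = [alert]
        incidents[entity].append(current)
    return incidents

# One consolidated view per entity, rather than four disconnected tool alerts
for entity, groups in correlate(alerts).items():
    for group in groups:
        print(entity, "->", [f'{a["source"]}: {a["detail"]}' for a in group])
```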

3. Detection of anomalous activity  

Recent research highlights the emergence of a “new normal” in cybersecurity, marked by an increase in zero-day vulnerabilities. Indeed, for the first time since it began sharing its annual list, the Five Eyes intelligence alliance reported that in 2023 the majority of the most routinely exploited vulnerabilities were initially exploited as zero-days [15].

To effectively combat these advanced threats, policymakers, industry and academic stakeholders alike recognise the importance of anomaly-based techniques to detect both known and unknown attacks.

As AI-enabled threats become more prevalent [16], traditional cybersecurity methods that depend on lists of “known bads” are proving inadequate against rapidly evolving and sophisticated attacks. These legacy approaches are limited because they can only identify threats that have been previously encountered and catalogued. However, cybercriminals are constantly developing new, never-before-seen threats, such as signatureless ransomware or living-off-the-land techniques, which can easily bypass these outdated defences.

The importance of anomaly detection in cybersecurity is also reflected in Spain’s technical guidelines, which state that “tools shall be available to automate the prevention and response process by detecting and identifying anomalies” [17] (reinforcement R4, prevention and automatic response to “incident management”).

Similarly, the UK NCSC’s Cyber Assessment Framework (CAF) highlights how anomaly-based detection systems are capable of detecting threats that “evade standard signature-based security solutions” (Principle C2: Proactive Security Event Discovery) [18]. The CAF’s C2 principle further outlines:

“The science of anomaly detection, which goes beyond using pre-defined or prescriptive pattern matching, is a challenging area. Capabilities like machine learning are increasingly being shown to have applicability and potential in the field of intrusion detection.” [19]

By leveraging machine learning and multi-layered AI techniques, organisations can move away from static rules and signatures, adopting a more behavioural approach to identifying and containing risks. This shift not only enhances the detection of emerging threats but also provides a more robust defence mechanism.
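As a simple illustration of anomaly-based detection, as opposed to signature matching, the sketch below fits an Isolation Forest to a device’s historical behaviour and scores new activity against it. The features, data, and parameters are assumptions for illustration and do not describe any specific product.

```python
# Illustrative anomaly-detection sketch using an Isolation Forest over simple
# behavioural features (MB sent/hour, distinct destinations, after-hours flag).
# Features, data, and the contamination setting are assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Historical "normal" behaviour for one device
normal = np.column_stack([
    rng.normal(20, 5, 500),      # ~20 MB sent per hour
    rng.poisson(8, 500),         # ~8 distinct destinations
    np.zeros(500),               # activity during working hours
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_activity = np.array([
    [22, 9, 0],      # resembles business as usual
    [450, 1, 1],     # large after-hours transfer to a single destination
])

# No signature is involved: the model only asks "does this fit learned behaviour?"
for row, verdict in zip(new_activity, model.predict(new_activity)):
    label = "anomalous" if verdict == -1 else "normal"
    print(row, "->", label)
```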

A key component of this strategy is behavioural zero trust, which focuses on identifying unauthorised and out-of-character attempts by users, devices, or systems. Organisations should follow a robust procedure to verify each user and issue the minimum required access rights based on their role and expected or established patterns of activity. By doing so, they can stay ahead of emerging threats and embrace a more dynamic and resilient cybersecurity strategy.
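A minimal sketch of a behavioural zero-trust check follows: each access request is evaluated both against the role’s permitted actions and against the user’s established pattern of activity. The role map, baselines, and decision rules are assumptions for illustration.

```python
# Illustrative behavioural zero-trust sketch: grant access only when a request is
# both permitted by role and consistent with the user's established behaviour.
# The role map, baselines, and decision rules are assumptions for illustration.
ROLE_PERMISSIONS = {
    "finance-analyst": {"erp:read", "reports:read"},
    "plant-engineer": {"scada:read", "scada:write"},
}

USER_BASELINE = {
    # typical working hours and resources each user has historically touched
    "alice": {"role": "finance-analyst", "hours": range(8, 19), "resources": {"erp:read"}},
}

def authorise(user: str, permission: str, hour: int) -> str:
    profile = USER_BASELINE.get(user)
    if profile is None or permission not in ROLE_PERMISSIONS[profile["role"]]:
        return "deny (outside role)"
    out_of_hours = hour not in profile["hours"]
    never_used = permission not in profile["resources"]
    if out_of_hours and never_used:
        return "step-up authentication (out-of-character request)"
    return "allow (least privilege satisfied)"

print(authorise("alice", "erp:read", 10))       # routine request
print(authorise("alice", "reports:read", 2))    # permitted by role, but out of character
print(authorise("alice", "scada:write", 10))    # outside role entirely
```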

4. Autonomous response

The speed at which cyber-attacks occur means that defenders must be equipped with tools that match the sophistication and agility of those used by attackers. Autonomous response tools are thus essential for modern cyber defence, as they enable organisations to respond to both known and novel threats in real time.  

These tools leverage a deep contextual and behavioral understanding of the organisation to take precise actions, effectively containing threats without disrupting business operations.

To avoid unnecessary business disruptions and maintain robust security, especially in more sensitive networks such as OT environments, it is crucial for organisations to determine the appropriate response for their environment. This can range from taking autonomous, native actions, such as isolating or blocking devices, to integrating the autonomous response tool with firewalls or other security tools in order to take customised actions.

Autonomous response solutions should also use a contextual understanding of the business environment to make informed decisions, allowing them to contain threats swiftly and accurately. This means that even as cyber-attacks evolve and become more sophisticated, organisations can maintain continuous protection without compromising operational efficiency.  
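One way such a proportional, context-aware policy could be expressed is sketched below: the action taken depends on both detection confidence and whether the asset sits in an IT or OT environment. The policy table and thresholds are assumptions, not a description of any vendor’s response logic.

```python
# Illustrative sketch of proportional autonomous response: the action depends on
# detection confidence and on the environment the asset belongs to. The policy
# table and thresholds are assumptions for illustration.
def choose_response(confidence: float, environment: str) -> str:
    if environment == "ot":
        # In sensitive OT networks, prefer alerting and targeted blocking over isolation.
        if confidence > 0.9:
            return "block offending connection via firewall integration"
        return "alert operators and increase monitoring"
    # IT environment: native actions are acceptable at lower confidence.
    if confidence > 0.9:
        return "isolate device from the network"
    if confidence > 0.7:
        return "block anomalous connections only"
    return "log and continue monitoring"

print(choose_response(0.95, "it"))   # -> isolate device from the network
print(choose_response(0.95, "ot"))   # -> block offending connection via firewall integration
print(choose_response(0.75, "it"))   # -> block anomalous connections only
```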

Indeed, research into the adoption of autonomous cyber defences points to the importance of implementing “organisation-specific” and “context-informed” approaches [20]. To decide the appropriate level of autonomy for each network action, it is argued, it is essential to use evidence-based risk prioritisation that is customised to the specific operations, assets, and data of individual enterprises [21].

By adopting autonomous response solutions, organisations can ensure their defences are as dynamic and effective as the threats they face, significantly enhancing their overall security posture.

5. Proactive cyber resilience  

Adopting a proactive approach to cybersecurity is crucial for organisations aiming to safeguard their operations and reputation. By hardening their defences enough so attackers are unable to target them effectively, organisations can save significant time and money. This proactive stance helps reduce business disruption, reputational damage, and the need for lengthy, resource-intensive incident responses.

Proactive cybersecurity incorporates many of the strategies outlined above. This can be seen in a recent survey of information technology practitioners, which outlines four components of a proactive cybersecurity culture: (1) visibility of corporate assets, (2) leveraging intelligent and modern technology, (3) adopting consistent and comprehensive training methods, and (4) implementing risk response procedures [22]. To these, we may add continuous monitoring, which reveals the most vulnerable and high-value paths across an organisation’s architecture, allowing critical assets to be secured more effectively.
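To illustrate how continuous monitoring data can feed this kind of path analysis, the sketch below models the digital estate as a graph and enumerates the shortest paths from internet-facing entry points to critical assets. The topology is invented for illustration.

```python
# Illustrative attack-path sketch: model the estate as a graph and find the
# shortest paths from exposed entry points to critical assets. Topology is invented.
import networkx as nx

estate = nx.Graph()
estate.add_edges_from([
    ("internet", "web-server"),
    ("web-server", "app-server"),
    ("app-server", "database"),       # critical IT asset
    ("internet", "vpn-gateway"),
    ("vpn-gateway", "engineering-ws"),
    ("engineering-ws", "plc-07"),     # critical OT asset
])

critical_assets = ["database", "plc-07"]

for target in critical_assets:
    path = nx.shortest_path(estate, source="internet", target=target)
    # Shorter paths from the internet indicate more exposed critical assets to harden first.
    print(f"{target}: {len(path) - 1} hops -> {' -> '.join(path)}")
```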

Alongside these components, a proactive cyber strategy should be based on a combined business context and knowledge, ensuring that security measures are aligned with the organisation's specific needs and priorities.  

This proactive approach to cyber resilience is reflected in Spain’s technical guidance (Article 8.2): “Prevention measures, which may incorporate components geared towards deterrence or reduction of the exposure surface, should eliminate or reduce the likelihood of threats materializing” [23]. It can also be found in the NCSC’s CAF, which outlines how organisations can achieve “proactive attack discovery” (see Principle C2) [24]. Likewise, Belgium’s NIS2 transposition guidelines mandate the use of preventive measures to ensure the continued availability of services in the event of exceptional network failures (Article 30) [25].

Ultimately, a proactive approach to cybersecurity not only enhances protection but also lowers regulatory risk and supports the overall resilience and stability of the organisation.

Looking forward

The NIS2 Directive marked a significant regulatory milestone in strengthening cybersecurity across the EU [26]. Given the impact of emerging technologies, such as AI, on cybersecurity, it is encouraging that Member States are urged to promote the adoption of ‘state-of-the-art’ cybersecurity across regulated entities.

In this blog, we have sought to translate what state-of-the-art cybersecurity may look like for organisations looking to enhance their cybersecurity posture. To do so, we have built on existing cybersecurity guidance, research and our own experience as an AI-cybersecurity company to outline five criteria: continuous monitoring, incident correlation, detection of anomalous activity, autonomous response, and proactive cyber resilience.

By embracing these principles and evolving cybersecurity practices in line with the state-of-the-art, organisations can comply with the NIS2 Directive while building a resilient cybersecurity posture capable of withstanding evolutions in the cyber threat landscape. Looking forward, it will be interesting to see how other jurisdictions embrace new technologies, such as AI, in solving the cybersecurity problem.

NIS2 white paper

Get ahead with the NIS2 White Paper

Get a clear roadmap for meeting NIS2 requirements and strengthening your cybersecurity posture. Learn how to ensure compliance, mitigate risks, and protect your organization from evolving threats.

Download Here!

References

[1] https://www.enisa.europa.eu/publications/implementation-guidance-on-nis-2-security-measures

[2] https://www.teletrust.de/fileadmin/user_upload/2023-05_TeleTrusT_Guideline_State_of_the_art_in_IT_security_EN.pdf

[3] https://kpmg.com/uk/en/home/insights/2024/04/what-does-nis2-mean-for-energy-businesses.html

[4] https://orbilu.uni.lu/bitstream/10993/50878/1/SCHMITZ_IFIP_workshop_sota_author-pre-print.pdf

[5] https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat

[6] https://www.sciencedirect.com/science/article/pii/S2949715923000793

[7] https://csrc.nist.gov/glossary/term/information_security_continuous_monitoring

[8] https://ens.ccn.cni.es/es/docman/documentos-publicos/39-boe-a-2022-7191-national-security-framework-ens/file

[10] https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/KRITIS/Konkretisierung_Anforderungen_Massnahmen_KRITIS.html

[11] https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-137.pdf

[12] https://ens.ccn.cni.es/es/docman/documentos-publicos/39-boe-a-2022-7191-national-security-framework-ens/file

[13] https://ens.ccn.cni.es/es/docman/documentos-publicos/39-boe-a-2022-7191-national-security-framework-ens/file

[14] https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/KRITIS/Konkretisierung_Anforderungen_Massnahmen_KRITIS.html

[15] https://therecord.media/surge-zero-day-exploits-five-eyes-report

[16] https://www.ncsc.gov.uk/report/impact-of-ai-on-cyber-threat

[17] https://ens.ccn.cni.es/es/docman/documentos-publicos/39-boe-a-2022-7191-national-security-framework-ens/file

[18] https://www.ncsc.gov.uk/collection/cyber-assessment-framework/caf-objective-c-detecting-cyber-security-events/principle-c2-proactive-security-event-discovery

[19] https://www.ncsc.gov.uk/collection/cyber-assessment-framework/caf-objective-c-detecting-cyber-security-events/principle-c2-proactive-security-event-discovery

[20] https://cetas.turing.ac.uk/publications/autonomous-cyber-defence-autonomous-agents

[21] https://cetas.turing.ac.uk/publications/autonomous-cyber-defence-autonomous-agents

[22] https://www.researchgate.net/publication/376170443_Cultivating_Proactive_Cybersecurity_Culture_among_IT_Professional_to_Combat_Evolving_Threats

[23] https://ens.ccn.cni.es/es/docman/documentos-publicos/39-boe-a-2022-7191-national-security-framework-ens/file

[24] https://www.ncsc.gov.uk/collection/cyber-assessment-framework/caf-objective-c-detecting-cyber-security-events/principle-c2-proactive-security-event-discovery

[25] https://www.ejustice.just.fgov.be/mopdf/2024/05/17_1.pdf#page=49

[26] ENISA, NIS Directive 2

About the author
Livia Fries
Public Policy Manager, EMEA

February 11, 2025

From Hype to Reality: How AI is Transforming Cybersecurity Practices


AI is everywhere, predominantly because it has changed the way humans interact with data. AI is a powerful tool for data analytics, predictions, and recommendations, but accuracy, safety, and security are paramount for operationalization.

In cybersecurity, AI-powered solutions are becoming increasingly necessary to keep up with modern business complexity and this new age of cyber-threat, marked by attacker innovation, use of AI, speed, and scale. The emergence of these new threats calls for a varied and layered approach in AI security technology to anticipate asymmetric threats.

While many cybersecurity vendors are adding AI to their products, they are not always communicating the capabilities or data used clearly. This is especially the case with Large Language Models (LLMs). Many products are adding interactive and generative capabilities which do not necessarily increase the efficacy of detection and response but rather are aligned with enhancing the analyst and security team experience and data retrieval.

Consequently, many people erroneously conflate generative AI with other types of AI. Meanwhile, only 31% of security professionals report that they are “very familiar” with supervised machine learning, the type of AI most often applied in today’s cybersecurity solutions to identify threats using attack artifacts and facilitate automated responses. This confusion around AI and its capabilities can result in suboptimal cybersecurity measures, overfitting, inaccuracies due to ineffective methods or data, inefficient use of resources, and heightened exposure to advanced cyber threats.

Vendors must cut through the noise of the AI market and demystify the technology in their products to support safe, secure, and accurate adoption. To that end, let’s discuss common AI techniques in cybersecurity as well as how Darktrace applies them.

Modernizing cybersecurity with AI

Machine learning has presented a significant opportunity to the cybersecurity industry, and many vendors have been using it for years. Despite the high potential benefit of applying machine learning to cybersecurity, not every AI tool or machine learning model is equally effective: effectiveness depends on the technique used, how it is applied, and the data it was trained on.

Supervised machine learning and cybersecurity

Supervised machine learning models are trained on labeled, structured data to facilitate the automation of human-trained tasks. Some cybersecurity vendors have been experimenting with supervised machine learning for years, with most automating threat detection based on reported attack data using big data science, shared cyber-threat intelligence, known or reported attack behavior, and classifiers.
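A minimal sketch of this supervised approach: a classifier trained on labeled artifacts can recognize what resembles its training data, but nothing else. The features and the tiny dataset are invented for illustration.

```python
# Illustrative supervised-learning sketch: train a classifier on labeled artifacts.
# Features ([payload size KB, uses known-bad domain, attachment entropy]) and the
# tiny dataset are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

X_train = [
    [12, 1, 7.9], [340, 1, 7.5], [25, 1, 6.8],   # labeled malicious samples
    [8, 0, 3.1], [15, 0, 2.4], [120, 0, 4.0],    # labeled benign samples
]
y_train = [1, 1, 1, 0, 0, 0]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Works well for artifacts resembling past attacks...
print(clf.predict([[30, 1, 7.2]]))   # classified as malicious
# ...but a genuinely novel technique that shares no features with the training
# data is likely to be classified as benign.
print(clf.predict([[10, 0, 3.0]]))
```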

In the last several years, however, more vendors have expanded into the behavior analytics and anomaly detection side. In many applications, this method separates the learning, when the behavioral profile is created (baselining), from the subsequent anomaly detection. As such, it does not learn continuously and requires periodic updating and re-training to try to stay up to date with dynamic business operations and new attack techniques. Unfortunately, this opens the door for a high rate of daily false positives and false negatives.

Unsupervised machine learning and cybersecurity

Unlike supervised approaches, unsupervised machine learning does not require labeled training data or human-led training. Instead, it independently analyzes data to detect compelling patterns without relying on knowledge of past threats. This removes the dependency of human input or involvement to guide learning.

However, it is constrained by input parameters, requiring thoughtful consideration of technique and feature selection to ensure the accuracy of the outputs. Additionally, while these anomaly-focused approaches can discover patterns in data, some of those patterns may be irrelevant and distracting.

When using models for behavior analytics and anomaly detection, the outputs come in the form of anomalies rather than classified threats, requiring additional modeling for threat behavior context and prioritization. Anomaly detection performed in isolation can render resource-wasting false positives.
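A minimal sketch of the unsupervised case: a clustering model surfaces outliers in unlabeled behaviour without any notion of “threat,” so the output still needs threat-behaviour context before it becomes actionable. The data and parameters are invented for illustration.

```python
# Illustrative unsupervised sketch: DBSCAN clusters unlabeled behaviour and marks
# outliers (label -1) without any notion of "threat". Data and parameters are invented.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Unlabeled observations: [logins/hour, distinct services accessed]
normal = rng.normal(loc=[5, 3], scale=[1, 0.5], size=(200, 2))
odd = np.array([[40, 25], [0.1, 18]])          # unusual bursts of activity
observations = np.vstack([normal, odd])

labels = DBSCAN(eps=1.5, min_samples=5).fit_predict(observations)

outliers = observations[labels == -1]
print(f"{len(outliers)} outliers flagged out of {len(observations)} observations")
# The model says "unusual", not "malicious": each outlier still needs
# threat-behaviour context and prioritisation before it becomes an actionable alert.
```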

LLMs and cybersecurity

LLMs are a major aspect of mainstream generative AI, and they can be used in both supervised and unsupervised ways. They are pre-trained on massive volumes of data and can be applied to human language, machine language, and more.

With the recent explosion of LLMs in the market, many vendors are rushing to add generative AI to their products, using it for chatbots, Retrieval-Augmented Generation (RAG) systems, agents, and embeddings. Generative AI in cybersecurity can optimize data retrieval for defenders, summarize reporting, or emulate sophisticated phishing attacks for preventative security.

But since LLMs perform semantic analysis, they can struggle to consistently carry out the reasoning necessary for security analysis and detection. If not applied responsibly, generative AI can cause confusion by “hallucinating” (referencing invented data) unless additional post-processing reduces the impact, or by providing conflicting responses due to confirmation bias in the prompts written by different security team members.
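To make the RAG pattern mentioned above concrete, here is a heavily simplified sketch: the most relevant alert summaries are retrieved and the prompt is grounded in that context. Token overlap stands in for a real embedding model, and the generate step is a hypothetical placeholder rather than a real API.

```python
# Heavily simplified RAG sketch: retrieve relevant context, then ground the prompt
# in it. Token overlap stands in for a real embedding model, and `generate` is a
# hypothetical placeholder rather than a real API.
import re

def similarity(a: str, b: str) -> float:
    """Crude token-overlap stand-in for embedding similarity."""
    ta = set(re.findall(r"[a-z0-9-]+", a.lower()))
    tb = set(re.findall(r"[a-z0-9-]+", b.lower()))
    return len(ta & tb) / len(ta | tb)

documents = [
    "2025-02-10: host-42 beaconed to a rare external domain every 30 seconds.",
    "2025-02-10: scheduled backup job completed for db-01.",
    "2025-02-11: host-42 downloaded an executable from the rare external domain.",
]

question = "What suspicious activity involved host-42?"
top_docs = sorted(documents, key=lambda d: similarity(question, d), reverse=True)[:2]

# Constraining the model to retrieved context (plus post-processing of its output)
# is what reduces the risk of hallucinated details appearing in the answer.
prompt = (
    "Answer using only the context below.\n\n"
    + "\n".join(top_docs)
    + f"\n\nQuestion: {question}"
)
print(prompt)  # in a real pipeline this would be passed to generate(prompt)
```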

Combining techniques in a multi-layered AI approach

Each type of machine learning technique has its own set of strengths and weaknesses, so a multi-layered, multi-method approach is ideal to enhance functionality while overcoming the shortcomings of any one method.

Darktrace’s multi-layered AI engine is powered by multiple machine learning approaches, which operate in combination for cyber defense. This allows Darktrace to protect the entire digital estates of the organizations it secures, including corporate networks, cloud computing services, SaaS applications, IoT, Industrial Control Systems (ICS), and email systems.

Plugged into the organization’s infrastructure and services, our AI engine ingests and analyzes the raw data and its interactions within the environment and forms an understanding of normal behavior, right down to the granular details of specific users and devices. The system continually revises its understanding of what is normal based on evolving evidence, learning continuously rather than relying on periodic baselining.

This dynamic understanding of normal partnered with dozens of anomaly detection models means that the AI engine can identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. Understanding anomalies through the lens of many models as well as autonomously fine-tuning the models’ performances gives us a higher understanding and confidence in anomaly detection.
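A toy sketch of the difference between periodic baselining and continuous learning: the profile below is an exponentially weighted estimate that updates with every observation, and anomaly scores are computed against the current estimate. The statistics and threshold are assumptions for illustration, not Darktrace’s models.

```python
# Toy sketch of continuous learning vs. static baselining: an exponentially weighted
# mean/variance is updated on every observation, so "normal" tracks gradual change.
# The decay factor and scoring are assumptions for illustration.
import math

class ContinuousProfile:
    def __init__(self, decay: float = 0.05):
        self.decay = decay
        self.mean = 0.0
        self.var = 1.0
        self.initialised = False

    def score(self, value: float) -> float:
        """Return a z-score against the current estimate of normal."""
        if not self.initialised:
            return 0.0
        return abs(value - self.mean) / math.sqrt(self.var + 1e-9)

    def update(self, value: float) -> None:
        if not self.initialised:
            self.mean, self.initialised = value, True
            return
        delta = value - self.mean
        self.mean += self.decay * delta
        self.var = (1 - self.decay) * (self.var + self.decay * delta * delta)

profile = ContinuousProfile()
for mb_sent in [20, 22, 19, 21, 23, 20, 24, 22]:     # gradually learned normal traffic
    profile.update(mb_sent)

print(round(profile.score(23), 2))    # within the learned range -> small score
print(round(profile.score(400), 2))   # sudden large transfer -> very large score
```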

The next layer provides event correlation and threat behavior context to understand the risk level of anomalous events. Every anomalous event is investigated by Cyber AI Analyst, which combines unsupervised machine learning models that analyze logs with supervised machine learning trained on how to investigate. This provides anomaly and risk context along with explainable investigation outcomes.

The ability to identify activity that represents the first footprints of an attacker, without any prior knowledge or intelligence, lies at the heart of the AI system’s efficacy in keeping pace with threat actor innovations and changes in tactics and techniques. It helps the human team detect subtle indicators that can be hard to spot amid the immense noise of legitimate, day-to-day digital interactions. This enables advanced threat detection with full domain visibility.

Digging deeper into AI: Mapping specific machine learning techniques to cybersecurity functions

Visibility and control are vital for the practical adoption of AI solutions, as it builds trust between human security teams and their AI tools. That is why we want to share some specific applications of AI across our solutions, moving beyond hype and buzzwords to provide grounded, technical explanations.

Darktrace’s technology helps security teams cover every stage of the incident lifecycle with a range of comprehensive analysis and autonomous investigation and response capabilities.

  1. Behavioral prediction: Our AI understands your unique organization by learning normal patterns of life. It accomplishes this with multiple clustering algorithms, anomaly detection models, a Bayesian meta-classifier for autonomous fine-tuning, graph theory, and more.
  2. Real-time threat detection: With a true understanding of normal, our AI engine connects anomalous events to risky behavior using probabilistic models. 
  3. Investigation: Darktrace performs in-depth analysis and investigation of anomalies, in particular automating Level 1 of a SOC team and augmenting the rest of the SOC team through prioritization for human-led investigations. Some of these methods include supervised and unsupervised machine learning models, semantic analysis models, and graph theory.
  4. Response: Darktrace calculates the proportional action to take in order to neutralize in-progress attacks at machine speed. As a result, organizations are protected 24/7, even when the human team is out of the office. Through understanding the normal pattern of life of an asset or peer group, the autonomous response engine can isolate the anomalous or risky behavior and surgically block it. The engine can also enforce the peer group’s pattern of life when rare and risky behavior continues (see the sketch after this list).
  5. Customizable model editor: This layer of customizable logic models tailors our AI’s processing to give security teams more visibility as well as the opportunity to adapt outputs, therefore increasing explainability, interpretability, control, and the ability to modify the operationalization of the AI output with auditing.
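As a hedged illustration of the response behaviour described in item 4, the sketch below enforces a peer group’s pattern of life: only connections the device’s peers normally make are allowed while the risky behaviour persists. The peer data and enforcement rule are assumptions, not Darktrace’s implementation.

```python
# Illustrative sketch of enforcing a peer group's "pattern of life" (see item 4 above):
# when a device behaves riskily, only connections its peers normally make are allowed.
# Peer data and the enforcement rule are assumptions for illustration.
PEER_GROUP_CONNECTIONS = {
    "ws-finance-01": {"erp.corp.example:443", "mail.corp.example:443"},
    "ws-finance-02": {"erp.corp.example:443", "reports.corp.example:443"},
    "ws-finance-03": {"erp.corp.example:443", "mail.corp.example:443"},
}

# The peer group's combined pattern of life
peer_pattern = set().union(*PEER_GROUP_CONNECTIONS.values())

def enforce_pattern_of_life(device: str, destination: str) -> str:
    """Surgically block only the behaviour that falls outside the peer group's norm."""
    if destination in peer_pattern:
        return f"allow {device} -> {destination}"
    return f"block {device} -> {destination} (outside peer group's pattern of life)"

print(enforce_pattern_of_life("ws-finance-04", "erp.corp.example:443"))
print(enforce_pattern_of_life("ws-finance-04", "rare-host.example:8443"))
```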

See the complete AI architecture in the paper “The AI Arsenal: Understanding the Tools Shaping Cybersecurity.”

Figure 1: Alerts can be customized in the model editor in many ways, such as editing the thresholds for rarity and unusualness scores.

Machine learning is the fundamental ally in cyber defense

Traditional security methods, even those that use a small subset of machine learning, are no longer sufficient: these tools can neither keep up with all possible attack vectors nor respond fast enough to the variety of machine-speed attacks, whose complexity goes well beyond known and expected patterns.

Security teams require advanced detection capabilities, using multiple machine learning techniques to understand the environment, filter the noise, and take action where threats are identified.

Darktrace’s multi-layered AI comes together to achieve behavioral prediction, real-time threat detection and response, and incident investigation, all while empowering your security team with visibility and control.

Download the full report

Discover specifically how Darktrace applies different types of AI to improve cybersecurity efficacy and operations in this technical paper.

About the author
Nicole Carignan
SVP, Security & AI Strategy, Field CISO