Employee-Conscious Email Security Solutions in the Workforce
10 Apr 2023
Email threats commonly affect organizations. Read Darktrace's expert insights on how to safeguard your business by educating employees about email security.
When considering email security, IT teams have historically had to choose between excluding employees entirely, or including them but giving them too much power and implementing unenforceable, trust-based policies that try to make up for it.
However, just because email security should not rely on employees, this does not mean they should be excluded entirely. Employees are the ones interacting with emails daily, and their experiences and behaviors can provide valuable security insights and even influence productivity.
AI technology can support employee engagement in a non-intrusive, nuanced way that not only maintains email security, but enhances it.
Finding a Balance of Employee Involvement in Security Strategies
Historically, security solutions offered ‘all or nothing’ approaches to employee engagement. On one hand, when employees are involved, they are unreliable. Employees cannot all be experts in security on top of their actual job responsibilities, and mistakes are bound to happen in fast-paced environments.
Although there have been attempts to raise security awareness, they often fall short: training emails lack context and realism, leaving employees with a poor understanding that frequently leads them to report emails that are actually safe. Having users constantly triage their inboxes and report safe emails wastes time that takes away from their own productivity as well as that of the security team.
Other historic forms of employee involvement also put security at risk. For example, users could create blanket rules through feedback, leading to common problems like safe-listing every email that comes from the gmail.com domain. Other times, employees could choose for themselves to release emails without context or limitations, introducing major risks to the organization. While these actions allow employees to participate in security, they do so at the cost of security itself.
Even lower stakes employee involvement can prove ineffective. For example, excessive warnings when sending emails to external contacts can lead to banner fatigue. When employees see the same warning message or alert at the top of every message, it’s human nature that they soon become accustomed and ultimately immune to it.
On the other hand, when employees are fully excluded from security, an opportunity is missed to fine-tune security according to the actual users and to gain feedback on how well the email security solution is working.
So, both options of historically conventional email security, to include or exclude employees, prove incapable of leveraging employees effectively. The best email security practice strikes a balance between these two extremes, allowing more nuanced interactions that maintain security without interrupting daily business operations. This can be achieved with AI that tailors the interactions specifically to each employee to add to security instead of detracting from it.
Reducing False Reports While Improving Security Awareness Training
Humans and AI-powered email security can simultaneously level up by working together. AI can inform employees and employees can inform AI in an employee-AI feedback loop.
By understanding ‘normal’ behavior for every email user, AI can identify unusual, risky components of an email and take precise action based on the nature of the email to neutralize them, such as rewriting links, flattening attachments, and moving emails to junk. AI can go one step further and explain in non-technical language why it has taken a specific action, which educates users. In contrast to point-in-time simulated phishing email campaigns, this means AI can share its analysis in context and in real time at the moment a user is questioning an email.
The employee-AI feedback loop educates employees so that they can serve as additional enrichment data. It determines the appropriate levels to inform and teach users, while not relying on them for threat detection.
In the other direction, the AI learns from users’ activity in the inbox and gradually factors this into its decision-making. This is not a ‘one size fits all’ mechanism – one employee marking an email as safe will never result in blanket approval across the business – but over time, patterns can be observed and autonomous decision-making enhanced.
Figure 1: The employee-AI feedback loop increases employee understanding without putting security at risk.
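To make the feedback loop concrete, here is a minimal, hypothetical sketch — not Darktrace's actual implementation — of how per-user feedback might be folded into a threat score. The class, method names, and thresholds are all illustrative; the key idea from the text is that feedback nudges the score but is capped, so one "safe" mark can never become blanket approval:

```python
from collections import defaultdict

class FeedbackLoop:
    """Toy sketch: fold per-user 'safe'/'unsafe' reports into a threat
    score as a small, capped adjustment -- never a blanket allow/deny."""

    MAX_NUDGE = 0.15  # feedback can shift a score by at most this much

    def __init__(self):
        # (user, sender) -> net feedback count (+1 safe, -1 unsafe)
        self.feedback = defaultdict(int)

    def record(self, user, sender, marked_safe):
        self.feedback[(user, sender)] += 1 if marked_safe else -1

    def adjusted_score(self, user, sender, model_score):
        # Each consistent report nudges the score by 0.03, capped at MAX_NUDGE,
        # and the result stays within [0, 1].
        nudge = max(-self.MAX_NUDGE,
                    min(self.MAX_NUDGE, 0.03 * self.feedback[(user, sender)]))
        return max(0.0, min(1.0, model_score - nudge))

loop = FeedbackLoop()
loop.record("alice", "news@vendor.example", marked_safe=True)
# Score is slightly reduced for alice only; other users are unaffected.
print(loop.adjusted_score("alice", "news@vendor.example", 0.50))
```

Even after many consistent reports, the adjustment saturates at the cap, which is one simple way to realise "patterns can be observed" without a single user's click rewriting policy for the whole business.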
The employee-AI feedback loop draws out the maximum potential benefits of employee involvement in email security. Other email security solutions only consider the security team, enhancing its workflow but never considering the employees that report suspicious emails. Employees who try to do the right thing but blindly report emails never learn or improve and end up wasting their own time. By considering employees and improving security awareness training, the employee-AI feedback loop can level up users. They learn from the AI explanations how to identify malicious components, and so then report fewer emails but with greater accuracy.
While AI programs have classically acted like black boxes, Darktrace trains its AI on the best data, the organization’s actual employees, and invites both the security team and employees to see the reasoning behind its conclusions. Over time, employees will trust themselves more as they better learn how to discern unsafe emails.
Leveraging AI to Generate Productivity Gains
Uniquely, AI-powered email security can have effects outside of security-related areas. It can save time by managing non-productive email. As the AI constantly learns employee behavior in the inbox, it becomes extremely effective at detecting spam and graymail – emails that aren't necessarily malicious, but clutter inboxes and hamper productivity. It does this on a per-user basis, specific to how each employee treats spam, graymail, and newsletters. The AI learns to detect this clutter and eventually learns which to pull from the inbox, saving time for the employees. This highlights how security solutions can go even further than merely protecting the email environment with a light touch, to the point where AI can promote productivity gains by automating tasks like inbox sorting.
Preventing Email Mishaps: How to Deal with Human Error
Improved user understanding and decision making cannot stop natural human error. Employees are bound to make mistakes and can easily send emails to the wrong people, especially when Outlook auto-fills the wrong recipient. This can have effects ranging anywhere from embarrassing to critical, with major implications on compliance, customer trust, confidential intellectual property, and data loss.
However, AI can help reduce instances of accidentally sending emails to the wrong people. When a user goes to send an email in Outlook, the AI will analyze the recipients. It considers the contextual relationship between the sender and recipients, the relationships the recipients have with each other, how similar each recipient’s name and history is to other known contacts, and the names of attached files.
If the AI determines that the email is outside of a user’s typical behavior, it may alert the user. Security teams can customize what the AI does next: it can block the email, block the email but allow the user to override it, or do nothing but invite the user to think twice. Since the AI analyzes each email, these alerts are more effective than consistent, blanket alerts warning about external recipients, which often go ignored. With this targeted approach, the AI prevents data leakage and reduces cyber risk.
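As a rough sketch of the kind of recipient check described above (the function, thresholds, and addresses are hypothetical — a real system would weigh far more context), an anomalous recipient can be escalated differently depending on whether a sensitive-looking attachment is present:

```python
def recipient_anomaly(sender_history, recipients, attachment_names=()):
    """Toy misdirection check: flag recipients the sender has rarely
    mailed, and escalate when sensitive-looking attachments are present.
    sender_history: dict mapping recipient -> number of past emails."""
    total = sum(sender_history.values()) or 1
    flagged = [r for r in recipients
               if sender_history.get(r, 0) / total < 0.01]  # essentially never mailed
    sensitive = any("confidential" in a.lower() for a in attachment_names)
    if flagged and sensitive:
        return ("hold", flagged)   # block, or block-with-override
    if flagged:
        return ("warn", flagged)   # invite the user to think twice
    return ("send", [])

history = {"colleague@corp.example": 120, "boss@corp.example": 40}
# A never-seen (possibly mistyped) recipient plus a sensitive attachment:
print(recipient_anomaly(history,
                        ["colleague@corp.example", "jon@gmial.example"],
                        ["Q3_Confidential_Forecast.xlsx"]))
```

The point of the tiered return values is the one made in the text: security teams choose whether the AI blocks outright, blocks with an override, or merely prompts.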
Since the AI is always on and continuously learning, it can adapt autonomously to employee changes. If the role of an employee evolves, the AI will learn the new normal, including common behaviors, recipients, attached file names, and more. This allows the AI to continue effectively flagging potential instances of human error, without needing manual rule changes or disrupting the employee’s workflow.
Email Security Informed by Employee Experience
As the practical users of email, employees should be considered when designing email security. This employee-conscious lens to security can strengthen defenses, improve productivity, and prevent data loss.
In these ways, email security can benefit both employees and security teams. Employees can become another layer of defense with improved security awareness training that cuts down on false reports of safe emails. This insight into employee email behavior can also enhance employee productivity by learning and sorting graymail. Finally, viewing security in relation to employees can help security teams deploy tools that reduce data loss by flagging misdirected emails. With these capabilities, Darktrace/Email™ enables security teams to optimize the balance of employee involvement in email security.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Author
Dan Fein
VP, Product
Based in New York, Dan joined Darktrace’s technical team in 2015, helping customers quickly achieve a complete and granular understanding of Darktrace’s product suite. Dan has a particular focus on Darktrace/Email, ensuring that it is effectively deployed in complex digital environments, and works closely with the development, marketing, sales, and technical teams. Dan holds a Bachelor’s degree in Computer Science from New York University.
Carlos Gray
Product Manager
Carlos Gonzalez Gray is a Product Marketing Manager at Darktrace, based in the Madrid Office. As an email security Subject Matter Expert he collaborates with the global product team to align each product with the company’s ethos and ensures Darktrace are continuously pushing the boundaries of innovation. His prior role at Darktrace was in Sales Engineering, leading the Iberian team and specializing in both the email and OT sectors. Additionally, his prior experience as a consultant to IBEX 35 companies in Spain has made him well-versed in compliance, auditing, and data privacy. Carlos holds an Honors BA in Political Science and a Masters in Cybersecurity from IE University.
NIS2 Compliance: Interpreting 'State-of-the-Art' for Organisations
NIS2 Background
17 October 2024 marked the deadline for European Union (EU) Member States to implement the NIS2 Directive into national law. The Directive aims to enhance the EU’s cybersecurity posture by establishing a high common level of cybersecurity for critical infrastructure and services. It builds on its predecessor, the 2018 NIS Directive, by expanding the number of sectors in scope, enforcing greater reporting requirements and encouraging Member States to ensure regulated organisations adopt ‘state-of-the-art' security measures to protect their networks, OT and IT systems.
Figure 1: Timeline of NIS2
The challenge of NIS2 & 'state-of-the-art'
Preamble (51) - "Member States should encourage the use of any innovative technology, including artificial intelligence, the use of which could improve the detection and prevention of cyberattacks, enabling resources to be diverted towards cyberattacks more effectively."
Article 21 - calls on Member States to ensure that essential and important entities “take appropriate and proportionate” cyber security measures, and that they do so by “taking into account the state-of-the-art and, where applicable, relevant European and international standards, as well as the cost of implementation.”
Regulatory expectations and ambiguity of NIS2
While organisations in scope can rely on technical guidance provided by ENISA,1 the EU’s agency for cybersecurity, or individual guidelines provided by Member States or Public-Private Partnerships where they have been published,2 the mention of ‘state-of-the-art’ remains up to interpretation in most Member States. The use of the phrase implies that cybersecurity measures must evolve continuously to keep pace with emerging threats and technological advancements without specifying what ‘state-of-the-art’ actually means for a given context and risk.3
This ambiguity makes it difficult for organisations to determine what constitutes compliance at any given time and could lead to potential inconsistencies in implementation and enforcement. Moreover, the rapid pace of technological change means that what is considered "state-of-the-art" today will become outdated, further complicating compliance efforts.
However, this is not unique to NIS regulation. As EU scholars have noted, while “state-of-the-art" is widely referred to in legal text relating to technology, there is no standardised legal definition of what it actually constitutes.4
Defining state-of-the-art cybersecurity
In this blog, we outline technical considerations for state-of-the-art cybersecurity. We draw from expertise within our own business and in academia as well as guidelines and security standards set by national agencies, such as Germany’s Federal Office for Information Security (BSI) or Spain’s National Security Framework (ENS), to put forward five criteria to define state-of-the-art cybersecurity.
The five core criteria include:
Continuous monitoring
Incident correlation
Detection of anomalous activity
Autonomous response
Proactive cyber resilience
These principles build on long-standing security considerations, such as business continuity, vulnerability management and basic security hygiene practices.
Although these considerations are written in the context of the NIS2 Directive, they are likely to also be relevant for other jurisdictions. We hope these criteria help organisations understand how to best meet their responsibilities under the NIS2 Directive and assist Competent Authorities in defining compliance expectations for the organisations they regulate.
Ultimately, adopting state-of-the-art cyber defences is crucial for ensuring that organisations are equipped with the best tools to combat new and fast-growing threats. Leading technical authorities, such as the UK National Cyber Security Centre (NCSC), recognise that adoption of AI-powered cyber defences will offset the increased volume and impact of AI on cyber threats.5
State-of-the-art cybersecurity in the context of NIS2
1. Continuous monitoring
Continuous monitoring is required to protect an increasingly complex attack surface.
First, organisations' attack surfaces have expanded following the widespread adoption of hybrid or cloud infrastructures and the increased adoption of connected Internet of Things (IoT) devices.6 This exponential growth creates a complex digital environment for organisations, making it difficult for security teams to track all internet-facing assets and identify potential vulnerabilities.
Second, with the significant increase in the speed and sophistication of cyber-attacks, organisations face a greater need to detect security threats and non-compliance issues in real-time.
Continuous monitoring, defined by the U.S. National Institute of Standards and Technology (NIST) as the ability to maintain “ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions,”7 has therefore become a cornerstone of an effective cybersecurity strategy. By implementing continuous monitoring, organisations can ensure a real-time understanding of their attack surface and that new external assets are promptly accounted for. For instance, Spain’s technical guidelines for regulation, as set forth by the National Security Framework (Royal Decree 311/2022), highlight the importance of adopting continuous monitoring to detect anomalous activities or behaviours and to ensure timely responses to potential threats (article 10).8
This can be achieved through the following means:
All assets that form part of an organisation's estate, both known and unknown, must be identified and continuously monitored for current and emerging risks. Germany’s BSI mandates the continuous monitoring of all protocol and logging data in real-time (requirement #110).9 This should be conducted alongside any regular scans to detect unknown devices or cases of shadow IT, or the use of unauthorised or unmanaged applications and devices within an organisation, which can expose internet-facing assets to unmonitored risks. Continuous monitoring can therefore help identify potential risks and high-impact vulnerabilities within an organisation's digital estate and eliminate potential gaps and blind spots.
Organisations looking to implement more efficient continuous monitoring strategies may turn to automation, but, as the BSI notes, it is important for responsible parties to be immediately warned if an alert is raised (reference 110).10 Following the BSI’s recommendations, the alert must be examined and, if necessary, contained within a short period of time corresponding with the analysis of the risk at hand.
Finally, risk scoring and vulnerability mapping are also essential parts of this process. As NIST’s definition makes clear, the purpose of continuous monitoring is to support organisational risk management decisions.11 By fostering a dynamic understanding of risk across an organisation's digital assets, risk scoring and vulnerability mapping allow organisations to prioritise the risks associated with their most critically exposed assets.
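A deliberately simplified sketch of risk scoring (the formula, weights, and asset names are illustrative assumptions, not a prescribed method): combine internet exposure, worst vulnerability severity, and business criticality into a single number used to rank assets for attention.

```python
def asset_risk(internet_facing, cvss_scores, criticality):
    """Toy risk score on a 0-100 scale, combining:
    - internet_facing: bool, is the asset exposed to the internet?
    - cvss_scores: list of CVSS severities (0-10) for known vulnerabilities
    - criticality: business criticality, 1 (low) to 5 (high)."""
    worst = max(cvss_scores, default=0.0)
    base = (worst / 10.0) * (criticality / 5.0)
    if internet_facing:
        base = min(1.0, base * 1.5)  # exposed assets are ranked higher
    return round(base * 100)

assets = {
    "vpn-gateway":  asset_risk(True,  [9.8], 5),  # exposed, critical, severe CVE
    "print-server": asset_risk(False, [5.3], 1),  # internal, low criticality
}
# Prioritise remediation by descending score:
print(sorted(assets, key=assets.get, reverse=True))
```

Real continuous-monitoring platforms use far richer signals, but the ranking step — turning raw findings into prioritised risk — is the part the guidelines above call for.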
2. Correlation of incidents across your entire environment
Viewing and correlating incident alerts when working with different platforms and tools poses significant challenges to SecOps teams. Security professionals often struggle to cross-reference alerts efficiently, which can lead to potential delays in identifying and responding to threats. The complexity of managing multiple sources of information can overwhelm teams, making it difficult to maintain a cohesive understanding of the security landscape.
This fragmentation underscores the need for a centralised approach that provides a "single pane of glass" view of all cybersecurity alerts. These systems streamline the process of monitoring and responding to incidents, enabling security teams to act more swiftly and effectively. By consolidating alerts into a unified interface, organisations can enhance their ability to detect and mitigate threats, ultimately improving their overall security posture.
To achieve consolidation, organisations should consider the role automation can play when reviewing and correlating incidents. This is reflected in Spain’s technical guidelines for national security regulations regarding the requirements for the “recording of activity” (reinforcement R5).12 Specifically, the guidelines state that:
"The system shall implement tools to analyse and review system activity and audit information, in search of possible or actual security compromises. An automatic system for collection of records, correlation of events and automatic response to them shall be available”.13
Similarly, the German guidelines stress that automated central analysis is essential not only for recording all protocol and logging data generated within the system environment but also to ensure that the data is correlated to ensure that security-relevant processes are visible (article 115).14
Correlating disparate incidents and alerts is especially important when considering the increased connectivity between IT and OT environments driven by business and functional requirements. Indeed, organisations that believe they have air-gapped systems are now becoming aware of points of IT/OT convergence within their systems. It is therefore crucial for organisations managing both IT and OT environments to be able to visualise and secure devices across all IT and OT protocols in real-time to identify potential spillovers.
By consolidating data into a centralised system, organisations can achieve a more resilient posture. This approach exposes and eliminates gaps between people, processes, and technology before they can be exploited by malicious actors. As seen in the German and Spanish guidelines, a unified view of security alerts not only enhances the efficacy of threat detection and response but also ensures comprehensive visibility and control over the organisation's cybersecurity posture.
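As a minimal illustration of the correlation step (the grouping rule, field names, and alert data are all hypothetical simplifications), alerts from different tools can be clustered into incidents when they concern the same host within a time window:

```python
from datetime import datetime, timedelta

def correlate(alerts, window_minutes=30):
    """Toy 'single pane of glass' correlation: group alerts from any
    tool (IT or OT) that mention the same host within a time window."""
    alerts = sorted(alerts, key=lambda a: (a["host"], a["time"]))
    incidents, current = [], []
    for a in alerts:
        same_host = current and a["host"] == current[-1]["host"]
        close_in_time = (same_host and
                         a["time"] - current[-1]["time"] <= timedelta(minutes=window_minutes))
        if close_in_time:
            current.append(a)
        else:
            if current:
                incidents.append(current)
            current = [a]
    if current:
        incidents.append(current)
    return incidents

t0 = datetime(2024, 10, 17, 9, 0)
raw = [
    {"tool": "EDR",      "host": "plc-7",  "time": t0},
    {"tool": "firewall", "host": "plc-7",  "time": t0 + timedelta(minutes=5)},
    {"tool": "email",    "host": "laptop", "time": t0 + timedelta(minutes=2)},
]
# Two incidents: the EDR + firewall alerts on plc-7 merge into one.
print(len(correlate(raw)))
```

Production correlation engines match on many more dimensions (users, credentials, network flows), but even this toy version shows why a unified view beats triaging each tool's queue separately.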
3. Detection of anomalous activity
Recent research highlights the emergence of a "new normal" in cybersecurity, marked by an increase in zero-day vulnerabilities. Indeed, for the first time since sharing their annual list, the Five Eyes intelligence alliance reported that in 2023, the majority of the most routinely exploited vulnerabilities were initially exploited as zero-days.15
To effectively combat these advanced threats, policymakers, industry and academic stakeholders alike recognise the importance of anomaly-based techniques to detect both known and unknown attacks.
As AI-enabled threats become more prevalent,16 traditional cybersecurity methods that depend on lists of "known bads" are proving inadequate against rapidly evolving and sophisticated attacks. These legacy approaches are limited because they can only identify threats that have been previously encountered and cataloged. However, cybercriminals are constantly developing new, never-before-seen threats, such as signatureless ransomware or living off the land techniques, which can easily bypass these outdated defences.
The importance of anomaly detection in cybersecurity can be found in Spain’s technical guidelines, which state that “tools shall be available to automate the prevention and response process by detecting and identifying anomalies17” (reinforcement R4, prevention and automatic response to "incident management”).
Similarly, the UK NCSC’s Cyber Assessment Framework (CAF) highlights how anomaly-based detection systems are capable of detecting threats that “evade standard signature-based security solutions” (Principle C2 - Proactive Security Event Discovery18). The CAF’s C2 principle further outlines:
“The science of anomaly detection, which goes beyond using pre-defined or prescriptive pattern matching, is a challenging area. Capabilities like machine learning are increasingly being shown to have applicability and potential in the field of intrusion detection.”19
By leveraging machine learning and multi-layered AI techniques, organisations can move away from static rules and signatures, adopting a more behavioural approach to identifying and containing risks. This shift not only enhances the detection of emerging threats but also provides a more robust defence mechanism.
A key component of this strategy is behavioural zero trust, which focuses on identifying unauthorised, out-of-character attempts by users, devices, or systems. Organisations should follow a robust procedure to verify each user and issue the minimum required access rights based on their role and expected or established patterns of activity. By doing so, organisations can stay ahead of emerging threats and embrace a more dynamic and resilient cybersecurity strategy.
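A toy sketch of a behavioural zero-trust check (the profile fields and rules are illustrative assumptions only): an access attempt is compared against a user's established pattern, and any deviation produces explainable reasons rather than a silent block.

```python
def out_of_character(user_profile, action):
    """Toy behavioural zero-trust check: compare an access attempt
    against a user's established pattern (resources, hours, device).
    Returns a list of human-readable reasons; empty means consistent."""
    reasons = []
    if action["resource"] not in user_profile["resources"]:
        reasons.append("unfamiliar resource")
    start, end = user_profile["hours"]        # typical working hours
    if not (start <= action["hour"] < end):
        reasons.append("unusual time")
    if action["device"] not in user_profile["devices"]:
        reasons.append("unknown device")
    return reasons

profile = {"resources": {"crm", "wiki"}, "hours": (8, 19), "devices": {"laptop-42"}}
# A 3am attempt on a never-used resource trips two checks:
print(out_of_character(profile, {"resource": "payroll-db", "hour": 3,
                                 "device": "laptop-42"}))
```

Returning reasons rather than a bare verdict mirrors the explainability theme earlier in the piece: deviations should be reviewable, not opaque.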
4. Autonomous response
The speed at which cyber-attacks occur means that defenders must be equipped with tools that match the sophistication and agility of those used by attackers. Autonomous response tools are thus essential for modern cyber defence, as they enable organisations to respond to both known and novel threats in real time.
These tools leverage a deep contextual and behavioral understanding of the organisation to take precise actions, effectively containing threats without disrupting business operations.
To avoid unnecessary business disruption and maintain robust security, especially in more sensitive networks such as OT environments, it is crucial for organisations to determine the appropriate response for their environment. This can range from taking autonomous, native actions, such as isolating or blocking devices, to integrating the autonomous response tool with firewalls or other security tools to take customised actions.
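One way to picture this environment-dependent tuning (a hypothetical policy sketch, with made-up action names and thresholds, not a vendor's actual response matrix): action severity scales with detection confidence, and sensitive OT networks default to narrower, human-in-the-loop responses.

```python
def choose_response(confidence, environment):
    """Toy response policy: scale action severity with detection
    confidence (0-1), staying conservative in sensitive OT networks."""
    if environment == "OT":
        # In OT, prefer alerting operators over autonomous disruption,
        # escalating only on high-confidence detections.
        return "alert_operator" if confidence < 0.9 else "block_connection"
    if confidence >= 0.9:
        return "isolate_device"     # strongest native action
    if confidence >= 0.6:
        return "block_connection"   # surgical, connection-level action
    return "monitor"

print(choose_response(0.95, "IT"))  # full isolation in IT
print(choose_response(0.95, "OT"))  # narrower action in OT, same confidence
```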
Autonomous response solutions should also use a contextual understanding of the business environment to make informed decisions, allowing them to contain threats swiftly and accurately. This means that even as cyber-attacks evolve and become more sophisticated, organisations can maintain continuous protection without compromising operational efficiency.
Indeed, research into the adoption of autonomous cyber defences points to the importance of implementing “organisation-specific" and “context-informed” approaches.20 To decide the appropriate level of autonomy for each network action, it is argued, it is essential to use evidence-based risk prioritisation that is customised to the specific operations, assets, and data of individual enterprises.21
By adopting autonomous response solutions, organisations can ensure their defences are as dynamic and effective as the threats they face, significantly enhancing their overall security posture.
5. Proactive cyber resilience
Adopting a proactive approach to cybersecurity is crucial for organisations aiming to safeguard their operations and reputation. By hardening their defences enough so attackers are unable to target them effectively, organisations can save significant time and money. This proactive stance helps reduce business disruption, reputational damage, and the need for lengthy, resource-intensive incident responses.
Proactive cybersecurity incorporates many of the strategies outlined above. This can be seen in a recent survey of information technology practitioners, which outlines four components of a proactive cybersecurity culture: (1) visibility of corporate assets, (2) leveraging intelligent and modern technology, (3) adopting consistent and comprehensive training methods and (4) implementing risk response procedures.22 To this, we may also add continuous monitoring which allows organisations to understand the most vulnerable and high-value paths across their architectures, allowing them to secure their critical assets more effectively.
Alongside these components, a proactive cyber strategy should be based on a combined business context and knowledge, ensuring that security measures are aligned with the organisation's specific needs and priorities.
This proactive approach to cyber resilience is reflected in Spain’s technical guidance (article 8.2): “Prevention measures, which may incorporate components geared towards deterrence or reduction of the exposure surface, should eliminate or reduce the likelihood of threats materializing.”23 It can also be found in the NCSC’s CAF, which outlines how organisations can achieve “proactive attack discovery” (see Principle C2).24 Likewise, Belgium’s NIS2 transposition guidelines mandate the use of preventive measures to ensure the continued availability of services in the event of exceptional network failures (article 30).25
Ultimately, a proactive approach to cybersecurity not only enhances protection but also lowers regulatory risk and supports the overall resilience and stability of the organisation.
Looking forward
The NIS2 Directive marked a significant regulatory milestone in strengthening cybersecurity across the EU.26 Given the impact of emerging technologies, such as AI, on cybersecurity, it is a welcome step that Member States are encouraged to promote the adoption of ‘state-of-the-art’ cybersecurity across regulated entities.
In this blog, we have sought to translate what state-of-the-art cybersecurity may look like for organisations looking to enhance their cybersecurity posture. To do so, we have built on existing cybersecurity guidance, research and our own experience as an AI-cybersecurity company to outline five criteria: continuous monitoring, incident correlation, detection of anomalous activity, autonomous response, and proactive cyber resilience.
By embracing these principles and evolving cybersecurity practices in line with the state-of-the-art, organisations can comply with the NIS2 Directive while building a resilient cybersecurity posture capable of withstanding evolutions in the cyber threat landscape. Looking forward, it will be interesting to see how other jurisdictions embrace new technologies, such as AI, in solving the cybersecurity problem.
From Hype to Reality: How AI is Transforming Cybersecurity Practices
AI is everywhere, predominantly because it has changed the way humans interact with data. AI is a powerful tool for data analytics, predictions, and recommendations, but accuracy, safety, and security are paramount for operationalization.
In cybersecurity, AI-powered solutions are becoming increasingly necessary to keep up with modern business complexity and this new age of cyber-threat, marked by attacker innovation, use of AI, speed, and scale. The emergence of these new threats calls for a varied and layered approach in AI security technology to anticipate asymmetric threats.
While many cybersecurity vendors are adding AI to their products, they are not always communicating the capabilities or data used clearly. This is especially the case with Large Language Models (LLMs). Many products are adding interactive and generative capabilities which do not necessarily increase the efficacy of detection and response but rather are aligned with enhancing the analyst and security team experience and data retrieval.
Consequently, many people erroneously conflate generative AI with other types of AI. Similarly, only 31% of security professionals report that they are “very familiar” with supervised machine learning, the type of AI most often applied in today’s cybersecurity solutions to identify threats using attack artifacts and facilitate automated responses. This confusion around AI and its capabilities can result in suboptimal cybersecurity measures, overfitting, inaccuracies due to ineffective methods/data, inefficient use of resources, and heightened exposure to advanced cyber threats.
Vendors must cut through the AI market and demystify the technology in their products for safe, secure, and accurate adoption. To that end, let’s discuss common AI techniques in cybersecurity as well as how Darktrace applies them.
Modernizing cybersecurity with AI
Machine learning has presented a significant opportunity to the cybersecurity industry, and many vendors have been using it for years. Despite the high potential benefit of applying machine learning to cybersecurity, not every AI tool or machine learning model is equally effective due to its technique, application, and data it was trained on.
Supervised machine learning and cybersecurity
Supervised machine learning models are trained on labeled, structured data to automate tasks originally performed by humans. Some cybersecurity vendors have been experimenting with supervised machine learning for years, most often automating threat detection based on reported attack data using big data science, shared cyber-threat intelligence, known or reported attack behavior, and classifiers.
In the last several years, however, more vendors have expanded into behavior analytics and anomaly detection. In many applications, this method separates the learning phase, when the behavioral profile is created (baselining), from the subsequent anomaly detection. As such, it does not learn continuously and requires periodic re-training to stay up to date with dynamic business operations and new attack techniques. Unfortunately, this opens the door to a high daily rate of false positives and false negatives.
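To make the supervised paradigm concrete, here is a minimal sketch of a classifier trained once on hand-labeled data. The features, labels, and nearest-centroid method are illustrative assumptions, not any vendor's actual model; the point is that the model only recognizes what resembles its labeled training set, and novel attack behavior requires re-labeling and re-training.

```python
# Hypothetical sketch: a minimal supervised classifier (nearest centroid)
# trained on invented, labeled attack artifacts.
import math

# Each row: (bytes_out, failed_logins, new_domains_contacted), label
train = [
    ((500, 0, 1), "benign"),
    ((800, 1, 2), "benign"),
    ((90000, 12, 40), "malicious"),
    ((70000, 9, 35), "malicious"),
]

def centroids(data):
    """Average the feature vectors of each label into one centroid."""
    sums, counts = {}, {}
    for x, label in data:
        s = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(x, cents):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    return min(cents, key=lambda lab: math.dist(x, cents[lab]))

cents = centroids(train)
print(classify((85000, 10, 38), cents))  # resembles the labeled exfiltration rows
```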
Unsupervised machine learning and cybersecurity
Unlike supervised approaches, unsupervised machine learning does not require labeled training data or human-led training. Instead, it independently analyzes data to detect compelling patterns without relying on knowledge of past threats. This removes the dependency on human input or involvement to guide learning.
However, it is constrained by its input parameters, requiring thoughtful consideration of technique and feature selection to ensure accurate outputs. Additionally, while these anomaly-focused methods can discover genuine patterns in data, some of those patterns may be irrelevant and distracting.
When using models for behavior analytics and anomaly detection, the outputs come in the form of anomalies rather than classified threats, requiring additional modeling for threat behavior context and prioritization. Anomaly detection performed in isolation can produce resource-wasting false positives.
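To make the contrast with supervised learning concrete, here is a minimal sketch of unsupervised anomaly detection using a simple z-score over unlabeled traffic volumes. The data and threshold are invented, and a z-score stands in for richer clustering methods; note that the output is "anomalous", not "malicious" -- threat context must be added by a later layer.

```python
# Hypothetical sketch: unsupervised anomaly detection on unlabeled data.
import statistics

traffic = [510, 495, 530, 480, 505, 520, 90210, 500, 515]  # bytes/min, no labels

mean = statistics.mean(traffic)
stdev = statistics.stdev(traffic)

# Flag values more than 2 standard deviations from the mean
anomalies = [v for v in traffic if abs(v - mean) / stdev > 2]
print(anomalies)  # flags the outlier, but cannot say whether it is a threat
```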
LLMs and cybersecurity
LLMs are a major aspect of mainstream generative AI, and they can be used in both supervised and unsupervised ways. They are pre-trained on massive volumes of data and can be applied to human language, machine language, and more.
With the recent explosion of LLMs in the market, many vendors are rushing to add generative AI to their products, using it for chatbots, Retrieval-Augmented Generation (RAG) systems, agents, and embeddings. Generative AI in cybersecurity can optimize data retrieval for defenders, summarize reporting, or emulate sophisticated phishing attacks for preventative security.
But since LLMs perform semantic analysis, they can struggle to apply the reasoning necessary for security analysis and detection consistently. If not applied responsibly, generative AI can cause confusion by "hallucinating" (referencing invented data) unless additional post-processing limits the impact, or by providing conflicting responses when different security team members write prompts colored by confirmation bias.
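As one concrete illustration of the retrieval step behind the RAG systems and embeddings mentioned above, here is a minimal sketch that ranks stored log summaries by cosine similarity to a query. Real systems use learned embeddings from a model; a hand-rolled bag-of-words vector and invented documents stand in here for illustration.

```python
# Hypothetical sketch: the retrieval step of a RAG pipeline, with
# bag-of-words vectors standing in for learned embeddings.
import math
from collections import Counter

docs = [
    "failed login attempts from unusual IP address",
    "scheduled backup completed successfully",
    "outbound transfer to rare external domain",
]

def embed(text):
    """Toy 'embedding': word-count vector of the lowercased text."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

query = embed("unusual outbound domain transfer")
best = max(docs, key=lambda d: cosine(query, embed(d)))
print(best)  # the most relevant summary is handed to the LLM as context
```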
Combining techniques in a multi-layered AI approach
Each type of machine learning technique has its own set of strengths and weaknesses, so a multi-layered, multi-method approach is ideal to enhance functionality while overcoming the shortcomings of any one method.
Darktrace’s multi-layered AI engine is powered by multiple machine learning approaches, which operate in combination for cyber defense. This allows Darktrace to protect the entire digital estates of the organizations it secures, including corporate networks, cloud computing services, SaaS applications, IoT, Industrial Control Systems (ICS), and email systems.
Plugged into the organization's infrastructure and services, our AI engine ingests and analyzes raw data and its interactions within the environment, forming an understanding of normal behavior right down to the granular details of specific users and devices. The system continually revises its understanding of what is normal based on evolving evidence, learning continuously rather than relying on static baselining.
This dynamic understanding of normal, partnered with dozens of anomaly detection models, means the AI engine can identify, with a high degree of precision, events or behaviors that are both anomalous and unlikely to be benign. Viewing anomalies through the lens of many models, and autonomously fine-tuning each model's performance, gives greater confidence in anomaly detection.
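A minimal sketch can show how continuous learning differs from static baselining. Here an exponentially weighted running estimate stands in for the far richer models described above; the prior, weight, and observations are invented. The notion of "normal" is revised with every observation rather than fixed at baseline time, so it drifts with the environment.

```python
# Hypothetical sketch: continuously updated "normal" for one device,
# using an exponentially weighted running mean and variance.
ALPHA = 0.1  # weight given to new evidence

def update(state, value):
    mean, var = state
    diff = value - mean
    mean += ALPHA * diff
    var = (1 - ALPHA) * (var + ALPHA * diff * diff)
    return mean, var

state = (100.0, 25.0)  # prior belief: connections/hour and its variance
flags = []
for observed in [102, 98, 105, 250, 101]:
    z = abs(observed - state[0]) / (state[1] ** 0.5)
    flags.append("ANOMALOUS" if z > 3 else "normal")
    state = update(state, observed)  # "normal" is revised by every observation

print(flags)
```

Note that the spike at 250 is flagged, but also absorbed into the model's evidence, widening the variance so a lone follow-up reading of 101 is judged against the revised picture rather than a stale baseline.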
The next layer provides event correlation and threat behavior context to assess the risk level of anomalous events. Every anomalous event is investigated by Cyber AI Analyst, which uses a combination of unsupervised machine learning models to analyze logs and supervised machine learning trained on how to investigate. This provides anomaly and risk context along with explainable investigation outcomes.
The ability to identify activity that represents the first footprints of an attacker, without any prior knowledge or intelligence, lies at the heart of the AI system’s efficacy in keeping pace with threat actor innovations and changes in tactics and techniques. It helps the human team detect subtle indicators that can be hard to spot amid the immense noise of legitimate, day-to-day digital interactions. This enables advanced threat detection with full domain visibility.
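One simplified way to picture the correlation layer is grouping individually weak anomalies from one device within a short time window into a single incident, whose combined risk prioritizes it for investigation. The window, scores, and combination rule here are illustrative assumptions, not Darktrace's actual method.

```python
# Hypothetical sketch: correlating per-event anomaly scores into incidents.
from collections import defaultdict

anomalies = [  # (device, timestamp in minutes, anomaly score)
    ("laptop-7", 10, 0.3),
    ("laptop-7", 12, 0.4),
    ("laptop-7", 14, 0.5),
    ("printer-2", 40, 0.2),
]

WINDOW = 10  # group events on the same device into 10-minute buckets

incidents = defaultdict(list)
for device, ts, score in anomalies:
    incidents[(device, ts // WINDOW)].append(score)

# Combined risk: probability that at least one grouped event is not benign
risks = {}
for key, scores in incidents.items():
    p_all_benign = 1.0
    for s in scores:
        p_all_benign *= (1 - s)
    risks[key] = round(1 - p_all_benign, 2)

print(risks)  # the laptop's clustered anomalies outrank the lone printer event
```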
Digging deeper into AI: Mapping specific machine learning techniques to cybersecurity functions
Visibility and control are vital for the practical adoption of AI solutions, as they build trust between human security teams and their AI tools. That is why we want to share specific applications of AI across our solutions, moving beyond hype and buzzwords to provide grounded, technical explanations.
Darktrace’s technology helps security teams cover every stage of the incident lifecycle with a range of comprehensive analysis and autonomous investigation and response capabilities.
Behavioral prediction: Our AI understands your unique organization by learning normal patterns of life. It accomplishes this with multiple clustering algorithms, anomaly detection models, a Bayesian meta-classifier for autonomous fine-tuning, graph theory, and more.
Real-time threat detection: With a true understanding of normal, our AI engine connects anomalous events to risky behavior using probabilistic models.
Investigation: Darktrace performs in-depth analysis and investigation of anomalies, automating Level 1 SOC analysis and augmenting the rest of the SOC team by prioritizing events for human-led investigation. Methods include supervised and unsupervised machine learning models, semantic analysis models, and graph theory.
Response: Darktrace calculates the proportional action to take in order to neutralize in-progress attacks at machine speed. As a result, organizations are protected 24/7, even when the human team is out of the office. Through its understanding of the normal pattern of life of an asset or peer group, the autonomous response engine can isolate anomalous or risky behavior and surgically block it. It can also enforce the peer group's pattern of life when rare and risky behavior continues.
Customizable model editor: This layer of customizable logic models tailors our AI's processing, giving security teams greater visibility and the ability to adapt outputs. This increases explainability, interpretability, and control, and allows the operationalization of AI output to be modified with auditing.
Figure 1. Alerts can be customized in the model editor in many ways, such as editing the thresholds for rarity and unusualness scores.
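The proportional response described above can be sketched as follows, with the peer group's learned pattern of life reduced to a hand-written allow set for illustration. Rather than quarantining a device outright, only the connections that deviate from normal behavior are blocked.

```python
# Hypothetical sketch: surgical response against a learned pattern of life.
peer_group_normal = {("tcp", 443), ("tcp", 25), ("udp", 53)}  # stand-in for learned behavior

def respond(connection):
    proto, port = connection
    if (proto, port) in peer_group_normal:
        return "allow"  # consistent with normal business operations
    return "block"      # surgically block only the deviation

events = [("tcp", 443), ("tcp", 3389), ("udp", 53)]
print([respond(e) for e in events])  # → ['allow', 'block', 'allow']
```

The business-as-usual HTTPS and DNS traffic continues uninterrupted while the rare RDP connection is stopped, which is the sense in which the response is proportional.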
Machine learning is the fundamental ally in cyber defense
Traditional security methods, even those that use a small subset of machine learning, are no longer sufficient: these tools can neither keep up with all possible attack vectors nor respond fast enough to the variety of machine-speed attacks whose complexity exceeds known and expected patterns.
Security teams require advanced detection capabilities, using multiple machine learning techniques to understand the environment, filter the noise, and take action where threats are identified.
Darktrace’s multi-layered AI comes together to achieve behavioral prediction, real-time threat detection and response, and incident investigation, all while empowering your security team with visibility and control.