AI and Cybersecurity Predictions for 2025

November 27, 2024 | Thought Leadership
This blog outlines ten trends we expect to see in cybersecurity in 2025, from a rise in multi-agent systems to heightened supply chain risk from LLMs.

Each year, Darktrace's AI and cybersecurity experts reflect on the events of the past 12 months and predict the trends we expect to shape the cybersecurity landscape in the year ahead. In 2024, we predicted that global elections, fast-moving AI innovation, and increasingly cloud-based IT environments would be key factors shaping the cyber threat landscape.

Looking ahead to 2025, we expect the total addressable market of cybercrime to expand as attackers add more tactics to their toolkits. Threat actors will continue to take advantage of the volatile geopolitical environment, and cybersecurity challenges will increasingly move to new frontiers like space. When it comes to AI, we anticipate that the innovation in AI agents seen in 2024 will pave the way for the rise of multi-agent systems in 2025, creating new challenges and opportunities for cybersecurity professionals and attackers alike.

Here are ten trends to watch for in 2025:

The overall Total Addressable Market (TAM) of cybercrime gets bigger

Cybercrime is a global business, and an increasingly lucrative one, scaling through the adoption of AI and cybercrime-as-a-service. Annual revenue from cybercrime is already estimated to be over $8 trillion, which we've found is almost 5x the combined revenue of the Magnificent Seven stocks. There are a few key factors driving this growth.

The ongoing growth of devices and systems means that existing malware families will continue to be successful. As of October 2024, it’s estimated that more than 5.52 billion people (~67%) have access to the internet and sources estimate 18.8 billion connected devices will be online by the end of 2024. The increasing adoption of AI is poised to drive even more interconnected systems as well as new data centers and infrastructure globally.

At the same time, more sophisticated capabilities are available for low-level attackers – we've already seen the trickle-down economic benefits of living off the land, edge infrastructure exploitation, and identity-focused exploitation. The availability of Ransomware-as-a-Service (RaaS) and Malware-as-a-Service (MaaS) makes more advanced tactics the norm. The subscription income that these groups generate enables more adversarial innovation, so attacks are getting faster and more effective, with even bigger financial ramifications.

While cross-border law enforcement has improved over the last year, the efficacy of these efforts remains to be seen, as cybercriminal gangs are also becoming more resilient and professionalized: they are building better backup systems and infrastructure, as well as more multi-national networks and supply chains.

Security teams need to prepare for the rise of AI agents and multi-agent systems

Throughout 2024, we’ve seen major announcements about advancements in AI agents from the likes of OpenAI, Microsoft, Salesforce, and more. In 2025, we’ll see increasing innovation in and adoption of AI agents as well as the emergence of multi-agent systems (or “agent swarms”), where groups of autonomous agents work together to tackle complex tasks.

The rise of AI agents and multi-agent systems will introduce new challenges in cybersecurity, including new attack vectors and vulnerabilities. Security teams need to think about how to protect these systems to prevent data poisoning, prompt injection, or social engineering attacks.

One benefit of multi-agent systems is that agents can autonomously communicate, collaborate, and interact. However, without clear and distinct boundaries and explicit permissions, this can also pose a major data privacy risk and open an avenue for manipulation. These issues cannot be addressed by traditional application testing alone. We must ensure these systems are secure by design, with robust protective mechanisms and data guardrails built into the foundations.
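To make "clear boundaries and explicit permissions" concrete, below is a minimal Python sketch of a default-deny message broker between agents. The agent names, actions, and policy structure are hypothetical illustrations, not a description of any particular agent framework.

```python
# A minimal sketch of explicit inter-agent permissions: every message between
# agents passes through a broker that checks an allow-list before delivery.
# All agent names and actions are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    sender: str
    recipient: str
    action: str      # e.g. "query_db", "send_email"
    payload: str

# Explicit policy: which (sender, recipient) pairs may perform which actions.
POLICY = {
    ("triage_agent", "enrichment_agent"): {"query_db"},
    ("enrichment_agent", "report_agent"): {"summarize"},
}

def deliver(msg: Message) -> bool:
    """Deliver a message only if the policy explicitly allows it (default-deny)."""
    allowed = POLICY.get((msg.sender, msg.recipient), set())
    if msg.action not in allowed:
        print(f"BLOCKED: {msg.sender} -> {msg.recipient} ({msg.action})")
        return False
    print(f"delivered: {msg.sender} -> {msg.recipient} ({msg.action})")
    return True

# A permitted hop succeeds; an unexpected hop toward an external API is denied.
deliver(Message("triage_agent", "enrichment_agent", "query_db", "host=10.0.0.5"))
deliver(Message("enrichment_agent", "external_api", "send_email", "draft..."))
```

The design point is that the policy lives outside the agents themselves, so a manipulated agent cannot grant itself new permissions through a crafted prompt.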

Threat actors will be the earliest adopters of AI agents and multi-agent systems

We’ve already seen how quickly threat actors have been able to adopt generative AI for tasks like email phishing and reconnaissance. The next frontier for threat actors will be AI agents and multi-agent systems that are specialized in autonomous tasks like surveillance, initial access brokering, privilege escalation, vulnerability exploitation, data summarization for smart exfiltration, and more. Because they have no concern for safe, secure, accurate, and responsible use, adversaries will adopt these systems faster than cyber defenders.

We could also start to see use cases emerge for multi-agent systems in cyber defense, with early potential in incident response, application testing, and vulnerability discovery. On the whole, security teams will be slower to adopt these systems than adversaries because of the need to put proper security guardrails in place and build trust over time.

There is heightened supply chain risk for Large Language Models (LLMs)

Training LLMs requires a lot of data, and many experts have warned that the world is running out of quality data for that training. As a result, there will be an increasing reliance on synthetic data, which can introduce new issues of accuracy and efficacy. Moreover, data supply chain risks will be an Achilles heel for organizations, with the potential introduction of vulnerabilities through the data and machine learning providers that they rely on. Poisoning one dataset could have huge trickle-down impacts across many different systems. Data security will be paramount in 2025.
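One partial but practical control is to treat third-party training data like any other software dependency and pin it to known-good checksums. The sketch below illustrates the idea in Python; the file names and hash values are hypothetical.

```python
# A minimal sketch of a data supply chain check: verify third-party training
# data against checksums pinned when the dataset was first vetted.
# File names and hash values are hypothetical, for illustration only.
import hashlib
from pathlib import Path

PINNED_MANIFEST = {
    "corpus_part1.jsonl": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(data_dir: Path) -> bool:
    """Refuse to train if any pinned file is missing or has drifted upstream."""
    ok = True
    for name, expected in PINNED_MANIFEST.items():
        path = data_dir / name
        if not path.exists() or sha256_of(path) != expected:
            print(f"REJECT: {name} is missing or was modified upstream")
            ok = False
    return ok

if __name__ == "__main__":
    if not verify(Path("./training_data")):
        raise SystemExit("data supply chain check failed; aborting training run")
```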

The race to identify software vulnerabilities intensifies

The time it takes for threat actors to exploit newly published CVEs is getting shorter, giving defenders an even smaller window to apply patches and remediations. A 2024 report from Cloudflare found that threat actors weaponized proof-of-concept exploits as quickly as 22 minutes after they were made public.

At the same time, 2024 also saw the first reports from researchers across academia and the tech industry using AI for vulnerability discovery in real-world code. With threat actors getting faster at exploiting vulnerabilities, defenders will need to use AI to identify vulnerabilities in their software stack and to help identify and prioritize remediations and patches.

Insider threat risks will force organizations to evolve zero trust strategies

In 2025, an increasingly volatile geopolitical situation and the intensity of the AI race will make insider threats an even bigger risk for businesses, forcing organizations to expand zero-trust strategies. The traditional zero-trust model provides protection from external threats to an organization’s network by requiring continuous verification of the devices and users attempting to access critical business systems, services, and information from multiple sources. However, as we have seen in the more recent Jack Teixeira case, malicious insiders can still do significant damage to an organization within their approved and authenticated boundary.

To close the remaining security gaps in a zero-trust architecture and mitigate the increasing risk of insider threats, organizations will need to add a behavioral-understanding dimension to their zero-trust approaches. The zero-trust best practice of "never trust, always verify" needs to evolve into "never trust, always verify, and continuously monitor."
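As a toy illustration of what "continuously monitor" adds on top of verification, the following sketch scores each action an already-authenticated user takes against that user's own historical baseline. The features and threshold are hypothetical; a real system would model far richer behavior.

```python
# A toy sketch of continuous behavioral monitoring: even after a user has
# authenticated, each resource access is scored against their own history.
# The rarity threshold is hypothetical, for illustration only.
from collections import Counter

class UserBaseline:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def observe(self, resource: str):
        self.counts[resource] += 1
        self.total += 1

    def rarity(self, resource: str) -> float:
        """0.0 = routine for this user, 1.0 = never seen before."""
        if self.total == 0:
            return 1.0
        return 1.0 - self.counts[resource] / self.total

baseline = UserBaseline()
for r in ["wiki", "email", "email", "crm", "wiki", "email"]:
    baseline.observe(r)  # learn what is normal for this user

# The user is fully authenticated, yet one access is unusual for them.
for resource in ["email", "payroll_db"]:
    score = baseline.rarity(resource)
    print(f"{resource}: rarity={score:.2f} ->",
          "FLAG for review" if score > 0.9 else "ok")
```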

Identity remains an expensive problem for businesses

2024 saw some of the biggest and costliest attacks – all because attackers had access to compromised credentials. Essentially, they had the key to the front door. Businesses still struggle with identity and access management (IAM), and it's getting more complex now that we're in the middle of a massive Software-as-a-Service (SaaS) migration driven by increasing rates of AI and cloud use across businesses.

This challenge is going to be exacerbated in 2025 by a few global and business factors. First, there is an increasing push for digital identities, such as the ongoing rollout of the EU Digital Identity Framework, which could introduce additional attack vectors. Second, as they scale, businesses are turning more and more to centralized identity and access solutions layered over decentralized infrastructure, relying on SaaS- and application-native security.

Increasing vulnerabilities at the edge

During the COVID-19 pandemic, many organizations had to stand up remote access solutions quickly – in a matter of days or weeks – without the high level of due diligence required to fully secure them. In 2025, we expect to see continued fallout as these quickly spun-up solutions start to present genuine vulnerabilities to businesses. We've already seen this start to play out in 2024 with the mass exploitation of internet-edge devices like firewalls and VPN gateway products.

By July 2024, Darktrace’s threat research team observed that the most widely exploited edge infrastructure devices were those related to Ivanti Connect Secure, JetBrains TeamCity, FortiClient Enterprise Management Server, and Palo Alto Networks PAN-OS. Across the industry, we’ve already seen many zero days and vulnerabilities exploiting these internet-connected devices, which provide inroads into the network and store/cache credentials and passwords of other users that are highly valuable for threat actors.

Hacking Operational Technology (OT) gets easier

Hacking OT is notoriously complex – causing damage requires an intimate knowledge of the specific systems being targeted and was historically the reserve of nation states. But as OT has become more reliant on and integrated with IT systems, attackers have stumbled on ways to cause disruption without the sophisticated attack-craft normally associated with nation-state groups. That's why some of the most disruptive attacks of the last year have come from hacktivists and financially motivated criminal gangs – such as the hijacking of internet-exposed Programmable Logic Controllers (PLCs) by anti-Israel hacking groups and ransomware attacks resulting in the cancellation of hospital operations.

In 2025, we expect to see an increase in cyber-physical disruption caused by threat groups motivated by political ideology or financial gain, bringing the OT threat landscape closer in complexity and scale to that of the IT landscape. The sectors most at risk are those with a strong reliance on IoT sensors, including healthcare, transportation, and manufacturing.

Securing space infrastructure and systems becomes a critical imperative

The global space industry is growing at an incredibly fast pace, and 2025 is on track to be another record-breaking year for spaceflight, with major missions and test flights planned by NASA, ESA, and CNSA, as well as the expected launch of the first commercial space station from Vast and programs from Blue Origin, Amazon, and more. Research from Analysys Mason suggests that 38,000 additional satellites will be built and launched by 2033 and that global space industry revenue will reach $1.7 trillion by 2032. Space has also been identified as a focus area for the incoming US administration.

In 2025, we expect to see new levels of tension emerge as private and public infrastructure increasingly intersect in space, shining a light on the lack of agreed-upon cyber norms and the growing challenge of protecting complex and remote space systems against modern cyber threats. Historically focused on securing earth-bound networks and environments, the space industry will face challenges as post-orbit threats rise, with satellites moving up the target list.

The EU's NIS2 Directive now recognizes the space sector as an essential entity subject to its strictest cybersecurity requirements. Will other jurisdictions follow suit? We expect global debates about cyber vulnerabilities in space to come to the forefront as we become more reliant on space-based technology.

Preparing for the future

Whatever 2025 brings, Darktrace is committed to providing robust cybersecurity leadership and solutions to enterprises around the world. Our team of subject matter experts will continue to monitor emerging threat trends, advising both our customers and our product development teams.

And for day-to-day security, our multi-layered AI cybersecurity platform can protect against all types of threats, whether they are known, unknown, entirely novel, or powered by AI. It accomplishes this by learning what is normal for your unique organization, therefore identifying unusual and suspicious behavior at machine speed, regardless of existing rules and signatures. In this way, organizations with Darktrace can be ready for any developments in the cybersecurity threat landscape that the new year may bring.

Discover more about AI cybersecurity in the white paper "CISO's Guide to Buying AI".

Author: The Darktrace Community

Behind the Veil: Darktrace's Detection of VPN Exploitation in SaaS Environments

November 27, 2024 | Inside the SOC

Introduction

In today's digital landscape, Software-as-a-Service (SaaS) platforms have become indispensable for businesses, offering unparalleled flexibility, scalability, and accessibility across locations. However, this convenience comes with a significant caveat: an expanded attack surface that cybercriminals are increasingly exploiting. In 2023, 96.7% of organizations reported security incidents involving at least one SaaS application [1].

Virtual private networks (VPNs) play a crucial role in SaaS security, acting as gateways for secure remote access and safeguarding sensitive data and systems when properly configured. However, vulnerabilities in VPNs can create openings for attackers to exploit, allowing them to infiltrate SaaS environments, compromise data, and disrupt business operations. Notably, in early 2024, the Darktrace Threat Research team investigated the exploitation of zero-day vulnerabilities in Ivanti Connect Secure VPNs, which could allow threat actors to gain access to sensitive systems and execute remote code.

More recently, in August, Darktrace identified a SaaS compromise where a threat actor logged into a customer’s VPN from an unusual IP address, following an initial email compromise. The attacker then used a separate VPN to create a new email rule designed to obfuscate the phishing campaign they would later launch.

Attack Overview

The initial attack vector in this case appeared to be through the customer’s email environment. A trusted external contact received a malicious email from another mutual contact who had been compromised and forwarded it to several of the organization’s employees, believing it to be legitimate. Attackers often send malicious emails from compromised accounts to their past contacts, leveraging the trust associated with familiar email addresses. In this case, that trust caused an external victim to unknowingly propagate the attack further. Unfortunately, an internal user then interacted with a malicious payload included in the reply section of the forwarded email.

Later the same day, Darktrace / IDENTITY detected unusual login attempts from the IP address 5.62.57[.]7, which no other SaaS user in the environment had logged in from before. There were two failed attempts prior to the successful logins, with the error messages “Authentication failed due to flow token expired” and “This occurred due to 'Keep me signed in' interrupt when the user was signing in.” These failed attempts indicate that the threat actor may have been attempting to gain unauthorized access using stolen credentials or exploiting session management vulnerabilities. Furthermore, there was no attempt to use multi-factor authentication (MFA) during the successful login, suggesting that the threat actor had compromised the account's credentials.

Following this, Darktrace detected the now compromised account creating a new email rule named “.” – a telltale sign of a malicious actor attempting to hide behind an ambiguous or generic rule name.

The email rule itself was designed to archive incoming emails and mark them as read, effectively hiding them from the user’s immediate view. By moving emails to the “Archive” folder, which is not frequently checked by end users, the attacker can conceal malicious communications and avoid detection. The settings also prevent any automatic deletion of the rules or forced overrides, indicating a cautious approach to maintaining control over the mailbox without raising suspicion. This technique allows the attacker to manipulate email visibility while maintaining a façade of normality in the compromised account.

Email Rule:

  • AlwaysDeleteOutlookRulesBlob: False
  • Force: False
  • MoveToFolder: Archive
  • Name: .
  • MarkAsRead: True
  • StopProcessingRules: True
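Defenders can hunt for rules with exactly these traits. The following Python sketch scores inbox rules against the indicators seen in this incident; the heuristics are our own illustrative assumptions, not Darktrace's detection logic.

```python
# A heuristic sketch for flagging suspicious inbox rules, based on the traits
# in this incident: an ambiguous name, auto-archiving, and marking mail as
# read. The scoring heuristics are hypothetical, for illustration only.
RARELY_CHECKED_FOLDERS = {"Archive", "RSS Feeds", "Deleted Items"}

def rule_indicators(rule: dict) -> list[str]:
    """Return the suspicion indicators a mailbox rule triggers."""
    hits = []
    name = rule.get("Name", "").strip()
    if len(name) <= 2:
        hits.append("ambiguous or near-empty rule name")
    if rule.get("MoveToFolder") in RARELY_CHECKED_FOLDERS:
        hits.append(f"moves mail to a rarely checked folder: {rule['MoveToFolder']}")
    if rule.get("MarkAsRead"):
        hits.append("marks incoming mail as read")
    if rule.get("StopProcessingRules"):
        hits.append("prevents other rules from processing")
    return hits

# The rule from this incident trips several indicators at once.
incident_rule = {
    "Name": ".",
    "MoveToFolder": "Archive",
    "MarkAsRead": True,
    "StopProcessingRules": True,
}
for hit in rule_indicators(incident_rule):
    print("suspicious:", hit)
```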

Darktrace further identified that this email rule had been created from another IP address, 95.142.124[.]42, this time located in Canada. Open-source intelligence (OSINT) sources indicated this endpoint may have been malicious [2].

Given that this new email rule was created just three minutes after the initial login from a different IP in a different country, Darktrace recognized a geographic inconsistency. By analyzing the timing and rarity of the involved IP addresses, Darktrace identified the likelihood of malicious activity rather than legitimate user behavior, prompting further investigation.

Figure 1: The compromised SaaS account making anomalous login attempts from an unusual IP address in the US, followed by the creation of a new email rule from another VPN IP in Canada.
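The underlying check is straightforward to illustrate. This minimal sketch flags two events from the same account whose implied travel speed is implausible; the coordinates, timestamps, and speed threshold below are hypothetical stand-ins for the incident's details.

```python
# A minimal impossible-travel check: two account events whose implied travel
# speed exceeds a plausible threshold are flagged as likely different actors.
# Coordinates, timestamps, and the threshold are hypothetical.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

MAX_PLAUSIBLE_KMH = 1000  # faster than any commercial flight

def impossible_travel(event_a: dict, event_b: dict) -> bool:
    dist_km = haversine_km(event_a["lat"], event_a["lon"],
                           event_b["lat"], event_b["lon"])
    hours = abs(event_b["ts"] - event_a["ts"]) / 3600
    return hours > 0 and dist_km / hours > MAX_PLAUSIBLE_KMH

# A login in the US, then a rule created from Canada three minutes later.
login = {"ts": 0,   "lat": 40.7, "lon": -74.0}  # hypothetical US location
rule  = {"ts": 180, "lat": 43.7, "lon": -79.4}  # hypothetical Canadian location
print("impossible travel?", impossible_travel(login, rule))  # True
```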

Just one minute later, Darktrace observed the attacker sending a large number of phishing emails to both internal and external recipients.

Figure 2: The compromised SaaS user account sending a high volume of outbound emails to new recipients or containing suspicious content.

Darktrace / EMAIL detected a significant spike in inbound emails for the compromised account, likely indicating replies to phishing emails.

Figure 3: The figure demonstrates the spike in inbound emails detected for the compromised account, including phishing-related replies.

Furthermore, Darktrace identified that these phishing emails contained a malicious DocSend link. While docsend[.]com is generally recognized as a legitimate file-sharing service belonging to Dropbox, it can be abused to host malicious content. In this instance, the DocSend link in question, ‘hxxps://docsend[.]com/view/h9t85su8njxtugmq’, was flagged as malicious by various OSINT vendors [3][4].

Figure 4: Phishing emails detected containing a malicious DocSend link.

In this case, Darktrace Autonomous Response was not in active mode in the customer's environment, which allowed the compromise to escalate until the security team intervened based on Darktrace's alerts. Had Autonomous Response been enabled, it could have quickly mitigated the threat by disabling the users and inbox rules exhibiting unusual behavior; instead, Darktrace suggested these as actions that could be applied manually.

Figure 5: Suggested Autonomous Response actions for this incident that required human confirmation.

Despite this, Darktrace’s Managed Threat Detection service promptly alerted the Security Operations Center (SOC) team about the compromise, allowing them to conduct a thorough investigation and inform the customer before any further damage could take place.

Conclusion

This incident highlights the role of Darktrace in enhancing cyber security through its advanced AI capabilities. By detecting the initial phishing email and tracking the threat actor's actions across the SaaS environment, Darktrace effectively identified the threat and brought it to the attention of the customer’s security team.

Darktrace’s proactive monitoring was crucial in recognizing the unusual behavior of the compromised account. Darktrace / IDENTITY detected unauthorized access attempts from rare IP addresses, revealing the attacker’s use of a VPN to hide their location.

Correlating these anomalies allowed Darktrace to prompt immediate investigation, showcasing its ability to identify malicious activities that traditional security tools might miss. By leveraging AI-driven insights, organizations can strengthen their defense posture and prevent further exploitation of compromised accounts.

Credit to Priya Thapa (Cyber Analyst), Ben Atkins (Senior Model Developer) and Ryan Traill (Analyst Content Lead)

Appendices

Real-time Detection Models

  • SaaS / Compromise / Unusual Login and New Email Rule
  • SaaS / Compromise / High Priority New Email Rule
  • SaaS / Compromise / New Email Rule and Unusual Email Activity
  • SaaS / Compromise / Unusual Login and Outbound Email Spam
  • SaaS / Compliance / Anomalous New Email Rule
  • SaaS / Compromise / Suspicious Login and Suspicious Outbound Email(s)
  • SaaS / Email Nexus / Possible Outbound Email Spam

Autonomous Response Models

  • Antigena / SaaS / Antigena Email Rule Block
  • Antigena / SaaS / Antigena Enhanced Monitoring from SaaS User Block
  • Antigena / SaaS / Antigena Suspicious SaaS Activity Block

MITRE ATT&CK Mapping

Technique – Tactic – ID – Sub-technique of

  • Cloud Accounts – DEFENSE EVASION, PERSISTENCE, PRIVILEGE ESCALATION, INITIAL ACCESS – T1078.004 – T1078
  • Compromise Accounts – RESOURCE DEVELOPMENT – T1586 – -
  • Email Accounts – RESOURCE DEVELOPMENT – T1586.002 – T1586
  • Internal Spearphishing – LATERAL MOVEMENT – T1534 – -
  • Outlook Rules – PERSISTENCE – T1137.005 – T1137
  • Phishing – INITIAL ACCESS – T1566 – -

Indicators of Compromise (IoCs)

IoC – Type – Description

5.62.57[.]7 – IP – Unusual Login Source

95.142.124[.]42 – IP – Unusual Source for Email Rule

hxxps://docsend[.]com/view/h9t85su8njxtugmq – URL – Phishing Link

References

[1] https://wing.security/wp-content/uploads/2024/02/2024-State-of-SaaS-Report-Wing-Security.pdf

[2] https://www.virustotal.com/gui/ip-address/95.142.124.42

[3] https://urlscan.io/result/0caf3eee-9275-4cda-a28f-6d3c6c3c1039/

[4] https://www.virustotal.com/gui/url/8631f8004ee000b3f74461e5060e6972759c8d38ea8c359d85da9014101daddb


Why artificial intelligence is the future of cybersecurity

November 27, 2024

Introduction: AI & Cybersecurity

As artificial intelligence (AI) becomes more commonplace, it's no surprise that threat actors are adopting AI in their attacks at an accelerated pace. AI enables the augmentation of complex tasks such as spear-phishing, deepfakes, polymorphic malware generation, and advanced persistent threat (APT) campaigns, significantly enhancing the sophistication and scale of attackers' operations. This has put security professionals in a reactive state, struggling to keep pace with the proliferation of threats.

As AI reshapes the future of cyber threats, defenders are also looking to integrate AI technologies into their security stack. Adopting AI-powered solutions in cybersecurity enables security teams to detect and respond to these advanced threats more quickly and accurately, as well as automate traditionally manual and routine tasks. According to research from Darktrace's 2024 State of AI Cybersecurity Report, improving threat detection, identifying exploitable vulnerabilities, and automating low-level security tasks were the top three ways practitioners saw AI enhancing their security team's capabilities [1], underscoring the wide-ranging capabilities of AI in cyber.

In this blog, we will discuss how AI has impacted the threat landscape, the rise of generative AI and AI adoption in security tools, and the importance of using multiple types of AI in cybersecurity solutions for a holistic and proactive approach to keeping your organization safe.  

The impact of AI on the threat landscape

The integration of AI and cybersecurity has brought about significant advancements across industries. However, it also introduces new security risks that challenge traditional defenses. Three major concerns around adversarial misuse of AI are: (1) an increase in novel social engineering attacks that are harder to detect and able to bypass traditional security tools, (2) easier access for less experienced threat actors to deliver advanced attacks at speed and scale, and (3) attacks on AI itself, including machine learning models, data corpora, and APIs or interfaces.

In the context of social engineering, AI can be used to create more convincing phishing emails, conduct advanced reconnaissance, and simulate human-like interactions to deceive victims more effectively. Generative AI tools, such as ChatGPT, are already being used by adversaries to craft these sophisticated phishing emails, which can more aptly mimic human semantics without spelling or grammatical errors and can include personal information pulled from internet sources such as social media profiles. All of this can be done at machine speed and scale. In fact, Darktrace researchers observed a 135% rise in ‘novel social engineering attacks’ across Darktrace / EMAIL customers in 2023, corresponding with the widespread adoption and use of ChatGPT [2].

Furthermore, these sophisticated social engineering attacks are now able to circumvent traditional security tools. Between December 21, 2023, and July 5, 2024, Darktrace / EMAIL detected 17.8 million phishing emails across its fleet, with 62% of these successfully bypassing Domain-based Message Authentication, Reporting, and Conformance (DMARC) verification checks [2].

And while the proliferation of novel attacks fueled by AI is persisting, AI also lowers the barrier to entry for threat actors. Publicly available AI tools make it easy for adversaries to automate complex tasks that previously required advanced technical skills. Additionally, AI-driven platforms and phishing kits available on the dark web provide ready-made solutions, enabling even novice attackers to execute effective cyber campaigns with minimal effort.

The impact of adversarial use of AI on the ever-evolving threat landscape is important for organizations to understand as it fundamentally changes the way we must approach cybersecurity. However, while the intersection of cybersecurity and AI can have potentially negative implications, it is important to recognize that AI can also be used to help protect us.

A generation of generative AI in cybersecurity

When the topic of AI in cybersecurity comes up, it’s typically in reference to generative AI, which became popularized in 2023. While it does not solely encapsulate what AI cybersecurity is or what AI can do in this space, it’s important to understand what generative AI is and how it can be implemented to help organizations get ahead of today’s threats.  

Generative AI (e.g., ChatGPT or Microsoft Copilot) is a type of AI that creates new or original content. It has the capability to generate images, videos, or text based on information it learns from large datasets. These systems use advanced algorithms and deep learning techniques to understand patterns and structures within the data they are trained on, enabling them to generate outputs that are coherent, contextually relevant, and often indistinguishable from human-created content.

For security professionals, generative AI offers some valuable applications. Primarily, it’s used to transform complex security data into clear and concise summaries. By analyzing vast amounts of security logs, alerts, and technical data, it can contextualize critical information quickly and present findings in natural, comprehensible language. This makes it easier for security teams to understand critical information quickly and improves communication with non-technical stakeholders. Generative AI can also automate the creation of realistic simulations for training purposes, helping security teams prepare for various cyberattack scenarios and improve their response strategies.  

Despite its advantages, generative AI also has limitations that organizations must consider. One challenge is the potential for generating false positives, where benign activities are mistakenly flagged as threats, which can overwhelm security teams with unnecessary alerts. Moreover, implementing generative AI requires significant computational resources and expertise, which may be a barrier for some organizations. It can also be susceptible to prompt injection attacks, and there are risks of intellectual property or sensitive data being leaked when using publicly available generative AI tools. In fact, according to the MIT AI Risk Repository, there are potentially over 700 risks that need to be mitigated with the use of generative AI.


For more information on generative AI's impact on the cyber threat landscape, download the Darktrace Data Sheet.

Beyond the Generative AI Glass Ceiling

Generative AI has a place in cybersecurity, but security professionals are starting to recognize that it’s not the only AI organizations should be using in their security tool kit. In fact, according to Darktrace’s State of AI Cybersecurity Report, “86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.” As we look toward the future of AI in cybersecurity, it’s critical to understand that different types of AI have different strengths and use cases and choosing the technologies based on your organization’s specific needs is paramount.

There are a few types of AI used in cybersecurity that serve different functions. These include:

Supervised Machine Learning: Widely used in cybersecurity due to its ability to learn from labeled datasets. These datasets include historical threat intelligence and known attack patterns, allowing the model to recognize and predict similar threats in the future. For example, supervised machine learning can be applied to email filtering systems to identify and block phishing attempts by learning from past phishing emails. This is human-led training facilitating automation based on known information.  

Large Language Models (LLMs): Deep learning models trained on extensive datasets to understand and generate human-like text. LLMs can analyze vast amounts of text data, such as security logs, incident reports, and threat intelligence feeds, to identify patterns and anomalies that may indicate a cyber threat. They can also generate detailed and coherent reports on security incidents, summarizing complex data into understandable formats.

Natural Language Processing (NLP): Involves the application of computational techniques to process and understand human language. In cybersecurity, NLP can be used to analyze and interpret text-based data, such as emails, chat logs, and social media posts, to identify potential threats. For instance, NLP can help detect phishing attempts by analyzing the language used in emails for signs of deception.

Unsupervised Machine Learning: Continuously learns from raw, unstructured data without predefined labels. It is particularly useful in identifying new and unknown threats by detecting anomalies that deviate from normal behavior. In cybersecurity, unsupervised learning can be applied to network traffic analysis to identify unusual patterns that may indicate a cyberattack. It can also be used in endpoint detection and response (EDR) systems to uncover previously unknown malware by recognizing deviations from typical system behavior.

Figure 1: Types of AI in cybersecurity
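To ground the unsupervised approach described above, here is a minimal sketch using scikit-learn's IsolationForest to flag an anomalous connection among synthetic network-traffic features, with no labels involved; the features, values, and contamination setting are invented for illustration.

```python
# A minimal sketch of unsupervised anomaly detection on network traffic:
# no labels, just "what deviates from this environment's normal?".
# All features and traffic values are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per connection: [bytes_out, duration_seconds, distinct_ports]
normal = np.column_stack([
    rng.normal(5_000, 1_000, 500),  # typical upload volume
    rng.normal(30, 10, 500),        # typical session length
    rng.integers(1, 4, 500),        # few ports touched
])
# One connection resembling data exfiltration plus port scanning.
suspicious = np.array([[500_000, 600.0, 40]])
X = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # +1 = normal, -1 = anomaly

print("anomalous row indices:", np.where(labels == -1)[0])  # includes row 500
```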

Employing multiple types of AI in cybersecurity is essential for creating a layered and adaptive defense strategy. Each type of AI, from supervised and unsupervised machine learning to large language models (LLMs) and natural language processing (NLP), brings distinct capabilities that address different aspects of cyber threats. Supervised learning excels at recognizing known threats, while unsupervised learning uncovers new anomalies. LLMs and NLP enhance the analysis of textual data for threat detection and response and aid in understanding and mitigating social engineering attacks. By integrating these diverse AI technologies, organizations can achieve a more holistic and resilient cybersecurity framework, capable of adapting to the ever-evolving threat landscape.

A Multi-Layered AI Approach with Darktrace

AI-powered security solutions are emerging as a crucial line of defense against an AI-powered threat landscape. In fact, “Most security stakeholders (71%) are confident that AI-powered security solutions will be better able to block AI-powered threats than traditional tools.” And 96% agree that AI-powered solutions will level up their organization’s defenses.  As organizations look to adopt these tools for cybersecurity, it’s imperative to understand how to evaluate AI vendors to find the right products as well as build trust with these AI-powered solutions.  

Darktrace, a leader in AI cybersecurity since 2013, emphasizes interpretability, explainability, and user control, ensuring that our AI is understandable, customizable and transparent. Darktrace’s approach to cyber defense is rooted in the belief that the right type of AI must be applied to the right use cases. Central to this approach is Self-Learning AI, which is crucial for identifying novel cyber threats that most other tools miss. This is complemented by various AI methods, including LLMs, generative AI, and supervised machine learning, to support the Self-Learning AI.  

Darktrace focuses on where AI can best augment the people in a security team and where it can be used responsibly to have the most positive impact on their work. With a combination of these AI techniques, applied to the right use cases, Darktrace enables organizations to tailor their AI defenses to unique risks, providing extended visibility across their entire digital estates with the Darktrace ActiveAI Security Platform™.

Credit to Ed Metcalf (Senior Director of Product Marketing, AI & Innovations) and Nicole Carignan (VP of Strategic Cyber AI) for their contributions to this blog.


To learn more about Darktrace and AI in cybersecurity, download the CISO's Guide to Cyber AI.

Download the white paper to learn how buyers should approach purchasing AI-based solutions. It includes:

  • Key steps for selecting AI cybersecurity tools
  • Questions to ask and responses to expect from vendors
  • An overview of the tools available and how to find the right fit
  • Guidance on ensuring AI investments align with security goals and needs
About the author
Brittany Woodsmall, Product Marketing Manager, Attack Surface Management