Blog / AI / November 25, 2024

Why Artificial Intelligence is the Future of Cybersecurity

This blog explores the impact of AI on the threat landscape, the benefits of AI in cybersecurity, and the role it plays in enhancing security practices and tools.

Introduction: AI & Cybersecurity

As artificial intelligence (AI) becomes more commonplace, it's no surprise that threat actors are adopting AI in their attacks at an accelerated pace. AI lets them augment complex tasks such as spear-phishing, deepfakes, polymorphic malware generation, and advanced persistent threat (APT) campaigns, significantly enhancing the sophistication and scale of their operations. This has put security professionals in a reactive state, struggling to keep pace with the proliferation of threats.

As AI reshapes the future of cyber threats, defenders are also looking to integrate AI technologies into their security stack. Adopting AI-powered solutions in cybersecurity enables security teams to detect and respond to these advanced threats more quickly and accurately, as well as automate traditionally manual and routine tasks. According to research done by Darktrace in the 2024 State of AI Cybersecurity Report, improving threat detection, identifying exploitable vulnerabilities, and automating low-level security tasks were the top three ways practitioners saw AI enhancing their security team's capabilities [1], underscoring the wide-ranging applications of AI in cyber.

In this blog, we will discuss how AI has impacted the threat landscape, the rise of generative AI and AI adoption in security tools, and the importance of using multiple types of AI in cybersecurity solutions for a holistic and proactive approach to keeping your organization safe.  

The impact of AI on the threat landscape

The integration of AI and cybersecurity has brought significant advancements across industries. However, it also introduces new security risks that challenge traditional defenses. Three major concerns around adversarial misuse of AI are: (1) an increase in novel social engineering attacks that are harder to detect and able to bypass traditional security tools, (2) easier access for less experienced threat actors, who can now deliver advanced attacks at speed and scale, and (3) attacks on AI itself, including machine learning models, data corpora, and APIs or interfaces.

In the context of social engineering, AI can be used to create more convincing phishing emails, conduct advanced reconnaissance, and simulate human-like interactions to deceive victims more effectively. Generative AI tools, such as ChatGPT, are already being used by adversaries to craft sophisticated phishing emails, which can more aptly mimic human semantics without spelling or grammatical errors and include personal information pulled from internet sources such as social media profiles. And this can all be done at machine speed and scale. In fact, Darktrace researchers observed a 135% rise in 'novel social engineering attacks' across Darktrace / EMAIL customers in 2023, corresponding with the widespread adoption of ChatGPT [2].

Furthermore, these sophisticated social engineering attacks are now able to circumvent traditional security tools. Between December 21, 2023 and July 5, 2024, Darktrace / EMAIL detected 17.8 million phishing emails across the fleet, with 62% successfully bypassing Domain-based Message Authentication, Reporting, and Conformance (DMARC) verification checks [2].
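For readers unfamiliar with what a DMARC check actually consults: a domain publishes its policy as a DNS TXT record at `_dmarc.<domain>`, and receiving mail servers use its tags to decide how to treat mail that fails SPF/DKIM alignment. The sketch below, using an invented example record, shows what such a policy looks like and how its tags are parsed:

```python
# Illustrative sketch: parsing a DMARC policy record (RFC 7489 tag syntax).
# The record string below is an invented example for demonstration.

def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record string into a tag -> value dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # quarantine: failing mail should be treated as suspicious
```

The `p` tag (none, quarantine, or reject) is the policy attackers must contend with, which is why mail that passes or sidesteps these checks, as in the statistic above, is so valuable to them.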

And while the proliferation of novel attacks fueled by AI is persisting, AI also lowers the barrier to entry for threat actors. Publicly available AI tools make it easy for adversaries to automate complex tasks that previously required advanced technical skills. Additionally, AI-driven platforms and phishing kits available on the dark web provide ready-made solutions, enabling even novice attackers to execute effective cyber campaigns with minimal effort.

The impact of adversarial use of AI on the ever-evolving threat landscape is important for organizations to understand as it fundamentally changes the way we must approach cybersecurity. However, while the intersection of cybersecurity and AI can have potentially negative implications, it is important to recognize that AI can also be used to help protect us.

A generation of generative AI in cybersecurity

When the topic of AI in cybersecurity comes up, it’s typically in reference to generative AI, which became popularized in 2023. While it does not solely encapsulate what AI cybersecurity is or what AI can do in this space, it’s important to understand what generative AI is and how it can be implemented to help organizations get ahead of today’s threats.  

Generative AI (e.g., ChatGPT or Microsoft Copilot) is a type of AI that creates new or original content. It has the capability to generate images, videos, or text based on information it learns from large datasets. These systems use advanced algorithms and deep learning techniques to understand patterns and structures within the data they are trained on, enabling them to generate outputs that are coherent, contextually relevant, and often indistinguishable from human-created content.

For security professionals, generative AI offers some valuable applications. Primarily, it's used to transform complex security data into clear and concise summaries. By analyzing vast amounts of security logs, alerts, and technical data, it can contextualize critical information quickly and present findings in natural, comprehensible language. This helps security teams grasp what matters faster and improves communication with non-technical stakeholders. Generative AI can also automate the creation of realistic simulations for training purposes, helping security teams prepare for various cyberattack scenarios and improve their response strategies.

Despite its advantages, generative AI also has limitations that organizations must consider. One challenge is the potential for generating false positives, where benign activities are mistakenly flagged as threats, which can overwhelm security teams with unnecessary alerts. Moreover, implementing generative AI requires significant computational resources and expertise, which may be a barrier for some organizations. It can also be susceptible to prompt injection attacks, and there are risks of intellectual property or sensitive data being leaked when using publicly available generative AI tools. In fact, according to the MIT AI Risk Repository, there are potentially over 700 risks that need to be mitigated with the use of generative AI.


For more information on generative AI's impact on the cyber threat landscape download the Darktrace Data Sheet

Beyond the Generative AI Glass Ceiling

Generative AI has a place in cybersecurity, but security professionals are starting to recognize that it's not the only AI organizations should be using in their security toolkit. In fact, according to Darktrace's State of AI Cybersecurity Report, "86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats." As we look toward the future of AI in cybersecurity, it's critical to understand that different types of AI have different strengths and use cases, and choosing technologies based on your organization's specific needs is paramount.

There are a few types of AI used in cybersecurity that serve different functions. These include:

Supervised Machine Learning: Widely used in cybersecurity due to its ability to learn from labeled datasets. These datasets include historical threat intelligence and known attack patterns, allowing the model to recognize and predict similar threats in the future. For example, supervised machine learning can be applied to email filtering systems to identify and block phishing attempts by learning from past phishing emails. This is human-led training facilitating automation based on known information.  

Large Language Models (LLMs): Deep learning models trained on extensive datasets to understand and generate human-like text. LLMs can analyze vast amounts of text data, such as security logs, incident reports, and threat intelligence feeds, to identify patterns and anomalies that may indicate a cyber threat. They can also generate detailed and coherent reports on security incidents, summarizing complex data into understandable formats.

Natural Language Processing (NLP): Involves the application of computational techniques to process and understand human language. In cybersecurity, NLP can be used to analyze and interpret text-based data, such as emails, chat logs, and social media posts, to identify potential threats. For instance, NLP can help detect phishing attempts by analyzing the language used in emails for signs of deception.

Unsupervised Machine Learning: Continuously learns from raw, unstructured data without predefined labels. It is particularly useful in identifying new and unknown threats by detecting anomalies that deviate from normal behavior. In cybersecurity, unsupervised learning can be applied to network traffic analysis to identify unusual patterns that may indicate a cyberattack. It can also be used in endpoint detection and response (EDR) systems to uncover previously unknown malware by recognizing deviations from typical system behavior.
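The core distinction between the supervised and unsupervised approaches above can be sketched in a few lines of plain Python. This is a deliberately toy illustration, not a production model: the labeled emails, traffic numbers, and thresholds are all invented, and real systems use far richer features and algorithms.

```python
# Toy contrast: supervised learning from labeled examples vs. unsupervised
# anomaly detection against a learned baseline. All data here is invented.
from collections import Counter
import statistics

# --- Supervised: learn from labeled examples of phishing vs. benign mail ---
labeled = [
    ("verify your account password urgent", "phish"),
    ("urgent wire transfer verify now", "phish"),
    ("meeting notes attached for review", "benign"),
    ("lunch order for the team", "benign"),
]
word_counts = {"phish": Counter(), "benign": Counter()}
for text, label in labeled:
    word_counts[label].update(text.split())

def classify(text: str) -> str:
    """Label new text by which class its words were seen in more often."""
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

# --- Unsupervised: flag deviations from a learned baseline, no labels ---
daily_bytes = [1020, 980, 1010, 995, 1005, 990, 1000]  # a host's normal traffic
mean = statistics.mean(daily_bytes)
stdev = statistics.stdev(daily_bytes)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from normal."""
    return abs(observed - mean) / stdev > threshold

print(classify("urgent please verify your password"))  # phish
print(is_anomalous(50_000))  # True: an exfiltration-sized spike
```

Note what each half needs: the classifier cannot label anything it has no training data for, while the baseline detector flags any deviation, including attacks never seen before, which is why the two are complementary rather than interchangeable.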

Figure 1: Types of AI in cybersecurity

Employing multiple types of AI in cybersecurity is essential for creating a layered and adaptive defense strategy. Each type of AI, from supervised and unsupervised machine learning to large language models (LLMs) and natural language processing (NLP), brings distinct capabilities that address different aspects of cyber threats. Supervised learning excels at recognizing known threats, while unsupervised learning uncovers new anomalies. LLMs and NLP enhance the analysis of textual data for threat detection and response and aid in understanding and mitigating social engineering attacks. By integrating these diverse AI technologies, organizations can achieve a more holistic and resilient cybersecurity framework, capable of adapting to the ever-evolving threat landscape.

A Multi-Layered AI Approach with Darktrace

AI-powered security solutions are emerging as a crucial line of defense against an AI-powered threat landscape. In fact, according to the same report, "most security stakeholders (71%) are confident that AI-powered security solutions will be better able to block AI-powered threats than traditional tools," and 96% agree that AI-powered solutions will level up their organization's defenses. As organizations look to adopt these tools for cybersecurity, it's imperative to understand how to evaluate AI vendors to find the right products, as well as build trust with these AI-powered solutions.

Darktrace, a leader in AI cybersecurity since 2013, emphasizes interpretability, explainability, and user control, ensuring that our AI is understandable, customizable and transparent. Darktrace’s approach to cyber defense is rooted in the belief that the right type of AI must be applied to the right use cases. Central to this approach is Self-Learning AI, which is crucial for identifying novel cyber threats that most other tools miss. This is complemented by various AI methods, including LLMs, generative AI, and supervised machine learning, to support the Self-Learning AI.  

Darktrace focuses on where AI can best augment the people in a security team and where it can be used responsibly to have the most positive impact on their work. With a combination of these AI techniques, applied to the right use cases, Darktrace enables organizations to tailor their AI defenses to unique risks, providing extended visibility across their entire digital estates with the Darktrace ActiveAI Security Platform™.

Credit to Ed Metcalf, Senior Director of Product Marketing, AI & Innovations, and Nicole Carignan, VP of Strategic Cyber AI, for their contributions to this blog.


To learn more about Darktrace and AI in cybersecurity download the CISO’s Guide to Cyber AI here.

Download the white paper to learn how buyers should approach purchasing AI-based solutions. It includes:

  • Key steps for selecting AI cybersecurity tools
  • Questions to ask vendors and the responses to expect
  • An overview of the tools available to help find the right fit
  • Guidance on aligning AI investments with security goals and needs
Author
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface

Blog / OT / March 25, 2025

Darktrace Recognized as the Only Visionary in the 2025 Gartner® Magic Quadrant™ for CPS Protection Platforms


We are thrilled to announce that Darktrace has been named the only Visionary in the inaugural Gartner® Magic Quadrant™ for Cyber-Physical Systems (CPS) Protection Platforms. We feel this recognition highlights Darktrace's AI-driven approach to securing industrial environments, where conventional security solutions struggle to keep pace with increasing cyber threats.

A milestone for CPS security

It's our opinion that the first-ever Gartner Magic Quadrant for CPS Protection Platforms reflects a growing industry shift toward purpose-built security solutions for critical infrastructure. As organizations integrate IT, OT, and cloud-connected systems, the cyber risk landscape continues to expand. Gartner evaluated 17 vendors based on their Ability to Execute and Completeness of Vision, establishing a benchmark for security leaders looking to enhance cyber resilience in industrial environments.

We believe the Gartner recognition of Darktrace as the only Visionary reaffirms the platform’s ability to proactively defend against cyber risks through AI-driven anomaly detection, autonomous response, and risk-based security strategies. With increasingly sophisticated attacks targeting industrial control systems, organizations need a solution that continuously evolves to defend against both known and unknown threats.

AI-driven security for CPS environments

Securing CPS environments requires an approach that adapts to the dynamic nature of industrial operations. Traditional security tools rely on static signatures and predefined rules, leaving gaps in protection against novel and sophisticated threats. Darktrace / OT takes a different approach, leveraging Self-Learning AI to detect and neutralize threats in real time, even in air-gapped or highly regulated environments.

Darktrace / OT continuously analyzes network behaviors to establish a deep understanding of what is “normal” for each industrial environment. This enables it to autonomously identify deviations that signal potential cyber threats, providing early warning and proactive defense before attacks can disrupt operations. Unlike rule-based security models that require constant manual updates, Darktrace / OT improves with the environment, ensuring long-term resilience against emerging cyber risks.
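The generic idea behind learning "normal" from a live stream without storing full history can be sketched with a textbook technique, Welford's online algorithm. To be clear, this is not Darktrace's proprietary Self-Learning AI; it is a minimal illustration of online baselining with invented traffic numbers, showing how a model can both flag deviations and keep adapting:

```python
# A minimal sketch of online baselining: incrementally learn a metric's
# mean/variance from a stream (Welford's algorithm) and flag outliers.
# Illustrative only; the traffic readings below are invented.

class OnlineBaseline:
    """Incrementally track mean/variance of a metric and flag outliers."""

    def __init__(self, threshold: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean
        self.threshold = threshold

    def observe(self, x: float) -> bool:
        """Return True if x deviates from the baseline learned so far,
        then fold x into the baseline (the model keeps adapting)."""
        anomalous = False
        if self.n >= 2:
            stdev = (self.m2 / (self.n - 1)) ** 0.5
            if stdev > 0 and abs(x - self.mean) / stdev > self.threshold:
                anomalous = True
        # Welford's update: numerically stable running mean and variance
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

baseline = OnlineBaseline()
normal_traffic = [100, 104, 98, 101, 103, 99, 102, 100]
flags = [baseline.observe(x) for x in normal_traffic]
print(any(flags))             # False: normal readings just build the baseline
print(baseline.observe(900))  # True: a sudden spike stands out
```

Because the baseline updates with every observation, no signature database or manual rule update is needed; the point of contrast with rule-based tools is that "normal" is learned per environment rather than defined in advance.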

Bridging the IT-OT security gap

A major challenge for organizations protecting CPS environments is the disconnect between IT and OT security. While IT security has traditionally focused on data protection and compliance, OT security is driven by operational uptime and safety, leading to siloed security programs that leave critical gaps in visibility and response.

Darktrace / OT eliminates these silos by providing unified visibility across IT, OT, and IoT assets, ensuring that security teams have a complete picture of their attack surface. Its AI-driven approach enables cross-domain threat detection, recognizing risks that move laterally between IT and OT environments. By seamlessly integrating with existing security architectures, Darktrace / OT helps organizations close security gaps without disrupting industrial processes.

Proactive OT risk management and resilience

Beyond detection and response, Darktrace / OT strengthens organizations’ ability to manage cyber risk proactively. By mapping vulnerabilities to real-world attack paths, it prioritizes remediation actions based on actual exploitability and business impact, rather than relying on isolated CVE scores. This risk-based approach enables security teams to focus resources where they matter most, reducing overall exposure to cyber threats.

With autonomous threat response capabilities, Darktrace / OT not only identifies risks but also contains them in real time, preventing attackers from escalating intrusions. Whether mitigating ransomware, insider threats, or sophisticated nation-state attacks, Darktrace / OT ensures that industrial environments remain secure, operational, and resilient, no matter how threats evolve.

AI-powered incident response and SOC automation

Security teams are facing an overwhelming volume of alerts, making it difficult to prioritize threats and respond effectively. Darktrace / OT’s Cyber AI Analyst acts as a force multiplier for security teams by automating threat investigation, alert triage, and response actions. By mimicking the workflow of a human SOC analyst, Cyber AI Analyst provides contextual insights that accelerate incident response and reduce the manual workload on security teams.

With 24/7 autonomous monitoring, Darktrace / OT ensures that threats are continuously detected and investigated in real time. Whether facing ransomware, insider threats, or sophisticated nation-state attacks, organizations can rely on AI-driven security to contain threats before they disrupt operations.

Trusted by customers: Darktrace / OT recognized in Gartner Peer Insights


Beyond our recognition in the Gartner Magic Quadrant, we feel Darktrace / OT is one of the highest-rated CPS security solutions on Gartner Peer Insights, reflecting strong customer trust and validation. With a 4.9/5 overall rating and the highest "Willingness to Recommend" score among CPS vendors, organizations across critical infrastructure and industrial sectors recognize the impact of our AI-driven security approach. Source: Gartner Peer Insights (Oct 28th)

This strong customer endorsement underscores why leading enterprises trust Darktrace / OT to secure their CPS environments today and in the future.

Redefining the future of CPS security

It's our view that Darktrace’s recognition as the only Visionary in the Gartner Magic Quadrant for CPS Protection Platforms validates its leadership in next-generation industrial security. As cyber threats targeting critical infrastructure continue to rise, organizations must adopt AI-driven security solutions that can adapt, respond, and mitigate risks in real time.

We believe this recognition reinforces our commitment to innovation and our mission to secure the world's most essential systems.

Download the full Gartner Magic Quadrant for CPS Protection Platforms.

Request a demo to see Darktrace / OT in action.

Gartner, Magic Quadrant for CPS Protection Platforms, Katell Thielemann, Wam Voster, Ruggero Contu, 12 February 2025.

Gartner does not endorse any vendor, product or service depicted in its research publications and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner and Magic Quadrant and Peer Insights are a registered trademark, of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. Gartner Peer Insights content consists of the opinions of individual end users based on their own experiences with the vendors listed on the platform, should not be construed as statements of fact, nor do they represent the views of Gartner or its affiliates. Gartner does not endorse any vendor, product or service depicted in this content nor makes any warranties, expressed or implied, with respect to this content, about its accuracy or completeness, including any warranties of merchantability or fitness for a particular purpose.

About the author
Pallavi Singh
Product Marketing Manager, OT Security & Compliance

Blog / AI / March 25, 2025

Survey Findings: AI Cybersecurity Priorities and Objectives in 2025


AI is changing the cybersecurity field, both on the offensive and defensive sides. We surveyed over 1,500 cybersecurity professionals from around the world to uncover their attitudes, understanding, and priorities when it comes to AI cybersecurity in 2025. Our full report, unearthing some telling trends, is available now.  

Download the full report to explore these findings in depth

It is clear that security professionals know their field is changing fast, and that AI will continue to influence those changes. Our survey results show that they are aware that the rise of AI will require them to adopt new tools and learn to use them effectively. Still, they aren’t always certain about how to plan for the future, or what to invest in.

The top priorities of security stakeholders for improving their defenses against AI-powered threats include augmenting their existing tool stacks with AI-powered solutions and improving integration among their security tools.

Figure 1: Year-over-year changes to the priorities of security stakeholders.

Increasing cybersecurity staff

As was also the case last year, security stakeholders are less interested in hiring additional staff than in adding new AI-powered tools to their existing security stacks, with only 11% (and only 8% of executives) planning to increase cybersecurity staff in 2025.

This suggests that leaders are looking for new methods to overcome talent resource shortages.

Adding AI-powered security tools to supplement existing solutions

Executives are particularly enthusiastic about adopting AI-driven tools. Within that goal, there is consensus about the qualities cyber professionals are looking for when purchasing new security capabilities or replacing existing products.

  • 87% of survey respondents prefer solutions that are part of a broader platform over individual point products

These results are similar to last year’s, where again, almost nine out of ten agreed that a platform-oriented security solution was more effective at stopping cyber threats than a collection of individual products.

  • 88% of survey respondents agree that the use of AI within the security stack is critical to freeing up time for security teams to become more proactive, compared to reactive

AI itself can contribute to this shift from reactive to proactive security, improving risk prioritization and automating preventative strategies like Attack Surface Management (ASM) and proactive exposure management.

  • 84% of survey respondents prefer defensive AI solutions that do not require the organization’s data to be shared externally

This preference may reflect increasing attention to the data privacy and security risks posed by generative AI (gen AI) adoption. It may also reflect growing awareness of data residency requirements and other restrictions that regulators are imposing.

Improving cybersecurity awareness training for end users

Based on the survey results, SecOps practitioners are particularly interested in improving security awareness training.

This goal is not necessarily mutually exclusive from the addition of AI tools. For example, teams can leverage AI to build more effective security awareness training programs, and as gen AI tools are adopted, users will need to be taught about data privacy and associated security risks.

Looking towards the future

One conclusion we can draw from the attitudinal shifts from last year’s survey to this year’s: while hiring more security staff might be a nice-to-have, implementing AI-powered tools so that existing employees can work smarter is increasingly viewed as a must-have.

However, trending goals are not just about managing resources, whether headcount or AI investments, to keep up with workloads. Existing end users must also be trained to follow safe practices while using established and newly adopted tools.

Security professionals, including executives, SecOps, and every role in between, continue to shift their identified challenges and priorities as they gear up for the coming year in the Era of AI.


The full report for Darktrace’s State of AI Cybersecurity is out now. Download the paper to dig deeper into these trends, and see how results differ by industry, region, organization size, and job title.  

About the author
The Darktrace Community