July 24, 2024

The State of AI in Cybersecurity: Understanding AI Technologies

Part 4: This blog explores the findings from Darktrace’s State of AI Cybersecurity Report on security professionals' understanding of the different types of AI used in security programs. Get the latest insights into the evolving challenges, growing demand for skilled professionals, and the need for integrated security solutions by downloading the full report.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
The Darktrace Community

About the State of AI Cybersecurity Report

Darktrace surveyed 1,800 CISOs, security leaders, administrators, and practitioners from industries around the globe. Our research was conducted to understand how organizations are managing the adoption of new AI-powered offensive and defensive cybersecurity technologies.

This blog continues the conversation from “The State of AI in Cybersecurity: Unveiling Global Insights from 1,800 Security Practitioners,” focusing on security professionals’ understanding of AI technologies in cybersecurity tools.

To download the full report, click here.

How familiar are security professionals with supervised machine learning?

Just 31% of security professionals report that they are “very familiar” with supervised machine learning.

Many participants admitted unfamiliarity with various AI types. Fewer than one-third felt "very familiar" with the technologies surveyed: only 31% with supervised machine learning and 28% with natural language processing (NLP).

Most participants were "somewhat" familiar, ranging from 46% for supervised machine learning to 36% for generative adversarial networks (GANs). Executives and those in larger organizations reported the highest familiarity.

Combining "very" and "somewhat" familiar responses, 77% had some familiarity with supervised machine learning, 74% with generative AI, and 73% with NLP. With generative AI receiving so much media attention, and NLP being the broader area of AI that encompasses generative AI, these results may indicate that stakeholders' understanding of the topic is based on buzz rather than hands-on work with the technologies.

If defenders hope to get ahead of attackers, they will need to go beyond supervised learning algorithms trained on known attack patterns and generative AI. Instead, they’ll need to adopt a comprehensive toolkit comprised of multiple, varied AI approaches—including unsupervised algorithms that continuously learn from an organization’s specific data rather than relying on big data generalizations.  

Different types of AI

Different types of AI have different strengths and use cases in cybersecurity. It’s important to choose the right technique for what you’re trying to achieve; a short illustrative sketch contrasting the supervised and unsupervised approaches follows the list below.

Supervised machine learning: Applied more often than any other type of AI in cybersecurity. Trained on known attack patterns and historical threat intelligence.

Large language models (LLMs): Apply deep learning models trained on extremely large data sets to understand, summarize, and generate new content. Used in generative AI tools.

Natural language processing (NLP): Applies computational techniques to process and understand human language.  

Unsupervised machine learning: Continuously learns from raw, unstructured data to identify deviations that represent true anomalies.  
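
To make the contrast between the supervised and unsupervised approaches concrete, here is a minimal, illustrative sketch using scikit-learn and synthetic data. It is not Darktrace’s implementation; it simply shows that a classifier trained only on a known attack pattern can miss a novel anomaly, while an unsupervised detector trained only on normal traffic still flags it.

```python
# Illustrative sketch only: synthetic data, not Darktrace's implementation.
# Shows why a supervised model trained on known attacks can miss novel anomalies
# that an unsupervised detector trained only on "normal" traffic still flags.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic "network features" (e.g. bytes out, connection rate) for normal traffic.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
# Synthetic examples of a *known* attack pattern, used to train the supervised model.
known_attack = rng.normal(loc=4.0, scale=0.5, size=(50, 2))

# Supervised: learns to separate normal traffic from the known attack pattern only.
X = np.vstack([normal, known_attack])
y = np.array([0] * len(normal) + [1] * len(known_attack))
supervised = RandomForestClassifier(random_state=0).fit(X, y)

# Unsupervised: learns what "normal" looks like, with no attack labels at all.
unsupervised = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A novel anomaly that resembles neither normal traffic nor the known attack.
novel = np.array([[-6.0, -6.0]])

print("Supervised verdict (1 = known attack):", supervised.predict(novel))    # typically [0]: missed
print("Unsupervised verdict (-1 = anomaly):", unsupervised.predict(novel))    # [-1]: flagged
```

The same idea scales to the far richer behavioral features described above, learned continuously from an organization’s own data rather than from labeled attack sets.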

What impact will generative AI have on the cybersecurity field?

More than half of security professionals (57%) believe that generative AI will have a bigger impact on their field over the next few years than other types of AI.

Figure 1: Chart from Darktrace's State of AI in Cybersecurity Report showing the types of AI expected to impact security the most

Security stakeholders are highly aware of generative AI and LLMs, viewing them as pivotal to the field's future. Generative AI excels at abstracting information, automating tasks, and facilitating human-computer interaction. However, LLMs can "hallucinate" due to training data errors and are vulnerable to prompt injection attacks. Despite improvements in securing LLMs, the best cyber defenses use a mix of AI types for enhanced accuracy and capability.

AI education is crucial as industry expectations for generative AI grow. Leaders and practitioners need to understand where and how to use AI while managing risks. As they learn more, there will be a shift from generative AI to broader AI applications.

Do security professionals fully understand the different types of AI in security products?

Only 26% of security professionals report a full understanding of the different types of AI in use within security products.

Confusion is prevalent in today’s marketplace. Our survey found that only 26% of respondents fully understand the AI types in their security stack, while 31% are unsure of or confused by vendor claims. Nearly 65% believe generative AI is the main type of AI used in cybersecurity, even though its security use cases are largely limited to tasks such as identifying phishing emails. This highlights a gap between user expectations and vendor delivery, with too much focus placed on generative AI.

Key findings include:

  • Executives and managers report higher understanding than practitioners.
  • Larger organizations have better understanding due to greater specialization.

As AI evolves, vendors are rapidly introducing new solutions faster than practitioners can learn to use them. There's a strong need for greater vendor transparency and more education for users to maximize the technology's value.

To help ease confusion around AI technologies in cybersecurity, Darktrace has released the CISO’s Guide to Cyber AI, a comprehensive white paper that categorizes the different applications of AI in cybersecurity. Download the white paper here.

Do security professionals believe generative AI alone is enough to stop zero-day threats?

No! 86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.

This consensus spans all geographies, organization sizes, and roles, though executives are slightly less likely to agree. Asia-Pacific participants agree more, while U.S. participants agree less.

Despite expecting generative AI to have the most impact, respondents recognize its limited security use cases and its need to work alongside other AI types. This highlights the necessity for vendor transparency and varied AI approaches for effective security across threat prevention, detection, and response.

Stakeholders must understand how AI solutions work to ensure they offer advanced, rather than outdated, threat detection methods. The survey shows awareness that old methods are insufficient.

To access the full report, click here.


February 16, 2026

CVE-2026-1731: How Darktrace Sees the BeyondTrust Exploitation Wave Unfolding


Note: Darktrace's Threat Research team is publishing this information now to help defenders. We will continue updating this blog as our investigations unfold.

Background

On February 6, 2026, Identity and Access Management vendor BeyondTrust announced patches for a vulnerability, CVE-2026-1731, which enables unauthenticated remote code execution via specially crafted requests. The vulnerability affects BeyondTrust Remote Support (RS) and certain older versions of Privileged Remote Access (PRA) [1].

A Proof of Concept (PoC) exploit for this vulnerability was released publicly on February 10, and open-source intelligence (OSINT) sources reported exploitation attempts within 24 hours [2].

Previous intrusions involving BeyondTrust technology have been attributed to nation-state attackers, including a 2024 breach targeting the U.S. Treasury Department. That incident prompted emergency directives from the Cybersecurity and Infrastructure Security Agency (CISA), and later investigation showed attackers had chained previously unknown vulnerabilities to achieve their goals [3].

Additionally, there appears to be infrastructure overlap with the React2Shell mass exploitation previously observed by Darktrace: the command-and-control (C2) domain avg.domaininfo[.]top has been seen in potential BeyondTrust post-exploitation activity, as well as in a React2Shell exploitation case involving possible EtherRAT deployment.

Darktrace Detections

Since February 10, 2026, Darktrace’s Threat Research team has identified highly anomalous activity across several customer environments that may relate to exploitation of this BeyondTrust vulnerability. Observed activities include:

• Outbound connections and DNS requests to endpoints associated with Out-of-Band Application Security Testing (OAST); these services are commonly abused by threat actors for exploit validation. Associated Darktrace models include:
  ◦ Compromise / Possible Tunnelling to Bin Services
• Suspicious executable file downloads. Associated Darktrace models include:
  ◦ Anomalous File / EXE from Rare External Location
• Outbound beaconing to rare domains. Associated Darktrace models include:
  ◦ Compromise / Agent Beacon (Medium Period)
  ◦ Compromise / Agent Beacon (Long Period)
  ◦ Compromise / Sustained TCP Beaconing Activity To Rare Endpoint
  ◦ Compromise / Beacon to Young Endpoint
  ◦ Anomalous Server Activity / Rare External from Server
  ◦ Compromise / SSL Beaconing to Rare Destination
• Unusual cryptocurrency mining activity. Associated Darktrace models include:
  ◦ Compromise / Monero Mining
  ◦ Compromise / High Priority Crypto Currency Mining
• Additionally, model alerts for:
  ◦ Compromise / Rare Domain Pointing to Internal IP

IT Defenders: As part of best practices, we highly recommend employing an automated containment solution in your environment. For Darktrace customers, please ensure that Autonomous Response is configured correctly. More guidance regarding this activity and suggested actions can be found in the Darktrace Customer Portal.  

Appendices

Potential indicators of post-exploitation behavior:

• 217.76.57[.]78 – IP address – Likely C2 server
• hXXp://217.76.57[.]78:8009/index.js – URL – Likely payload
• b6a15e1f2f3e1f651a5ad4a18ce39d411d385ac7 – SHA1 – Likely payload
• 195.154.119[.]194 – IP address – Likely C2 server
• hXXp://195.154.119[.]194/index.js – URL – Likely payload
• avg.domaininfo[.]top – Hostname – Likely C2 server
• 104.234.174[.]5 – IP address – Possible C2 server
• 35da45aeca4701764eb49185b11ef23432f7162a – SHA1 – Possible payload
• hXXp://134.122.13[.]34:8979/c – URL – Possible payload
• 134.122.13[.]34 – IP address – Possible C2 server
• 28df16894a6732919c650cc5a3de94e434a81d80 – SHA1 – Possible payload
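
For defenders who want to sweep their own telemetry for these indicators, the following is a minimal, hypothetical sketch, not part of Darktrace’s tooling: it refangs the defanged indicators above and searches a plain-text log export for them. The log path and format are assumptions and should be adapted to your environment; indicator matching should complement, not replace, the behavioral detections described earlier.

```python
# Minimal, hypothetical IOC sweep script (not Darktrace tooling).
# Refangs the defanged indicators listed above and greps a plain-text log for them.
import re
from pathlib import Path

# Defanged network indicators from the appendix above.
INDICATORS = [
    "217.76.57[.]78",
    "195.154.119[.]194",
    "avg.domaininfo[.]top",
    "104.234.174[.]5",
    "134.122.13[.]34",
]

def refang(ioc: str) -> str:
    """Convert a defanged indicator (e.g. 1.2.3[.]4) back to its real form."""
    return ioc.replace("[.]", ".")

def sweep(log_path: str) -> None:
    """Print every log line that contains one of the refanged indicators."""
    pattern = re.compile("|".join(re.escape(refang(i)) for i in INDICATORS))
    for line_no, line in enumerate(Path(log_path).read_text(errors="ignore").splitlines(), 1):
        if pattern.search(line):
            print(f"{log_path}:{line_no}: {line.strip()}")

if __name__ == "__main__":
    # Assumed log location; point this at your own proxy or DNS log export.
    sweep("/var/log/proxy/access.log")
```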

References:

1. https://nvd.nist.gov/vuln/detail/CVE-2026-1731
2. https://www.securityweek.com/beyondtrust-vulnerability-targeted-by-hackers-within-24-hours-of-poc-release/
3. https://www.rapid7.com/blog/post/etr-cve-2026-1731-critical-unauthenticated-remote-code-execution-rce-beyondtrust-remote-support-rs-privileged-remote-access-pra/

About the author
Emma Foulger
Global Threat Research Operations Lead

February 13, 2026

How AI is redefining cybersecurity and the role of today’s CIO


Why AI is essential to modern security

As attackers use automation and AI to outpace traditional tools and people, our approach to cybersecurity must fundamentally change. That’s why one of my first priorities as Withum's CIO was to elevate cybersecurity from a technical function to a business enabler.

What used to be “IT’s problem” is now a boardroom conversation – and for good reason. Protecting our data, our people, and our clients directly impacts revenue, reputation and competitive positioning.  

As CIOs and CISOs, our responsibility isn’t just keeping systems running; it’s enabling trust, protecting our organization's reputation, and giving the business confidence to move forward even as the digital world becomes less predictable. To pull that off, we need to know the business inside and out, understand risk, and anticipate what's coming next. That's where AI becomes essential.

Staying ahead when you’re a natural target

With more than 3,100 team members and over 1,000 CPAs (Certified Public Accountants), Withum operates in an industry that naturally attracts attention from attackers. Firms like ours handle highly sensitive financial and personal information, which puts us squarely in the crosshairs for sophisticated phishing, ransomware, and cloud-based attacks.

We’ve built our security program around resilience, visibility, and scale. By using Darktrace’s AI-powered platform, we can defend against both known and unknown threats, across email and network, without slowing our teams down.

Our focus is always on what we’re protecting: our clients’ information, our intellectual property, and the reputation of the firm. With Darktrace, we’re not just keeping up with the massive volume of AI-powered attacks coming our way, we’re staying ahead. The platform defends our digital ecosystem around the clock, detecting potential threats across petabytes of data and autonomously investigating and responding to tens of thousands of incidents every year.

Catching what traditional tools miss

Beyond the sheer scale of attacks, the Darktrace ActiveAI Security Platform™ is critical for identifying threats that matter to our business. Today’s attackers don’t use generic techniques. They leverage automation and AI to craft highly targeted attacks – impersonating trusted colleagues, mimicking legitimate websites, and weaving in real-world details that make their messages look completely authentic.

The platform, covering our network, endpoints, inboxes, cloud, and more, is so effective because it continuously learns what’s normal for our business: how our users typically behave, the business- and industry-specific language we use, how systems communicate, and how cloud resources are accessed. It picks up on minute details that would sail right past traditional tools and even highly trained security professionals.

Freeing up our team to do what matters

On average, Darktrace autonomously investigates 88% of all our security events, using AI to connect the dots across email, network, and cloud activity to figure out what matters. That shift has changed how our team works. Instead of spending hours sorting through alerts, we can focus on proactive efforts that actually strengthen our security posture.

For example, we saved 1,850 hours on investigating security issues over a ten-day period. We’ve reinvested the time saved into strengthening policies, refining controls, and supporting broader business initiatives, rather than spending endless hours manually piecing together alerts.

Real confidence, real results

The impact of our AI-driven approach goes well beyond threat detection. Today, we operate from a position of confidence, knowing that threats are identified early, investigated automatically, and communicated clearly across our organization.

That confidence was tested when we withstood a major ransomware attack by a well-known threat group. Not only were we able to contain the incident, but we were also able to trace attacker activity and provide evidence to law enforcement. That was an exhilarating experience! My team did an outstanding job, and moments like that reinforce exactly why we invest in the right technology and the right people.

Internally, this capability has strengthened trust at the executive level. We share security reporting regularly with leadership, translating technical activity into business-relevant insights. That transparency reinforces cybersecurity as a shared responsibility, one that directly supports growth, continuity, and reputation.

Culturally, we’ve embedded security awareness into daily operations through mandatory monthly training, executive communication, and real-world industry examples that keep cybersecurity top of mind for every employee.

The only headlines we want are positive ones: Withum expanding services, Withum growing year over year. Security plays a huge role in making sure that’s the story we get to tell.

What’s next

Looking ahead, we’re expanding our use of Darktrace, including new cloud capabilities that extend AI-driven visibility and investigation into our AWS and Azure environments.

As I continue shaping our security team, I look for people with passion, curiosity, and a genuine drive to solve problems. Those qualities matter just as much as formal credentials in my view. Combined with AI, these attributes help us build a resilient, engaged security function with low turnover and high impact.

For fellow technology leaders, my advice is simple: be forward-thinking and embrace change. We must understand the business, the threat landscape, and how technology enables both. By augmenting human expertise rather than replacing it, AI allows us to move upstream by anticipating risk, advising the business, and fostering stronger collaboration across teams.
