The State of AI in Cybersecurity: Understanding AI Technologies
Part 4: This blog explores the findings from Darktrace’s State of AI Cybersecurity Report on security professionals' understanding of the different types of AI used in security programs. Get the latest insights into the evolving challenges, growing demand for skilled professionals, and the need for integrated security solutions by downloading the full report.
Written by The Darktrace Community
24 Jul 2024
About the State of AI Cybersecurity Report
Darktrace surveyed 1,800 CISOs, security leaders, administrators, and practitioners from industries around the globe. Our research was conducted to understand how the adoption of new AI-powered offensive and defensive cybersecurity technologies is being managed by organizations.
How familiar are security professionals with supervised machine learning?
Just 31% of security professionals report that they are “very familiar” with supervised machine learning.
Many participants admitted unfamiliarity with various AI types. Less than one-third felt "very familiar" with the technologies surveyed: only 31% with supervised machine learning and 28% with natural language processing (NLP).
Most participants were "somewhat" familiar, ranging from 46% for supervised machine learning to 36% for generative adversarial networks (GANs). Executives and those in larger organizations reported the highest familiarity.
Combining "very" and "somewhat" familiar responses, 77% reported familiarity with supervised machine learning, 74% with generative AI, and 73% with NLP. With generative AI receiving so much media attention, and NLP being the broader area of AI that encompasses generative AI, these results may indicate that stakeholders' understanding is based on buzz rather than hands-on work with the technologies.
If defenders hope to get ahead of attackers, they will need to go beyond supervised learning algorithms trained on known attack patterns and generative AI. Instead, they’ll need to adopt a comprehensive toolkit comprised of multiple, varied AI approaches—including unsupervised algorithms that continuously learn from an organization’s specific data rather than relying on big data generalizations.
Different types of AI
Different types of AI have different strengths and use cases in cyber security. It's important to choose the right technique for what you're trying to achieve; a brief code sketch contrasting two of these approaches follows the list below.
Supervised machine learning: Applied more often than any other type of AI in cyber security. Trained on known attack patterns and historical threat intelligence.
Large language models (LLMs): Applies deep learning models trained on extremely large data sets to understand, summarize, and generate new content. Used in generative AI tools.
Natural language processing (NLP): Applies computational techniques to process and understand human language.
Unsupervised machine learning: Continuously learns from raw, unstructured data to identify deviations that represent true anomalies.
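To make the distinction concrete, here is a minimal, illustrative sketch in Python using scikit-learn: a supervised classifier can only label activity that resembles the attacks it was trained on, while an unsupervised detector fits a baseline of an organization's own data and flags deviations. The feature values and thresholds are invented for the example and are not drawn from any vendor's product.

```python
# Illustrative sketch only: contrasting supervised and unsupervised machine
# learning on synthetic "network telemetry" feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(42)

# Synthetic features (e.g., bytes out, connection count, rare-port ratio).
normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))
known_attacks = rng.normal(loc=4.0, scale=1.0, size=(50, 3))

# Supervised ML: needs labeled examples of past attacks, so it can only
# recognize patterns resembling what it has already seen.
X = np.vstack([normal, known_attacks])
y = np.array([0] * len(normal) + [1] * len(known_attacks))
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Unsupervised ML: learns what "normal" looks like for this environment only
# and flags deviations -- no attack labels required.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

novel_event = np.array([[0.5, 6.0, -3.0]])  # unlike both training sets
print("supervised flags it as a known attack?", bool(clf.predict(novel_event)[0]))
print("unsupervised flags it as anomalous?   ", detector.predict(novel_event)[0] == -1)
```

In practice the two approaches are complementary: supervised models score known patterns with high precision, while unsupervised models can surface novel activity that no labeled dataset anticipated.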
What impact will generative AI have on the cybersecurity field?
More than half of security professionals (57%) believe that generative AI will have a bigger impact on their field over the next few years than other types of AI.
Figure 1: Chart from Darktrace's State of AI in Cybersecurity Report
Security stakeholders are highly aware of generative AI and LLMs, viewing them as pivotal to the field's future. Generative AI excels at abstracting information, automating tasks, and facilitating human-computer interaction. However, LLMs can "hallucinate" due to training data errors and are vulnerable to prompt injection attacks. Despite improvements in securing LLMs, the best cyber defenses use a mix of AI types for enhanced accuracy and capability.
AI education is crucial as industry expectations for generative AI grow. Leaders and practitioners need to understand where and how to use AI while managing risks. As they learn more, there will be a shift from generative AI to broader AI applications.
Do security professionals fully understand the different types of AI in security products?
Only 26% of security professionals report a full understanding of the different types of AI in use within security products.
Confusion is prevalent in today’s marketplace. Our survey found that only 26% of respondents fully understand the AI types in their security stack, while 31% are unsure or confused by vendor claims. Nearly 65% believe generative AI is the type of AI primarily used in cybersecurity, even though its main security use case to date is identifying phishing emails. This highlights a gap between user expectations and vendor delivery, with too much focus on generative AI.
Key findings include:
Executives and managers report higher understanding than practitioners.
Larger organizations have better understanding due to greater specialization.
As AI evolves, vendors are rapidly introducing new solutions faster than practitioners can learn to use them. There's a strong need for greater vendor transparency and more education for users to maximize the technology's value.
To help ease confusion around AI technologies in cybersecurity, Darktrace has released the CISO’s Guide to Cyber AI, a comprehensive white paper that categorizes the different applications of AI in cybersecurity. Download the white paper here.
Do security professionals believe generative AI alone is enough to stop zero-day threats?
No! 86% of survey participants believe generative AI alone is NOT enough to stop zero-day threats.
This consensus spans all geographies, organization sizes, and roles, though executives are slightly less likely to agree. Asia-Pacific participants agree more, while U.S. participants agree less.
Despite expecting generative AI to have the most impact, respondents recognize its limited security use cases and its need to work alongside other AI types. This highlights the necessity for vendor transparency and varied AI approaches for effective security across threat prevention, detection, and response.
Stakeholders must understand how AI solutions work to ensure they offer advanced, rather than outdated, threat detection methods. The survey shows awareness that old methods are insufficient.
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Defending the Cloud: Stopping Cyber Threats in Azure and AWS with Darktrace
Real-world intrusions across Azure and AWS
As organizations pursue greater scalability and flexibility, cloud platforms like Microsoft Azure and Amazon Web Services (AWS) have become essential for enabling remote operations and digitalizing corporate environments. However, this shift introduces a new set of security risks, including expanding attack surfaces, misconfigurations, and compromised credentials frequently exploited by threat actors.
This blog dives into three instances of compromise within Darktrace customers’ Azure and AWS environments that Darktrace detected and investigated.
The first incident took place in early 2024 and involved an attacker compromising a legitimate user account to gain unauthorized access to a customer’s Azure environment.
The other two incidents, taking place in February and March 2025, targeted AWS environments. In these cases, threat actors exfiltrated corporate data and, in one instance, were able to detonate ransomware in a customer’s environment.
Case 1 - Microsoft Azure
Figure 1: Simplified timeline of the attack on a customer’s Azure environment.
In early 2024, Darktrace identified a cloud compromise on the Azure cloud environment of a customer in the Europe, the Middle East and Africa (EMEA) region.
Initial access
In this case, a threat actor gained access to the customer’s cloud environment after stealing access tokens and creating a rogue virtual machine (VM). The access tokens belonged to a third-party external consultant’s account and were likely stolen after the consultant downloaded cracked software.
With these stolen tokens, the attacker was able to authenticate to the customer’s Azure environment and successfully modified a security rule to allow inbound SSH traffic from a specific IP range (i.e., securityRules/AllowCidrBlockSSHInbound). This was likely performed to ensure persistent access to internal cloud resources.
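For illustration only, the snippet below shows how a network security group rule of this kind could be created or modified programmatically with the azure-mgmt-network SDK. The resource names, subscription ID, CIDR range, and priority are placeholders assumed for the example; the rule observed in this incident is known only by its identifier, securityRules/AllowCidrBlockSSHInbound.

```python
# Hypothetical sketch: creating or updating an NSG rule that allows inbound SSH
# from a specific CIDR block, similar in effect to the rule change observed.
# All resource names, the subscription ID, the CIDR, and the priority are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = SecurityRule(
    protocol="Tcp",
    direction="Inbound",
    access="Allow",
    priority=100,                             # evaluated before lower-priority deny rules
    source_address_prefix="203.0.113.0/24",   # example CIDR block, not the real one
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="22",              # SSH
)

poller = client.security_rules.begin_create_or_update(
    "example-resource-group",
    "example-nsg",
    "AllowCidrBlockSSHInbound",
    rule,
)
print(poller.result().provisioning_state)
```

From a defender’s perspective, the relevant signal is not the API call itself but the anomaly around it: an account that has never modified network security rules suddenly opening SSH to a new IP range from an unusual location.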
Detection and investigation of the threat
Darktrace / IDENTITY recognized that this activity was highly unusual, triggering the “Repeated Unusual SaaS Resource Creation” alert.
Cyber AI Analyst launched an autonomous investigation into additional suspicious cloud activities occurring around the same time from the same unusual location, correlating the individual events into a broader account hijack incident.
Figure 2: Cyber AI Analyst’s investigation into unusual cloud activity performed by the compromised account.
Figure 3: Surrounding resource creation events highlighted by Cyber AI Analyst.
Figure 4: Surrounding resource creation events highlighted by Cyber AI Analyst.
“Create resource service limit” events typically indicate the creation or modification of service limits (i.e., quotas) for a specific Azure resource type within a region. Meanwhile, “Registers the Capacity Resource Provider” events refer to the registration of the Microsoft Capacity resource provider within an Azure subscription, responsible for managing capacity-related resources, particularly those related to reservations and service limits. These events suggest that the threat actor was looking to create new cloud resources within the environment.
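As a rough illustration of where these signals live, the sketch below queries the Azure Activity Log for administrative operations within a time window using the azure-mgmt-monitor SDK. The subscription ID, time range, and string matching are placeholders for a generic hunting query, not a description of how Darktrace ingests or models this data.

```python
# Hypothetical sketch: listing Azure Activity Log events in a time window,
# e.g. to review resource-provider registrations and service-limit changes.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Placeholder 24-hour window around the suspicious activity.
flt = (
    "eventTimestamp ge '2024-01-01T00:00:00Z' and "
    "eventTimestamp le '2024-01-02T00:00:00Z'"
)

for event in client.activity_logs.list(filter=flt,
                                       select="operationName,caller,eventTimestamp"):
    op = event.operation_name.value if event.operation_name else ""
    # Crude keyword match for the operation types described above.
    if "register" in op.lower() or "servicelimit" in op.lower().replace(" ", ""):
        print(event.event_timestamp, event.caller, op)
```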
Around ten minutes later, Darktrace detected the threat actor creating or modifying an Azure disk associated with a virtual machine (VM), suggesting an attempt to create a rogue VM within the environment.
Threat actors can leverage such rogue VMs to hijack computing resources (e.g., by running cryptomining malware), maintain persistent access, move laterally within the cloud environment, communicate with command-and-control (C2) infrastructure, and stealthily deliver and deploy malware.
Persistence
Several weeks later, the compromised account was observed sending an invitation to collaborate to an external free mail (Google Mail) address.
Darktrace deemed this activity as highly anomalous, triggering a compliance alert for the customer to review and investigate further.
The next day, the threat actor went on to register new multi-factor authentication (MFA) information. These actions were likely intended to maintain access to the compromised user account. The customer later confirmed this activity by reviewing the corresponding event logs within Darktrace.
Case 2 – Amazon Web Services
Figure 5: Simplified timeline of the attack on a customer’s AWS environment
In February 2025, another cloud-based compromise was observed affecting a UK-based customer subscribed to Darktrace’s Managed Detection and Response (MDR) service.
How the attacker gained access
The threat actor was observed leveraging likely previously compromised credentials to access several AWS instances within the customer’s private cloud environment, collecting and exfiltrating data, likely with the intention of deploying ransomware and holding the data for ransom.
Darktrace alerting to malicious activity
This observed activity triggered a number of alerts in Darktrace, including several high-priority Enhanced Monitoring alerts, which were promptly investigated by Darktrace’s Security Operations Centre (SOC) and raised to the customer’s security team.
The earliest signs of attack observed by Darktrace involved the use of two likely compromised credentials to connect to the customer’s Virtual Private Network (VPN) environment.
Internal reconnaissance
Once inside, the threat actor performed internal reconnaissance activities and staged the Rclone tool “ProgramData\rclone-v1.69.0-windows-amd64.zip”, a command-line program for syncing files and directories to and from different cloud storage providers, on an AWS instance whose hostname is associated with a public key infrastructure (PKI) service.
The threat actor was further observed accessing and downloading multiple files hosted on an AWS file server instance, notably finance and investment-related files. This likely represented data gathering prior to exfiltration.
Shortly after, the PKI-related EC2 instance started making SSH connections with the Rclone SSH client “SSH-2.0-rclone/v1.69.0” to a RockHoster Virtual Private Server (VPS) endpoint (193.242.184[.]178), suggesting the threat actor was exfiltrating the gathered data using the Rclone utility they had previously staged. The PKI instance continued to make repeated SSH connection attempts to transfer data to this external destination.
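Client software often identifies itself in the SSH protocol banner, which is how a string like “SSH-2.0-rclone/v1.69.0” becomes visible on the wire. As a rough, hypothetical illustration (not Darktrace’s detection logic), the sketch below scans a packet capture with scapy for SSH sessions whose client banner indicates Rclone; the capture filename is a placeholder.

```python
# Hypothetical sketch: hunting a pcap for SSH client banners that identify
# Rclone, which would indicate Rclone-over-SFTP transfers like those described above.
from scapy.all import rdpcap, IP, TCP, Raw

packets = rdpcap("example_capture.pcap")  # placeholder capture file

for pkt in packets:
    # The client banner travels in the first payload bytes sent to tcp/22.
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt[TCP].dport == 22:
        payload = bytes(pkt[Raw].load)
        if payload.startswith(b"SSH-2.0-rclone"):
            banner = payload.splitlines()[0].decode(errors="replace")
            print(f"Rclone SSH client seen: {pkt[IP].src} -> {pkt[IP].dst} banner={banner}")
```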
Darktrace’s Autonomous Response
In response to this activity, Darktrace’s Autonomous Response capability intervened, blocking unusual external connectivity to the C2 server via SSH, effectively stopping the exfiltration of data.
This activity was further investigated by Darktrace’s SOC analysts as part of the MDR service. The team elected to extend the autonomously applied actions to ensure the compromise remained contained until the customer could fully remediate the incident.
Continued reconnaissance
Around the same time, the threat actor continued to conduct network scans using the Nmap tool, operating from both a separate AWS domain controller instance and a newly joined device on the network. These actions were accompanied by further internal data gathering activities, with around 5 GB of data downloaded from an AWS file server.
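Scanning of this kind has a simple statistical signature: one source device touching an unusually large number of distinct internal IP and port combinations in a short window. The sketch below is a generic, hypothetical heuristic over flow records (the field layout and threshold are assumptions for illustration), not the behavioral model Darktrace uses.

```python
# Hypothetical sketch: flag likely internal scanners from flow records by
# counting distinct (destination IP, destination port) pairs per source device.
from collections import defaultdict

# Each flow record: (source_ip, dest_ip, dest_port) observed within one time window.
flows = [
    ("10.0.1.5", "10.0.2.10", 445),
    ("10.0.1.5", "10.0.2.11", 445),
    ("10.0.1.5", "10.0.2.12", 3389),
    # ... thousands more in a real window
]

SCAN_THRESHOLD = 100  # distinct targets per window; tuned per environment

targets = defaultdict(set)
for src, dst, port in flows:
    targets[src].add((dst, port))

for src, seen in targets.items():
    if len(seen) >= SCAN_THRESHOLD:
        print(f"possible scanner: {src} touched {len(seen)} distinct ip:port pairs")
```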
The two devices involved in reconnaissance activities were investigated and actioned by Darktrace SOC analysts after additional Enhanced Monitoring alerts had triggered.
Lateral movement attempts via RDP connections
Unusual internal RDP connections to a likely AWS printer instance indicated that the threat actor was looking to strengthen their foothold within the environment and/or attempting to pivot to other devices, likely in response to being hindered by Autonomous Response actions.
This triggered multiple scanning, internal data transfer and unusual RDP alerts in Darktrace, as well as additional Autonomous Response actions to block the suspicious activity.
Suspicious outbound SSH communication to known threat infrastructure
Darktrace subsequently observed the AWS printer instance initiating SSH communication with a rare external endpoint associated with the web hosting and VPS provider Host Department (67.217.57[.]252), suggesting that the threat actor was attempting to exfiltrate data to an alternative endpoint after connections to the original destination had been blocked.
Further investigation using open-source intelligence (OSINT) revealed that this IP address had previously been observed in connection with SSH-based data exfiltration activity during an Akira ransomware intrusion [1].
Once again, connections to this IP were blocked by Darktrace’s Autonomous Response and subsequently these blocks were extended by Darktrace’s SOC team.
The above behavior generated multiple Enhanced Monitoring alerts that were investigated by Darktrace SOC analysts as part of the Managed Threat Detection service.
Figure 6: Enhanced Monitoring alerts investigated by SOC analysts as part of the Managed Detection and Response service.
Final containment and collaborative response
Upon investigating the unusual scanning activity, outbound SSH connections, and internal data transfers, Darktrace analysts extended the Autonomous Response actions previously triggered on the compromised devices.
As the threat actor was leveraging these systems for data exfiltration, all outgoing traffic from the affected devices was blocked for an additional 24 hours to provide the customer’s security team with time to investigate and remediate the compromise.
Additional investigative support was provided by Darktrace analysts through the Security Operations Service after the customer opened a ticket related to the unfolding incident.
Case 3 – Amazon Web Services
Figure 7: Simplified timeline of the attack
Around the same time as the compromise in Case 2, Darktrace observed a similar incident on the cloud environment of a different customer.
Initial access
On this occasion, the threat actor appeared to have gained entry into the AWS-based Virtual Private Cloud (VPC) network via a SonicWall SMA 500v EC2 instance allowing inbound traffic on any port.
The instance received HTTPS connections from three rare Vultr VPS endpoints (i.e., 45.32.205[.]52, 207.246.74[.]166, 45.32.90[.]176).
Lateral movement and exfiltration
Around the same time, the EC2 instance started scanning the environment and attempted to pivot to other internal systems via RDP, notably a domain controller (DC) EC2 instance, which also started scanning the network, and another EC2 instance.
The latter then proceeded to transfer more than 230 GB of data to the rare external GTHost VPS endpoint 23.150.248[.]189, while downloading hundreds of GBs of data over SMB from another EC2 instance.
Figure 8: Cyber AI Analyst incident generated following the unusual scanning and RDP connections from the initial compromised device.
The same behavior was replicated across multiple EC2 instances, whereby compromised instances uploaded data over internal RDP connections to other instances, which then started transferring data to the same GTHost VPS endpoint over port 5000, which is typically used for Universal Plug and Play (UPnP).
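Exfiltration at this scale is easiest to spot as a volumetric anomaly: a host suddenly sending far more data to a rare external endpoint than its own history would predict. The sketch below is a deliberately simple, hypothetical baseline comparison over per-host egress counters (the data structure and multiplier are assumptions for illustration), not a description of Darktrace’s modelling.

```python
# Hypothetical sketch: flag hosts whose outbound volume to external endpoints
# far exceeds their own historical baseline.
from statistics import mean

# Per-host daily egress in GB: historical baseline vs. the current day (example values).
baseline_gb = {
    "ec2-app-01": [0.3, 0.5, 0.4, 0.6],
    "ec2-file-01": [1.2, 0.9, 1.1, 1.0],
}
today_gb = {
    "ec2-app-01": 0.5,
    "ec2-file-01": 230.0,   # comparable to the transfer described above
}

MULTIPLIER = 10  # "today exceeds 10x the host's own average" -- illustrative only

for host, history in baseline_gb.items():
    avg = mean(history)
    if today_gb.get(host, 0.0) > MULTIPLIER * avg:
        print(f"egress anomaly: {host} sent {today_gb[host]:.1f} GB "
              f"vs ~{avg:.1f} GB/day baseline")
```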
What Darktrace detected
Darktrace observed the threat actor uploading a total of 718 GB to the external endpoint, after which they detonated ransomware within the compromised VPC networks.
This activity generated nine Enhanced Monitoring alerts in Darktrace, focusing on the scanning and external data activity, with the earliest of those alerts triggering around one hour after the initial intrusion.
Darktrace’s Autonomous Response capability was not configured to act on these devices. As a result, the malicious activity was not autonomously blocked and escalated to the point of ransomware detonation.
Conclusion
This blog examined three real-world compromises in customer cloud environments, each illustrating a different stage in the attack lifecycle.
The first case showcased a notable progression from a SaaS compromise to a full cloud intrusion, emphasizing the critical role of anomaly detection when legitimate credentials are abused.
The latter two incidents demonstrated that while early detection is vital, the ability to autonomously block malicious activity at machine speed is often the most effective way to contain threats before they escalate.
Together, these incidents underscore the need for continuous visibility, behavioral analysis, and machine-speed intervention across hybrid environments. Darktrace's AI-driven detection and Autonomous Response capabilities, combined with expert oversight from its Security Operations Center, give defenders the speed and clarity they need to contain threats and reduce operational disruption, before the situation spirals.
Credit to Alexandra Sentenac (Senior Cyber Analyst) and Dylan Evans (Security Research Lead)
Top Eight Threats to SaaS Security and How to Combat Them
The latest on the identity security landscape
Following the mass adoption of remote and hybrid working patterns, more critical data than ever resides in cloud applications – from Salesforce and Google Workspace, to Box, Dropbox, and Microsoft 365.
As SaaS applications look set to remain an integral part of the digital estate, organizations are being forced to rethink how they protect their users and data in this area.
What is SaaS security?
SaaS security is the protection of cloud applications. It includes securing the apps themselves as well as the user identities that engage with them.
Below are the top eight threats that target SaaS security and user identities.
1. Account Takeover (ATO)
Attackers gain unauthorized access to a user’s SaaS or cloud account by stealing credentials through phishing, brute-force attacks, or credential stuffing. Once inside, they can exfiltrate data, send malicious emails, or escalate privileges to maintain persistent access.
2. Privilege escalation
Cybercriminals exploit misconfigurations, weak access controls, or vulnerabilities to increase their access privileges within a SaaS or cloud environment. Gaining admin or superuser rights allows attackers to disable security settings, create new accounts, or move laterally across the organization.
3. Lateral movement
Once inside a network or SaaS platform, attackers move between accounts, applications, and cloud workloads to expand their foothold. Compromised OAuth tokens, session hijacking, or exploited API connections can enable adversaries to escalate access and exfiltrate sensitive data.
4. Multi-Factor Authentication (MFA) bypass and session hijacking
Threat actors bypass MFA through SIM swapping, push bombing, or exploiting session cookies. By stealing an active authentication session, they can access SaaS environments without needing the original credentials or MFA approval.
5. OAuth token abuse
Attackers exploit OAuth authentication mechanisms by stealing or abusing tokens that grant persistent access to SaaS applications. This allows them to maintain access even if the original user resets their password, making detection and mitigation difficult.
6. Insider threats
Malicious or negligent insiders misuse their legitimate access to SaaS applications or cloud platforms to leak data, alter configurations, or assist external attackers. Over-provisioned accounts and poor access control policies make it easier for insiders to exploit SaaS environments.
7. API abuse
SaaS applications rely on APIs for integration and automation, but attackers exploit insecure endpoints, excessive permissions, and unmonitored API calls to gain unauthorized access. API abuse can lead to data exfiltration, privilege escalation, and service disruption.
8. Business Email Compromise (BEC) via SaaS
Adversaries compromise SaaS-based email platforms (e.g., Microsoft 365 and Google Workspace) to send phishing emails, conduct invoice fraud, or steal sensitive communications. BEC attacks often involve financial fraud or data theft by impersonating executives or suppliers.
BEC heavily uses social engineering techniques, tailoring messages for a specific audience and context. And with the growing use of generative AI by threat actors, BEC is becoming even harder to detect. By adding ingenuity and machine speed, generative AI tools give threat actors the ability to create more personalized, targeted, and convincing attacks at scale.
Protecting against these SaaS threats
Traditionally, security leaders relied on tools that were focused on the attack, reliant on threat intelligence, and confined to a single area of the digital estate.
However, these tools have limitations, and often prove inadequate for contemporary situations, environments, and threats. For example, they may lack advanced threat detection, have limited visibility and scope, and struggle to integrate with other tools and infrastructure, especially cloud platforms.
AI-powered SaaS security stays ahead of the threat landscape
New, more effective approaches involve AI-powered defense solutions that understand the digital business, reveal subtle deviations that indicate cyber-threats, and action autonomous, targeted responses.