Blog
/
/
June 27, 2021

Post-Mortem Analysis of a SQL Server Exploit

Learn about the post-mortem analysis of a SQL Server exploit. Discover key insights and strategies to enhance your cybersecurity defenses.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO
Default blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog imageDefault blog image
27
Jun 2021

While SaaS and IoT devices are increasingly popular vectors of intrusion, server-side attacks remain a serious threat to organizations worldwide. With sophisticated vulnerability scanning tools, attackers can now pinpoint security flaws in seconds, finding points of entry across the attack surface. Human security teams often struggle to keep pace with the constant wave of newly documented vulnerabilities and patches.

Darktrace recently stopped a targeted cyber-attack by an unknown attacker. After the initial entry, the attacker exploited an unpatched vulnerability (CVE-2020-0618), granting a low-privileged credential the ability to remotely execute code. This enabled the attacker to spread laterally and eventually establish a foothold in the system by creating a new user account.

The server-side attack cycle: authenticates user; scans network; infects three servers; downloads malware; c2 traffic; creates new user.

Figure 1: Overview of the server-side attack cycle.

This blog breaks down the intrusion and explores how Darktrace’s Autonomous Response technology took three surgical actions to halt the attacker’s movements.

Unknown threat actors exploit a vulnerability

Initial compromise

At a financial firm in Canada with around 3,000 devices, Cyber AI detected the use of a new credential, ‘parents’. The attacker used this credential to access the company’s internal environment through the VPN. From there, the credential authenticated to a desktop using NT LAN Manager (NTLM). No further suspicious activity was observed.

NTLM is a popular attack vector for cyber-criminals as it is vulnerable to multiple methods of compromise, including brute-force and ‘pass the hash’. The initial access to the credential could have been obtained via phishing before Darktrace had been deployed.

Figure 2: The credential was first observed on the device five days prior to reconnaissance. The attacker performed reconnaissance and lateral movement for two days, until the compromised devices were taken down.

Internal reconnaissance

Five days later, the ‘parents’ credential was seen logging onto the desktop. The desktop began scanning the network – over 80 internal IPs – on Port 443 and 445.

Shortly after the scan, the device used Nmap to attempt to establish SMBv1 sessions to 139 internal IPs, using guest / user credentials. 79 out of the 278 sessions were successful, all using the login.

Figure 3: New failed internal connections performed by an initially infected desktop, in a similar incident. The graph highlights a surge in failed internal connections and model breaches.

The network scan was the first stage after intrusion, enabling the attacker to find out which services were running, before looking for unpatched vulnerabilities.

Nmap has multiple built-in functionalities which are often exploited for reconnaissance and lateral movement. In this case, it was being used to establish the SMBv1 sessions to the domain controller, saving the attacker from having to initiate SMBv1 sessions with each destination one by one. SMBv1 has well-known vulnerabilities and best practice is to disable it where possible.

Lateral movement

The desktop began controlling services (svcctl endpoint) on a SQL server. It was observed both creating and starting services (CreateServiceW, StartServiceW).

The desktop then initiated an unencrypted HTTP connection to a SQL Reporting server. This was the first HTTP connection between the two devices and the first time the user agent had been seen on the device.

A packet capture of the connection reveals a POST that is seen in an exploit of CVE-2020-0613. This vulnerability is a deserialization issue, whereby the server mishandles carefully crafted page requests and allows low-privileged accounts to establish a reverse shell and remotely execute code on the server.

Figure 4: A partial PCAP of the HTTP connection. The traffic matches the CVE-2020-0618 exploit, which enables Remote Code Execution (RCE) in SQL Server Reporting Services (SSRS).

Most movements were seen in East-West traffic, with readily-available remote procedure call (RPC) methods. Such connections are abundant in systems. Without learning an organization’s ‘pattern of life’, it would have been near-impossible to highlight the malicious connections.

Cyber AI detected connections to the svcctl endpoint, via the DCE-RPC endpoint. This is called the 'service control' endpoint and is used to remotely control running processes on a device.

During the lateral movement from the desktop, the HTTP POST request revealed that the desktop was exploiting CVE-2020-0613. The attacker had managed to find and exploit an existing vulnerability which hadn’t been patched.

Darktrace was the only tool which alerted to the HTTP connection, revealing this underlying (and concluding) exploit. The AI determined that the user agent was unusual for the device and for the wider organization, and that the connection was highly anomalous. This connection would have gone otherwise amiss, since HTTP connections are common in most digital environments.

Because the attacker on the desktop used readily-available tools and protocols, such as Nmap, DCE-RPC, and HTTP, the device went undetected by all the other cyber defenses. However, Cyber AI noticed multiple scanning and lateral movement anomalies – triggering high-fidelity detections which would have been alerted to with Proactive Threat Notifications.

Command and control (C2) communication

The next day, the attacker connected to an SNMP server from the VPN. The connection used the ‘parents’ RDP cookie.

Immediately after the RDP connection began, the server connected to Pastebin and downloaded small amounts of encrypted data. Pastebin was likely being used as a vector to drop malicious scripts onto the device.

The SNMP server then started controlling services (svcttl) on the SQL server: again, creating and starting services.

Following this, both the SQL server and the SNMP server made a high volume of SSL connections to a rare external domain. One upload to the destination was around 21 MB, but otherwise the connections were mostly the same packet size. This, among other factors, indicated that the destination was being used as a C2 server.

Figure 5: Example Cyber AI Analyst investigation into beaconing activity by a SQL server.

With just one compromised credential, the attacker was now connecting to the VPN and infecting multiple servers on the company’s internal network.

The attacker dropped scripts onto the host using Pastebin. Darktrace alerted on this because Pastebin is highly rare for the organization. In fact, these connections were the first time it had been seen. Most security tools would miss this, as Pastebin is a legitimate site and would not be blocked by open-source intelligence (OSINT).

Even if a lesser-known Pastebin alternative had been used – say, in an environment where Pastebin was blocked on the firewall but the alternative not — Darktrace would have picked up on it in exactly the same way.

The C2 beaconing endpoint – dropbox16[.]com – has no OSINT information available online. The connections were on Port 443 and nothing about them was notable except from their rarity on the company’s system. Darktrace sent alerts because of its high rarity, rather than relying on known signatures.

Achieve persistence

After another Pastebin pull, the attacker attempted to maintain a greater foothold and escalate privileges by creating a new user using the SamrCreateUser2InDomain operation (endpoint: samr).

To establish persistence, the attacker now created a new user through a specific DCE-RPC command to the domain controller. This was highly unusual activity for the device, and was given a 100% anomaly score for ‘New or Uncommon Occurrence’.

If Darktrace had not alerted on this activity, the attacker would have continued to access files and make further inroads in the company, extracting sensitive data and potentially installing ransomware. This could have led to sensitive data loss, reputational damage, and financial losses for the company.

The value of Autonomous Response

The organization had Antigena in passive mode, so although it was not able to respond autonomously, we have visibility into the actions that it would have taken.

Antigena would have taken three actions on the initially infected desktop, as shown in the table below. The actions would have taken effect immediately in response to the first scan and the first service control requests.

During the two days of reconnaissance and lateral movement activity, these were the only steps Antigena suggested. The steps were all directly relevant to the intrusion – there was no attempt to block anything unrelated to the attack, and no other Antigena actions were triggered during this period.

By surgically blocking connections on specific ports during the scanning activity and enforcing the ‘pattern of life’ on the infected desktop, Antigena would have paralyzed the attacker’s reconnaissance efforts.

Furthermore, unusual service control attempts performed by the device would have been halted, minimizing the damage to the targeted destination.

Antigena would have delivered these blocks directly or via whatever integration was most suitable for the customer, such as firewall integrations or NAC integrations.

Lessons learned

The threat story above demonstrates the importance of controlling the access granted to low-privileged credentials, as well as remaining up-to-date with security patches. Since such attacks take advantage of existing network infrastructure, it is extremely difficult to detect these anomalous connections without the use of AI.

There was a delay of several days between the initial use of the ‘parents’ credentials and the first signs of lateral movement. This dormancy period – between compromise and the start of internal activities – is commonly seen in attacks. It likely indicates that the attacker was checking initially if their access worked, and then re-visiting the victim for further compromise once their schedule allowed for it.

Stopping a server-side attack

This compromise is reflective of many real-life intrusions: attacks cannot be easily attributed and are often conducted by sophisticated, unidentified threat actors.

Nevertheless, Darktrace managed to detect each stage of the attack cycle: initial compromise, reconnaissance, lateral movement, established foothold, and privilege escalation, and had Antigena been in active mode, it would have blocked these connections, and even prevented the initial desktop from ever exploiting the SQL vulnerability, which allowed the attacker to execute code remotely.

One day later, after seeing the power of Autonomous Response, the company decided to deploy Antigena in active mode.

Thanks to Darktrace analyst Isabel Finn for her insights on the above threat find.

Darktrace model detections:

  • Device / Anomalous Nmap SMB Activity
  • Device / Network Scan - Low Anomaly Score
  • Device / Network Scan
  • Device / ICMP Address Scan
  • Device / Suspicious Network Scan Activity
  • Anomalous Connection / New or Uncommon Service Control
  • Device / Multiple Lateral Movement Model Breaches
  • Device / New User Agent To Internal Server
  • Compliance / Pastebin
  • Device / Repeated Unknown RPC Service Bind Errors
  • Anomalous Server Activity / Rare External from Server
  • Compromise / Unusual Connections to Rare Lets Encrypt
  • User / Anomalous Domain User Creation Or Addition To Group

Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO

More in this series

No items found.

Blog

/

Email

/

May 1, 2026

How email-delivered prompt injection attacks can target enterprise AI – and why it matters

Default blog imageDefault blog image

What are email-delivered prompt injection attacks?

As organizations rapidly adopt AI assistants to improve productivity, a new class of cyber risk is emerging alongside them: email-delivered AI prompt injection. Unlike traditional attacks that target software vulnerabilities or rely on social engineering, this is the act of embedding malicious or manipulative instructions into content that an AI system will process as part of its normal workflow. Because modern AI tools are designed to ingest and reason over large volumes of data, including emails, documents, and chat histories, they can unintentionally treat hidden attacker-controlled text as legitimate input.  

At Darktrace, our analysis has shown an increase of 90% in the number of customer deployments showing signals associated with potential prompt injection attempts since we began monitoring for this type of activity in late 2025. While it is not always possible to definitively attribute each instance, internal scoring systems designed to identify characteristics consistent with prompt injection have recorded a growing number of high-confidence matches. The upward trend suggests that attackers are actively experimenting with these techniques.

Recent examples of prompt injection attacks

Two early examples of this evolving threat are HashJack and ShadowLeak, which illustrate prompt injection in practice.

HashJack is a novel prompt injection technique discovered in November 2025 that exploits AI-powered web browsers and agentic AI browser assistants. By hiding malicious instructions within the URL fragment (after the # symbol) of a legitimate, trusted website, attackers can trick AI web assistants into performing malicious actions – potentially inserting phishing links, fake contact details, or misleading guidance directly into what appears to be a trusted AI-generated output.

ShadowLeak is a prompt injection method to exfiltrate PII identified in September 2025. This was a flaw in ChatGPT (now patched by OpenAI) which worked via an agent connected to email. If attackers sent the target an email containing a hidden prompt, the agent was tricked into leaking sensitive information to the attacker with no user action or visible UI.

What’s the risk of email-delivered prompt injection attacks?

Enterprise AI assistants often have complete visibility across emails, documents, and internal platforms. This means an attacker does not need to compromise credentials or move laterally through an environment. If successful, they can influence the AI to retrieve relevant information seamlessly, without the labor of compromise and privilege escalation.

The first risk is data exfiltration. In a prompt injection scenario, malicious instructions may be embedded within an ordinary email. As in the ShadowLeak attack, when AI processes that content as part of a legitimate task, it may interpret the hidden text as an instruction. This could result in the AI disclosing sensitive data, summarizing confidential communications, or exposing internal context that would otherwise require significant effort to obtain.

The second risk is agentic workflow poisoning. As AI systems take on more active roles, prompt injection can influence how they behave over time. An attacker could embed instructions that persist across interactions, such as causing the AI to include malicious links in responses or redirect users to untrusted resources. In this way, the attacker inserts themselves into the workflow, effectively acting as a man-in-the-middle within the AI system.

Why can’t other solutions catch email-delivered prompt injection attacks?

AI prompt injection challenges many of the assumptions that traditional email security is built on. It does not fit the usual patterns of phishing, where the goal is to trick a user into clicking a link or opening an attachment.  

Most security solutions are designed to detect signals associated with user engagement: suspicious links, unusual attachments, or social engineering cues. Prompt injection avoids these indicators entirely, meaning there are fewer obvious red flags.

In this case, the intention is actually the opposite of user solicitation. The objective is simply for the email to be delivered and remain in the inbox, appearing benign and unremarkable. The malicious element is not something the recipient is expected to engage with, or even notice.

Detection is further complicated by the nature of the prompts themselves. Unlike known malware signatures or consistent phishing patterns, injected prompts can vary widely in structure and wording. This makes simple pattern-matching approaches, such as regex, unreliable. A broad rule set risks generating large numbers of false positives, while a narrow one is unlikely to capture the diversity of possible injections.

How does Darktrace catch these types of attacks?

The Darktrace approach to email security more generally is to look beyond individual indicators and assess context, which also applies here.  

For example, our prompt density score identifies clusters of prompt-like language within an email rather than just single occurrences. Instead of treating the presence of a phrase as a blocking signal, the focus is on whether there is an unusual concentration of these patterns in a way that suggests injection. Additional weighting can be applied where there are signs of obfuscation. For example, text that is hidden from the user – such as white font or font size zero – but still readable by AI systems can indicate an attempt to conceal malicious prompts.

This is combined with broader behavioral signals. The same communication context used to detect other threats remains relevant, such as whether the content is unusual for the recipient or deviates from normal patterns.

Ask your email provider about email-delivered AI prompt injection

Prompt injection targets not just employees, but the AI systems they rely on, so security approaches need to account for both.

Though there are clear indications of emerging activity, it remains to be seen how popular prompt injection will be with attackers going forward. Still, considering the potential impact of this attack type, it’s worth checking if this risk has been considered by your email security provider.

Questions to ask your email security provider

  • What safeguards are in place to prevent emails from influencing AI‑driven workflows over time?
  • How do you assess email content that’s benign for a human reader, but may carry hidden instructions intended for AI systems?
  • If an email contains no links, no attachments, and no social engineering cues, what signals would your platform use to identify malicious intent?

Visit the Darktrace / EMAIL product hub to discover how we detect and respond to advanced communication threats.  

Learn more about securing AI in your enterprise.

Continue reading
About the author
Kiri Addison
Senior Director of Product

Blog

/

AI

/

April 30, 2026

Mythos vs Ethos: Defending in an Era of AI‑Accelerated Vulnerability Discovery

mythos vulnerability discoveryDefault blog imageDefault blog image

Anthropic’s Mythos and what it means for security teams

Recent attention on systems such as Anthropic Mythos highlights a notable problem for defenders. Namely that disclosure’s role in coordinating defensive action is eroding.

As AI systems gain stronger reasoning and coding capability, their usefulness in analyzing complex software environments and identifying weaknesses naturally increases. What has changed is not attacker motivation, but the conditions under which defenders learn about and organize around risk. Vulnerability discovery and exploitation increasingly unfold in ways that turn disclosure into a retrospective signal rather than a reliable starting point for defense.

Faster discovery was inevitable and is already visible

The acceleration of vulnerability discovery was already observable across the ecosystem. Publicly disclosed vulnerabilities (CVEs) have grown at double-digit rates for the past two years, including a 32% increase in 2024 according to NIST, driven in part by AI even prior to Anthropic’s Mythos model. Most notably XBOW topped the HackerOne US bug bounty leaderboard, marking the first time an autonomous penetration tester had done so.  

The technical frontier for AI capabilities has been described elsewhere as jagged, and the implication is that Mythos is exceptional but not unique in this capability. While Mythos appears to make significant progress in complex vulnerability analysis, many other models are already able to find and exploit weaknesses to varying degrees.  

What matters here is not which model performs best, but the fact that vulnerability discovery is no longer a scarce or tightly bounded capability.

The consequence of this shift is not simply earlier discovery. It is a change in the defender-attacker race condition. Disclosure once acted as a rough synchronization point. While attackers sometimes had earlier knowledge, disclosure generally marked the moment when risk became visible and defensive action could be broadly coordinated. Increasingly, that coordination will no longer exist. Exploitation may be underway well before a CVE is published, if it is published at all.

Why patch velocity alone is not the answer

The instinctive response to this shift is to focus on patching faster, but treating patch velocity as the primary solution misunderstands the problem. Most organizations are already constrained in how quickly they can remediate vulnerabilities. Asset sprawl, operational risk, testing requirements, uptime commitments, and unclear ownership all limit response speed, even when vulnerabilities are well understood.

If discovery and exploitation now routinely precede disclosure, then patching cannot be the first line of defense. It becomes one necessary control applied within a timeline that has already shifted. This does not imply that organizations should patch less. It means that patching cannot serve as the organizing principle for defense.

Defense needs a more stable anchor

If disclosure no longer defines when defense begins, then defense needs a reference point that does not depend on knowing the vulnerability in advance.  

Every digital environment has a behavioral character. Systems authenticate, communicate, execute processes, and access resources in relatively consistent ways over time. These patterns are not static rules or signatures. They are learned behaviors that reflect how an organization operates.

When exploitation occurs, even via previously unknown vulnerabilities, those behavioral patterns change.

Attackers may use novel techniques, but they still need to gain access, create processes, move laterally, and will ultimately interact with systems in ways that diverge from what is expected. That deviation is observable regardless of whether the underlying weakness has been formally named.

In an environment where disclosure can no longer be relied on for timing or coordination, behavioral understanding is no longer an optional enhancement; it becomes the only consistently available defensive signal.

Detecting risk before disclosure

Darktrace’s threat research has consistently shown that malicious activity often becomes visible before public disclosure.

In multiple cases, including exploitation of Ivanti, SAP NetWeaver, and Trimble Cityworks, Darktrace detected anomalous behavior days or weeks ahead of CVE publication. These detections did not rely on signatures, threat intelligence feeds, or awareness of the vulnerability itself. They emerged because systems began behaving in ways that did not align with their established patterns.

This reflects a defensive approach grounded in ‘Ethos’, in contrast to the unbounded exploration represented by ‘Mythos’. Here, Mythos describes continuous vulnerability discovery at speed and scale. Ethos reflects an understanding of what is normal and expected within a specific environment, grounded in observed behavior.

Revisiting assume breach

These conditions reinforce a principle long embedded in Zero Trust thinking: assume breach.

If exploitation can occur before disclosure, patching vulnerabilities can no longer act as the organizing principle for defense. Instead, effective defense must focus on monitoring for misuse and constraining attacker activity once access is achieved. Behavioral monitoring allows organizations to identify early‑stage compromise and respond while uncertainty remains, rather than waiting for formal verification.

AI plays a critical role here, not by predicting every exploit, but by continuously learning what normal looks like within a specific environment and identifying meaningful deviation at machine speed. Identifying that deviation enables defenders to respond by constraining activity back towards normal patterns of behavior.

Not an arms race, but an asymmetry

AI is often framed as fueling an arms race between attackers and defenders. In practice, the more important dynamic is asymmetry.

Attackers operate broadly, scanning many environments for opportunities. Defenders operate deeply within their own systems, and it’s this business context which is so significant. Behavioral understanding gives defenders a durable advantage. Attackers may automate discovery, but they cannot easily reproduce what belonging looks like inside a particular organization.

A changed defensive model

AI‑accelerated vulnerability discovery does not mean defenders have lost. It does mean that disclosure‑driven, patch‑centric models no longer provide a sufficient foundation for resilience.

As vulnerability volumes grow and exploitation timelines compress, effective defense increasingly depends on continuous behavioral understanding, detection that does not rely on prior disclosure, and rapid containment to limit impact. In this model, CVEs confirm risk rather than define when defense begins.

The industry has already seen this approach work in practice. As AI continues to reshape both offense and defense, behavioral detection will move from being complementary to being essential.

Continue reading
About the author
Andrew Hollister
Principal Solutions Engineer, Cyber Technician
Your data. Our AI.
Elevate your network security with Darktrace AI