Blog
/
Email
/
April 10, 2023

Detecting Malicious Email Activity & AI-Enabled Impersonation

Discover how two different phishing attempts, from known and unknown senders, used a payroll diversion request and a credential-stealing Box link to target users.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Isabelle Cheong
Cyber Security Analyst

Social engineering has become widespread in the cyber threat landscape in recent years, and the near-universal use of social media today has allowed attackers to research and target victims more effectively. Social engineering involves manipulating users to carry out actions such as revealing sensitive information like login credentials or credit card details. It can also lead to user account compromises, causing huge disruption to an organization’s digital estate. 

As people use social media platforms not only for personal reasons, but also for business purposes, attackers gain information they can exploit in social engineering attacks. For example, a threat actor may attempt to impersonate a known individual or legitimate service to take advantage of a user’s established trust. This is a highly successful method of social engineering because mimicking known contacts makes it difficult for traditional security tools that rely on deny-lists to detect the attack.

In October 2022, Darktrace identified and responded to two separate malicious email campaigns in which threat actors attempted to impersonate known contacts in an effort to compromise customer devices. As it learns the normal behavior of every user in the email system, Darktrace was able to instantly detect these threats and mitigate them autonomously, preventing significant disruption to the customer networks.

Payroll Diversion Fraud Attempt Impersonating a Former Employee 

While a customer in the Canadian energy sector was trialing Darktrace in October 2022, Darktrace/Email™ identified a suspicious email seemingly sent from an employee within the organization. The email was sent to the Senior Director of Human Resources (HR) with a subject line of “Change in payroll Direct Deposit.” The email requested a change in bank account information for an employee. However, Darktrace recognized that the sender was using a free mail address that contained random letters, indicating it may have been algorithmically generated. Since this incident occurred during a trial, Darktrace/Email was not configured to take action. Otherwise, it would have prevented the email from landing in the inbox. In this case though, the email went through, bypassing all other security tools in place.
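Darktrace’s actual detection logic is proprietary, but as a rough illustration of why a random-letter local part on a free-mail address is a useful signal, a naive heuristic might combine character entropy with vowel ratio. All names and thresholds below are invented for the sketch; real systems weigh many such indicators together rather than relying on any single one.

```python
import math
from collections import Counter

# Free-mail providers commonly abused for spoofing (illustrative subset)
FREE_MAIL_DOMAINS = {"gmail.com", "outlook.com", "yahoo.com", "mail.com"}

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def vowel_ratio(s: str) -> float:
    """Fraction of alphabetic characters that are vowels."""
    letters = [c for c in s.lower() if c.isalpha()]
    if not letters:
        return 0.0
    return sum(c in "aeiou" for c in letters) / len(letters)

def looks_algorithmic(address: str) -> bool:
    """Flag free-mail addresses whose local part looks randomly generated.

    A long local part with high character entropy and few vowels is a
    weak signal on its own; the thresholds here are arbitrary.
    """
    local, _, domain = address.lower().partition("@")
    if domain not in FREE_MAIL_DOMAINS:
        return False
    return (len(local) >= 10
            and shannon_entropy(local) >= 3.0
            and vowel_ratio(local) < 0.25)
```

For example, `looks_algorithmic("jxqzvkpwtr83@gmail.com")` returns True, while a human-looking address such as `jane.smith@gmail.com` passes the entropy test but fails the vowel-ratio test and is not flagged.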

Although the email was from an unknown sender, the HR director believed the email could have been legitimate as the employee who appeared to be the sender had left the organization seven days prior and no longer had access to their corporate email account. However, after reviewing it in the Darktrace/Email dashboard, the customer grew suspicious and contacted the former employee directly to verify if the request was legitimate. The former employee validated the suspicions by confirming they had sent no such email.

Further investigation by the customer revealed that the former employee had been vocal about their departure on various social media platforms. This gave threat actors valuable information to believably impersonate the former employee and defraud the organization. 

Such attempts to target organizations’ HR departments and divert payroll are common tactics for cyber-criminals and are often identified by Darktrace/Email across the customer base. Darktrace/Email is able to instantly identify the indicators associated with these spoofing attempts and immediately bring them to the attention of the customer’s security team. 

Using Legitimate File Sharing Service to Share a Phishing Link 

On October 7, 2022, a customer in the Singaporean construction sector was targeted by a phishing campaign attempting to impersonate a law firm known to the organization. Almost 200 employees received an email with the subject line “Accepted: Valuation Agreement.” 

Figure 1: Sample of a UI view of the held message, showing anomaly indicators, history, association, and validation.

Four days earlier, Darktrace observed communication between another email address associated with the law firm and an employee of the customer. Darktrace/Email noted that it was the first time this correspondent had sent emails to the customer. 

Figure 2: Metrics showing how well the sender’s domain is known within the digital environment.

The emails contained a highly unusual link to a file sharing service (hxxps://ssvilvensstokes[.]app[.]box[.]com/notes), hidden behind the text “PREVIEW OR PRINT COPY OF DOCUMENT HERE.” Darktrace analysts investigated this event further and found that around 30 similar URLs had been flagged as suspicious by OSINT security tools in October 2022, suggesting the customer was not the only target of this phishing campaign.

Figure 3: Preview of the phishing email’s body.
Figure 4: Darktrace’s evaluation of the link contained in the phishing email.

Additional OSINT work revealed that the link directed to a website which appeared to host a PDF file named “Valuation Agreement.” The recipient would then be prompted to follow another link (hulking-citrine-krypton[.]glitch[.]me), again hidden behind the text “OPEN OR ACCESS DOCUMENT HERE” to view the file. Subsequently, the user would be prompted to enter their Microsoft 365 credentials. 

Figure 5: The page displayed when the phishing link was clicked, viewed in a sandbox environment.
Figure 6: Example of the page shown when the recipient clicks the second link, accessing “hulking-citrine-krypton[.]glitch[.]me”. 

This page contained the text “This document has been scanned for viruses by Norton Antivirus Security.” This is another example of threat actors employing social engineering techniques, impersonating well-known brands such as established security vendors to gain users’ trust and increase their likelihood of success.

It is highly probable that a real employee of the law firm had their account hijacked and that a malicious actor was exploiting it to send out these phishing emails en masse as part of a supply chain attack. In such cases, malicious actors rely on targets trusting known contacts enough not to question departures from their usual correspondence. 

Darktrace was able to instantly detect multiple anomalies in these emails, despite the fact that they were seemingly sent by known correspondents. The activity detected automatically triggered model breaches associated with unexpected and visually prominent links. As a result, Darktrace/Email responded by locking the link, stopping users from being able to click it.

Darktrace subsequently identified additional emails from this sender attempting to target other recipients within the company, triggering the model breaches associated with a surge in email sending indicative of a phishing campaign. In response, Darktrace/Email autonomously acted and filed these emails as junk. As more emails were detected across the customer’s environment, the anomaly score of the sender increased and Darktrace ultimately held back over 160 malicious emails, safeguarding recipients from potential account compromise.           

The following Darktrace/Email models were breached throughout the course of this phishing campaign:

  • Unusual/Sender Surge 
  • Unusual/Undisclosed Recipients 
  • Antigena Anomaly 
  • Association/Unlikely Recipient Association 
  • Link/Low Link Association 
  • Link/Visually Prominent Link 
  • Link/Visually Prominent Link Unexpected For Sender 
  • Unusual/New Sender Wide Distribution
  • Unusual/Undisclosed Recipients + New Address Known Domain

Conclusion

Social engineering plays a role in many of the major threats challenging current email cyber security, as attackers can use it to manipulate users into transferring money, revealing credentials, clicking malicious links, and more. 

The above threat stories happened before language-generating AI became mainstream with the release of ChatGPT in November 2022. Now, it is even easier for malicious actors to generate sophisticated social engineering emails. By using social media posts as input, social engineering emails written by generative AI can be highly targeted and produced at scale. They often avoid the flags users are trained to look for, like poor grammar and spelling mistakes, and can hide payloads or forgo them entirely.

To mitigate the risk of possible social engineering attempts, it is recommended that organizations implement social media policies that advise employees to be cautious of what they post online and enact procedures to verify if fund transfer requests are legitimate.

Yet these policies are not enough on their own. Darktrace/Email can identify suspicious email traits, whether an email is sent from a known correspondent or an unknown sender. With Self-Learning AI, it knows an organization’s users better than any impersonator could. In this way, Darktrace/Email detects anomalies within emails and neutralizes malicious components at machine-speed, stopping attacks at their earliest stages, before employees fall victim. 

Appendices

List of Indicators of Compromise (IoCs)

URLs:

hxxps://ssvilvensstokes[.]app[.]box[.]com/notes/*?s=* - 1st external link (seen in email)

hxxps://hulking-citrine-krypton[.]glitch[.]me/flk.html - 2nd external link, masked behind “OPEN OR ACCESS DOCUMENT HERE”
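The indicators above are defanged (hxxps, [.]) so they cannot be followed accidentally. As a purely illustrative aid, a pair of small helpers like the following could convert between the defanged and live forms when feeding IoCs into a blocklist; the function names are our own, not part of any particular tool.

```python
def defang(url: str) -> str:
    """Neutralize a URL so it is not clickable in reports:
    https -> hxxps, and dots in the host become [.]."""
    scheme, sep, rest = url.partition("://")
    host, slash, path = rest.partition("/")
    return scheme.replace("http", "hxxp") + sep + host.replace(".", "[.]") + slash + path

def refang(ioc: str) -> str:
    """Convert a defanged indicator back to its live form. Handle with care."""
    return (ioc.replace("hxxps://", "https://")
               .replace("hxxp://", "http://")
               .replace("[.]", "."))

print(defang("https://example.com/login"))  # hxxps://example[.]com/login
```

Note that only the host portion is defanged, matching the convention used in the IoC list above, where the path (e.g. /flk.html) is left intact.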


Blog
/
May 12, 2026

Resilience at the Speed of AI: Defending the Modern Campus with Darktrace


Why higher education is a different cybersecurity battlefield

After four decades in IT, now serving as both CIO and CISO, I’ve learned one simple truth: cybersecurity is never “done.” It’s a constant game of cat and mouse. Criminals evolve. Technologies advance. Regulations expand. But in higher education, the challenge is uniquely complex.

Unlike a bank or a military installation, we can’t lock down networks to a narrow set of approved applications. Higher education environments are open by design. Students collaborate globally, faculty conduct cutting-edge research, and administrators manage critical operations, all of which require seamless access to the internet, global networks, cloud platforms, and connected systems.

Combine that openness with expanding regulatory mandates and tight budgets, and the balancing act becomes clear.

Threat actors don’t operate under the same constraints. Often well-funded and sponsored by nation-states with significant resources, they’re increasingly organized, strategic, and innovative.

That sophistication shows up in the tactics we face every day, from social engineering and ransomware to AI-driven impersonation attacks. We’re dealing with massive volumes of data, countless signals, and a very small window between detection and damage.

No human team, no matter how talented or how numerous, can manually sift through that noise at the speed required.

Discovering a force multiplier

Nothing in cybersecurity is 100% foolproof. I never “set it and forget it.” But for institutions balancing rising threats and finite resources, the Darktrace ActiveAI Security Platform™ offers something incredibly valuable: peace of mind through speed and scale.

It closes the gap between detection and response in a way humans can’t possibly match. At the speed of light, it can quarantine, investigate, and contain anomalous activity.

I’ve purchased and deployed Darktrace three separate times at three different institutions because I’ve seen firsthand what it can do and what it enables teams like mine to achieve.

I first encountered Darktrace while serving as CIO for a large multi-campus college system. What caught my attention was Darktrace's Self-Learning AI, and its ability to learn what "normal" looked like across our network. Instead of relying solely on static signatures or rigid rules, Darktrace built a behavioral baseline unique to our environment and alerted us in real time when something simply didn’t look right.
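To caricature the behavioral-baseline idea in a few lines: a deliberately simplified sketch, emphatically not how Darktrace’s Self-Learning AI actually works, might keep a rolling window of a single metric and flag values that deviate sharply from recent normal. Real systems model many correlated features across users and devices.

```python
import statistics
from collections import deque

class BehavioralBaseline:
    """Toy rolling baseline: flag observations far from recent normal.

    A vastly simplified illustration; the window and z-score threshold
    are arbitrary, and real detection is multi-dimensional.
    """

    def __init__(self, window: int = 100, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a metric sample; return True if it deviates from baseline."""
        anomalous = False
        if len(self.history) >= 30:  # need enough samples to estimate "normal"
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z_threshold
        self.history.append(value)
        return anomalous
```

The point of the sketch is the shift in mindset: instead of asking “does this match a known-bad signature?”, the question becomes “does this look like what this environment normally does?”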

In higher education, where strict lockdowns aren’t realistic, that behavioral model made all the difference. We deployed it across five campuses, and the impact was immediate. Operating 24/7, Darktrace surfaced threats in ways our team couldn’t replicate manually.

Over time, the Darktrace platform evolved alongside the changing threat landscape, expanding into intrusion prevention, cloud visibility, and email security. At subsequent institutions, including Washington College, Darktrace was one of my first strategic investments.

Revealing the hidden threat other tools missed

One of the most surprising investigations of my career involved a data leak. Leadership suspected sensitive information from high-level meetings was being exposed, but our traditional tools couldn’t provide any answers.

Using Darktrace’s deep network visibility, down to packet-level data, we traced unusual connections to our CCTV camera system, which had been configured with a manufacturer’s default password. A small group of employees had hacked into the CCTV cameras, accessed audio-enabled recordings from boardroom meetings, and stored copies locally.

No other tool in our environment could have surfaced those connections the way Darktrace did. It was a clear example of why it matters to use AI to deeply understand how your organization, systems, and tools normally behave: threats and risks don’t always look the way we expect.

Elevating a D-rating into an A-level security program

When I arrived at my last CISO role, the institution had recently experienced a significant ransomware attack. The attackers had located sensitive data and used it to set a ransom demand they knew would likely result in payment. It was a sobering example of how calculated and strategic modern cybercriminals have become.

Third-party cyber ratings reflected that reality with a D rating.

To raise the bar, we implemented a comprehensive security program and integrated layered defenses, deploying state-of-the-art tools and methods across the environment, with Darktrace at its core.

After a 90-day learning period to establish our behavioral baseline, we transitioned the platform into fully autonomous mode. In a single 30-day span, Darktrace conducted more than 2,500 investigations and autonomously resolved 92% of all false positives.

For a small team, that’s transformative. Instead of drowning in alerts, my staff focused on fewer than 200 meaningful cases that warranted human review.

Today, we maintain a perfect A rating from third-party assessors and have remained cybersafe.

Peace of mind isn’t about complacency

The effect of Darktrace as a force multiplier has a real human impact.

With the time reclaimed through automation, we expanded community education programs and implemented simulated phishing exercises. Through sustained training and awareness efforts, we reduced social engineering susceptibility from nearly 45% to under 5%.

On a personal level, Darktrace allows me to sleep better at night and take time off knowing we have intelligent systems monitoring and responding around the clock. For any CIO or CISO carrying institutional risk on their shoulders, that matters.

The next era: AI vs. AI

A new chapter in cybersecurity is unfolding as adversaries leverage AI to enhance scale, speed, and believability. Phishing campaigns are more personalized, impersonation attempts are more precise, and deepfake video technology, including live video, is disturbingly authentic. At the same time, organizations are rapidly adopting AI across their own environments, from GenAI assistants to embedded tools to autonomous agents. These systems don’t operate within fixed rules. They act across email, cloud, SaaS, and identity systems, often with broad permissions, and their behavior can evolve over time in ways that are difficult to predict or control.

That creates a new kind of security challenge. It’s not just about defending against AI-powered threats but understanding and governing how AI behaves within your environment, including what it can access, how it acts, and where risk begins to emerge.

From my perspective, this is a natural next step for Darktrace.

Darktrace brings a level of maturity and behavioral understanding uniquely suited to the complexity of AI environments. Self-Learning AI learns the normal patterns of each business to interpret context, uncover subtle intent, and detect meaningful deviations without relying on predefined rules or signatures. Extending into securing AI by bringing real-time visibility and control to GenAI assistants, AI agents, development environments and Shadow AI, feels like the logical evolution of what Darktrace already does so well.

Just as importantly, Darktrace is already built for dynamic, cross-domain environments where risk doesn’t sit in a single tool or control plane. In higher education, activity already spans multiple systems and, with AI, that interconnection only accelerates.

Having deployed Darktrace multiple times, I have confidence it’s uniquely positioned to lead in this space and help organizations adopt AI with greater visibility and control.

---

Since authoring this blog, Irving Bruckstein has transitioned to the role of Chief Executive Officer of CyberAIgroup.

About the author
Irving Bruckstein
CEO CyberAIgroup

Blog
/
May 11, 2026

The Next Step After Mythos: Defending in a World Where Compromise is Expected


Is Anthropic’s Mythos a turning point for cybersecurity?

Anthropic’s recent announcements around their Mythos model, alongside the launch of Project Glasswing, have generated significant interest across the cybersecurity industry.

The closed-source nature of the Mythos model has understandably attracted a degree of skepticism around some of the claims being made. Additionally, Project Glasswing was initially positioned as a way for software vendors to accelerate the proactive discovery of vulnerabilities in their own code; however, much of the attention has focused on the potential for AI to identify exploitable vulnerabilities for those with malicious intent.

Putting questions around the veracity of those claims to one side – which, for what it’s worth, do appear to be at least partially endorsed by independent bodies such as the UK’s AI Security Institute – this should not be viewed as a critical turning point for the industry. Rather, it reflects the natural direction of travel.

How Mythos affects cybersecurity teams  

At Darktrace, extolling the virtues of AI within cybersecurity is understandably close to our hearts. However, taking a step back from the hype, we’d like to consider what developments like this mean for security teams.

Whether it’s Mythos or another model yet to be released, it’s worth remembering that there is no fundamental difference between an AI-discovered vulnerability and one discovered by a human. What changes is the pace of discovery and, some may argue, a lower barrier to entry.

In the hands of a software developer, this is unquestionably positive. Faster discovery enables earlier remediation and more proactive security. But in the hands of an attacker, the same capability will likely lead to a greater number of exploitable vulnerabilities being used in the wild and, critically, vulnerabilities that are not yet known to either the vendor or the end user.

That said, attackers have always been able to find exploitable vulnerabilities and use them undetected for extended periods of time. The use of AI does not fundamentally change this reality, but it does make the process faster and, unfortunately, more likely to occur at scale.

While tools such as Darktrace / Attack Surface Management and Darktrace / Proactive Exposure Management can help security teams prioritize where to patch, the emergence of AI-driven vulnerability discovery reinforces an important point: patching alone is not a sufficient control against modern cyber-attacks.

Rethinking defense for a world where compromise is expected

Rather than assuming vulnerabilities can simply be patched away, defenders are better served by working from the assumption that their software is already vulnerable, and always will be, and building their security strategy accordingly.

Under that assumption, defenders should expect initial access, particularly across internet exposed assets, to become easier for attackers. What matters then is how quickly that foothold is detected, contained, and prevented from expanding.

For defenders, this places renewed emphasis on a few core capabilities:

  • Secure-by-design architectures and blast radius reduction, particularly around identity, MFA, segmentation, and Zero Trust principles
  • Early, scalable detection and containment, favoring behavioral and context-driven signals over signatures alone
  • Operational resilience, with the expectation of more frequent early-stage incidents that must be managed without burning out teams

How Darktrace helps organizations proactively defend against cyber threats

At Darktrace, we support security teams across all three of these critical capabilities through a multi-layered AI approach. Our Self-Learning AI learns what’s normal for your organization, enabling real-time threat detection, behavioral prediction, incident investigation, and autonomous response, all while empowering your security team with visibility and control.

To learn more about Darktrace’s application of AI to cybersecurity, download our White Paper here.

Reducing blast radius through visibility and control

Secure-by-design principles depend on understanding how users, devices, and systems behave. By learning the normal patterns of identity and network activity, Darktrace helps teams identify when access is being misused or when activity begins to move beyond expected boundaries. This makes it possible to detect and contain lateral movement early, limiting how far an attacker can progress even after initial access.

Detecting and containing threats at the earliest stage  

As AI accelerates vulnerability discovery, defenders need to identify exploitation before it is formally recognized. Darktrace’s behavioral understanding approach enables detection of subtle deviations from normal activity, including those linked to previously unknown vulnerabilities.

A key example of this is our research on identifying cyber threats before public CVE disclosures, demonstrating that assessing activity against what is normal for a specific environment, rather than relying on predefined indicators of compromise, enables detection of intrusions exploiting previously unknown vulnerabilities days or even weeks before details become publicly available.

Additionally, our Autonomous Response capability provides fast, targeted containment focused on the most concerning events, while allowing normal business operations to continue. This has consistently shown that even when attackers use techniques never seen before, Darktrace’s Autonomous Response can contain threats before they have a chance to escalate.

Scaling response without increasing operational burden

As early-stage incidents become more frequent, the ability to investigate and respond efficiently becomes critical. Darktrace’s Cyber AI Analyst automatically correlates activity across the environment, prioritizing the most significant threats and reducing the need for manual triage. This allows security teams to respond faster and more consistently, without increasing workload or burnout.

What effective defense looks like in an AI-accelerated landscape

Developments like Mythos highlight a reality that has been building for some time: the window between exposure and exploitation is shrinking, and in many cases, it may disappear entirely. In that environment, relying on patching alone becomes increasingly reactive, leaving little room to respond once access has been established.

The more durable approach is to assume that compromise will occur and focus on controlling what happens next. That means identifying early signs of misuse, containing threats before they spread, and maintaining visibility across the environment so that isolated signals can be understood in context.

AI plays a role on both sides of this equation. While it enables attackers to move faster, it also gives defenders the ability to detect subtle changes in behavior, prioritize what matters, and respond in real time. The advantage will not come from adopting AI in isolation, but from applying it in a way that reduces the gap between detection and action.

AI may be accelerating parts of the attack lifecycle, but the fundamentals of defense, detection, and containment still apply. If anything, they matter more than ever – and AI is just as powerful a tool for defenders as it is for attackers.

To learn more about Darktrace and Mythos read more on our blog: Mythos vs Ethos: Defending in an Era of AI‑Accelerated Vulnerability Discovery


About the author
Toby Lewis
Head of Threat Analysis