April 29, 2020

How Email Attackers Are Buying Domain Names to Get Inboxes

Explore how mass domain purchasing allows cyber-criminals to stay ahead of legacy email tools — and how cyber AI stops the threats that slip through.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product

It is by now common knowledge that the vast majority of cyber-threats start with an email. In the current working conditions, this is more true than ever – with a recent study reporting a 30,000% increase in phishing attacks, malicious websites, and malware targeting remote users.

Many email security tools struggle to detect threats they encounter for the first time. Attackers know this and are leveraging many techniques to take advantage of this fundamental flaw. This includes automation to mutate common threat variants, resulting in a massive increase in unknown threats. Another technique, which will be the focus of this blog post, is the rapid and widespread creation of new domains in order to evade reputation checks and signature-based detection.

The recent surge in domain creation

While traditional tools have to rely on identifying campaigns and patterns across multiple emails to establish whether or not an email is malicious, Cyber AI technology doesn’t require classifying emails into buckets in order to know they don’t belong. There is no need, therefore, to actively track campaigns. But as security researchers, it’s hard to miss some trends.

Since the coronavirus outbreak, we have seen the number of registered domains related to COVID-19 increase by 130,000. In this time, 60% of all spear phishing threats neutralized by Antigena Email were related to COVID-19 or remote work. Another recent study determined that 10,000 coronavirus-related domains are created every day, with roughly nine out of ten of these either malicious or attempting to generate sales of fake products.

With attackers also taking advantage of changing online behaviors arising from the pandemic, another trend we’ve seen is the proliferation of the keyword ‘Zoom’ in many of the rarely seen domains that bypassed traditional tools, as attackers leverage the video conferencing platform’s recent rise in usage.

“I believe that hackers identified coronavirus as something users are desperate to find information on. Panic leads to irrational thinking and people forget the basics of cyber security.”

— COO, Atlas VPN

I recently wrote a blog post on the idea of ‘fearware’ and why it’s so successful. Right now, people are desperate for information, and attackers know this. Cyber-criminals play into fear, uncertainty, and doubt (FUD) through a number of mechanisms, and we have since seen a variety of imaginative attempts to engage recipients. These emails range from fake ‘virus trackers’, to sending emails purporting to be from Amazon, claiming an unmanageable rise in newly registered accounts, and demanding “re-registration” of the recipient’s credit card details should they wish to keep their account.

Domain name purchasing: A vicious cycle

Purchasing thousands of new domains and sending malicious emails en masse is a tried and tested technique that cyber-criminals have been leveraging for decades. Now with automation, they’re doing it faster than ever before.

Here’s why it works.

Traditional security tools work by analyzing emails in isolation, measuring them against static blacklists of ‘known bads’. By way of analogy, the gateway tool here is acting like a security guard standing at the perimeter of an organization’s physical premises, asking every individual who enters: “are you malicious?”

The binary answer to this sole question is extracted by looking at some metadata around the email, including the sender’s IP, their email address domain, and any embedded links or attachments. These tools analyze this data in a vacuum, and at face value, with no consideration of the relationship between that data, the recipient, and the rest of the business. They run reputation checks, asking “have I seen this IP or domain before?” Crucially, if the answer is no, they let the email straight through.

To spell that out: if the domain is brand new, it won’t have a reputation, and as these traditional tools have limited ability to identify potentially harmful elements by any other means, they have no choice but to let the email in by default.
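That default-allow behavior can be sketched in a few lines. This is a simplified illustration, not any vendor’s actual implementation; the blacklist and domain names are made up:

```python
# Hypothetical sketch of a legacy reputation check: a domain is blocked
# only if it already appears on a static blacklist, so a freshly
# registered domain with no history falls through to 'allow' by default.
KNOWN_BAD_DOMAINS = {"malicious-example.com", "phish-example.net"}

def legacy_gateway_verdict(sender_domain: str) -> str:
    """Return 'block' only for domains already on the blacklist."""
    if sender_domain in KNOWN_BAD_DOMAINS:
        return "block"
    # A brand-new domain has no reputation entry, so it is allowed.
    return "allow"

print(legacy_gateway_verdict("malicious-example.com"))     # block
print(legacy_gateway_verdict("brand-new-domain-2020.com")) # allow
```

The attacker only has to ensure the domain is absent from the list at send time, which a newly purchased domain guarantees.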

These methods barely scratch the surface of a much wider range of characteristics that a malicious email might contain. And as email threats get ever more sophisticated, the ‘innocent until proven guilty’ approach is not enough. For a comprehensive check, we would want to ask: does the domain have any previous relationship with the recipient? With the organization as a whole? Does it look suspiciously visually similar to other domains? Is this the first time we’ve seen an inbound email from this sender? Has anybody in the organization ever shared a link with this domain? Has any user ever visited this link?
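One of those contextual questions, whether a domain is visually similar to one the organization trusts, can be approximated with a simple edit-distance check. This is a toy sketch with assumed trusted domains and an assumed threshold, not a production lookalike detector:

```python
# Toy lookalike check: flag domains within a small edit distance of a
# trusted domain (but not identical to it).
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

TRUSTED_DOMAINS = {"darktrace.com", "amazon.com"}

def is_lookalike(domain: str, max_dist: int = 2) -> bool:
    return any(0 < edit_distance(domain, t) <= max_dist
               for t in TRUSTED_DOMAINS)

print(is_lookalike("amaz0n.com"))   # True: one character swapped
print(is_lookalike("example.org"))  # False: nothing like a trusted domain
```

Real systems combine many such signals (homoglyphs, subdomain tricks, sender history) rather than edit distance alone, but the principle is the same: judge the domain in context, not in a vacuum.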

Legacy tools are blatantly asking the wrong questions, to which attackers know the answers. And usually, they can skirt by these inattentive security guards by paying just a few pennies for new domains.

How to buy your way in

Let’s look at the situation from an attacker’s perspective. They just need one email to land and it could be the keys to the kingdom, so an upfront purchase of a few thousand new domains will almost inevitably pay off. And they’ll keep paying that price for as long as the approach works and they keep profiting.

This is exactly what attackers are doing. Newly-registered domains consistently get through gateways until these traditional tools are armed with enough information to determine that the domains are bad, by which point thousands or even millions of emails could have been successfully delivered. As soon as the attack infrastructure is worn out, the attackers will abandon it, and very easily just purchase and deploy a new set of domains.

And so, the vicious cycle continues. Like a game of ‘whack-a-mole’, these legacy ‘solutions’ will continue to hammer down on recognized ‘bad’ emails – all the while more malicious domains are being created in the thousands in preparation for the next campaign. This is the ‘Domain Game’, and it’s a hard game for defenders to win.

Asking the right questions

Thankfully, the solution to this problem is as simple as the problem itself. It requires a movement away from the legacy approach and towards deploying technology that is up to par with the speed and scale of today’s attackers.

In the last two years, new technologies have emerged that leverage AI, seeking to understand the human behind the email address. Rather than inspecting incoming traffic at surface level and asking binary questions, this paradigm shift asks the right questions: not simply “are you malicious?”, but crucially: “do you belong?”

Informed by a nuanced understanding of the recipient, their peers, and the organization at large, every inbound, outbound, and internal email is analyzed in context, and is then re-analyzed over and over again in light of evolving evidence. Asking the right questions and understanding the human sets a far higher standard for catch rates against unknown threats on first encounter. This approach far outpaces traditional email defenses, which have repeatedly failed and left companies and their employees exposed to malicious emails sitting in their inboxes.

Rather than desperately bashing away at blacklisted domains and IP addresses in an ill-fated attempt to beat the attackers, we can change the game altogether, tilting the scales in favor of the defenders – securing our inboxes and our organizations at large.

Learn more about Antigena Email.


May 12, 2026

Resilience at the Speed of AI: Defending the Modern Campus with Darktrace


Why higher education is a different cybersecurity battlefield

After four decades in IT, now serving as both CIO and CISO, I’ve learned one simple truth: cybersecurity is never “done.” It’s a constant game of cat and mouse. Criminals evolve. Technologies advance. Regulations expand. But in higher education, the challenge is uniquely complex.

Unlike a bank or a military installation, we can’t lock down networks to a narrow set of approved applications. Higher education environments are open by design. Students collaborate globally, faculty conduct cutting-edge research, and administrators manage critical operations, all of which require seamless access to the internet, global networks, cloud platforms, and connected systems.

Combine that openness with expanding regulatory mandates and tight budgets, and the balancing act becomes clear.

Threat actors don’t operate under the same constraints. Often well-funded and sponsored by nation-states with significant resources, they’re increasingly organized, strategic, and innovative.

That sophistication shows up in the tactics we face every day, from social engineering and ransomware to AI-driven impersonation attacks. We’re dealing with massive volumes of data, countless signals, and a very small window between detection and damage.

No human team, no matter how talented or how numerous, can manually sift through that noise at the speed required.

Discovering a force multiplier

Nothing in cybersecurity is 100% foolproof. I never “set it and forget it.” But for institutions balancing rising threats and finite resources, the Darktrace ActiveAI Security Platform™ offers something incredibly valuable: peace of mind through speed and scale.

It closes the gap between detection and response in a way humans can’t possibly match. At the speed of light, it can quarantine, investigate, and contain anomalous activity.

I’ve purchased and deployed Darktrace three separate times at three different institutions because I’ve seen firsthand what it can do and what it enables teams like mine to achieve.

I first encountered Darktrace while serving as CIO for a large multi-campus college system. What caught my attention was Darktrace's Self-Learning AI, and its ability to learn what "normal" looked like across our network. Instead of relying solely on static signatures or rigid rules, Darktrace built a behavioral baseline unique to our environment and alerted us in real time when something simply didn’t look right.
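As a toy illustration of what a behavioral baseline means (this is not Darktrace’s actual model; the metric, data, and threshold here are assumptions for the sake of the example), one could learn the normal range of a single metric per device and flag sharp deviations:

```python
import statistics

# Toy behavioral baseline: learn the normal range of one metric for one
# device, then flag observations that deviate sharply from it.
def build_baseline(history: list) -> tuple:
    """Return (mean, stdev) of the observed history."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, baseline: tuple, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# e.g. megabytes uploaded per hour by one workstation
history = [1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 1.4]
baseline = build_baseline(history)

print(is_anomalous(1.6, baseline))    # within normal variation
print(is_anomalous(250.0, baseline))  # a sudden bulk upload stands out
```

A real self-learning system models thousands of interrelated features across users, devices, and peer groups rather than one univariate statistic, but the core idea is the same: the definition of “suspicious” is derived from your environment, not from a signature list.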

In higher education, where strict lockdowns aren’t realistic, that behavioral model made all the difference. We deployed it across five campuses, and the impact was immediate. Operating 24/7, Darktrace surfaced threats in ways our team couldn’t replicate manually.

Over time, the Darktrace platform evolved alongside the changing threat landscape, expanding into intrusion prevention, cloud visibility, and email security. At subsequent institutions, including Washington College, Darktrace was one of my first strategic investments.

Revealing the hidden threat other tools missed

One of the most surprising investigations of my career involved a data leak. Leadership suspected sensitive information from high-level meetings was being exposed, but our traditional tools couldn’t provide any answers.

Using Darktrace’s deep network visibility, down to packet-level data, we traced unusual connections to our CCTV camera system, which had been configured with a manufacturer’s default password. A small group of employees had hacked into the CCTV cameras, accessed audio-enabled recordings from boardroom meetings, and stored copies locally.

No other tool in our environment could have surfaced those connections the way Darktrace did. It was a clear example of why using AI to deeply understand how your organization, systems, and tools normally behave matters: threats and risks don’t always look the way we expect.

Elevating a D-rated institution into an A-level security program

When I arrived at my last CISO role, the institution had recently experienced a significant ransomware attack. The attackers had located sensitive data and used it to set their ransom demand at an amount they knew would likely result in payment. It was a sobering example of how calculated and strategic modern cybercriminals have become.

Third-party cyber ratings reflected that reality: the institution held a D rating.

To raise the bar, we implemented a comprehensive security program and integrated layered defenses, deploying state-of-the-art tools and methods across the environment, with Darktrace at its core.

After a 90-day learning period to establish our behavioral baseline, we transitioned the platform into fully autonomous mode. In a single 30-day span, Darktrace conducted more than 2,500 investigations and autonomously resolved 92% of all false positives.

For a small team, that’s transformative. Instead of drowning in alerts, my staff focused on fewer than 200 meaningful cases that warranted human review.

Today, we maintain a perfect A rating from third-party assessors and have remained cybersafe.

Peace of mind isn’t about complacency

The effect of Darktrace as a force multiplier has a real human impact.

With the time reclaimed through automation, we expanded community education programs and implemented simulated phishing exercises. Through sustained training and awareness efforts, we reduced social engineering susceptibility from nearly 45% to under 5%.

On a personal level, Darktrace allows me to sleep better at night and take time off knowing we have intelligent systems monitoring and responding around the clock. For any CIO or CISO carrying institutional risk on their shoulders, that matters.

The next era: AI vs. AI

A new chapter in cybersecurity is unfolding as adversaries leverage AI to enhance scale, speed, and believability. Phishing campaigns are more personalized, impersonation attempts are more precise, and deepfake video technology, including live video, is disturbingly authentic. At the same time, organizations are rapidly adopting AI across their own environments, from GenAI assistants to embedded tools to autonomous agents. These systems don’t operate within fixed rules. They act across email, cloud, SaaS, and identity systems, often with broad permissions, and their behavior can evolve over time in ways that are difficult to predict or control.

That creates a new kind of security challenge. It’s not just about defending against AI-powered threats but understanding and governing how AI behaves within your environment, including what it can access, how it acts, and where risk begins to emerge.

From my perspective, this is a natural next step for Darktrace.

Darktrace brings a level of maturity and behavioral understanding uniquely suited to the complexity of AI environments. Self-Learning AI learns the normal patterns of each business to interpret context, uncover subtle intent, and detect meaningful deviations without relying on predefined rules or signatures. Extending into securing AI by bringing real-time visibility and control to GenAI assistants, AI agents, development environments and Shadow AI, feels like the logical evolution of what Darktrace already does so well.

Just as importantly, Darktrace is already built for dynamic, cross-domain environments where risk doesn’t sit in a single tool or control plane. In higher education, activity already spans multiple systems and, with AI, that interconnection only accelerates.

Having deployed Darktrace multiple times, I have confidence it’s uniquely positioned to lead in this space and help organizations adopt AI with greater visibility and control.

---

Since authoring this blog, Irving Bruckstein has transitioned to the role of Chief Executive Officer of CyberAIgroup.

About the author
Irving Bruckstein
CEO CyberAIgroup

May 11, 2026

The Next Step After Mythos: Defending in a World Where Compromise is Expected


Is Anthropic’s Mythos a turning point for cybersecurity?

Anthropic’s recent announcements around their Mythos model, alongside the launch of Project Glasswing, have generated significant interest across the cybersecurity industry.

The closed-source nature of the Mythos model has understandably attracted a degree of skepticism around some of the claims being made. Additionally, Project Glasswing was initially positioned as a way for software vendors to accelerate the proactive discovery of vulnerabilities in their own code; however, much of the attention has focused on the potential for AI to identify exploitable vulnerabilities for those with malicious intent.

Putting questions around the veracity of those claims to one side – which, for what it’s worth, do appear to be at least partially endorsed by independent bodies such as the UK’s AI Security Institute – this should not be viewed as a critical turning point for the industry. Rather, it reflects the natural direction of travel.

How Mythos affects cybersecurity teams  

At Darktrace, extolling the virtues of AI within cybersecurity is understandably close to our hearts. However, taking a step back from the hype, we’d like to consider what developments like this mean for security teams.

Whether it’s Mythos or another model yet to be released, it’s worth remembering that there is no fundamental difference between an AI-discovered vulnerability and one discovered by a human. The change is in the pace of discovery and, some may argue, a lower barrier to entry.

In the hands of a software developer, this is unquestionably positive. Faster discovery enables earlier remediation and more proactive security. But in the hands of an attacker, the same capability will likely lead to a greater number of exploitable vulnerabilities being used in the wild and, critically, vulnerabilities that are not yet known to either the vendor or the end user.

That said, attackers have always been able to find exploitable vulnerabilities and use them undetected for extended periods of time. The use of AI does not fundamentally change this reality, but it does make the process faster and, unfortunately, more likely to occur at scale.

While tools such as Darktrace / Attack Surface Management and Darktrace / Proactive Exposure Management can help security teams prioritize where to patch, the emergence of AI-driven vulnerability discovery reinforces an important point: patching alone is not a sufficient control against modern cyber-attacks.

Rethinking defense for a world where compromise is expected

Rather than assuming vulnerabilities can simply be patched away, defenders are better served by working from the assumption that their software is already vulnerable, and always will be, and building their security strategy accordingly.

Under that assumption, defenders should expect initial access, particularly across internet-exposed assets, to become easier for attackers. What matters then is how quickly that foothold is detected, contained, and prevented from expanding.

For defenders, this places renewed emphasis on a few core capabilities:

  • Secure-by-design architectures and blast radius reduction, particularly around identity, MFA, segmentation, and Zero Trust principles
  • Early, scalable detection and containment, favoring behavioral and context-driven signals over signatures alone
  • Operational resilience, with the expectation of more frequent early-stage incidents that must be managed without burning out teams

How Darktrace helps organizations proactively defend against cyber threats

At Darktrace, we support security teams across all three of these critical capabilities through a multi-layered AI approach. Our Self-Learning AI learns what’s normal for your organization, enabling real-time threat detection, behavioral prediction, incident investigation, and autonomous response, all while empowering your security team with visibility and control.

To learn more about Darktrace’s application of AI to cybersecurity, download our White Paper here.

Reducing blast radius through visibility and control

Secure-by-design principles depend on understanding how users, devices, and systems behave. By learning the normal patterns of identity and network activity, Darktrace helps teams identify when access is being misused or when activity begins to move beyond expected boundaries. This makes it possible to detect and contain lateral movement early, limiting how far an attacker can progress even after initial access.
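The idea of detecting activity that moves beyond expected boundaries can be sketched with a toy peer baseline. This is an assumption-laden illustration of the principle, not Darktrace’s implementation; the host names are invented:

```python
from collections import defaultdict

# Toy peer baseline: record which internal hosts each device normally
# talks to, then flag connections that step outside that learned set.
class PeerBaseline:
    def __init__(self):
        self.known_peers = defaultdict(set)

    def learn(self, src: str, dst: str) -> None:
        """Record an observed, presumed-normal connection."""
        self.known_peers[src].add(dst)

    def is_unusual(self, src: str, dst: str) -> bool:
        """A connection to a never-before-seen peer is unusual."""
        return dst not in self.known_peers[src]

baseline = PeerBaseline()
for dst in ["fileserver", "printer", "mail"]:
    baseline.learn("workstation-7", dst)

print(baseline.is_unusual("workstation-7", "mail"))               # expected peer
print(baseline.is_unusual("workstation-7", "domain-controller"))  # new, possible lateral movement
```

In practice such a baseline must also weigh port, protocol, timing, and peer-group behavior, since a single new connection is not proof of compromise, but the example shows why an environment-specific model can catch lateral movement that a signature never would.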

Detecting and containing threats at the earliest stage  

As AI accelerates vulnerability discovery, defenders need to identify exploitation before it is formally recognized. Darktrace’s behavioral understanding approach enables detection of subtle deviations from normal activity, including those linked to previously unknown vulnerabilities.

A key example of this is our research on identifying cyber threats before public CVE disclosures, demonstrating that assessing activity against what is normal for a specific environment, rather than relying on predefined indicators of compromise, enables detection of intrusions exploiting previously unknown vulnerabilities days or even weeks before details become publicly available.

Additionally, our Autonomous Response capability provides fast, targeted containment focused on the most concerning events, while allowing normal business operations to continue. This has consistently shown that even when attackers use techniques never seen before, Darktrace’s Autonomous Response can contain threats before they have a chance to escalate.

Scaling response without increasing operational burden

As early-stage incidents become more frequent, the ability to investigate and respond efficiently becomes critical. Darktrace’s Cyber AI Analyst automatically correlates activity across the environment, prioritizing the most significant threats and reducing the need for manual triage. This allows security teams to respond faster and more consistently, without increasing workload or burnout.

What effective defense looks like in an AI-accelerated landscape

Developments like Mythos highlight a reality that has been building for some time: the window between exposure and exploitation is shrinking, and in many cases, it may disappear entirely. In that environment, relying on patching alone becomes increasingly reactive, leaving little room to respond once access has been established.

The more durable approach is to assume that compromise will occur and focus on controlling what happens next. That means identifying early signs of misuse, containing threats before they spread, and maintaining visibility across the environment so that isolated signals can be understood in context.

AI plays a role on both sides of this equation. While it enables attackers to move faster, it also gives defenders the ability to detect subtle changes in behavior, prioritize what matters, and respond in real time. The advantage will not come from adopting AI in isolation, but from applying it in a way that reduces the gap between detection and action.

AI may be accelerating parts of the attack lifecycle, but the fundamentals of defense, detection, and containment still apply. If anything, they matter more than ever – and AI is just as powerful a tool for defenders as it is for attackers.

To learn more about Darktrace and Mythos read more on our blog: Mythos vs Ethos: Defending in an Era of AI‑Accelerated Vulnerability Discovery


About the author
Toby Lewis
Head of Threat Analysis