Blog / Email / April 10, 2023

Employee-Conscious Email Security Solutions in the Workforce

Email threats commonly affect organizations. Read Darktrace's expert insights on how to safeguard your business by educating employees about email security.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Dan Fein
VP, Product
Written by
Carlos Gray
Senior Product Marketing Manager, Email

When considering email security, IT teams have historically had to choose between excluding employees entirely, or including them but giving them too much power and implementing unenforceable, trust-based policies that try to make up for it. 

However, just because email security should not rely on employees, this does not mean they should be excluded entirely. Employees are the ones interacting with emails daily, and their experiences and behaviors can provide valuable security insights and even influence productivity. 

AI technology enables employee engagement in a non-intrusive, nuanced way that not only maintains email security, but enhances it. 

Finding a Balance of Employee Involvement in Security Strategies

Historically, security solutions offered ‘all or nothing’ approaches to employee engagement. On one hand, when employees are involved, they are unreliable. Employees cannot all be experts in security on top of their actual job responsibilities, and mistakes are bound to happen in fast-paced environments.  

Although there have been attempts to raise security awareness, they often have shortcomings: training emails lack context and realism, leaving employees with a poor understanding that often leads them to report emails that are actually safe. Having users constantly triaging their inboxes and reporting safe emails wastes time that takes away from their own productivity as well as the productivity of the security team.

Other historic forms of employee involvement also put security at risk. For example, users could create blanket rules through feedback, which could lead to common problems like safe-listing every email that comes from the gmail.com domain. Other times, employees could choose for themselves to release emails without context or limitations, introducing major risks to the organization. While these types of actions invite employees to participate in security, they do so at the cost of security. 

Even lower stakes employee involvement can prove ineffective. For example, excessive warnings when sending emails to external contacts can lead to banner fatigue. When employees see the same warning message or alert at the top of every message, it’s human nature that they soon become accustomed and ultimately immune to it.

On the other hand, when employees are fully excluded from security, an opportunity is missed to fine-tune security according to the actual users and to gain feedback on how well the email security solution is working. 

So, both options of historically conventional email security, to include or exclude employees, prove incapable of leveraging employees effectively. The best email security practice strikes a balance between these two extremes, allowing more nuanced interactions that maintain security without interrupting daily business operations. This can be achieved with AI that tailors the interactions specifically to each employee to add to security instead of detracting from it. 

Reducing False Reports While Improving Security Awareness Training 

Humans and AI-powered email security can simultaneously level up by working together. AI can inform employees and employees can inform AI in an employee-AI feedback loop.  

By understanding ‘normal’ behavior for every email user, AI can identify unusual, risky components of an email and take precise action based on the nature of the email to neutralize them, such as rewriting links, flattening attachments, and moving emails to junk. AI can go one step further and explain in non-technical language why it has taken a specific action, which educates users. In contrast to point-in-time simulated phishing email campaigns, this means AI can share its analysis in context and in real time at the moment a user is questioning an email. 
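The link-rewriting action mentioned above can be sketched in a few lines. This is purely illustrative: the redirect host `safelink.example.com` and the `rewrite_links` function are hypothetical stand-ins, not Darktrace's actual mechanism.

```python
import re

# Hypothetical safe-redirect endpoint; clicks routed here can be
# re-checked at click time rather than only at delivery time.
SAFE_REDIRECT = "https://safelink.example.com/?url="

def rewrite_links(body: str) -> str:
    """Wrap every http(s) URL in the email body with a redirect prefix."""
    return re.sub(
        r"https?://[^\s\"'<>]+",
        lambda m: SAFE_REDIRECT + m.group(0),
        body,
    )

body = "Review your account at http://phish.example.net/login today."
print(rewrite_links(body))
```

In practice, the security layer would also record which user clicked which rewritten link, feeding further context back into its analysis.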

The employee-AI feedback loop educates employees so that they can serve as an additional source of enrichment data. It determines the appropriate level at which to inform and teach users, while not relying on them for threat detection.

In the other direction, the AI learns from users’ activity in the inbox and gradually factors this into its decision-making. This is not a ‘one size fits all’ mechanism – one employee marking an email as safe will never result in blanket approval across the business – but over time, patterns can be observed and autonomous decision-making enhanced.  

Figure 1: The employee-AI feedback loop increases employee understanding without putting security at risk.

The employee-AI feedback loop draws out the maximum potential benefits of employee involvement in email security. Other email security solutions only consider the security team, enhancing its workflow but never considering the employees that report suspicious emails. Employees who try to do the right thing but blindly report emails never learn or improve and end up wasting their own time. By considering employees and improving security awareness training, the employee-AI feedback loop can level up users. They learn from the AI explanations how to identify malicious components, and so then report fewer emails but with greater accuracy. 

While AI programs have classically acted like black boxes, Darktrace trains its AI on the best data, the organization’s actual employees, and invites both the security team and employees to see the reasoning behind its conclusions. Over time, employees will trust themselves more as they better learn how to discern unsafe emails. 

Leveraging AI to Generate Productivity Gains

Uniquely, AI-powered email security can have effects outside of security-related areas. It can save time by managing non-productive email. As the AI constantly learns employee behavior in the inbox, it becomes extremely effective at detecting spam and graymail – emails that aren't necessarily malicious, but clutter inboxes and hamper productivity. It does this on a per-user basis, specific to how each employee treats spam, graymail, and newsletters. The AI learns to detect this clutter and eventually learns which to pull from the inbox, saving time for the employees. This highlights how security solutions can go even further than merely protecting the email environment with a light touch, to the point where AI can promote productivity gains by automating tasks like inbox sorting.
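As a toy illustration of the per-user idea, a model might track how often each user ignores or deletes a given sender's mail and treat that sender as graymail for that user once a threshold is crossed. The class, thresholds, and addresses below are hypothetical, not Darktrace's model.

```python
from collections import defaultdict

class GraymailModel:
    """Per-user model: senders whose mail a user habitually ignores or
    deletes are scored as graymail for that user only (toy thresholds)."""
    def __init__(self, threshold=0.8, min_samples=5):
        self.counts = defaultdict(lambda: [0, 0])  # sender -> [ignored, total]
        self.threshold = threshold
        self.min_samples = min_samples

    def observe(self, sender, ignored):
        stats = self.counts[sender]
        stats[0] += int(ignored)
        stats[1] += 1

    def is_graymail(self, sender):
        ignored, total = self.counts[sender]
        return total >= self.min_samples and ignored / total >= self.threshold

model = GraymailModel()
for _ in range(6):
    model.observe("newsletter@vendor.example", ignored=True)
model.observe("boss@company.example", ignored=False)
print(model.is_graymail("newsletter@vendor.example"))  # True
print(model.is_graymail("boss@company.example"))       # False
```

The key design point is that the decision is scoped to one user: the same newsletter can be graymail for one employee and essential reading for another.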

Preventing Email Mishaps: How to Deal with Human Error

Improved user understanding and decision making cannot stop natural human error. Employees are bound to make mistakes and can easily send emails to the wrong people, especially when Outlook auto-fills the wrong recipient. This can have effects ranging anywhere from embarrassing to critical, with major implications on compliance, customer trust, confidential intellectual property, and data loss. 

However, AI can help reduce instances of accidentally sending emails to the wrong people. When a user goes to send an email in Outlook, the AI will analyze the recipients. It considers the contextual relationship between the sender and recipients, the relationships the recipients have with each other, how similar each recipient’s name and history is to other known contacts, and the names of attached files.  

If the AI determines that the email is outside of a user’s typical behavior, it may alert the user. Security teams can customize what the AI does next: it can block the email, block the email but allow the user to override it, or do nothing but invite the user to think twice. Since the AI analyzes each email, these alerts are more effective than consistent, blanket alerts warning about external recipients, which often go ignored. With this targeted approach, the AI prevents data leakage and reduces cyber risk. 
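A drastically simplified version of this recipient analysis could learn which contacts a sender usually emails together and flag a recipient that has never co-occurred with the rest. All names, classes, and thresholds below are illustrative assumptions, not the product's actual logic.

```python
from collections import Counter
from itertools import combinations

class RecipientChecker:
    """Learns which recipients a sender usually emails together and flags
    recipients that have never co-occurred with the rest (toy model)."""
    def __init__(self):
        self.pair_counts = Counter()
        self.seen = set()

    def learn(self, recipients):
        self.seen.update(recipients)
        for a, b in combinations(sorted(recipients), 2):
            self.pair_counts[(a, b)] += 1

    def unusual_recipients(self, recipients):
        flagged = []
        for r in recipients:
            others = [o for o in recipients if o != r]
            cooc = sum(self.pair_counts[tuple(sorted((r, o)))] for o in others)
            if r not in self.seen or (others and cooc == 0):
                flagged.append(r)
        return flagged

checker = RecipientChecker()
for _ in range(10):
    checker.learn(["alice@corp.example", "bob@corp.example"])
# A look-alike domain slips in via autocomplete:
print(checker.unusual_recipients(
    ["alice@corp.example", "bob@corp.example", "bob@copr-typo.example"]))
```

A real system would weigh many more signals (name similarity, attachment names, history), but even this sketch catches the classic autocomplete mistake.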

Since the AI is always on and continuously learning, it can adapt autonomously to employee changes. If the role of an employee evolves, the AI will learn the new normal, including common behaviors, recipients, attached file names, and more. This allows the AI to continue effectively flagging potential instances of human error, without needing manual rule changes or disrupting the employee’s workflow. 

Email Security Informed by Employee Experience

As the practical users of email, employees should be considered when designing email security. This employee-conscious lens to security can strengthen defenses, improve productivity, and prevent data loss.  

In these ways, email security can benefit both employees and security teams. Employees can become another layer of defense with improved security awareness training that cuts down on false reports of safe emails. This insight into employee email behavior can also enhance employee productivity by learning and sorting graymail. Finally, viewing security in relation to employees can help security teams deploy tools that reduce data loss by flagging misdirected emails. With these capabilities, Darktrace/Email™ enables security teams to optimize the balance of employee involvement in email security.


Blog / November 19, 2025

Securing Generative AI: Managing Risk in Amazon Bedrock with Darktrace / CLOUD

Security risks and challenges of generative AI in the enterprise

Generative AI and managed foundation model platforms like Amazon Bedrock are transforming how organizations build and deploy intelligent applications. From chatbots to summarization tools, Bedrock enables rapid agent development by connecting foundation models to enterprise data and services. But with this flexibility comes a new set of security challenges, especially around visibility, access control, and unintended data exposure.

As organizations move quickly to operationalize generative AI, traditional security controls are struggling to keep up. Bedrock’s multi-layered architecture, spanning agents, models, guardrails, and underlying AWS services, creates new blind spots that standard posture management tools weren’t designed to handle. Visibility gaps make it difficult to know which datasets agents can access, or how model outputs might expose sensitive information. Meanwhile, developers often move faster than security teams can review IAM permissions or validate guardrails, leading to misconfigurations that expand risk. In shared-responsibility environments like AWS, this complexity can blur the lines of ownership, making it critical for security teams to have continuous, automated insight into how AI systems interact with enterprise data.

Darktrace / CLOUD provides comprehensive visibility and posture management for Bedrock environments, automatically detecting and proactively scanning agents and knowledge bases, helping teams secure their AI infrastructure without slowing down expansion and innovation.

A real-world scenario: When access goes too far

Consider a scenario where an organization deploys a Bedrock agent to help internal staff quickly answer business questions using company knowledge. The agent was connected to a knowledge base pointing at documents stored in Amazon S3 and given access to internal services via APIs.

To get the system running quickly, developers assigned the agent a broad execution role. This role granted access to multiple S3 buckets, including one containing sensitive customer records. The over-permissioning wasn’t malicious; it stemmed from the complexity of IAM policy creation and the difficulty of identifying which buckets held sensitive data.

The team assumed the agent would only use the intended documents. However, they did not fully consider how employees might interact with the agent or how it might act on the data it processed.  

When an employee asked a routine question about quarterly customer activity, the agent surfaced insights that included regulated data, revealing it to someone without the appropriate access.

This wasn’t a case of prompt injection or model manipulation. The agent simply followed instructions and used the resources it was allowed to access. The exposure was valid under IAM policy, but entirely unintended.

How Darktrace / CLOUD prevents these risks

Darktrace / CLOUD helps organizations avoid scenarios like unintended data exposure by providing layered visibility and intelligent analysis across Bedrock and SageMaker environments. Here’s how each capability works in practice:

Configuration-level visibility

Bedrock deployments often involve multiple components: agents, guardrails, and foundation models, each with its own configuration. Darktrace / CLOUD indexes these configurations so teams can:

  1. Inspect deployed agents and confirm they are connected only to approved data sources.
  2. Track evaluation job setups and their links to Amazon S3 datasets, uncovering hidden data flows that could expose sensitive information.
  3. Maintain full awareness of all AI components, reducing the chance of overlooked assets introducing risk.

By unifying configuration data across Bedrock, SageMaker, and other AWS services, Darktrace / CLOUD provides a single source of truth for AI asset visibility. Teams can instantly see how each component is configured and whether it aligns with corporate security policies. This eliminates guesswork, accelerates audits, and helps prevent misaligned settings from creating data exposure risks.

Figure 1: Agents for Bedrock relationship views

Architectural awareness

Complex AI environments can make it difficult to understand how components interact. Darktrace / CLOUD generates real-time architectural diagrams that:

  1. Visualize relationships between agents, models, and datasets.
  2. Highlight unintended data access paths or risk propagation across interconnected services.

This clarity helps security teams spot vulnerabilities before they lead to exposure. By surfacing these relationships dynamically, Darktrace / CLOUD enables proactive risk management, helping teams identify architectural drift, redundant data connections, or unmonitored agents before attackers or accidental misuse can exploit them. This reduces investigation time and strengthens compliance confidence across AI workloads.

Figure 2: Full Bedrock agent architecture, including Lambda and IAM permission mapping

Access & privilege analysis

IAM permissions apply to every AWS service, including Bedrock. When Bedrock agents assume IAM roles that were broadly defined for other workloads, they often inherit excessive privileges. Without strict least-privilege controls, the agent may have access to far more data and services than required, creating avoidable security exposure. Darktrace / CLOUD:

  1. Reviews execution roles and user permissions to identify excessive privileges.
  2. Flags anomalies that could enable privilege escalation or unauthorized API actions.

This ensures agents operate within the principle of least privilege, reducing attack surface. Beyond flagging risky roles, Darktrace / CLOUD continuously learns normal patterns of access to identify when permissions are abused or expanded in real time. Security teams gain context into why an action is anomalous and how it could affect connected assets, allowing them to take targeted remediation steps that preserve productivity while minimizing exposure.
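As a rough illustration of the kind of check involved, a script can scan an IAM policy document for statements that grant wildcard actions or resources, which is a common way Bedrock agent execution roles end up over-permissioned. This simplified sketch ignores conditions, `NotAction`, and resource-level nuances, and is not how Darktrace / CLOUD is implemented.

```python
def overly_broad_statements(policy):
    """Return Allow statements that grant wildcard actions or resources
    (simplified: ignores Condition, NotAction, and NotResource)."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings

# Hypothetical execution role: one scoped statement, one far too broad.
role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::agent-docs/*"},
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ],
}
print(len(overly_broad_statements(role_policy)))  # 1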

Misconfiguration detection

Misconfigurations are a leading cause of cloud security incidents. Darktrace / CLOUD automatically detects:

  1. Publicly accessible S3 buckets that may contain sensitive training data.
  2. Missing guardrails in Bedrock deployments, which can allow inappropriate or sensitive outputs.
  3. Other issues such as lack of encryption, direct internet access, and root access to models.  

By surfacing these risks early, teams can remediate before they become exploitable. Darktrace / CLOUD turns what would otherwise be manual reviews into automated, continuous checks, reducing time to discovery and preventing small oversights from escalating into full-scale incidents. This automated assurance allows organizations to innovate confidently while keeping their AI systems compliant and secure by design.
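A minimal sketch of such checks, run against a simplified, hypothetical description of a deployment (the field names below are illustrative, not an AWS or Darktrace schema):

```python
def check_bedrock_deployment(cfg):
    """Flag common misconfigurations in a simplified deployment description:
    public data buckets, unencrypted storage, and agents without guardrails."""
    issues = []
    for bucket in cfg.get("s3_buckets", []):
        if bucket.get("public_access"):
            issues.append(f"public bucket: {bucket['name']}")
        if not bucket.get("encrypted", False):
            issues.append(f"unencrypted bucket: {bucket['name']}")
    for agent in cfg.get("agents", []):
        if not agent.get("guardrail_id"):
            issues.append(f"agent without guardrail: {agent['name']}")
    return issues

deployment = {
    "s3_buckets": [{"name": "training-data", "public_access": True, "encrypted": False}],
    "agents": [{"name": "support-bot", "guardrail_id": None}],
}
for issue in check_bedrock_deployment(deployment):
    print(issue)
```

The value of running checks like these continuously, rather than at audit time, is that a misconfiguration is surfaced in the window before it is exploited.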

Figure 3: Configuration data for an Anthropic foundation model

Behavioral anomaly detection

Even with correct configurations, behavior can signal emerging threats. Using AWS CloudTrail, Darktrace / CLOUD:

  1. Monitors for unusual data access patterns, such as agents querying unexpected datasets.
  2. Detects anomalous training job invocations that could indicate attempts to pollute models.

This real-time behavioral insight helps organizations respond quickly to suspicious activity. Because it learns the “normal” behavior of each Bedrock component over time, Darktrace / CLOUD can detect subtle shifts that indicate emerging risks, before formal indicators of compromise appear. The result is faster detection, reduced investigation effort, and continuous assurance that AI-driven workloads behave as intended.
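In its simplest form, this kind of baselining can be illustrated as a first-seen check over CloudTrail-style events. Real detection weighs rarity, timing, and volume rather than a binary lookup; the event fields below are simplified assumptions.

```python
from collections import defaultdict

class AccessBaseline:
    """Learns which S3 buckets each principal (e.g. a Bedrock agent's
    execution role) normally reads, then flags first-time accesses.
    Illustrative only -- real models score anomalies, not set membership."""
    def __init__(self):
        self.baseline = defaultdict(set)

    def learn(self, event):
        self.baseline[event["principal"]].add(event["bucket"])

    def is_anomalous(self, event):
        return event["bucket"] not in self.baseline[event["principal"]]

baseline = AccessBaseline()
baseline.learn({"principal": "role/agent-exec", "bucket": "approved-docs"})
print(baseline.is_anomalous({"principal": "role/agent-exec",
                             "bucket": "customer-records"}))  # True
print(baseline.is_anomalous({"principal": "role/agent-exec",
                             "bucket": "approved-docs"}))     # False
```

Applied to the earlier scenario, a first read of the sensitive customer bucket by the agent's role would surface immediately, even though IAM permitted it.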

Conclusion

Generative AI introduces transformative capabilities but also complex risks that evolve alongside innovation. The flexibility of services like Amazon Bedrock enables new efficiencies and insights, yet even legitimate use can inadvertently expose sensitive data or bypass security controls. As organizations embrace AI at scale, the ability to monitor and secure these environments holistically, without slowing development, is becoming essential.

By combining deep configuration visibility, architectural insight, privilege and behavior analysis, and real-time threat detection, Darktrace gives security teams continuous assurance across AI tools like Bedrock and SageMaker. Organizations can innovate with confidence, knowing their AI systems are governed by adaptive, intelligent protection.


About the author
Adam Stevens
Senior Director of Product, Cloud | Darktrace

Blog / Network / November 19, 2025

Unmasking Vo1d: Inside Darktrace’s Botnet Detection

What is Vo1d APK malware?

Vo1d malware first appeared in the wild in September 2024 and has since evolved into one of the most widespread Android botnets ever observed. This large-scale Android malware primarily targets smart TVs and low-cost Android TV boxes. Initially, Vo1d was identified as a malicious backdoor capable of installing additional third-party software [1]. Its functionality soon expanded beyond the initial infection to include deploying further malicious payloads, running proxy services, and conducting ad fraud operations. By early 2025, it was estimated that Vo1d had infected 1.3 to 1.6 million devices worldwide [2].

From a technical perspective, Vo1d embeds components into system storage to enable itself to download and execute new modules at any time. External researchers further discovered that Vo1d uses Domain Generation Algorithms (DGAs) to create new command-and-control (C2) domains, ensuring that even when existing servers are taken down, the malware can quickly reconnect to new ones. Previously published analysis identified dozens of C2 domains and hundreds of DGA seeds, along with new downloader families. Over time, Vo1d has grown increasingly sophisticated, with clear signs of stronger obfuscation and encryption methods designed to evade detection [2].

Darktrace’s coverage

Earlier this year, Darktrace observed a surge in Vo1d-related activity across customer environments, with the majority of affected customers based in South Africa. Devices that had been quietly operating as expected began exhibiting unusual network behavior, including excessive DNS lookups. Open-source intelligence (OSINT) has long highlighted South Africa as one of the countries most impacted by Vo1d infections [2].

What makes the recent activity particularly interesting is that the surge observed by Darktrace appears to be concentrated specifically in South African environments. This localized spike suggests that a significant number of devices may have been compromised, potentially due to vulnerable software, outdated firmware, or even preloaded malware. Regions with high prevalence of low-cost, often unpatched devices are especially susceptible, as these everyday consumer electronics can be quietly recruited into the botnet’s network. This specifically appears to be the case with South Africa, where public reporting has documented widespread use of low-cost boxes, such as non-Google-certified Android TV sticks, that frequently ship with outdated firmware [3].

The initial triage highlighted the core mechanism Vo1d uses to remain resilient: its use of a DGA. A DGA deterministically creates a large list of pseudo-random domain names on a predictable schedule. This enables the malware to compute hundreds of candidate domains using the same algorithm, instead of relying on a single hard-coded C2 hostname that defenders could easily block or take down. To ensure reproducibility from the infected device’s perspective, Vo1d utilizes DGA seeds. A seed might be a static string, a numeric value, or a combination of underlying inputs that enables infected devices to generate the same list of candidate domains for a given time window, provided the same DGA code, seed, and date are used.

Interestingly, Vo1d’s DGA seeds do not appear to be entirely unpredictable, and the generated domains lack fully random-looking endings. As observed in Figure 1, there is a clear pattern in the names generated. In this case, researchers identified that while the first five characters change to create the desired list of domain names, the trailing portion remains consistent as part of the seed: 60b33d7929a, which OSINT sources have linked to the Vo1d botnet [2]. Darktrace’s Threat Research team also identified a potential second DGA seed, with devices in some cases also engaging in activity involving hostnames matching the regular expression /[a-z]{5}fc975904fc9\.(com|top|net)/. This second seed had not been reported by any OSINT vendors at the time of writing.

Another recurring characteristic observed across multiple cases was the choice of top-level domains (TLDs), which included .com, .net, and .top.

Figure 1: Advanced Search results showing DNS lookups, providing a glimpse of the DGA seed utilized.
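To make the pattern concrete, the sketch below generates domains of the observed shape: five pseudo-random letters followed by the fixed seed suffix. The hashing step is invented purely for illustration and does not reproduce Vo1d's actual algorithm, only the output shape described above.

```python
import hashlib

def vo1d_like_domains(seed, day, count=5, tlds=("com", "top", "net")):
    """Generate candidate domains of the observed Vo1d shape:
    five pseudo-random letters + fixed seed suffix + common TLD.
    The MD5-based derivation is an illustrative stand-in."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{day}-{i}".encode()).hexdigest()
        # Map the first five hex characters onto lowercase letters.
        prefix = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:5])
        domains.append(f"{prefix}{seed}.{tlds[i % len(tlds)]}")
    return domains

for d in vo1d_like_domains("60b33d7929a", "2025-01-15"):
    print(d)
```

Because the generation is deterministic for a given seed and date, every infected device computes the same candidate list, which is exactly what makes the pattern detectable once the seed suffix is known.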

The activity was detected by multiple models in Darktrace / NETWORK™, which triggered on devices making an unusually large volume of DNS requests for domains uncommon across the network.
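The underlying heuristic can be sketched as counting, per device, lookups for domains that few other devices on the network query. The thresholds and device names below are illustrative, not the models' actual parameters.

```python
from collections import Counter, defaultdict

def flag_dga_suspects(dns_events, volume_threshold=100, rarity_threshold=2):
    """Flag devices making many lookups for domains queried by few other
    devices. dns_events is an iterable of (device, domain) pairs."""
    queriers = defaultdict(set)
    per_device = Counter()
    for device, domain in dns_events:
        queriers[domain].add(device)
    for device, domain in dns_events:
        if len(queriers[domain]) <= rarity_threshold:
            per_device[device] += 1
    return [d for d, n in per_device.items() if n >= volume_threshold]

# An infected box burns through 150 unique DGA candidates; a laptop
# repeatedly resolves one internal hostname.
events = [("tv-box-7", f"abcde60b33d7929a{i}.com") for i in range(150)]
events += [("laptop-1", "intranet.corp.example")] * 50
print(flag_dga_suspects(events))  # ['tv-box-7']
```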

During the network investigation, Darktrace analysts traced Vo1d’s infrastructure and uncovered an interesting pattern related to responder ASNs. A significant number of connections pointed to AS16509 (AMAZON-02). By hosting redirectors or C2 nodes inside major cloud environments, Vo1d is able to gain access to highly available and geographically diverse infrastructure. When one node is taken down or reported, operators can quickly enable a new node under a different IP within the same ASN. Another feature of cloud infrastructure that hardens Vo1d’s resilience is the fact that many organizations allow outbound connections to cloud IP ranges by default, assuming they are legitimate. Despite this, Darktrace was able to recognize the rarity of these endpoints and identify the activity as unusual.

Analysts further observed that once a generated domain successfully resolved, infected devices consistently began establishing outbound connections to ephemeral port ranges like TCP ports 55520 and 55521. These destination ports are atypical for standard web or DNS traffic. Even though the choice of high-numbered ports appears random, it is likely far from accidental. Commonly used ports such as port 80 (HTTP) or 443 (HTTPS) are often subject to more scrutiny and deeper inspection or content filtering, making them riskier for attackers. On the other hand, unregistered ports like 55520 and 55521 are less likely to be blocked, providing a more covert channel that blends with outbound TCP traffic. This tactic helps evade firewall rules that focus on common service ports. Regardless, Darktrace was able to identify external connections on uncommon ports to locations that the network does not normally visit.
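A simplified version of this rare-port heuristic might look like the following; the common-port list, port cutoff, and endpoint names are illustrative assumptions.

```python
# Ports routinely seen in outbound traffic (illustrative subset).
COMMON_PORTS = {25, 53, 80, 123, 443, 587, 993}

def unusual_port_connections(connections, known_endpoints):
    """Flag outbound TCP connections to high, non-standard ports on
    endpoints the network has not contacted before (toy heuristic)."""
    return [
        c for c in connections
        if c["dst_port"] not in COMMON_PORTS
        and c["dst_port"] >= 49152            # dynamic/unregistered range
        and c["dst_ip"] not in known_endpoints
    ]

conns = [
    {"src": "tv-box-7", "dst_ip": "3.132.75.97", "dst_port": 55520},
    {"src": "laptop-1", "dst_ip": "151.101.1.1", "dst_port": 443},
]
print(unusual_port_connections(conns, known_endpoints={"151.101.1.1"}))
```

The point is the combination of signals: a high, unregistered port alone is common in normal traffic, but paired with an endpoint the network has never contacted, it stands out.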

The continuation of the described activity was identified by Darktrace’s Cyber AI Analyst, which correlated individual events into a broader interconnected incident. It began with the multiple DNS requests for the algorithmically generated domains, followed by repeated connections to rare endpoints later confirmed as attacker-controlled infrastructure. Cyber AI Analyst’s investigation further enabled it to categorize the events as part of the “established foothold” phase of the attack.

Figure 2: Cyber AI Analyst incident illustrating the transition from DNS requests for DGA domains to connections with resolved attacker-controlled infrastructure.

Conclusion

The observations described in this blog highlight the precision and scale of Vo1d’s operations, ranging from its DGA-generated domains to its covert use of high-numbered ports. The surge in affected South African environments illustrates how regions with many low-cost, often unpatched devices can become major hubs for botnet activity. This serves as a reminder that even everyday consumer electronics can play a role in cybercrime, emphasizing the need for vigilance and proactive security measures.

Credit to Christina Kreza (Cyber Analyst & Team Lead) and Eugene Chua (Principal Cyber Analyst & Team Lead)

Edited by Ryan Traill (Analyst Content Lead)

Appendices

Darktrace Model Detections

  • Anomalous Connection / Devices Beaconing to New Rare IP
  • Anomalous Connection / Multiple Connections to New External TCP Port
  • Anomalous Connection / Multiple Failed Connections to Rare Endpoint
  • Compromise / DGA Beacon
  • Compromise / Domain Fluxing
  • Compromise / Fast Beaconing to DGA
  • Unusual Activity / Unusual External Activity

List of Indicators of Compromise (IoCs)

  • 3.132.75[.]97 – IP address – Likely Vo1d C2 infrastructure
  • g[.]sxim[.]me – Hostname – Likely Vo1d C2 infrastructure
  • snakeers[.]com – Hostname – Likely Vo1d C2 infrastructure

Selected DGA IoCs

  • semhz60b33d7929a[.]com – Hostname – Possible Vo1d C2 DGA endpoint
  • ggqrb60b33d7929a[.]com – Hostname – Possible Vo1d C2 DGA endpoint
  • eusji60b33d7929a[.]com – Hostname – Possible Vo1d C2 DGA endpoint
  • uacfc60b33d7929a[.]com – Hostname – Possible Vo1d C2 DGA endpoint
  • qilqxfc975904fc9[.]top – Hostname – Possible Vo1d C2 DGA endpoint

MITRE ATT&CK Mapping

  • T1071.004 – Command and Control – DNS
  • T1568.002 – Command and Control – Domain Generation Algorithms
  • T1568.001 – Command and Control – Fast Flux DNS
  • T1571 – Command and Control – Non-Standard Port

[1] https://news.drweb.com/show/?lng=en&i=14900

[2] https://blog.xlab.qianxin.com/long-live-the-vo1d_botnet/

[3] https://mybroadband.co.za/news/broadcasting/596007-warning-for-south-africans-using-specific-types-of-tv-sticks.html


About the author
Christina Kreza
Cyber Analyst