April 10, 2023

Employee-Conscious Email Security Solutions in the Workforce

Email threats commonly affect organizations. Read Darktrace's expert insights on how to safeguard your business by educating employees about email security.

When considering email security, IT teams have historically had to choose between excluding employees entirely, or including them but giving them too much power and relying on unenforceable, trust-based policies to compensate.

However, just because email security should not rely on employees does not mean they should be excluded entirely. Employees are the ones interacting with emails daily, and their experiences and behaviors can provide valuable security insights and even influence productivity.

AI technology supports employee engagement in a non-intrusive, nuanced way that not only maintains email security but enhances it.

Finding a Balance of Employee Involvement in Security Strategies

Historically, security solutions offered ‘all or nothing’ approaches to employee engagement. On one hand, when employees are involved, they are unreliable. Employees cannot all be experts in security on top of their actual job responsibilities, and mistakes are bound to happen in fast-paced environments.  

Although there have been attempts to raise security awareness, they often fall short: training emails lack context and realism, leaving employees with a poor understanding that often leads them to report emails that are actually safe. Having users constantly triage their inboxes and report safe emails wastes time, taking away from their own productivity as well as that of the security team.

Other historic forms of employee involvement also put security at risk. For example, users could create blanket rules through feedback, which could lead to common problems like safe-listing every email that comes from the gmail.com domain. Other times, employees could choose for themselves to release emails without context or limitations, introducing major risks to the organization. While these types of actions invite employees to participate in security, they do so at the cost of security itself.

Even lower stakes employee involvement can prove ineffective. For example, excessive warnings when sending emails to external contacts can lead to banner fatigue. When employees see the same warning message or alert at the top of every message, it’s human nature that they soon become accustomed and ultimately immune to it.

On the other hand, when employees are fully excluded from security, an opportunity is missed to fine-tune security according to the actual users and to gain feedback on how well the email security solution is working. 

Both conventional approaches to email security, including or excluding employees, therefore prove incapable of leveraging employees effectively. The best email security practice strikes a balance between these two extremes, allowing more nuanced interactions that maintain security without interrupting daily business operations. This can be achieved with AI that tailors interactions to each employee, adding to security instead of detracting from it.

Reducing False Reports While Improving Security Awareness Training 

Humans and AI-powered email security can simultaneously level up by working together. AI can inform employees and employees can inform AI in an employee-AI feedback loop.  

By understanding ‘normal’ behavior for every email user, AI can identify unusual, risky components of an email and take precise action based on the nature of the email to neutralize them, such as rewriting links, flattening attachments, and moving emails to junk. AI can go one step further and explain in non-technical language why it has taken a specific action, which educates users. In contrast to point-in-time simulated phishing email campaigns, this means AI can share its analysis in context and in real time at the moment a user is questioning an email. 
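
To make this concrete, here is a minimal, hypothetical sketch of how per-user anomaly signals might be mapped to proportionate actions and a plain-language explanation. The signal names, thresholds, and action labels are illustrative assumptions, not Darktrace's implementation.

```python
# Illustrative sketch only: a simplified model of mapping per-user anomaly signals
# to proportionate actions and a plain-language explanation. All names and
# thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class EmailSignals:
    sender_is_new_for_user: bool   # sender never seen in this user's history
    link_domain_rarity: float      # 0.0 = common for this user, 1.0 = never seen
    has_active_attachment: bool    # e.g. macro-enabled document

@dataclass
class Verdict:
    actions: list = field(default_factory=list)
    explanation: list = field(default_factory=list)

def assess(signals: EmailSignals) -> Verdict:
    """Choose the least disruptive action that neutralises each risky component."""
    v = Verdict()
    if signals.link_domain_rarity > 0.9:
        v.actions.append("rewrite_links")
        v.explanation.append("A link points to a domain you have never visited before.")
    if signals.has_active_attachment and signals.sender_is_new_for_user:
        v.actions.append("flatten_attachment")
        v.explanation.append("An attachment with active content came from a first-time sender.")
    if len(v.actions) >= 2:
        v.actions.append("move_to_junk")
        v.explanation.append("Several unusual signs appeared together, so the email was junked.")
    return v

if __name__ == "__main__":
    verdict = assess(EmailSignals(True, 0.97, True))
    print(verdict.actions)   # ['rewrite_links', 'flatten_attachment', 'move_to_junk']
    for line in verdict.explanation:
        print("-", line)
```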

The employee-AI feedback loop educates employees so that they can serve as an additional source of enrichment data. It determines the appropriate level at which to inform and teach users, while never relying on them for threat detection.

In the other direction, the AI learns from users’ activity in the inbox and gradually factors this into its decision-making. This is not a ‘one size fits all’ mechanism – one employee marking an email as safe will never result in blanket approval across the business – but over time, patterns can be observed and autonomous decision-making enhanced.  
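
As a rough illustration of that principle, the sketch below nudges a per-user trust score with each piece of feedback rather than creating a global rule; the weighting scheme and names are hypothetical, not Darktrace's mechanism.

```python
# Illustrative sketch only: folding user feedback into per-user trust scores
# without letting a single "mark as safe" create an organisation-wide rule.
from collections import defaultdict

FEEDBACK_WEIGHT = 0.1   # each report nudges, never flips, the per-user score
trust = defaultdict(lambda: defaultdict(lambda: 0.5))  # trust[user][sender] in [0, 1]

def record_feedback(user: str, sender: str, marked_safe: bool) -> None:
    """Nudge this user's score for this sender; other users are unaffected."""
    delta = FEEDBACK_WEIGHT if marked_safe else -FEEDBACK_WEIGHT
    current = trust[user][sender]
    trust[user][sender] = min(1.0, max(0.0, current + delta))

# One employee marking a gmail.com sender as safe only shifts their own score slightly...
record_feedback("alice@example.com", "newsletter@gmail.com", marked_safe=True)
print(trust["alice@example.com"]["newsletter@gmail.com"])  # 0.6
# ...while the rest of the business still starts from the neutral prior.
print(trust["bob@example.com"]["newsletter@gmail.com"])    # 0.5
```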

Figure 1: The employee-AI feedback loop increases employee understanding without putting security at risk.

The employee-AI feedback loop draws out the maximum potential benefits of employee involvement in email security. Other email security solutions consider only the security team, enhancing its workflow but ignoring the employees who report suspicious emails. Employees who try to do the right thing but blindly report emails never learn or improve, and end up wasting their own time. By considering employees and improving security awareness training, the employee-AI feedback loop can level up users: they learn from the AI's explanations how to identify malicious components, and then report fewer emails with greater accuracy.

While AI programs have classically acted like black boxes, Darktrace trains its AI on the best data, the organization’s actual employees, and invites both the security team and employees to see the reasoning behind its conclusions. Over time, employees will trust themselves more as they better learn how to discern unsafe emails. 

Leveraging AI to Generate Productivity Gains

Uniquely, AI-powered email security can have effects outside of security-related areas. It can save time by managing non-productive email. As the AI constantly learns employee behavior in the inbox, it becomes extremely effective at detecting spam and graymail – emails that aren't necessarily malicious, but clutter inboxes and hamper productivity. It does this on a per-user basis, specific to how each employee treats spam, graymail, and newsletters. The AI learns to detect this clutter and eventually learns which to pull from the inbox, saving time for the employees. This highlights how security solutions can go even further than merely protecting the email environment with a light touch, to the point where AI can promote productivity gains by automating tasks like inbox sorting.
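
A simplified, hypothetical example of this per-user graymail learning: the classifier below decides whether a sender is clutter for a specific employee based only on how that employee has treated its emails. The thresholds and fields are illustrative assumptions, not Darktrace's model.

```python
# Illustrative sketch only: learning, per user, whether a given newsletter sender
# is clutter worth filing away. Thresholds and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class SenderHistory:
    received: int        # how many emails this user got from the sender
    opened: int          # how many they opened
    deleted_unread: int  # how many they deleted without reading

def is_graymail_for_user(history: SenderHistory) -> bool:
    """Treat a sender as graymail for this user if they consistently ignore it."""
    if history.received < 10:
        return False                       # not enough evidence yet
    open_rate = history.opened / history.received
    delete_rate = history.deleted_unread / history.received
    return open_rate < 0.05 and delete_rate > 0.8

# The same newsletter can be clutter for one employee and useful for another.
print(is_graymail_for_user(SenderHistory(received=40, opened=1, deleted_unread=36)))   # True
print(is_graymail_for_user(SenderHistory(received=40, opened=25, deleted_unread=2)))   # False
```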

Preventing Email Mishaps: How to Deal with Human Error

Improved user understanding and decision making cannot stop natural human error. Employees are bound to make mistakes and can easily send emails to the wrong people, especially when Outlook auto-fills the wrong recipient. This can have effects ranging anywhere from embarrassing to critical, with major implications on compliance, customer trust, confidential intellectual property, and data loss. 

However, AI can help reduce instances of accidentally sending emails to the wrong people. When a user goes to send an email in Outlook, the AI will analyze the recipients. It considers the contextual relationship between the sender and recipients, the relationships the recipients have with each other, how similar each recipient’s name and history is to other known contacts, and the names of attached files.  

If the AI determines that the email is outside of a user’s typical behavior, it may alert the user. Security teams can customize what the AI does next: it can block the email, block the email but allow the user to override it, or do nothing but invite the user to think twice. Since the AI analyzes each email, these alerts are more effective than constant, blanket alerts warning about external recipients, which often go ignored. With this targeted approach, the AI prevents data leakage and reduces cyber risk.
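
The sketch below illustrates the general idea under stated assumptions: score each outbound recipient against the sender's history, then apply whichever action the security team has configured. All signals, weights, and action names are hypothetical, not Darktrace's implementation.

```python
# Illustrative sketch only: scoring an outbound email for a likely misdirected
# recipient, then applying a team-configured action. Signals, weights, and
# action names are hypothetical.
from dataclasses import dataclass

@dataclass
class RecipientSignals:
    sender_history: float       # 0.0 = sender has never emailed this recipient, 1.0 = frequent
    peer_overlap: float         # how often this recipient appears alongside the others
    name_confusability: float   # similarity to a different, better-known contact
    attachment_mismatch: float  # attachment names unusual for this recipient

def misdirection_score(s: RecipientSignals) -> float:
    """Higher score = more likely the recipient was auto-filled by mistake."""
    return (0.35 * (1 - s.sender_history)
            + 0.25 * (1 - s.peer_overlap)
            + 0.25 * s.name_confusability
            + 0.15 * s.attachment_mismatch)

POLICY = "warn_with_override"   # set by the security team: "block" | "warn_with_override" | "prompt"

def act_on(score: float, threshold: float = 0.7) -> str:
    if score < threshold:
        return "send"                      # normal behaviour, no interruption
    return {"block": "block_send",
            "warn_with_override": "warn_user_allow_override",
            "prompt": "ask_user_to_confirm"}[POLICY]

suspicious = RecipientSignals(sender_history=0.0, peer_overlap=0.1,
                              name_confusability=0.9, attachment_mismatch=0.8)
print(act_on(misdirection_score(suspicious)))  # warn_user_allow_override
```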

Since the AI is always on and continuously learning, it can adapt autonomously to employee changes. If the role of an employee evolves, the AI will learn the new normal, including common behaviors, recipients, attached file names, and more. This allows the AI to continue effectively flagging potential instances of human error, without needing manual rule changes or disrupting the employee’s workflow. 

Email Security Informed by Employee Experience

As the practical users of email, employees should be considered when designing email security. This employee-conscious lens to security can strengthen defenses, improve productivity, and prevent data loss.  

In these ways, email security can benefit both employees and security teams. Employees can become another layer of defense with improved security awareness training that cuts down on false reports of safe emails. This insight into employee email behavior can also enhance employee productivity by learning and sorting graymail. Finally, viewing security in relation to employees can help security teams deploy tools that reduce data loss by flagging misdirected emails. With these capabilities, Darktrace/Email™ enables security teams to optimize the balance of employee involvement in email security.

Author
Dan Fein
VP, Product

Based in New York, Dan joined Darktrace’s technical team in 2015, helping customers quickly achieve a complete and granular understanding of Darktrace’s product suite. Dan has a particular focus on Darktrace/Email, ensuring that it is effectively deployed in complex digital environments, and works closely with the development, marketing, sales, and technical teams. Dan holds a Bachelor’s degree in Computer Science from New York University.

Carlos Gray
Product Manager

Carlos Gonzalez Gray is a Product Marketing Manager at Darktrace, based in the Madrid Office. As an email security Subject Matter Expert he collaborates with the global product team to align each product with the company’s ethos and ensures Darktrace are continuously pushing the boundaries of innovation. His prior role at Darktrace was in Sales Engineering, leading the Iberian team and specializing in both the email and OT sectors. Additionally, his prior experience as a consultant to IBEX 35 companies in Spain has made him well-versed in compliance, auditing, and data privacy. Carlos holds an Honors BA in Political Science and a Masters in Cybersecurity from IE University.

January 29, 2025

Inside the SOC

Bytesize Security: Insider Threats in Google Workspace


What is an insider threat?

An insider threat is a cyber risk originating from within an organization. These threats can involve actions such as an employee inadvertently clicking on a malicious link (e.g., a phishing email) or an employee with malicious intent conducting data exfiltration for corporate sabotage.

Insiders often exploit their knowledge and access to legitimate corporate tools, presenting a continuous risk to organizations. Defenders must protect their digital estate against threats from both within and outside the organization.

For example, in the summer of 2024, Darktrace / IDENTITY successfully detected a user in a customer environment attempting to steal sensitive data from a trusted Google Workspace service. Despite the use of a legitimate and compliant corporate tool, Darktrace identified anomalies in the user’s behavior that indicated malicious intent.

Attack overview: Insider threat

In June 2024, Darktrace detected unusual activity involving the Software-as-a-Service (SaaS) account of a former employee from a customer organization. This individual, who had recently left the company, was observed downloading a significant amount of data in the form of a “.INDD” file (an Adobe InDesign document typically used to create page layouts [1]) from Google Drive.

While the use of Google Drive and other Google Workspace platforms was not unexpected for this employee, Darktrace identified that the user had logged in from an unfamiliar and suspicious IPv6 address before initiating the download. This anomaly triggered a model alert in Darktrace / IDENTITY, flagging the activity as potentially malicious.

Figure 1: A Model Alert in Darktrace / IDENTITY showing the unusual “.INDD” file being downloaded from Google Workspace.

Following this detection, the customer reached out to Darktrace’s Security Operations Center (SOC) team via the Security Operations Support service for assistance in triaging and investigating the incident further. Darktrace’s SOC team conducted an in-depth investigation, enabling the customer to identify the exact moment of the file download, as well as the contents of the stolen documents. The customer later confirmed that the downloaded files contained sensitive corporate data, including customer details and payment information, likely intended for reuse or sharing with a new employer.

In this particular instance, Darktrace’s Autonomous Response capability was not active, allowing the malicious insider to successfully exfiltrate the files. If Autonomous Response had been enabled, Darktrace would have immediately acted upon detecting the login from an unusual (in this case 100% rare) location by logging out and disabling the SaaS user. This would have provided the customer with the necessary time to review the activity and verify whether the user was authorized to access their SaaS environments.
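
As a simplified illustration of this kind of logic, the sketch below gates a containment action on the rarity of the login's source network for that user; the data model and helper names are assumptions for illustration only, not Darktrace's implementation.

```python
# Illustrative sketch only: how a "100% rare source location" login could gate an
# autonomous containment action (log out and disable the SaaS account) ahead of a
# download. The data model and helper names are hypothetical.
from collections import Counter

def location_rarity(history: Counter, network_prefix: str) -> float:
    """1.0 means this user has never been seen logging in from this prefix."""
    total = sum(history.values())
    if total == 0:
        return 1.0
    return 1.0 - history[network_prefix] / total

def on_login(history: Counter, network_prefix: str, autonomous_response: bool) -> str:
    if location_rarity(history, network_prefix) >= 0.99:
        if autonomous_response:
            return "log_out_and_disable_user"   # buys the team time to review
        return "raise_model_alert_only"
    return "allow"

seen = Counter({"2001:db8:aaaa::/48": 412})      # the user's usual IPv6 prefix
print(on_login(seen, "2001:db8:ffff::/48", autonomous_response=False))  # raise_model_alert_only
print(on_login(seen, "2001:db8:ffff::/48", autonomous_response=True))   # log_out_and_disable_user
```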

Conclusion

Insider threats pose a significant challenge for traditional security tools as they involve internal users who are expected to access SaaS platforms. These insiders have preexisting knowledge of the environment, sensitive data, and how to make their activities appear normal, as seen in this case with the use of Google Workspace. This familiarity allows them to avoid having to use more easily detectable intrusion methods like phishing campaigns.

Darktrace’s anomaly detection capabilities, which focus on identifying unusual activity rather than relying on specific rules and signatures, enable it to effectively detect deviations from a user’s expected behavior. For instance, an unusual login from a new location, as in this example, can be flagged even if the subsequent malicious activity appears innocuous due to the use of a trusted application like Google Drive.

Credit to Vivek Rajan (Cyber Analyst) and Ryan Traill (Analyst Content Lead)

Appendices

Darktrace Model Detections

SaaS / Resource::Unusual Download Of Externally Shared Google Workspace File

References

[1] https://www.adobe.com/creativecloud/file-types/image/vector/indd-file.html

MITRE ATT&CK Mapping

Technique – Tactic – ID

Data from Cloud Storage Object – Collection – T1530

About the author
Vivek Rajan
Cyber Analyst

January 28, 2025


Reimagining Your SOC: How to Achieve Proactive Network Security


Introduction: Challenges and solutions to SOC efficiency

For Security Operation Centers (SOCs), reliance on signature or rule-based tools – solutions that are always chasing the latest update to prevent only what is already known – creates an excess of false positives. SOC analysts are therefore overwhelmed by a high volume of context-lacking alerts, with human analysts able to address only about 10% due to time and resource constraints. This forces many teams to accept the risks of addressing only a fraction of the alerts while novel threats go completely missed.

74% of practitioners are already grappling with the impact of an AI-powered threat landscape, which amplifies challenges like tool sprawl, alert fatigue, and burnout. Thus, achieving a resilient network, where SOC teams can spend most of their time getting proactive and stopping threats before they occur, feels like an unrealistic goal as attacks are growing more frequent.

Despite advancements in security technology (advanced detection systems with AI, XDR tools, SIEM aggregators, etc.), practitioners are still facing the same issues of inefficiency in their SOC, stopping them from becoming proactive. How can they select security solutions that help them achieve a proactive state without dedicating more human hours and resources to managing and triaging alerts, tuning rules, investigating false positives, and creating reports?

To overcome these obstacles, organizations must leverage security technology that is able to augment and support their teams. This can happen in the following ways:

  1. Gain full visibility across the modern network, extending into hybrid environments
  2. Deploy tools that identify and stop novel threats autonomously, without causing downtime
  3. Apply AI-led analysis to reduce time spent on manual triage and investigation

Your current solutions might be holding you back

Traditional cybersecurity point solutions rely on global threat intelligence to pattern match and determine signatures, and are consequently always chasing the latest update to prevent only what is already known. This legacy approach means unknown threats evade detection until at least one organization becomes ‘patient zero’, the first victim of a novel attack, before that attack is formally identified.

Even the point solutions that claim to use AI to enhance threat detection rely on a combination of supervised machine learning, deep learning, and transformers to train and inform their systems. This entails shipping your company’s data out to a large data lake housed somewhere in the cloud, where it gets blended with attack data from thousands of other organizations. The resulting homogenized dataset gets used to train AI systems, yours and everyone else’s, to recognize patterns of attack based on previously encountered threats.

While using AI in this way reduces the workload of security teams who would traditionally input this data by hand, it carries the same risk – namely, that AI systems trained on known threats cannot deal with the threats of tomorrow. Ultimately, it is the unknown threats that bring down an organization.

The promise and pitfalls of XDR in today's threat landscape

Enter Extended Detection and Response (XDR): a platform approach aimed at unifying threat detection across the digital environment. XDR was developed to address the limitations of traditional, fragmented tools by stitching together data across domains, providing SOC teams with a more cohesive, enterprise-wide view of threats. This unified approach allows for improved detection of suspicious activities that might otherwise be missed in siloed systems.

However, XDR solutions still face key challenges: they often depend heavily on human validation, which can aggravate the already alarmingly high alert fatigue security analysts experience, and they remain largely reactive, focusing on detecting and responding to threats rather than helping prevent them. Additionally, XDR frequently lacks full domain coverage, relying on EDR as a foundation while falling short of providing native NDR capabilities and visibility, leaving critical gaps that attackers can exploit. This is reflected in the current security market, with 57% of organizations reporting that they plan to integrate network security products into their current XDR toolset [1].

Why settling is risky and how to unlock SOC efficiency

The result of these shortcomings within the security solutions market is an acceptance of inevitable risk. From false positives driving the barrage of alerts, to the siloed tooling that requires manual integration, and the lack of multi-domain visibility requiring human intervention for business context, security teams have accepted that not all alerts can be triaged or investigated.

While prioritization and processes have improved, the SOC is operating under a model that is overrun with alerts that lack context, meaning that not all of them can be investigated because there is simply too much for humans to parse through. Thus, teams accept the risk of leaving many alerts uninvestigated, rather than finding a solution to eliminate that risk altogether.

Darktrace / NETWORK is designed for your Security Operations Center to eliminate alert triage with AI-led investigations, and to rapidly detect and respond to known and unknown threats. This includes the ability to scale into other environments in your infrastructure, including cloud, OT, and more.

Beyond global threat intelligence: Self-Learning AI enables novel threat detection & response

Darktrace does not rely on known malware signatures, external threat intelligence, or historical attack data, nor does it rely on threat-trained machine learning to identify threats.

Darktrace’s unique Self-Learning AI deeply understands your business environment by analyzing trillions of real-time events to learn the normal ‘pattern of life’ unique to your business. By connecting isolated incidents across your business, including third-party alerts and telemetry, Darktrace / NETWORK uses anomaly chains to identify deviations from normal activity.

The benefit of this approach is that, because it does not predefine what it is looking for, it can spot new threats, allowing end users to identify both known threats and subtle, never-before-seen indicators of malicious activity that traditional solutions may miss when they look only at historical attack data.
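
To illustrate the anomaly-chain idea in miniature, the hypothetical sketch below groups individually weak deviations on the same device into one scored incident, with no signatures involved. Event fields, scores, and the time window are invented for the example and are not Darktrace's implementation.

```python
# Illustrative sketch only: chaining individually weak anomalies on the same device
# into one scored incident, rather than matching any known signature.
from dataclasses import dataclass

@dataclass
class Anomaly:
    device: str
    minute: int       # minutes since start of day
    kind: str
    deviation: float  # 0.0 = normal for this device, 1.0 = never seen before

def chain_anomalies(events: list, window: int = 60, threshold: float = 2.0):
    """Group anomalies per device inside a time window; raise an incident when the
    chain's combined deviation crosses the threshold, even if no single event would."""
    events = sorted(events, key=lambda e: (e.device, e.minute))
    chains, current = [], []
    for e in events:
        if current and (e.device != current[-1].device or e.minute - current[0].minute > window):
            chains.append(current)
            current = []
        current.append(e)
    if current:
        chains.append(current)
    return [c for c in chains if sum(e.deviation for e in c) >= threshold]

incidents = chain_anomalies([
    Anomaly("laptop-17", 600, "unusual_external_connection", 0.8),
    Anomaly("laptop-17", 615, "new_internal_scan_pattern", 0.7),
    Anomaly("laptop-17", 640, "rare_data_volume_outbound", 0.9),
])
print(len(incidents), [e.kind for e in incidents[0]])
```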

AI-led investigations empower your SOC to prioritize what matters

Anomaly detection is often criticized for yielding high false positives, as it flags deviations from expected patterns that may not necessarily indicate a real threat or issue. However, Darktrace applies an investigation engine to automate alert triage and address alert fatigue.

Darktrace’s Cyber AI Analyst revolutionizes security operations by conducting continuous, full investigations across Darktrace and third-party alerts, transforming the alert triage process. Instead of addressing only a fraction of the thousands of daily alerts, Cyber AI Analyst automatically investigates every relevant alert, freeing up your team to focus on high-priority incidents and close security gaps.

Powered by advanced machine-learning techniques, including unsupervised learning, models trained by expert analysts, and tailored security language models, Cyber AI Analyst emulates human investigation skills, testing hypotheses, analyzing data, and drawing conclusions. According to Darktrace internal research, Cyber AI Analyst typically provides a SOC with up to 50,000 additional hours of Level 2 analysis and written reporting annually, enriching security operations by producing high-level incident alerts with full details so that human analysts can focus on Level 3 tasks.

Containing threats with Autonomous Response

Simply quarantining a device is rarely the best course of action - organizations need to be able to maintain normal operations in the face of threats and choose the right course of action. Different organizations also require tailored response functions because they have different standards and protocols across a variety of unique devices. Ultimately, a ‘one size fits all’ approach to automated response actions puts organizations at risk of disrupting business operations.

Darktrace’s Autonomous Response tailors its actions to contain abnormal behavior across users and digital assets by understanding what is normal and stopping only what is not. Unlike blanket quarantines, it delivers a bespoke approach, blocking malicious activities that deviate from regular patterns while ensuring legitimate business operations remain uninterrupted.
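
A minimal sketch of the difference between a blanket quarantine and a tailored block, assuming a learned per-device profile of normal peers and ports; the profile format and function names are hypothetical and purely illustrative.

```python
# Illustrative sketch only: a tailored response that blocks just the connections
# deviating from a device's learned pattern of life, instead of quarantining the
# whole device. Profiles, ports, and function names are hypothetical.
LEARNED_PROFILE = {
    "file-server-03": {"allowed_peers": {"10.0.1.0/24"}, "allowed_ports": {445, 443}},
}

def respond(device: str, peer_subnet: str, port: int) -> str:
    profile = LEARNED_PROFILE.get(device)
    if profile is None:
        return "alert_only"                        # no baseline yet, do not disrupt
    normal = peer_subnet in profile["allowed_peers"] and port in profile["allowed_ports"]
    if normal:
        return "allow"                             # legitimate operations continue
    return f"block_connection:{peer_subnet}:{port}"  # surgical block, not a quarantine

print(respond("file-server-03", "10.0.1.0/24", 445))       # allow
print(respond("file-server-03", "203.0.113.0/24", 3389))   # block_connection:203.0.113.0/24:3389
```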

Darktrace offers fully customizable response actions, seamlessly integrating with your workflows through hundreds of native integrations and an open API. It eliminates the need for costly development, natively disarming threats in seconds while extending capabilities with third-party tools like firewalls, EDR, SOAR, and ITSM solutions.

Unlocking a proactive state of security

Securing the network isn’t just about responding to incidents — it’s about being proactive, adaptive, and prepared for the unexpected. The NIST Cybersecurity Framework (CSF 2.0) emphasizes this by highlighting the need for focused risk management, continuous incident response (IR) refinement, and seamless integration of these processes with your detection and response capabilities.

Despite advancements in security technology, achieving a proactive posture remains a challenge because SOC teams face inefficiencies from reliance on pattern-matching tools, which generate excessive false positives and leave many alerts unaddressed while novel threats go undetected. If SOC teams spend all their time investigating alerts, there is no time left to get ahead of attacks.

Achieving proactive network resilience — a state where organizations can confidently address challenges at every stage of their security posture — requires strategically aligned solutions that work seamlessly together across the attack lifecycle.

References

1. Market Guide for Extended Detection and Response, Gartner, 17 August 2023, ID G00761828
