September 23, 2024

How AI can help CISOs navigate the global cyber talent shortage

The global cybersecurity skills gap is widening, leaving many organizations vulnerable to increasing cyber threats. This blog explores how CISOs can implement AI strategies to make the most of their existing workforce through automation, consolidation and education.

The global picture

4 million cybersecurity professionals are needed worldwide to protect and defend the digital world – twice the number currently in the workforce [1].

Innovative technologies are transforming business operations, enabling access to new markets, personalized customer experiences, and increased efficiency. However, this digital transformation also challenges Security Operations Centers (SOCs) with managing and protecting a complex digital environment without additional resources or advanced skills.

At the same time, the cybersecurity industry is suffering a severe global skills shortage, leaving many SOCs understaffed and under-skilled. With a 72% increase in data breaches from 2021 to 2023 [2], SOCs are dealing with overwhelming alert volumes from diverse security tools. Nearly 60% of cybersecurity professionals report burnout [3], leading to high turnover rates. Consequently, only a fraction of alerts are thoroughly investigated, increasing the risk of undetected breaches. More than half of organizations that experienced breaches in 2024 admitted to having short-staffed SOCs [4].

How AI can help organizations do more with less

Cyber defense needs to evolve at the same pace as cyber-attacks, but the global skills shortage is making that difficult. As threat actors increasingly abuse AI for malicious purposes, using defensive AI to enable innovation and optimization at scale is reshaping how organizations approach cybersecurity.

The value of AI isn’t in replacing humans, but in augmenting their efforts and enabling them to scale their defense capabilities and their value to the organization. With AI, cybersecurity professionals can operate at digital speed, analyzing vast data sets, identifying more vulnerabilities with higher accuracy, responding and triaging faster, reducing risks, and implementing proactive measures—all without additional staff.

Research indicates that organizations leveraging AI and automation extensively in security functions—such as prevention, detection, investigation, or response—reduced their mean time to identify (MTTI) and mean time to contain (MTTC) data breaches by 33% and 43%, respectively. These organizations also contained breaches nearly 100 days faster on average compared to those not using AI and automation [5].

First, you've got to apply the right AI to the right security challenge. We dig into how different AI technologies can bridge specific skills gaps in the CISO’s Guide to Navigating the Cybersecurity Skills Shortage.

Cases in point: AI as a human force multiplier

Let’s take a look at just some of the cybersecurity challenges to which AI can be applied to scale defense efforts and relieve the burden on the SOC. We go further into real-life examples in our white paper.

Automated threat detection and response

AI enables 24/7 autonomous response, eliminating the need for after-hours SOC shifts and providing security leaders with peace of mind. AI can scale response efforts by analyzing vast amounts of data in real time, identifying anomalies, and initiating precise autonomous actions to contain incidents, which buys teams time for investigation and remediation.  
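
As a rough illustration of this detect-then-contain loop, the sketch below keeps a simple per-device baseline and triggers a containment action only when new activity deviates sharply from it. The event fields, thresholds, and the contain_device() helper are hypothetical; real platforms model far richer behavior, but the pattern of scoring against learned normal and acting autonomously is the same.

```python
# Minimal illustration of anomaly-triggered containment (hypothetical helpers,
# not a vendor implementation): keep a rolling baseline per device and act
# only when behaviour deviates sharply from that baseline.
from collections import defaultdict, deque
from statistics import mean, pstdev

WINDOW = 200          # events kept per device baseline
THRESHOLD = 4.0       # z-score above which we contain

baselines = defaultdict(lambda: deque(maxlen=WINDOW))

def contain_device(device_id: str, reason: str) -> None:
    # Placeholder for a targeted response, e.g. isolating the device via EDR/NAC.
    print(f"[RESPONSE] containing {device_id}: {reason}")

def handle_event(device_id: str, bytes_out: int) -> None:
    history = baselines[device_id]
    if len(history) >= 30:                      # only score once a baseline exists
        mu, sigma = mean(history), pstdev(history) or 1.0
        z = (bytes_out - mu) / sigma
        if z > THRESHOLD:
            contain_device(device_id, f"outbound volume z-score {z:.1f}")
    history.append(bytes_out)

# Example: a device that suddenly sends far more data outbound than usual.
for i in range(100):
    handle_event("laptop-42", 1_000 + (i % 7))
handle_event("laptop-42", 250_000)              # triggers containment
```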

Triage and investigation

AI enhances the triage process by automatically categorizing and prioritizing security alerts, allowing cybersecurity professionals to focus on the most critical threats. It creates a comprehensive picture of an attack, helps identify its root cause, and generates detailed reports with key findings and recommended actions.  

Automation also significantly reduces overwhelming alert volumes and high false positive rates, enabling analysts to concentrate on high-priority threats and engage in more proactive and strategic initiatives.
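
One simplified way to picture automated triage is as a scoring and ranking problem: each alert gets a priority derived from its anomaly score and the criticality of the affected asset, and anything below a noise floor is suppressed. The schema, weights, and threshold below are invented for illustration, not any vendor's formula.

```python
# Sketch of score-based alert triage: rank alerts so analysts see the riskiest
# ones first and suppress those below a noise threshold.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    category: str
    anomaly_score: float      # 0.0 - 1.0 from the detection layer
    asset_criticality: float  # 0.0 - 1.0, e.g. domain controllers near 1.0

def priority(alert: Alert) -> float:
    # Weighting is a tunable assumption, not a prescribed formula.
    return 0.7 * alert.anomaly_score + 0.3 * alert.asset_criticality

alerts = [
    Alert("edr", "new admin tool", 0.55, 0.9),
    Alert("email", "suspicious link", 0.92, 0.4),
    Alert("proxy", "rare domain", 0.20, 0.2),
]

NOISE_FLOOR = 0.35
queue = sorted((a for a in alerts if priority(a) >= NOISE_FLOOR),
               key=priority, reverse=True)
for a in queue:
    print(f"{priority(a):.2f}  {a.source:<6} {a.category}")
```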

Eliminating silos and improving visibility across the enterprise

Security and IT teams are overwhelmed by the technological complexity of operating multiple tools, resulting in manual work and excessive alerts. AI can correlate threats across the entire organization, enhancing visibility and eliminating silos, thereby saving resources and reducing complexity.

With 88% of organizations favoring a platform approach over standalone solutions, many are consolidating their tech stacks in this direction. This consolidation provides native visibility across clouds, devices, communications, locations, applications, people, and third-party security tools and intelligence.

Upskilling your existing talent in AI

As revealed in the State of AI Cybersecurity Survey 2024, only 26% of cybersecurity professionals say they have a full understanding of the different types of AI in use within security products [6].

Training staff to understand AI upskills your existing team, enhancing their expertise and optimizing business outcomes. Human expertise is crucial for the effective and ethical integration of AI, and true AI-human collaboration requires specific training on using, understanding, and managing AI systems. To make this easier, the Darktrace ActiveAI Security Platform is designed to enable collaboration and reduce the learning curve – lowering the barrier to entry for junior or less skilled analysts.

However, to bridge the immediate expertise gap in managing AI tools, organizations can consider expert managed services that take the day-to-day management out of the SOC’s hands, allowing them to focus on training and proactive initiatives.

Conclusion

Experts predict the cybersecurity skills gap will continue to grow, increasing operational and financial risks for organizations. AI for cybersecurity is crucial for CISOs looking to augment their teams and scale defense capabilities with speed and predictive insight, while human expertise remains vital for the intuition and problem-solving needed for responsible and efficient AI integration.

If you’re thinking about implementing AI to solve your own cyber skills gap, consider the following:

  • Select an AI cybersecurity solution tailored to your specific business needs
  • Review and streamline existing workflows and tools – consider a platform-based approach to eliminate inefficiencies
  • Make use of managed services to outsource AI expertise
  • Upskill and reskill existing talent through training and education
  • Foster a knowledge-sharing culture with access to knowledge bases and collaboration tools

Interested in how AI could augment your SOC to increase efficiency and save resources? Read our longer CISO’s Guide to Navigating the Cybersecurity Skills Shortage.

And to better understand cybersecurity practitioners' attitudes towards AI, check out Darktrace’s State of AI Cybersecurity 2024 report.

References

  1. https://www.isc2.org/research  
  2. https://www.forbes.com/advisor/education/it-and-tech/cybersecurity-statistics/  
  3. https://www.informationweek.com/cyber-resilience/the-psychology-of-cybersecurity-burnout  
  4. https://www.ibm.com/downloads/cas/1KZ3XE9D  
  5. https://www.ibm.com/downloads/cas/1KZ3XE9D  
  6. https://darktrace.com/resources/state-of-ai-cyber-security-2024
Author
The Darktrace Community

October 10, 2024

Email

How Darktrace won an email security trial by learning the business, not the breach


Recently, Darktrace ran a customer trial of our email security product for a leading European infrastructure operator looking to upgrade its email protection.

During this prospective customer trial, Darktrace encountered several security incidents that penetrated existing security layers. Two of these incidents were Business Email Compromise (BEC) attacks, which we’re going to take a closer look at here.  

Darktrace was deployed for the trial at the same time as two other email security vendors, which were also being evaluated by the prospective customer. Darktrace’s superior threat detection during this trial laid the groundwork for the customer to choose our product.

Let’s dig into some of the elements of this Darktrace tech win and how they came to light during this trial.

Why truly intelligent AI starts learning from scratch

Darktrace’s detection capabilities are powered by true unsupervised machine learning, which detects anomalous activity from its ever-evolving understanding of normal for every unique environment. Consequently, it learns every business from the beginning, training on an organization’s data to understand normal for its users, devices, assets and the millions of connections between them.  

This learning period takes around a week, during which the AI hones its understanding of the business to a precise degree. At this stage, the system may produce some noise or lack precision – an expected consequence of genuinely unsupervised machine learning. Unlike solutions that promise faster results by relying on preset assumptions, our AI takes the necessary time to learn from scratch, ensuring a deeper understanding and increasingly accurate detection over time.
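
To make the idea of a learning period concrete, here is a deliberately simple sketch: for each user, the model only records behavior for the first week, then flags anything it has never seen for that user. The features (recipient domains and sending hours) and the seven-day cutoff are illustrative assumptions, not the product's actual model.

```python
# Toy illustration of "learning from scratch": during a learning period the
# model only observes; afterwards it flags behaviour it has never seen for
# that user. A stand-in for unsupervised learning, not the real thing.
from datetime import datetime, timedelta

LEARNING_PERIOD = timedelta(days=7)

class UserBaseline:
    def __init__(self, first_seen: datetime):
        self.first_seen = first_seen
        self.known_recipient_domains: set[str] = set()
        self.active_hours: set[int] = set()

    def observe(self, when: datetime, recipient_domain: str) -> list[str]:
        anomalies = []
        if when - self.first_seen > LEARNING_PERIOD:
            if recipient_domain not in self.known_recipient_domains:
                anomalies.append(f"new recipient domain: {recipient_domain}")
            if when.hour not in self.active_hours:
                anomalies.append(f"unusual sending hour: {when.hour}:00")
        self.known_recipient_domains.add(recipient_domain)
        self.active_hours.add(when.hour)
        return anomalies

baselines: dict[str, UserBaseline] = {}

def score_email(sender: str, when: datetime, recipient_domain: str) -> list[str]:
    baseline = baselines.setdefault(sender, UserBaseline(when))
    return baseline.observe(when, recipient_domain)

# A week of normal daytime mail to a known supplier, then an off-hours email
# to a never-before-seen domain.
start = datetime(2024, 9, 1, 9)
for day in range(7):
    score_email("alice@corp.example", start + timedelta(days=day), "supplier.example")
print(score_email("alice@corp.example", start + timedelta(days=9, hours=14),
                  "unknown-host.example"))
# -> ['new recipient domain: unknown-host.example', 'unusual sending hour: 23:00']
```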

Real threats detected by Darktrace

Attack 1: Supply chain attack

BEC and supply chain attacks are notoriously difficult to detect, as they take advantage of established, trusted senders.  

This attack came from a legitimate server via a known supplier with which the prospective customer had active and ongoing communication. Using the compromised account, the attacker didn’t just send out randomized spam; they crafted four sophisticated social engineering emails designed to get users to click on a link, tapping directly into existing conversations. Darktrace / EMAIL was configured in passive mode during this trial; otherwise, it would have held the emails before they arrived in the inbox. Luckily, in this instance, one user reported the email to the CISO before any other users clicked the link. Upon investigation, the link was found to deliver ransomware set for timed detonation.

Darktrace was the only vendor that caught any of these four emails. Our unique behavioral AI approach enables Darktrace / EMAIL to protect customers from even the most sophisticated attacks that abuse prior trust and relationships.

How did Darktrace catch this attack that other vendors missed?

With traditional email security, security teams have been obliged to allowlist entire organizations to eliminate false positives – on the premise that it is easier to make a broad decision based on a known domain and accept the potential risk of a supply chain attack.

By contrast, Darktrace adopts a zero trust mentality, analyzing every email to understand whether communication that has previously been safe remains safe. That’s why Darktrace is uniquely positioned to detect BEC, based on its deep learning of internal and external users. Because it creates individual profiles for every account, group and business composed of multiple signals, it can detect deviations in their communication patterns based on the context and content of each message. We think of this as the ‘self-learning’ vs ‘learning the breach’ differentiator.
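
A toy version of that idea is sketched below: an email from a known correspondent is scored against that correspondent's own profile, and several individually weak deviations (a first-ever link, a sudden shift to urgent language, an odd send time) combine into a high anomaly score. The feature names and weights are invented for the sketch and are not Darktrace's indicators.

```python
# Illustrative only: how several weak signals from a *known* correspondent can
# combine into a strong anomaly score for a single email.
from dataclasses import dataclass

@dataclass
class CorrespondentProfile:
    shares_links: bool          # has this sender ever sent links before?
    typical_tone_urgent: bool   # does their mail usually use urgent language?
    usual_send_hours: range     # hours of day this sender normally mails

def email_anomaly(profile: CorrespondentProfile, *, has_link: bool,
                  urgent_language: bool, send_hour: int) -> float:
    score = 0.0
    if has_link and not profile.shares_links:
        score += 0.45                      # behavioural shift: links are new
    if urgent_language and not profile.typical_tone_urgent:
        score += 0.35                      # content shift: sudden inducement
    if send_hour not in profile.usual_send_hours:
        score += 0.20                      # context shift: odd time of day
    return min(score, 1.0)

supplier = CorrespondentProfile(shares_links=False,
                                typical_tone_urgent=False,
                                usual_send_hours=range(8, 18))
print(email_anomaly(supplier, has_link=True, urgent_language=True, send_hour=22))
# -> 1.0: a trusted sender behaving completely out of character
```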

Fig 1: Darktrace’s analysis of one of the four malicious emails sent by the trusted supplier, assigning an anomaly score of 100 despite the sender being a known correspondent with a known domain relationship and moderate mailing history.

If set in autonomous mode, where it can apply actions, Darktrace / EMAIL would have quarantined all four emails. Using machine learning indicators such as ‘Inducement Shift’ and ‘General Behavioral Anomaly’, it deemed the four emails ‘Out of Character’. It also identified the link as highly likely to be phishing, based purely on its context. These indicators are critical because the link itself belonged to a widely used legitimate domain, leveraging that domain’s established internet reputation to appear safe.

Around an hour later, the supplier regained control of the account and sent a legitimate email alerting a wide distribution list to the phishing emails that had been sent. Darktrace was able to distinguish the earlier malicious emails from the new legitimate ones and allowed the latter through. Compared to other vendors, whose static understanding of what is malicious needs to be updated (in cases like this, once a supplier is de-compromised), Darktrace’s deep understanding of external entities enables greater nuance and precision in determining good from bad.

Fig 2: Darktrace let through four emails (subject line: Virus E-Mail) from the supplier once they had regained control of the compromised account, with a limited anomaly score despite having held the previous malicious emails. If any actions had been taken a red icon would show on the right-hand side – in this instance Darktrace did not take action and let the emails through.

Attack 2: Microsoft 365 account takeover

As part of building behavioral profiles of every email user, Darktrace analyzes their wider account activity. Account activity, such as unusual login patterns and administrative activity, is a key variable to detect account compromise before malicious activity occurs, but it also feeds into Darktrace’s understanding of which emails should belong in every user’s inbox.  

When the customer experienced an account compromise on day two of the trial, Darktrace began an investigation and was able to provide the full breakdown and scope of the incident.

The account was compromised via an email, which Darktrace would have blocked if it had been deployed autonomously at the time. Once the account had been compromised, detection details included:

  • Unusual Login and Account Update
  • Multiple Unusual External Sources for SaaS Credential
  • Unusual Activity Block
  • Login From Rare Endpoint While User is Active
Fig 3: Darktrace flagged the following indicators of compromise that deviated from normal behavior for the user in question, signaling an account takeover

With Darktrace / EMAIL, every user is analyzed for behavioral signals including authentication and configuration activity. Here, the unusual login, credential input and rare endpoint were all clear signals of a compromised account, contextualized against what is normal for that employee. Because Darktrace isn’t looking at email security merely from the perspective of the inbox, it constantly re-evaluates the identity of each individual, group and organization (as defined by their behavioral signals) to determine precisely what belongs in the inbox and what doesn’t.
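
A minimal sketch of that kind of contextual cross-check is shown below, with invented field names: each login is compared against what is known for that specific user (familiar devices, usual countries, whether they are already active elsewhere), and it is the combination of deviations that signals a takeover.

```python
# Simplified sketch of turning individual login events into an account-takeover
# signal: the question is not "is this credential valid?" but "is this login
# normal for this user?". Field names and baselines are illustrative.
from dataclasses import dataclass

@dataclass
class Login:
    user: str
    source_ip: str
    country: str
    device_fingerprint: str
    user_currently_active_elsewhere: bool

known_devices = {"alice": {"fp-alice-laptop"}}
usual_countries = {"alice": {"DE"}}

def takeover_indicators(login: Login) -> list[str]:
    hits = []
    if login.device_fingerprint not in known_devices.get(login.user, set()):
        hits.append("login from rare endpoint")
    if login.country not in usual_countries.get(login.user, set()):
        hits.append("unusual external source")
    if login.user_currently_active_elsewhere:
        hits.append("user already active on another endpoint")
    return hits

suspect = Login("alice", "203.0.113.7", "US", "fp-unknown", True)
print(takeover_indicators(suspect))
# -> all three indicators fire; together they merit blocking or step-up auth
```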

In this instance, Darktrace / EMAIL would have blocked the incident were it not deployed in passive mode. In the initial intrusion it would have blocked the compromising email. And once the account was compromised, it would have taken direct blocking actions on the account based on the anomalous activity it detected, providing an extra layer of defense beyond the inbox.  

Account takeover protection is always part of Darktrace / EMAIL, which can be extended to fully cover Microsoft 365 SaaS with Darktrace / IDENTITY. By bringing SaaS activity into scope, security teams also benefit from an extended set of use cases including compliance and resource management.

Why this customer committed to Darktrace / EMAIL

“Darktrace was the only AI vendor that showed learning.” – CISO, Trial Customer

Throughout this trial, Darktrace evolved its understanding of the customer’s business and its email users. It identified attacks that other vendors did not, while allowing safe emails through, and the CISO explicitly cited Darktrace as the only technology that demonstrated autonomous learning. Beyond catching threats that other vendors missed, the CISO also noted maturity in areas such as how Darktrace handled non-productive mail and business-as-usual emails, without any user input. Because of the nature of unsupervised ML, Darktrace’s learning of right and wrong will never be static or complete – it will continue to revise its understanding and adapt to the changing business and communications landscape.

This case study highlights a key tenet of Darktrace’s philosophy – that a rules and tuning-based approach will always be one step behind. Delivering benign emails while holding back malicious emails from the same domain demonstrates that safety is not defined in a straight line, or by historical precedent. Only by analyzing every email in-depth for its content and context can you guarantee that it belongs.  

While other solutions are making efforts to improve a static approach with AI, Darktrace’s AI remains truly unsupervised so it is dynamic enough to catch the most agile and evolving threats. This is what allows us to protect our customers by plugging a vital gap in their security stack that ensures they can meet the challenges of tomorrow's email attacks.

Interested in learning more about Darktrace / EMAIL? Check out our product hub.

About the author
Carlos Gray
Product Manager

October 4, 2024

Inside the SOC

From Call to Compromise: Darktrace’s Response to a Vishing-Induced Network Attack


What is vishing?

Vishing, or voice phishing, is a type of cyber-attack that utilizes telephone devices to deceive targets. Threat actors typically use social engineering tactics to convince targets that they can be trusted, for example, by masquerading as a family member, their bank, or a trusted government entity. One method frequently used by vishing actors is to intimidate their targets, convincing them that they may face monetary fines or jail time if they do not provide sensitive information.

What makes vishing attacks dangerous to organizations?

Vishing attacks utilize social engineering tactics that exploit human psychology and emotion. Threat actors often impersonate trusted entities and can make it appear as though a call is coming from a reputable or known source. These actors frequently target organizations and their employees, creating a sense of urgency, intimidation, or fear to pressure them into divulging sensitive corporate data, such as privileged credentials. Those credentials can then be used to gain unauthorized access to an organization’s network, often bypassing traditional security measures and human security teams.

Darktrace’s coverage of the vishing attack

On August 12, 2024, Darktrace / NETWORK identified malicious activity on the network of a customer in the hospitality sector. The customer later confirmed that a threat actor had gained unauthorized access through a vishing attack. The attacker successfully spoofed the IT support phone number and called a remote employee, eventually leading to the compromise.

Figure 1: Timeline of events in the kill chain of this attack.

Establishing a Foothold

During the call, the remote employee was asked to authenticate via multi-factor authentication (MFA). Believing the caller to be a member of their internal IT support team, reassured by the seemingly legitimate caller ID, the remote user followed the instructions and approved the MFA prompt, granting the attacker access to the customer’s network.

This authentication allowed the threat actor to log in to the customer’s environment by proxying through their Virtual Private Network (VPN) and gain a foothold in the network. Because remote users are assigned the same static IP address each time they connect to the corporate environment, the malicious actor appeared on the network with the correct username and IP address. While this stealthy activity might have evaded traditional security tools and human security teams, Darktrace’s anomaly-based threat detection analyzed the NTLM requests coming from the static IP address and identified a login from a hostname it had never associated with that user, which it determined to be anomalous.

Observed Activity

  • On 2024-08-12 the static IP was observed using a credential belonging to the remote user to initiate an SMB session with an internal domain controller, where the authentication method NTLM was used
  • A different hostname from the usual hostname associated with this remote user was identified in the NTLM authentication request sent from a device with the static IP address to the domain controller
  • This device does not appear to have been seen on the network prior to this event.

Darktrace, therefore, recognized that this login was likely made by a malicious actor.
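
The hostname cross-check described above can be pictured with a small sketch, assuming a per-user record of previously seen hostnames; everything here (names, IPs, and the check itself) is illustrative rather than the product's detection logic.

```python
# Hypothetical sketch: remote users keep the same VPN IP, so a *new* workstation
# name in an NTLM authentication for a known user/IP pair is itself a useful
# anomaly signal.
from collections import defaultdict

# hostnames previously observed per (username, static VPN IP) pair
seen_hostnames: dict[tuple[str, str], set[str]] = defaultdict(set)

def check_ntlm_auth(username: str, source_ip: str, hostname: str) -> bool:
    """Return True if this authentication looks anomalous for the user/IP pair."""
    key = (username, source_ip)
    anomalous = bool(seen_hostnames[key]) and hostname not in seen_hostnames[key]
    seen_hostnames[key].add(hostname)
    return anomalous

# Months of normal activity from the employee's usual laptop...
check_ntlm_auth("remote.user", "10.20.30.40", "LAPTOP-EMP01")
# ...then the attacker proxies through the VPN with their own machine.
print(check_ntlm_auth("remote.user", "10.20.30.40", "DESKTOP-ATTACKER"))  # True
```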

Internal Reconnaissance

Darktrace subsequently observed the malicious actor performing a series of reconnaissance activities, including LDAP reconnaissance, device hostname reconnaissance, and port scanning:

  • The affected device made a 53-second-long LDAP connection to another internal domain controller. During this connection, the device obtained data about internal Active Directory (AD) accounts, including the AD account of the remote user
  • The device made HTTP GET requests with the Target URI ‘/nice ports,/Trinity.txt.bak’, indicative of Nmap usage (see the detection sketch after the figures below)
  • The device started making reverse DNS lookups for internal IP addresses.
Figure 2: Model alert showing the IP address from which the malicious actor connected and performed network scanning activities via port 9401.
Figure 3: Model Alert Event Log showing the affected device connecting to multiple internal locations via port 9401.
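
For illustration, the sketch below reduces two of these reconnaissance signals to toy detectors: matching the well-known Nmap probe URI, and counting how many distinct internal IPs a single device resolves via reverse DNS within one window. The thresholds are invented, and a pure signature match is of course a simplification of anomaly-based detection.

```python
# Toy detectors for two reconnaissance signals: an Nmap probe URI in HTTP
# traffic, and a reverse DNS (PTR) sweep across many internal hosts.
from collections import defaultdict

NMAP_PROBE_URI = "/nice ports,/Trinity.txt.bak"
PTR_SWEEP_THRESHOLD = 50          # distinct internal IPs resolved by one device

ptr_targets: dict[str, set[str]] = defaultdict(set)

def inspect_http(src: str, uri: str) -> None:
    if uri == NMAP_PROBE_URI:
        print(f"[ALERT] {src}: HTTP request matching the Nmap probe signature")

def inspect_reverse_dns(src: str, queried_ip: str) -> None:
    ptr_targets[src].add(queried_ip)
    if len(ptr_targets[src]) == PTR_SWEEP_THRESHOLD:
        print(f"[ALERT] {src}: reverse DNS sweep across {PTR_SWEEP_THRESHOLD} hosts")

inspect_http("10.20.30.40", NMAP_PROBE_URI)
for last_octet in range(1, 60):
    inspect_reverse_dns("10.20.30.40", f"10.0.0.{last_octet}")
```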

Lateral Movement

The threat actor was also seen making numerous failed NTLM authentication requests using a generic default Windows credential, indicating an attempt to brute force and laterally move through the network. During this activity, Darktrace identified that the device was using a different hostname than the one typically used by the remote employee.
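
A simple way to picture how this pattern becomes an alert is a sliding-window failure counter per source, as in the sketch below; the window length and threshold are illustrative assumptions.

```python
# Toy illustration of flagging repeated NTLM failures from one source within a
# short window as likely brute forcing rather than a typo.
from collections import defaultdict, deque

WINDOW_SECONDS = 300
FAILURE_THRESHOLD = 20

failures: dict[str, deque] = defaultdict(deque)

def record_ntlm_failure(source_ip: str, timestamp: float) -> bool:
    recent = failures[source_ip]
    recent.append(timestamp)
    while recent and timestamp - recent[0] > WINDOW_SECONDS:
        recent.popleft()
    return len(recent) >= FAILURE_THRESHOLD   # True -> raise a brute-force alert

# 24 failed authentications in two minutes from one source trips the threshold.
print(any(record_ntlm_failure("10.20.30.40", t) for t in range(0, 120, 5)))  # True
```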

Cyber AI Analyst

In addition to the detections by Darktrace / NETWORK, Darktrace’s Cyber AI Analyst launched an autonomous investigation into the ongoing activity. It correlated the seemingly separate events into a single broader incident, continuously adding newly linked suspicious activities as they occurred.

Figure 4: Cyber AI Analyst investigation showing the activity timeline, and the activities associated with the incident.

Upon completing the investigation, Cyber AI Analyst provided the customer with a comprehensive summary of the various attack phases detected by Darktrace and the associated incidents. This clear presentation enabled the customer to gain full visibility into the compromise and understand the activities that constituted the attack.

Figure 5: Cyber AI Analyst displaying the observed attack phases and associated model alerts.

Darktrace Autonomous Response

Despite the sophisticated techniques and social engineering tactics used by the attacker to bypass the customer’s human security team and existing security stack, Darktrace’s AI-driven approach prevented the malicious actor from continuing their activities and causing more harm.

Darktrace’s Autonomous Response technology enforces a pattern of life based on what it has learned to be ‘normal’ for the environment. If activity deviates from this expected behavior, a model alert is triggered. When Autonomous Response is configured to act autonomously, as was the case for this customer, it swiftly applies response actions to devices and users without requiring a system administrator or security analyst to intervene.
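
As a rough sketch of how an alert's context might map to proportionate actions, the snippet below chooses responses similar to those listed further down (enforce pattern of life, block specific ports, block outgoing traffic). The mapping logic and fields are invented for illustration, not the product's decision engine.

```python
# Invented illustration: map an alert's context to a proportionate set of
# response actions, escalating to a full outbound block only for severe cases.
def choose_actions(alert: dict) -> list[str]:
    actions = ["Enforce pattern of life"]        # always fall back to learned normal
    ports = alert.get("ports", [])
    if "SMB" in alert.get("category", "") or 445 in ports:
        actions.append("Block all connections to port 445 (SMB)")
    for port in ports:
        if port != 445:
            actions.append(f"Block all connections to port {port}")
    if alert.get("severity", 0) >= 0.8:
        actions.append("Block all outgoing traffic")
    return actions

alert = {"category": "Suspicious SMB Scanning Activity",
         "ports": [445, 9401],
         "severity": 0.9}
for action in choose_actions(alert):
    print(action)
```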

In this instance, Darktrace applied a number of mitigative actions on the remote user, containing most of the activity as soon as it was detected:

  • Block all outgoing traffic
  • Enforce pattern of life
  • Block all connections to port 445 (SMB)
  • Block all connections to port 9401
Figure 6: Darktrace’s Autonomous Response actions showing the actions taken in response to the observed activity, including blocking all outgoing traffic or enforcing the pattern of life.

Conclusion

This vishing attack underscores the significant risks remote employees face and the critical need for companies to address vishing threats to prevent network compromises. The remote employee in this instance was deceived by a malicious actor who spoofed the phone number of internal IT support and convinced the employee to approve an MFA request. This sophisticated social engineering tactic allowed the attacker to proxy through the customer’s VPN, making the malicious activity appear legitimate due to the use of static IP addresses.

Despite the stealthy attempts to perform malicious activities on the network, Darktrace’s focus on anomaly detection enabled it to swiftly identify and analyze the suspicious behavior. This led to the prompt determination of the activity as malicious and the subsequent blocking of the malicious actor to prevent further escalation.

While the exact motivation of the threat actor in this case remains unclear, the 2023 cyber-attack on MGM Resorts serves as a stark illustration of the potential consequences of such threats. MGM Resorts experienced significant disruptions and data breaches following a similar vishing attack, resulting in financial and reputational damage [1]. If the attack on the customer had not been detected, they too could have faced sensitive data loss and major business disruptions. This incident underscores the critical importance of robust security measures and vigilant monitoring to protect against sophisticated cyber threats.

Credit to Rajendra Rushanth (Cyber Security Analyst) and Ryan Traill (Threat Content Lead)

Appendices

Darktrace Model Detections

  • Device / Unusual LDAP Bind and Search Activity
  • Device / Attack and Recon Tools
  • Device / Network Range Scan
  • Device / Suspicious SMB Scanning Activity
  • Device / RDP Scan
  • Device / UDP Enumeration
  • Device / Large Number of Model Breaches
  • Device / Network Scan
  • Device / Multiple Lateral Movement Model Breaches (Enhanced Monitoring)
  • Device / Reverse DNS Sweep
  • Device / SMB Session Brute Force (Non-Admin)

List of Indicators of Compromise (IoCs)

IoC – Type – Description

/nice ports,/Trinity.txt.bak – URI – Unusual Nmap Usage

MITRE ATT&CK Mapping

Tactic – ID – Technique

INITIAL ACCESS – T1200 – Hardware Additions

DISCOVERY – T1046 – Network Service Scanning

DISCOVERY – T1482 – Domain Trust Discovery

RECONNAISSANCE – T1590.002 – DNS

RECONNAISSANCE – T1590.005 – IP Addresses

RECONNAISSANCE – T1592.004 – Client Configurations

RECONNAISSANCE – T1595.001 – Scanning IP Blocks

RECONNAISSANCE – T1595.002 – Vulnerability Scanning

References

[1] https://www.bleepingcomputer.com/news/security/securing-helpdesks-from-hackers-what-we-can-learn-from-the-mgm-breach/

About the author
Rajendra Rushanth
Cyber Analyst