March 7, 2024

Defending Against the New Normal in Cybercrime: AI

This blog outlines research & data points on the evolving threat landscape, the impact of malicious AI, and why proactive cyber readiness is essential.
Inside the SOC
Darktrace cyber analysts are world-class experts in threat intelligence, threat hunting and incident response, and provide 24/7 SOC support to thousands of Darktrace customers around the globe. Inside the SOC is exclusively authored by these experts, providing analysis of cyber incidents and threat trends, based on real-world experience in the field.
Written by
Max Heinemeyer
Global Field CISO

AI in Cyber Security

Over the last 18 months, discussions about artificial intelligence (AI) – specifically generative AI – have ranged from excitement and optimism about its transformative potential to fear and uncertainty about the new risks it introduces.

New research [1] commissioned by Darktrace shows that 89 percent of IT security teams polled globally believe AI-augmented cyber threats will have a significant impact on their organization within the next two years, yet 60 percent believe they are currently unprepared to defend against these attacks. Their concerns include an increased volume and sophistication of malware targeting known vulnerabilities, and increased exposure of sensitive or proprietary information through the use of generative AI tools.

At Darktrace, we monitor trends across our global customer base to understand how the challenges facing security teams are evolving alongside industry advancements in AI. We’ve observed that AI, automation, and cybercrime-as-a-service have increased the speed, sophistication and efficacy of cyber security attacks.  

How AI Impacts Phishing Attempts

Darktrace has observed immediate impacts on phishing, which remains one of the most common forms of attack. In April 2023, Darktrace shared research that found a 135 percent increase in ‘novel social engineering attacks’ in the first two months of 2023, corresponding with the widespread adoption of ChatGPT [2]. These phishing attacks showed a strong linguistic deviation – semantically and syntactically – compared to other phishing emails, which suggested to us that generative AI is providing an avenue for threat actors to craft sophisticated and targeted attacks at speed and scale. A year later, we’ve seen this trend continue. Darktrace customers received approximately 2,867,000 phishing emails in December 2023 alone, a 14 percent increase on what was observed three months prior in September [3]. Between September and December 2023, phishing attacks that used novel social engineering techniques grew by 35 percent on average across the Darktrace customer base [4].
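
Reference [4] below defines novelty in linguistic terms: semantics, phrasing, text volume, punctuation, and sentence length. Purely as an illustrative sketch, and not a description of Darktrace’s actual models, the snippet below computes a few such surface features for an inbound email and scores its deviation from a baseline corpus; the feature set and scoring are assumptions for the example:

```python
# Illustrative sketch only: simple surface-level linguistic features of an
# email body, scored as deviation from a baseline corpus. This is NOT
# Darktrace's model; features, weighting, and thresholds are assumptions.
import re
import statistics

def surface_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {
        "text_volume": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "punctuation_rate": sum(c in ",;:!?-" for c in text) / max(len(text), 1),
    }

def deviation_score(email: str, baseline: list[str]) -> float:
    """Mean absolute z-score of the email's features against the baseline."""
    feats = surface_features(email)
    base = [surface_features(b) for b in baseline]
    total = 0.0
    for key, value in feats.items():
        history = [b[key] for b in base]
        mu = statistics.mean(history)
        sigma = statistics.pstdev(history) or 1.0  # avoid division by zero
        total += abs(value - mu) / sigma
    return total / len(feats)
```

An email scoring far outside the historical norm for a given correspondent would be a candidate for the ‘novel’ bucket, alongside the semantic signals this sketch omits.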

These observations reinforce trends that others in the industry have shared. For example, Microsoft and OpenAI recently published research on tactics, techniques, and procedures (TTPs), augmented by large language models (LLMs), that they have observed nation-state threat actors using. These include using LLMs to draft and generate social engineering attacks, inform reconnaissance, assist with vulnerability research, and more.

The Rise of Cybercrime-as-a-Service

The increasing cyber challenge facing defenders cannot be attributed to AI alone. The rise of cybercrime-as-a-service is also changing the dynamic. Darktrace’s 2023 End of Year Threat Report found that cybercrime-as-a-service offerings continued to dominate the threat landscape, with Malware-as-a-Service (MaaS) and Ransomware-as-a-Service (RaaS) tools making up the majority of malicious tools in use by attackers. The as-a-Service ecosystem can provide attackers with everything from pre-made malware to templates for phishing emails, payment processing systems and even helplines, enabling bad actors to mount attacks with limited technical knowledge.

These trends make it clear that attackers now have a widely accessible toolbox that lowers the barrier to entry for mounting an attack.

AI Enabling Accidental Insider Threats

However, the new risks facing businesses aren’t from external threat actors alone. Use of generative AI tools within the enterprise introduces a new category of accidental insider threats. Employees using generative AI tools now have easier access to more organizational data than ever before. Even the most well-intentioned employee could unintentionally leak or access restricted, sensitive data via these tools. In the second half of 2023, we observed that approximately half of Darktrace customers had employees accessing generative AI services. As this continues to increase, organizations need policies in place to guide the use cases for generative AI tools as well as strong data governance and the ability to enforce these policies to minimize risk.  
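
As a sketch of what policy enforcement could look like in practice, the snippet below screens web proxy logs for generative AI service usage against a simple approved/watchlist policy. The domain lists, log format, and names are invented for illustration, not a recommended configuration:

```python
# Hypothetical sketch: screen proxy logs for generative AI usage against a
# simple policy. Domain lists, log format, and names are invented examples.
import csv
import io

APPROVED = {"copilot.corp.example"}                    # sanctioned, contract in place
WATCHLIST = {"chat.openai.com", "gemini.google.com"}   # public tools under review

def review_proxy_log(log_text: str):
    """Yield (user, domain) pairs that fall outside the approved-tools policy."""
    for user, _timestamp, domain in csv.reader(io.StringIO(log_text)):
        if domain in WATCHLIST and domain not in APPROVED:
            yield user, domain

sample = ("jdoe,2024-03-01T09:30Z,chat.openai.com\n"
          "asmith,2024-03-01T09:31Z,copilot.corp.example\n")
for user, domain in review_proxy_log(sample):
    print(f"review: {user} used unsanctioned generative AI service {domain}")
```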

It is inevitable that AI will increase the risks and threats facing an organization, but this is not an unsolvable challenge from a defensive perspective. While advancements in generative AI may be worsening issues like novel social engineering and creating new types of accidental insider threats, AI itself offers a strong defense.  

The Shift to Proactive Cyber Readiness

According to the World Economic Forum’s Global Cybersecurity Outlook 2024, the number of organizations that “maintain minimum viable cyber resilience is down 30 percent compared to 2023”, and “while large organizations have demonstrated gains in cyber resilience, small and medium-sized companies showed significant decline.” The importance of cyber resilience cannot be overstated in the face of today’s increasingly as-a-service, automated, and AI-augmented threat landscape.

Historically, organizations have waited for incidents to happen and relied on known attack data for threat detection and response, making it nearly impossible to identify never-before-seen threats. The traditional security stack has also relied heavily on point solutions focused on protecting different pieces of the digital environment, with individual tools for endpoint, email, network, on-premises data centers, SaaS applications, cloud, OT and beyond. These point solutions fail to correlate disparate incidents to form a complete picture of an orchestrated attack. Even with the addition of tools that can stitch together events from across the enterprise, the overall posture remains reactive, focused heavily on threat detection and response.

Organizations need to evolve from a reactive posture to a stance of proactive cyber readiness. To do so, they need an approach that proactively identifies internal and external vulnerabilities, identifies gaps in security policy and process before an attack occurs, breaks down silos to investigate all threats (known and unknown) during an attack, and uplifts the human analyst beyond menial tasks to incident validation and recovery after an attack.  

AI can help break down silos within the SOC and provide a more proactive approach to scale up and augment defenders. It provides richer context when it is fed information from multiple systems, data sets, and tools within the stack and can build an in-depth, real-time behavioural understanding of a business that humans alone cannot.
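
To make the idea of a real-time behavioural understanding concrete, here is a minimal, hypothetical sketch of per-entity baselining: track a rolling history of one metric per device and flag values far outside that device’s own norm. Production self-learning systems model many correlated features; the single metric and the thresholds here are arbitrary assumptions:

```python
# Minimal illustrative sketch of per-entity behavioural baselining. Real
# self-learning systems model many correlated features; the single metric
# and the thresholds here are arbitrary assumptions for the example.
from collections import defaultdict, deque
import statistics

class Baseline:
    def __init__(self, window: int = 168, threshold: float = 4.0):
        # One rolling history per entity (e.g. 168 hourly readings = 1 week)
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.threshold = threshold

    def observe(self, entity: str, value: float) -> bool:
        """Record a reading; return True if it is anomalous for this entity."""
        hist = self.history[entity]
        anomalous = False
        if len(hist) >= 24:  # require some history before judging
            mu = statistics.mean(hist)
            sigma = statistics.pstdev(hist) or 1.0
            anomalous = abs(value - mu) / sigma > self.threshold
        hist.append(value)
        return anomalous

monitor = Baseline()
# e.g. alert if monitor.observe("laptop-042", bytes_out_this_hour) is True
```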

Lessons From AI in the SOC

At Darktrace, we’ve been applying AI to the challenge of cyber security for more than ten years, and we know that proactive cyber readiness requires the right mix of people, process, and technology.  

When the right AI is applied responsibly to the right cyber security challenge, the impact on both the human security team and the business is profound.

AI can bring machine speed and scale to some of the most time-intensive, error-prone, and psychologically draining components of cyber security, helping humans focus on the value-added work that only they can provide. Incident response and continuous monitoring are two areas where AI has already been proven to effectively augment defenders. For example, a civil engineering company used Darktrace’s AI to free its SOC team from the repetitive, manual tasks of analyzing and responding to email incidents. The analysts estimated they were each spending 10 hours per week on email incident analysis. With AI autonomously analyzing and responding to email incidents, the analysts gained approximately 20 percent of their time back to focus on proactive cyber security measures.

An effective human-AI partnership is key to proactive cyber readiness and can directly benefit the work-life of defenders. It can help to reduce burnout, support data-driven decision-making, and reduce the reliance on hard-to-find, specialized talent that has created a skills shortage in cyber security for many years. Most importantly, AI can free up team members to focus on more meaningful tasks, such as compliance initiatives, user education, and sophisticated threat hunting.  

Advancements in AI are happening at a rapid pace. As we’ve already observed, attackers will be watching these developments and looking for ways to use them to their advantage. Fortunately, AI has already proved to be an asset for defenders, and embracing a proactive approach to cyber resilience can help organizations increase their readiness for this next phase. Prioritizing cyber security will be an enabler of innovation and progress as AI development continues.

--

Join Darktrace on 9 April for a virtual event exploring the innovations needed to get ahead of the rapidly evolving threat landscape. Register today to hear more about what is coming to Darktrace’s offerings.

References

[1] The survey was undertaken by AimPoint Group and Dynata on behalf of Darktrace between December 2023 and January 2024. The research polled 1,773 security professionals in positions across the security team, from junior roles to CISOs, across 14 countries – Australia, Brazil, France, Germany, Italy, Japan, Mexico, Netherlands, Singapore, Spain, Sweden, UAE, UK, and USA.

[2] Based on the average change in email attacks between January and February 2023 detected across Darktrace/Email deployments, controlling for outliers.

[3] Average calculated across Darktrace customers from 31st August to 21st December 2023.

[4] Average calculated across Darktrace customers from 31st August to 21st December 2023. Novel social engineering attacks use linguistic techniques that differ from those used in the past, as measured by a combination of semantics, phrasing, text volume, punctuation, and sentence length.



January 7, 2026

How a leading bank is prioritizing risk management to power a resilient future


As one of the region’s most established financial institutions, this bank sits at the heart of its community’s economic life – powering everything from daily transactions to business growth and long-term wealth planning. Its blend of physical branches and advanced digital services gives customers the convenience they expect and the personal trust they rely on. But as the financial world becomes more interconnected and adversaries more sophisticated, safeguarding that trust requires more than traditional cybersecurity. It demands a resilient, forward-leaning approach that keeps pace with rising threats and tightening regulatory standards.

A complex risk landscape demands a new approach

The bank faced a challenge familiar across the financial sector: too many tools, not enough clarity. Vulnerability scans, pen tests, and risk reports all produced data, yet none worked together to show how exposures connected across systems or what they meant for day-to-day operations. Without a central platform to link and contextualize this data, teams struggled to see how individual findings translated into real exposure across the business.

  • Fragmented risk assessments: Cyber and operational risks were evaluated in silos, often duplicated across teams, and lacked the context needed to prioritize what truly mattered.
  • Limited executive visibility: Leadership struggled to gain a complete, real-time view of trends or progress, making risk ownership difficult to enforce.
  • Emerging compliance pressure: This gap also posed compliance challenges under the EU’s Digital Operational Resilience Act (DORA), which requires financial institutions to demonstrate continuous oversight, effective reporting, and the ability to withstand and recover from cyber and IT disruptions.
“The issue wasn’t the lack of data,” recalls the bank’s Chief Technology Officer. “The challenge was transforming that data into a unified, contextualized picture we could act on quickly and decisively.”

As the bank advanced its digital capabilities and embraced cloud services, its risk environment became more intricate. New pathways for exploitation emerged, human factors grew harder to quantify, and manual processes hindered timely decision-making. To maintain resilience, the security team sought a proactive, AI-powered platform that could consolidate exposures, deliver continuous insight, and ensure high-value risks were addressed before they escalated.

Choosing Darktrace to unlock proactive cyber resilience

To reclaim control over its fragmented risk landscape, the bank selected Darktrace / Proactive Exposure Management™ for cyber risk insight. The solution’s ability to consolidate scanner outputs, pen test results, CVE data, and operational context into one AI-powered view made it the clear choice. Darktrace delivered comprehensive visibility the team had long been missing.

By shifting from a reactive model to proactive security, the bank aimed to:

  • Improve resilience and compliance with DORA
  • Prioritize remediation efforts with greater accuracy
  • Eliminate duplicated work across teams
  • Provide leadership with a complete view of risk, updated continuously
  • Reduce the overall likelihood of attack or disruption

The CTO explains: “We needed a solution that didn’t just list vulnerabilities but showed us what mattered most for our business – how risks connected, how they could be exploited, and what actions would create the biggest reduction in exposure. Darktrace gave us that clarity.”

Targeting the risks that matter most

Darktrace / Proactive Exposure Management offered the bank a new level of visibility and control by continuously analyzing misconfigurations, critical attack paths, human communication patterns, and high-value assets. Its AI-driven risk scoring allowed the team to understand which vulnerabilities had meaningful business impact, not just which were technically severe.
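
The product’s scoring internals are not public, but the general idea of business-aware prioritization can be sketched: weight raw technical severity (such as a CVSS score) by asset criticality and exposure. The formula, weights, and parameter names below are illustrative assumptions, not Darktrace’s method:

```python
# Hypothetical sketch of business-aware risk scoring: weight technical
# severity (CVSS) by asset criticality and exposure. The formula and
# weights are illustrative assumptions, not Darktrace's actual method.
def business_risk(cvss: float, asset_criticality: float,
                  internet_exposed: bool, on_attack_path: bool) -> float:
    score = cvss / 10.0          # normalise technical severity to 0..1
    score *= asset_criticality   # 0..1, e.g. payment systems near 1.0
    if internet_exposed:
        score *= 1.5             # reachable from outside
    if on_attack_path:
        score *= 2.0             # a step on a path to a high-value asset
    return min(score, 1.0)

# A medium CVSS on a critical, exposed asset outranks a high CVSS on an
# isolated test server under this weighting:
print(business_risk(6.5, 1.0, True, True))    # -> 1.0
print(business_risk(9.8, 0.2, False, False))  # -> ~0.2
```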

Unifying exposure across architectures

Darktrace aggregates and contextualizes data from across the bank’s security stack, eliminating the need to manually compile or correlate findings. What once required hours of cross-team coordination now appears in a single, continuously updated dashboard.

Revealing an adversarial view of risk

The solution maps multi-stage, complex attack paths across network, cloud, identity systems, email environments, and endpoints – highlighting risks that traditional CVE lists overlook.
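
Attack path mapping is commonly modeled as graph search: assets and identities are nodes, exploitable relationships are edges, and paths from exposed entry points to high-value assets are enumerated. A minimal sketch under that framing, with an invented example graph:

```python
# Minimal illustrative sketch: attack path mapping as breadth-first graph
# search. Nodes are assets/identities, edges are exploitable relationships.
# The example graph is invented for illustration.
from collections import deque

edges = {
    "internet": ["vpn-gateway"],
    "vpn-gateway": ["hr-laptop"],
    "hr-laptop": ["file-server", "mail"],
    "mail": ["it-admin"],
    "it-admin": ["domain-controller"],
}

def attack_paths(start: str, target: str):
    """Enumerate simple (cycle-free) paths from entry point to target."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            yield path
            continue
        for nxt in edges.get(path[-1], []):
            if nxt not in path:  # avoid revisiting nodes
                queue.append(path + [nxt])

for path in attack_paths("internet", "domain-controller"):
    print(" -> ".join(path))
# internet -> vpn-gateway -> hr-laptop -> mail -> it-admin -> domain-controller
```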

Identifying misconfigurations and control gaps

Using Self-Learning AI, Darktrace / Proactive Exposure Management spots misconfigurations and prioritizes them based on MITRE ATT&CK adversary techniques, business context, and the bank’s unique digital environment.

Enhancing red-team and pen test effectiveness

By directing testers to the highest-value targets, Darktrace removes guesswork and validates whether defenses hold up against realistic adversarial behavior.

Supporting DORA compliance

From continuous monitoring to executive-ready reporting, the solution provides the transparency and accountability the bank needs to demonstrate compliance with operational resilience frameworks such as DORA.

Proactive security delivers tangible outcomes

Since deploying Darktrace / Proactive Exposure Management, the bank has significantly strengthened its cybersecurity posture while improving operational efficiency.

Greater insight, smarter prioritization, stronger defense

Security teams are now saving more than four hours per week previously spent aggregating and analyzing risk data. With a unified view of their exposure, they can focus directly on remediation instead of manually correlating multiple reports.

Because risks are now prioritized based on business impact and real-time operational context, they no longer waste time on low-value tasks. Instead, critical issues are identified and resolved sooner, reducing potential windows for exploitation and strengthening the bank’s ongoing resilience against both known and emerging threats.

“Our goal was to move from reactive to proactive security,” the CTO says. “Darktrace didn’t just help us achieve that, it accelerated our roadmap. We now understand our environment with a level of clarity we simply didn’t have before.”

Leadership clarity and stronger governance

Executives and board stakeholders now receive clear, organization-wide visibility into the bank’s risk posture, supported by consistent reporting that highlights trends, progress, and areas requiring attention. This transparency has strengthened confidence in the bank’s cyber resilience and enabled leadership to take true ownership of risk across the institution.

Beyond improved visibility, the bank has also deepened its overall governance maturity. Continuous monitoring and structured oversight allow leaders to make faster, more informed decisions that strategically align security efforts with business priorities. With a more predictable understanding of exposure and risk movement over time, the organization can maintain operational continuity, demonstrate accountability, and adapt more effectively as regulatory expectations evolve.

Trading stress for control

With Darktrace, leaders now have the clarity and confidence they need to report to executives and regulators with accuracy. The ability to see organization-wide risk in context provides assurance that the right issues are being addressed at the right time. That clarity is also empowering security analysts who no longer shoulder the anxiety of wondering which risks matter most or whether something critical has slipped through the cracks. Instead, they’re working with focus and intention, redirecting hours of manual effort into strategic initiatives that strengthen the bank’s overall resilience.

Prioritizing risk to power a resilient future

For this leading financial institution, Darktrace / Proactive Exposure Management has become the foundation for a more unified, data-driven, and resilient cybersecurity program. With clearer, business-relevant priorities, stronger oversight, and measurable efficiency gains, the bank has strengthened its resilience and met demanding regulatory expectations without adding operational strain.

Most importantly, it shifted the bank’s security posture from a reactive stance to a proactive, continuous program, giving teams the confidence and intelligence to anticipate threats and safeguard the people and services that depend on them.

About the author
Kelland Goodin
Product Marketing Specialist

January 5, 2026

How to Secure AI in the Enterprise: A Practical Framework for Models, Data, and Agents


Introduction: Why securing AI is now a security priority

AI adoption is at the forefront of digital transformation in businesses, outpacing the rate at which IT and security professionals can put governance models and security controls in place. Adopting generative AI chatbots, autonomous agents, and AI-enabled SaaS tools promises efficiency and speed, but also introduces new forms of risk that traditional security controls were never designed to manage. For many organizations, the first challenge is not whether AI should be secured, but what “securing AI” actually means in practice. Is it about protecting models? Governing data? Monitoring outputs? Or controlling how AI agents behave once deployed?

While demand for adoption increases, securing AI use in the enterprise remains an abstract concept to many, and operationalizing it goes far beyond having visibility alone. Practitioners also need to consider how AI is sourced, built, deployed, used, and governed across the enterprise.

The goal for security teams: implement a clear, lifecycle-based AI security framework. This blog demonstrates the variety of AI use cases that should be considered when developing this framework and how to frame the conversation for non-technical audiences.

What does “securing AI” actually mean?

Securing AI is often framed as an extension of existing security disciplines. In practice, this assumption can cause confusion.

Traditional security functions are built around relatively stable boundaries. Application security focuses on code and logic. Cloud security governs infrastructure and identity. Data security protects sensitive information at rest and in motion. Identity security controls who can access systems and services. Each function has clear ownership, established tooling, and well-understood failure modes.

AI does not fit neatly into any of these categories. An AI system is simultaneously:

  • An application that executes logic
  • A data processor that ingests and generates sensitive information
  • A decision-making layer that influences or automates actions
  • A dynamic system that changes behavior over time

As a result, the security risks introduced by AI cut across multiple domains at once. A single AI interaction can involve identity misuse, data exposure, application logic abuse, and supply chain risk, all within the same workflow. This is where the traditional lines between security functions begin to blur.

For example, a malicious prompt submitted by an authorized user is not a classic identity breach, yet it can trigger data leakage or unauthorized actions. An AI agent calling an external service may appear as legitimate application behavior, even as it violates data sovereignty or compliance requirements. AI-generated code may pass standard development checks while introducing subtle vulnerabilities or compromised dependencies.

In each case, no single security team “owns” the risk outright.

This is why securing AI cannot be reduced to model safety, governance policies, or perimeter controls alone. It requires a shared security lens that spans development, operations, data handling, and user interaction. Securing AI means understanding not just whether systems are accessed securely, but whether they are being used, trained, and allowed to act in ways that align with business intent and risk tolerance.

At its core, securing AI is about restoring clarity in environments where accountability can quickly blur. It is about knowing where AI exists, how it behaves, what it is allowed to do, and how its decisions affect the wider enterprise. Without this clarity, AI becomes a force multiplier for both productivity and risk.

The five categories of AI risk in the enterprise

A practical way to approach AI security is to organize risk around how AI is used and where it operates. The framework below defines five categories of AI risk, each aligned to a distinct layer of the enterprise AI ecosystem.

How to Secure AI in the Enterprise:

  • Defending against misuse and emergent behaviors
  • Monitoring and controlling AI in operation
  • Protecting AI development and infrastructure
  • Securing the AI supply chain
  • Strengthening readiness and oversight

Together, these categories provide a structured lens for understanding how AI risk manifests and where security teams should focus their efforts.

1. Defending against misuse and emergent AI behaviors

Generative AI systems and agents can be manipulated in ways that bypass traditional controls. Even when access is authorized, AI can be misused, repurposed, or influenced through carefully crafted prompts and interactions.

Key risks include:

  • Malicious prompt injection designed to coerce unwanted actions
  • Unauthorized or unintended use cases that bypass guardrails
  • Exposure of sensitive data through prompt histories
  • Hallucinated or malicious outputs that influence human behavior

Unlike traditional applications, AI systems can produce harmful outcomes without being explicitly compromised. Securing this layer requires monitoring intent, not just access. Security teams need visibility into how AI systems are being prompted, how outputs are consumed, and whether usage aligns with approved business purposes.
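
As a starting point for monitoring intent, prompts can be logged centrally and triaged against known injection patterns. The patterns below are illustrative examples; heuristics like these catch only the crudest attacks and are a triage aid, not a security control:

```python
# Illustrative sketch: centrally log prompts and flag crude injection
# patterns for human review. Patterns are examples only; real attacks are
# far more varied, so treat this as triage, not as a security control.
import re

INJECTION_PATTERNS = [
    r"ignore (all|previous|prior) instructions",
    r"you are now .{0,40}(unrestricted|jailbroken)",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def triage_prompt(user: str, prompt: str) -> dict:
    """Return a log record with any matched injection patterns attached."""
    flags = [p for p in INJECTION_PATTERNS
             if re.search(p, prompt, re.IGNORECASE)]
    # In practice this record would be shipped to the SIEM for correlation
    # with identity and data-movement telemetry.
    return {"user": user, "prompt": prompt, "flags": flags}
```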

2. Monitoring and controlling AI in operation

Once deployed, AI agents operate at machine speed and scale. They can initiate actions, exchange data, and interact with other systems with little human oversight. This makes runtime visibility critical.

Operational AI risks include:

  • Agents using permissions in unintended ways
  • Uncontrolled outbound connections to external services or agents
  • Loss of forensic visibility into ephemeral AI components
  • Non-compliant data transmission across jurisdictions

Securing AI in operation requires real-time monitoring of agent behavior, centralized control points such as AI gateways, and the ability to capture agent state for investigation. Without these capabilities, security teams may be blind to how AI systems behave once live, particularly in cloud-native or regulated environments.
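
An AI gateway in this sense is a chokepoint that all agent traffic must traverse, where outbound calls are attributed, logged, and allowlisted. A hypothetical sketch of the core policy check, with invented host names:

```python
# Hypothetical sketch of an AI gateway's core policy check: every outbound
# call an agent makes passes through here, where it is attributed, logged,
# and allowlisted. Host names are invented for the example.
import logging
from urllib.parse import urlparse

logging.basicConfig(level=logging.INFO)
ALLOWED_HOSTS = {"internal-api.corp.example", "models.corp.example"}

def gateway_check(agent_id: str, url: str) -> bool:
    """Return True if the agent may call the URL; log the decision either way."""
    host = urlparse(url).hostname or ""
    allowed = host in ALLOWED_HOSTS
    logging.info("agent=%s host=%s allowed=%s", agent_id, host, allowed)
    if not allowed:
        # Preserve context for investigation before the ephemeral agent exits.
        logging.warning("blocked outbound call: agent=%s url=%s", agent_id, url)
    return allowed
```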

3. Protecting AI development and infrastructure

Many AI risks are introduced long before deployment. Development pipelines, infrastructure configurations, and architectural decisions all influence the security posture of AI systems.

Common risks include:

  • Misconfigured permissions and guardrails
  • Insecure or overly complex agent architectures
  • Infrastructure-as-Code introducing silent misconfigurations
  • Vulnerabilities in AI-generated code and dependencies

AI-generated code adds a new dimension of risk, as hallucinated packages or insecure logic may be harder to detect and debug than human-written code. Securing AI development means applying security controls early, including static analysis, architectural review, and continuous configuration monitoring throughout the build process.
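
One early control against hallucinated dependencies is a CI gate that compares declared packages with an internally vetted allowlist and fails the build on anything unknown. A minimal sketch, where the file path and allowlist contents are assumptions:

```python
# Minimal CI sketch: fail the build if a declared dependency is not on an
# internally vetted allowlist, a cheap guard against hallucinated packages
# in AI-generated code. The path and allowlist contents are assumptions.
import re
import sys

VETTED = {"requests", "numpy", "pydantic"}  # maintained by the security team

def check_requirements(path: str = "requirements.txt") -> int:
    unknown = []
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()  # drop comments and whitespace
            if not line:
                continue
            name = re.split(r"[<>=!\[]", line)[0].strip().lower()
            if name not in VETTED:
                unknown.append(name)
    if unknown:
        print("unvetted dependencies: " + ", ".join(unknown), file=sys.stderr)
        return 1  # non-zero exit fails the CI job
    return 0

if __name__ == "__main__":
    sys.exit(check_requirements())
```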

4. Securing the AI supply chain

AI supply chains are often opaque. Models, datasets, dependencies, and services may come from third parties with varying levels of transparency and assurance.

Key supply chain risks include:

  • Shadow AI tools used outside approved controls
  • External AI agents granted internal access
  • Suppliers applying AI to enterprise data without disclosure
  • Compromised models, training data, or dependencies

Securing the AI supply chain requires discovering where AI is used, validating the provenance and licensing of models and data, and assessing how suppliers process and protect enterprise information. Without this visibility, organizations risk data leakage, regulatory exposure, and downstream compromise through trusted integrations.
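
Provenance validation can start with pinning model artifacts to known-good hashes in a manifest and refusing to load anything that does not match. A sketch under that assumption; the manifest format is invented for the example, and in practice the manifest itself should be signed:

```python
# Illustrative sketch: verify a model artifact against a pinned SHA-256
# digest before loading it. The manifest format is invented for the example.
import hashlib
import json

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, manifest_path: str = "model_manifest.json") -> None:
    """Raise if the artifact's hash does not match the pinned manifest entry."""
    with open(manifest_path) as f:
        manifest = json.load(f)  # e.g. {"models/summarizer.bin": "<sha256>"}
    expected = manifest.get(path)
    if expected is None or sha256_of(path) != expected:
        raise RuntimeError(f"provenance check failed for {path}")
```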

5. Strengthening readiness and oversight

Even with strong technical controls, AI security fails without governance, testing, and trained teams. AI introduces new incident scenarios that many security teams are not yet prepared to handle.

Oversight risks include:

  • Lack of meaningful AI risk reporting
  • Untested AI systems in production
  • Security teams untrained in AI-specific threats

Organizations need AI-aware reporting, red and purple team exercises that include AI systems, and ongoing training to build operational readiness. These capabilities ensure AI risks are understood, tested, and continuously improved, rather than discovered during a live incident.

Reframing AI security for the boardroom

AI security is not just a technical issue. It is a trust, accountability, and resilience issue. Boards want assurance that AI-driven decisions are reliable, explainable, and protected from tampering.

Effective communication with leadership focuses on:

  • Trust: confidence in data integrity, model behavior, and outputs
  • Accountability: clear ownership across teams and suppliers
  • Resilience: the ability to operate, audit, and adapt under attack or regulation

Mapping AI security efforts to recognized frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework helps demonstrate maturity and aligns AI security with broader governance objectives.

Conclusion: Securing AI is a lifecycle challenge

The same characteristics that make AI transformative also make it difficult to secure. AI systems blur traditional boundaries between software, users, and decision-making, expanding the attack surface in subtle but significant ways.

Securing AI requires restoring clarity: knowing where AI exists, how it behaves, who controls it, and how it is governed. A framework-based approach allows organizations to innovate with AI while maintaining trust, accountability, and control.

The journey to secure AI is ongoing, but it begins with understanding the risks across the full AI lifecycle and building security practices that evolve alongside the technology.

About the author
Brittany Woodsmall
Product Marketing Manager, AI & Attack Surface