Within cyber security, crises are a regular occurrence. Whether due to the ever-changing tactics of threat actors or the emergence of new vulnerabilities, security teams operate under significant pressure and frequently find themselves in what psychologists term "crisis states."1
A crisis state is an internal condition marked by confusion and anxiety so acute that previously effective coping mechanisms give way to ineffective decision-making and behaviors.2
Given the prevalence of crises in cyber security, practitioners are especially prone to making illogical choices under intense pressure. They also grapple with a constant influx of rapidly changing information, the need for swift decision-making, and the severe consequences of errors in judgment, often while weighing hundreds of variables and uncertain factors.
The frequency of crisis states is expected to rise as generative AI empowers cyber criminals to increase the speed, scale, and sophistication of their attacks.
Why is it so challenging to operate effectively and efficiently during a crisis state? Several factors come into play.
Firstly, individuals are inclined to rely on their instincts, rendering them susceptible to cognitive biases. This makes it increasingly difficult to assimilate new information, process it appropriately, and arrive at logical decisions. Because crises strike unexpectedly and escalate rapidly into new unknowns, responders experience heightened stress, doubt, and insecurity when deciding on a course of action.
These cognitive biases manifest in various forms. For instance, confirmation bias prompts people to seek out information that aligns with their pre-existing beliefs, while hindsight bias makes past events seem more predictable in light of present context and information.
Crises also have a profound impact on information processing and decision-making. People tend to oversimplify new information and often anchor on the first information they receive rather than opting for the most rational decision.
For instance, if an organization has successfully thwarted a ransomware attack in the past, a defender might assume that employing the same countermeasures will suffice for a subsequent attack. However, ransomware tactics are constantly evolving, and a subsequent attack could employ different strategies that evade the previous defenses. In a crisis state, individuals may revert to their prior strategy instead of adapting based on the latest information.
Because these deeply embedded psychological tendencies and hard-wired decision-making processes erode logical thinking during a crisis, humans need support from technology that does not suffer from the same limitations, particularly in the post-incident phase, when stress levels go into overdrive.
In the era of rapidly evolving novel attacks, security teams require a different approach: AI.
AI can serve as a valuable tool to augment human decision-making, from detection to incident response and mitigation. This is precisely why Darktrace introduced HEAL, which leverages self-learning AI to assist teams in increasing their cyber resilience and managing live incidents, helping to alleviate the cognitive burden they face.
Darktrace HEAL™ learns from your environment, including data points from real incidents, and generates simulations to identify the most effective approach to remediation and restoring normal operations. This reduces the overwhelming influx of information and facilitates more effective decision-making during critical moments.
Furthermore, HEAL offers security teams the opportunity to safely simulate realistic attacks within their own environment. Using specific data points from the native environment, simulated incidents prepare security teams for a variety of circumstances; reviewing these simulations regularly encourages effective habit formation and reduces the cognitive biases of a one-size-fits-all approach. This allows teams to anticipate how attacks might unfold and better prepare themselves psychologically for potential real-world incidents.
With the right models and data, AI can significantly mitigate human bias by grounding remediation recommendations in evidence and proposing proportionate responses based on empirical data rather than personal interpretations or instincts. It can act as a guiding light through the chaos of an attack, providing essential support to human security teams.
1 https://www.cybersecuritydive.com/news/incident-response-impacts-wellbeing/633593
2 https://blog.bcm-institute.org/crisis-management/making-decision-during-a-crisis