White Paper
Towards Responsible AI in Cybersecurity



AI is transforming cybersecurity, enabling organizations to prevent cyber-attacks and enhance their detection, response, and recovery systems. But to fully realize this potential, customers, governments and society need to be confident that AI cybersecurity systems are being developed and applied in a way that is secure and trustworthy.
As a company that has been applying AI to cybersecurity since our founding in 2013, Darktrace understands the power of these technologies to protect organizations, and the importance of doing so responsibly. As such, we are proud to be among the first major AI-powered cybersecurity companies to release principles for responsible AI use in cybersecurity.
Our approach is informed by collaboration with global experts in academia and governments, and analysis of approaches including the US NIST AI Risk Management Framework, the EU AI Act, the UK’s AI White Paper and OECD’s AI Principles. It is rooted in international norms and adapted to be operational within the cybersecurity industry.
Why Responsible AI Matters in Cybersecurity
AI's power in cybersecurity is undeniable: by applying the right AI technique to the right task, AI can revolutionize our defense mechanisms, making them more adaptive, proactive, and resilient against evolving threats. Cybersecurity is a field where AI can have a truly transformative and beneficial impact to society.
We believe that to realize this potential, vendors should be able to demonstrate to their customers that their AI is built with responsible and trustworthy principles in mind. Darktrace's whitepaper addresses this need by outlining five principles for responsible AI use in cybersecurity:
1. Privacy
In cybersecurity, privacy matters because organizations and individuals frequently interact with sensitive data. For instance, threat detection systems and log aggregation tools may handle Personally Identifiable Information (PII) that can be linked to an individual's activity on the network. Security professionals must therefore handle this sensitive data with care, especially given the severe penalties for privacy infringements under data protection laws such as the EU GDPR or California's CPRA.
With the advent of AI, new challenges to privacy have emerged. Traditional AI cybersecurity systems, for instance, will turn to external databases of historic attack data to detect known attacks. In doing so, they may inadvertently collect and train their models on sensitive data or PII.
Darktrace enables privacy throughout the design, development and deployment of our AI:
- Design: we take a non-invasive approach to data analysis. Rather than analyzing content, our AI focuses primarily on metadata.
- Development: we ensure that our models are trained on privacy-preserving datasets.
- Deployment: we take our algorithms to our customers’ data and infrastructure.
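The metadata-first design described above can be illustrated with a minimal sketch. The field names and schema here are hypothetical, not Darktrace's actual data model: the idea is simply that a collector retains connection-level metadata and never stores payload content.

```python
# Hypothetical sketch of metadata-only analysis: keep connection-level
# attributes, discard content. Field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class FlowMetadata:
    timestamp: datetime
    src_ip: str
    dst_ip: str
    dst_port: int
    bytes_sent: int
    bytes_received: int


def to_metadata(raw_event: dict) -> FlowMetadata:
    """Extract only connection metadata; any payload content in the
    raw event is deliberately never copied into the stored record."""
    return FlowMetadata(
        timestamp=datetime.fromisoformat(raw_event["ts"]),
        src_ip=raw_event["src"],
        dst_ip=raw_event["dst"],
        dst_port=int(raw_event["dport"]),
        bytes_sent=int(raw_event["out_bytes"]),
        bytes_received=int(raw_event["in_bytes"]),
    )
```

Because the stored record has no field for content, sensitive payload data cannot leak into downstream model training even if it appears in the raw event.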
2. Interpretability
Interpretability is crucial to ensure that security professionals understand the reasoning behind alerts and can confidently leverage a system's outputs. It is also key to improving model performance, as it helps identify the features that drive a model's decisions.
An interpretable cybersecurity system, that is, one which contextualizes alerts, enables cybersecurity teams to focus on the most critical incidents at hand. Interpretability, in this sense, is an enabler for more effective threat detection and remediation for security professionals.
That’s why we’ve built interpretability into the core of Darktrace’s AI. Our Cyber AI Analyst, for instance, can question data and form and test hypotheses before reaching a conclusion – all at machine speed. Users can also review the steps Cyber AI Analyst took to investigate a model alert at the hypothesis level – including hypotheses that did not result in an incident – providing interpretability and transparency about why activity was deemed worthy of an incident.
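This kind of hypothesis-driven investigation can be sketched as a simple loop – a simplified illustration, not Cyber AI Analyst's actual implementation – in which every hypothesis tested is recorded in an audit trail, whether or not it was supported:

```python
# Hypothetical sketch of a hypothesis-driven investigation loop.
# Hypothesis names and tests are illustrative only.
from typing import Callable


def investigate(alert: dict,
                hypotheses: list[tuple[str, Callable[[dict], bool]]]):
    """Test each hypothesis against the alert, keeping an audit trail
    of every hypothesis considered -- supported or not -- so a human
    can review the reasoning afterwards."""
    trail = []
    for name, test in hypotheses:
        trail.append({"hypothesis": name, "supported": test(alert)})
    # Raise an incident if any hypothesis was supported by the evidence.
    incident = any(step["supported"] for step in trail)
    return incident, trail
```

The returned trail is what makes the outcome reviewable: an analyst can see not only why an incident was raised, but also which alternative explanations were considered and ruled out.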
3. Security and Robustness
As a cybersecurity company, we understand that security is a prerequisite to building trust in AI systems. With the widespread adoption of AI, and the risks posed by adversarial Machine Learning attacks, new security risks have emerged for users.
AI cybersecurity systems therefore need to reach the highest levels of security and robustness for users to trust that their system will work appropriately, should an incident occur.
As such, Darktrace continuously tests and evaluates our AI models to ensure they are secure and resilient. We also conduct internal penetration tests and red team exercises to identify and mitigate potential vulnerabilities. Finally, Darktrace is certified against the leading information security and privacy standards, including ISO 27001 and 27018.
4. Accuracy
AI systems should strive for high accuracy, detecting genuine threats while minimizing false positives. Accuracy underpins a user's ability to trust that their cybersecurity system will reach the right conclusion and that they can confidently act on its outputs.
Rather than relying on historical attack data and lists of ‘known bads’, Darktrace deploys its AI directly within our customers’ environments, continuously learning the intricacies that form part of their digital estate. This means that our AI is uniquely positioned to detect novel or so-called ‘zero-day’ attacks.
Our ability to leverage multiple AI methods, through our multi-layered AI approach, means that our Self-Learning AI can adapt dynamically to different customer environments and avoid over-reliance on any one detection method, minimizing the risk of problematic assumptions. We also leverage Bayesian meta-classifier techniques to autonomously fine-tune the underlying detector models. This helps to ensure that a model is not over-firing or behaving inaccurately.
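A meta-classifier of this general flavour can be sketched as a naive-Bayes-style update that combines the outputs of several independent detectors into a single probability. The prior and likelihood ratios below are illustrative assumptions, not Darktrace's actual models:

```python
# Hypothetical sketch: combining independent detector outputs with a
# naive-Bayes-style odds update. Numbers are illustrative only.

def combine(prior: float, likelihood_ratios: list[float]) -> float:
    """Update a prior probability of compromise with per-detector
    likelihood ratios, i.e. P(evidence | threat) / P(evidence | benign)
    for each detector, assuming the detectors are independent."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr  # each detector's evidence multiplies the odds
    return odds / (1.0 + odds)
```

Because the final probability aggregates evidence across detectors, a single over-firing detector cannot dominate the verdict on its own – which is the intuition behind using a meta-classifier to keep individual models well-calibrated.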
5. Do No Harm
Cybersecurity is a field in which AI can be used for ‘good’, driving growth, strengthening national security and protecting people. In realizing this potential, we must guard against wider harms to fairness and sustainability.
Our AI systems are designed to guarantee user control and build an effective human-AI partnership that optimizes the SOC operator’s experience. Our software is also built to ensure that the output of our AI is agnostic to a person’s characteristics such as gender, race, or employment status. Finally, we aim to achieve Net Zero by 2040 through technology that is more sustainable by design, and we have introduced a programme to reduce hardware sales.
Conclusion
We hope that this whitepaper is a valuable resource for organizations, industry peers, academics, and governments, helping to ensure that society can embrace the AI-powered cybersecurity revolution.


