Generative AI and Large Language Model (LLM) tools have entered the mainstream of public consciousness this year, with people using the likes of OpenAI’s ChatGPT and Google Bard for everything from augmenting web searches to driving efficiency in the workplace.
At Darktrace, we have long understood the potential for AI to be one of the most transformative technological opportunities of our time. Our Darktrace Cyber AI Research Centre in Cambridge has been researching and developing AI tools for over a decade – tools like Darktrace DETECT™ and RESPOND™, which use a variety of AI technologies to keep 8,400 customers around the world safe from cyber disruption.
As pioneers of AI who understand its potential to change the world, we recognize that in 2023, the AI genie is out of the bottle. AI tools are rapidly becoming part of our day-to-day lives.
74% of active customer deployments have employees using generative AI tools in the workplace [1]
While generative AI tools have the power to increase productivity and augment human creativity, businesses need to move quickly to keep up with the pace of innovation. These tools carry potential privacy and security risks if used incorrectly or without proper policies in place that match the unique needs of the business – creating challenges for CISOs.
Privacy and Security Risks with Generative AI
Government agencies like the UK’s National Cyber Security Centre (NCSC) have already issued guidance about the need to manage risk when using generative AI tools and other LLMs in the workplace. In the United States, the Cybersecurity and Infrastructure Security Agency (CISA) has also expressed concerns about the security implications of generative AI.
One reason for this is that LLMs can learn from your prompts, storing the information you enter and using it to train their datasets. With that data in the system, the right prompt could lead the LLM to surface your company’s data in response to a query.
And if the information you enter contains sensitive files or data – intellectual property or know-how, financial reports, confidential internal documents, sales numbers – it could become part of the third-party AI model and potentially available to others, creating privacy, intellectual property, and security risks if the appropriate guardrails are not in place.
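One simplistic illustration of such a guardrail – not Darktrace’s implementation, and with entirely hypothetical pattern names – is screening prompts for sensitive markers before they are sent to an external LLM:

```python
import re

# Illustrative patterns only -- a real deployment would use the
# organization's own data-loss-prevention rules and classifiers.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_marker": re.compile(
        r"\b(confidential|internal only|trade secret)\b", re.IGNORECASE
    ),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# A prompt that trips the checks would be blocked or flagged for review
# before leaving the organization.
safe = check_prompt("Summarize the agenda for Friday's meeting.")
risky = check_prompt("CONFIDENTIAL: Q3 figures attached, employee SSN 123-45-6789")
```

In practice, policy enforcement of this kind sits at the network or proxy layer rather than in application code, but the principle is the same: inspect what is leaving the business before it can become someone else’s training data.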
How Darktrace Helps Manage Generative AI Use
In response to the growing use of generative AI tools, Darktrace has announced new risk and compliance models to help Darktrace customers address concerns around the risk of IP loss and data leakage.
We’re excited about how immensely powerful these generative AI tools are, with the capability to help people and businesses work efficiently – but like any other technology, there’s the risk that they could be inadvertently misused if not managed or monitored correctly. That’s why the new risk and compliance models for Darktrace DETECT™ and RESPOND™ make it easier for customers to put guardrails in place to monitor, and when necessary, respond to activity and connections to generative AI and LLM tools such as AutoGPT, ChatGPT, Stable Diffusion, Claude, and more.
Each business will have its own distinct policies and needs related to generative AI tools, so we’ve also made it easier for customers to add their own list of tools to monitor for.
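Conceptually, this kind of customizable monitoring amounts to matching outbound connections against a configurable watchlist of generative AI services. The sketch below is a hypothetical illustration of the idea, not Darktrace’s implementation; the domain list and function names are assumptions:

```python
# Hypothetical watchlist of generative AI service domains.
# A business would extend or replace this with its own list.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "bard.google.com",
    "claude.ai",
}

def is_genai_connection(hostname: str, watchlist: set[str] = GENAI_DOMAINS) -> bool:
    """Flag a destination hostname if it, or any parent domain,
    appears on the watchlist (so subdomains are caught too)."""
    parts = hostname.lower().split(".")
    return any(".".join(parts[i:]) in watchlist for i in range(len(parts)))

# Adding a company-specific tool to monitor is just extending the set:
custom_watchlist = GENAI_DOMAINS | {"internal-llm.example.com"}
```

A real detection model would combine this kind of matching with behavioral context – who is connecting, from which device, and how much data is moving – rather than relying on a static list alone.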
Darktrace’s Self-Learning AI makes it possible to detect generative AI activity that may deviate from company policies or best practices. We bring our AI to each customer’s data, and it learns the day-to-day workings of every user, asset, and device – building an understanding of your business’s unique ‘pattern of life’. That’s why it can detect even subtle anomalies that could indicate a threat to your business and autonomously respond, containing the threat in seconds.
In May 2023, Darktrace Self-Learning AI detected and prevented an upload of over 1GB of data to a generative AI tool at one of its customers. [2]
With these guardrails in place, Darktrace customers can take advantage of the opportunities generative AI and LLMs provide, while remaining protected against the potential security, IP, and privacy risks.
Using AI Safely and Responsibly
At Darktrace, we believe that recent advances in generative AI and LLMs are an important addition to the growing arsenal of AI techniques that will transform cyber security. After all, we have been utilizing AI, including LLMs and generative AI, across all of our products for years – including in Cyber AI Analyst for real-time analysis of incidents, helping Darktrace customers use the power of AI to stay protected from cyber threats.
But we also believe in the responsible development and deployment of different AI techniques, which is why we are providing the tools customers need to use AI safely and responsibly.
Our Self-Learning AI has been helping more than 8,400 businesses fight back and protect themselves against cyber threats and disruption for the past ten years. With these new tools, CISOs can ensure that productivity is boosted by generative AI, without needing to worry about the potential security risks. Our AI learns the business in real time, all the time. It’s a Self-Learning AI. And the impact we’ve seen on improved security outcomes has been enormous.
Self-Learning AI informs Darktrace’s Cyber AI Loop, an interconnected, comprehensive set of dynamically related capabilities working together autonomously to create a continuous feedback loop to prevent, detect, respond, and heal from cyber-attacks – ensuring that data, people, and businesses stay protected from cyber threats.
References
[1] Based on data obtained on June 2nd, 2023, from active customer deployments with Call Home enabled, where Darktrace detected generative AI activity at some point.
[2] Based on data obtained on June 2nd, 2023, from active customer deployments with Call Home enabled, where Darktrace detected generative AI activity at some point.