It’s no secret that AI is high on many organisations’ agendas and strategies. However, with the release of ChatGPT and the continuing evolution of machine learning, cybersecurity service provider ramsac is advising businesses not to jump blindly into AI. From malicious code to leaked data, an LLM (large language model) could be detrimental to your organisation if used improperly.
The dangers of AI for businesses
Amid the media storm around ChatGPT, the site currently has around 100 million users and is visited around 1 billion times every month. As an LLM, it uses deep learning to answer queries, statements and requests in a human-like manner. So, how is this dangerous?
LLMs rely on accessible data from the open internet to inform their responses to users’ queries. With 100 million users and billions of requests already logged on ChatGPT, it’s possible for the organisation running the LLM to learn from those submissions and store the data for future responses. Think about it: ChatGPT doesn’t ask for your permission before using what you type. As LLMs cannot distinguish confidential information from readily available information, company secrets or intellectual property could be leaked and lost.
What should businesses do when using LLMs?
– Avoid using public LLMs for business-specific tasks or information, such as reviewing redundancy options
– Use an LLM hosted by a trusted cloud provider, or a self-hosted model, as this is a safer option
– Think carefully about queries and requests before submitting them to LLMs, as it’s possible for this information to be hacked and leaked
– Avoid including sensitive or confidential data in prompts to public LLMs (see the redaction sketch after this list)
– Submit business-critical queries to private or self-hosted LLMs only
– Ensure up-to-date cybersecurity monitoring is enabled and active so breaches and threats can be detected
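To make the last few points concrete, here is a minimal sketch of one way to reduce prompt leakage: mask obviously sensitive substrings before a query ever leaves your network, and send the query to a self-hosted model rather than a public one. The endpoint, model name and regex patterns below are illustrative assumptions (a local Ollama instance serving llama3), not a prescribed setup.

```python
import re
import requests  # assumes the 'requests' package is installed

# Illustrative patterns only; a real deployment would lean on a proper
# data-loss-prevention (DLP) tool rather than a handful of regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),    # card-like runs of digits
]

def redact(prompt: str) -> str:
    """Mask obviously sensitive substrings before the prompt leaves the network."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

def ask_private_llm(prompt: str) -> str:
    """Send a redacted prompt to a self-hosted model (here, an assumed local Ollama endpoint)."""
    response = requests.post(
        "http://localhost:11434/api/generate",  # assumed self-hosted endpoint
        json={"model": "llama3", "prompt": redact(prompt), "stream": False},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    # The email address below is masked before the query is submitted.
    print(ask_private_llm("Summarise our staffing options for jane.doe@example.com"))
```

A couple of regexes are no substitute for a proper DLP layer, but the shape of the control is the same: filter first, then query a model you host and trust.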
Without proper consideration of the queries and requests posted, information can be carelessly leaked, which could result in major disruption and damage to an organisation. Unfortunately, it’s possible for LLMs to be hacked, exposing all queries alongside any sensitive information within them. Around 39% of UK businesses were victims of a cyber-attack in 2022, and this figure is only set to rise in 2023 if businesses take little action to protect themselves.
How do AI and LLMs affect business cybersecurity?
As technology develops, cybercriminals evolve their methods too. Although the full extent of LLM-enabled cybercrime is yet to be realised, it’s clear that more sophisticated phishing scams are likely to arise from LLM usage. Phishing is currently the most common form of cybercrime, with around 3.4 billion phishing emails sent every day. Cyber attackers will be able to script and automate communications free of the spelling errors that usually give them away, making them far less suspicious.
Bog-standard anti-virus software on its own is no longer enough, especially as threats continue to adapt, evolve and learn. That’s why an always-on approach is necessary: cybersecurity monitoring, running 24/7, is vital to tackle increasing threats and the sheer volume of event data and trends occurring online. Without proper consideration before using AI and LLMs, your business could be put at risk.