Is ChatGPT safe enough to rely on for business needs?

Last updated: Jun 30, 2023

The idea of a free-to-use AI tool that compiles information, analyses data far faster than humans, and improves efficiency is a compelling prospect for employees.

In fact, KPMG recently published figures projecting a £31 billion boost to the UK economy from the use of generative AI in the workplace, driven by a 1.2% increase in productivity.

However, this idea is based on a high level of trust in ChatGPT and tools like it to maintain their confidentiality and integrity, and on users to secure their accounts appropriately.

That trust was called into question when Group-IB, a cyber security firm, revealed that over 100,000 ChatGPT login credentials had been compromised and were for sale on the dark web. The credentials were not taken from ChatGPT itself, but were harvested from infected devices.

This is a thorn in the side of users and businesses that rely on ChatGPT for workplace tasks and have entered confidential or sensitive data into the tool.

Raccoon Stealer, a malware-as-a-service offering that costs criminals just $200 a month, was used to harvest the credentials and personal information from victims' devices. Raccoon Stealer is known for stealing victims' device information, usernames and passwords stored in the browser, browser autofill data, payment information such as credit card details, cryptocurrency wallets, files, and screenshots of the infected system. The information is then offered for sale on the dark web.

Raccoon Stealer is widely suspected to be run by Russian cyber criminals: the malware briefly went out of service in March 2022 after the loss of a developer who needed to work on a 'special operation' (the Russian invasion of Ukraine), and the Telegram channel its customers use is made up mainly of Russian speakers.

Businesses should be concerned about employees using ChatGPT, particularly where the credentials are company email addresses, as compromised accounts potentially open the door to more frequent phishing or spear-phishing campaigns. Furthermore, if employees use the chat history function on ChatGPT or similar tools, anyone who obtains their credentials can read those past conversations, putting confidential data or information at risk of leaking.

If you’re concerned about how your employees are using ChatGPT for work, encourage them to practise good cyber hygiene with the following:

  • Multi-factor authentication should be in place for any account, with no exceptions.
  • Update data classification policies to specify what types of information are safe to use on AI platforms, giving your employees clarity.
  • Encourage good data hygiene, such as anonymising any data entered into ChatGPT; a brief sketch of what that can look like follows this list.
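
As one illustration of the anonymisation point above, the short Python sketch below swaps common identifier patterns for placeholders before text is submitted to an AI tool. It is a minimal sketch under stated assumptions: the patterns and placeholder labels are invented for illustration and are far from an exhaustive PII filter, so tailor them to whatever your data classification policy defines as sensitive.

```python
import re

# Ordered (pattern, placeholder) pairs. These patterns are illustrative
# assumptions covering a few common identifiers, not an exhaustive filter.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),    # card-like digit runs
    (re.compile(r"\+?\d[\d\s-]{8,}\d"), "[PHONE]"),       # phone-like numbers
]

def anonymise(text: str) -> str:
    """Swap common identifier patterns for placeholders before the text
    is pasted into ChatGPT or a similar tool."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    prompt = "Summarise this complaint from jane.doe@example.com, tel +44 7700 900123."
    print(anonymise(prompt))
    # Summarise this complaint from [EMAIL], tel [PHONE].
```

Redaction of this kind complements, rather than replaces, the classification policy above: the policy decides what may leave the organisation at all.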

Do not completely ban the use of ChatGPT across the organisation, as this may incentivise employees to make poor decisions, such as downloading information to a personal device and using the tool there. This would be counterproductive, as the cyber security team cannot monitor personal device use, which invites the added risk of a data breach.

ChatGPT can be a tool for employees to improve their efficiency and make productivity gains; however, all use should be monitored, and employees should be given the correct information to protect themselves when using it for business purposes.

In other AI news

Thomson Reuters has acquired legal generative AI firm Casetext for $650 million. Casetext's tool, CoCounsel, helps lawyers review documents, prepare for depositions, draft research memos and analyse contracts. The acquisition comes against the backdrop of Thomson Reuters' growing interest in investing in AI companies.

The US Department of Defense has established "Intelligent Generation of Tools for Security" (INGOTS), an AI cyber defence program that automates threat hunting and categorises vulnerabilities before attackers can exploit them. Vulnerabilities will be categorised by severity and by the potential for several to be compounded, thereby increasing risk. DARPA, the architect of INGOTS, will roll out the program in three phases: producing new tools and techniques, testing and validating them, and deploying them. This will be accompanied by outreach in the form of hackathons, demonstrations, and meetings with government partners. The product will ultimately help cyber security experts patch vulnerabilities before they are exploited.
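DARPA has not published the internals of INGOTS, so the Python sketch below is only a toy illustration of the ranking idea described above: score vulnerabilities by severity, weighted up when several can be compounded. The field names, the 0.5 weight and the CVE entries are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str          # identifier (invented placeholders below)
    cvss: float          # base severity score, 0.0-10.0
    chain_partners: int  # how many other flaws this one can be chained with

def priority(v: Vuln) -> float:
    """Severity plus an assumed bump of 0.5 per possible chain partner."""
    return v.cvss + 0.5 * v.chain_partners

findings = [
    Vuln("CVE-2023-0001", 7.5, 2),
    Vuln("CVE-2023-0002", 9.1, 0),
    Vuln("CVE-2023-0003", 6.3, 3),
]

# Patch the highest-priority findings first.
for v in sorted(findings, key=priority, reverse=True):
    print(f"{v.cve_id}: priority {priority(v):.1f}")
```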

China is stepping up the AI race against ChatGPT, with Baidu declaring that its generative AI beats OpenAI's on key metrics. Baidu's tool, Ernie Bot (due to be released), was reportedly tested against the AGIEval and C-Eval datasets, though Baidu did not detail how Ernie beat ChatGPT. Ernie Bot will support external plugins once released. This reflects the intense ongoing race between China and Western companies to produce the most advanced generative AI. One to watch is exactly how China distributes generative AI tools, and to what extent their datasets are limited to reflect the Chinese Communist Party's views on freedom of information.
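For context, AGIEval and C-Eval are largely multiple-choice question sets, and headline comparisons of the kind Baidu cites typically come down to plain accuracy. The minimal Python sketch below shows that computation using invented records, not real benchmark items.

```python
# Accuracy over a multiple-choice benchmark, as typically reported for
# datasets such as AGIEval and C-Eval. The records are invented placeholders.
questions = [
    {"answer": "B", "model_output": "B"},
    {"answer": "D", "model_output": "A"},
    {"answer": "C", "model_output": "C"},
]

correct = sum(q["model_output"] == q["answer"] for q in questions)
print(f"accuracy: {correct / len(questions):.1%}")  # accuracy: 66.7%
```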

CONTRIBUTORS
Sneha Dawda
Consultant, Crisis & Security Strategy