
Is ChatGPT safe to use? What you need to know


What is ChatGPT?

Its creators define ChatGPT as “an artificial intelligence trained to assist with a variety of tasks”. Essentially, it’s an AI-powered chatbot that can generate natural language responses to user queries. ChatGPT is powered by a machine learning algorithm which has been trained on millions of documents. Because of the language model it uses, ChatGPT can produce human-sounding text and chat with people (hence the ‘chat’ in its name).

Simpler versions of language-based AI have been publicly available for years, but ChatGPT is the most advanced version so far. ChatGPT was created by OpenAI, which is an AI and research company.

What does ChatGPT stand for?

The GPT in ChatGPT stands for Generative Pre-trained Transformer.

What can ChatGPT do?

With its powerful natural language processing capabilities, and its ability to generate quick personalized responses, ChatGPT can be used in a variety of business settings, such as customer service, customer support, marketing automation, and more. Many predict that ChatGPT will revolutionize business, even if the precise or full impact is still to be determined.

The tool is also useful for individual users. Example uses of ChatGPT for individuals might include:

  • Summarizing a document with key points or actions
  • Translating text into other languages
  • Writing or debugging code
  • Answering specific queries
  • Explaining how to do certain tasks
  • Writing music or poems
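To illustrate the "writing or debugging code" use case, here is the kind of small bug a user might paste into ChatGPT. The function below is a hypothetical example (not from OpenAI or ChatGPT itself): a naive averaging function would crash on an empty list, and adding a guard is the sort of fix the chatbot typically suggests:

```python
def average(values):
    """Return the arithmetic mean of a list of numbers.

    A naive version would divide by len(values) unconditionally,
    raising ZeroDivisionError on an empty list - the kind of bug
    ChatGPT is often asked to spot and fix.
    """
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

print(average([2, 4, 6]))  # 4.0
```

In practice, a user would paste the broken version plus the error message into the chat and ask for an explanation as well as a fix.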

ChatGPT can be used in multiple languages and is mostly available around the world (although it is banned in some countries because of data protection laws). Because ChatGPT’s responses are generated by an AI language model, they are not always accurate or complete. It’s important to evaluate the information ChatGPT provides and to consult additional sources where appropriate.

Is ChatGPT safe?

A key concern is whether AI language models like ChatGPT can be used in harmful ways. Potential ChatGPT security risks include:

Spam and phishing

One of the most common online threats is phishing. Phishing scams have often been easy to recognize because they contain spelling mistakes, poor grammar, and awkward phrasing – usually because the scammers are writing in a language that isn’t their first. ChatGPT now gives scammers all over the world a high level of fluency in English (and other languages), helping them polish their phishing messages and making those messages harder to spot. In addition, because of the vast amounts of data the model is trained on, it is easier than ever for scammers to create convincing emails in the style of the company they are impersonating.

It is also possible that fraudsters could use OpenAI’s technology to create a convincing fake customer service chatbot – which could have the potential to trick people out of their money.

Data leaks

In March 2023, ChatGPT creator OpenAI identified a problem and took ChatGPT offline for several hours. During that time, a small number of users could see the conversation history of other users. There were also reports that payment-related information belonging to ChatGPT Plus subscribers (who pay for an enhanced version of the app) may have leaked as well.

OpenAI published a report on the incident and addressed the cause of the problem. However, that doesn’t prevent new issues from arising in future. As with any online service, there is a risk of accidental leaks and of breaches by attackers.

Privacy concerns

Justifiably, the potential for data misuse is a valid safety concern. OpenAI’s ChatGPT FAQs suggest you don’t share sensitive information and warn users that prompts can’t be deleted. The same FAQs state that ChatGPT saves conversations, which are reviewed by OpenAI for training purposes.


Contribution to spreading fake news and misinformation

In a world where misinformation, disinformation, and fake news can spread quickly online, some worry that ChatGPT could contribute to the problem. ChatGPT’s responses are based upon information it was trained on, much of it sourced from the internet. The app then creates a sequence of words that are likely to follow each other as its response. Because both misinformation and disinformation exist on the internet, the scope for ChatGPT to respond with incorrect information also exists. ChatGPT can also be used to impersonate individuals, manipulating others in the process.
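The idea that ChatGPT "creates a sequence of words that are likely to follow each other" can be sketched with a toy model. This is a deliberate simplification for illustration only – real models like GPT use neural networks trained on enormous corpora – but it shows the core point: each next word is chosen by likelihood, not by truth, so errors in the training text can surface in the output:

```python
import random

# Toy "bigram" model: for each word, record which words were
# observed to follow it in a tiny training corpus. Generation then
# picks a likely next word at each step - never checking facts.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = {}
for current, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(current, []).append(nxt)

def generate(start, length=5):
    """Produce `length` further words, each sampled from the words
    seen to follow the previous one in the corpus."""
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(follows.get(word, [start]))
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Because the only criterion is "what tends to follow what", a model trained on text containing misinformation can fluently reproduce that misinformation.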

Potential for bias

Even if the information is correct, it may display a political or other type of bias. As with any machine learning model, ChatGPT reflects the biases of its training data. If that data is biased, then its outputs may also be biased – with the potential for unfair, discriminatory, or even offensive responses. ChatGPT has reportedly taken steps to identify and avoid answering politically charged questions – but it’s worth being aware of potential bias when using the service.

Identity theft and information gathering

Malicious actors can use ChatGPT to gather information for harmful purposes. Since the chatbot has been trained on large volumes of data, it knows a great deal of information that could be used for harm if placed in the wrong hands.

For example, in one instance, ChatGPT was prompted to disclose what IT system a specific bank uses. Collating information in the public domain, the chatbot listed the various IT systems the bank in question uses. This is an example of a malicious actor using ChatGPT to pinpoint or isolate information that could be used for harm.

To address these concerns, OpenAI has taken steps to ensure its language models are safe. These include establishing strict access controls to prevent people from entering its systems without permission, as well as setting out ethical rules for AI development and use. The rules include a commitment to responsible use of the technology, as well as transparency and fairness.

Duping ChatGPT into writing malicious code

ChatGPT can generate code, but the AI is programmed not to produce malicious code or code intended for hacking purposes. If hacking code is requested, ChatGPT tells users that its purpose is to “assist with useful and ethical tasks while adhering to ethical guidelines and policies.”

However, manipulation of ChatGPT is not impossible and with enough knowledge and creativity, bad actors could potentially trick the AI into generating hacking code. On hacking forums, hackers have claimed to be testing the chatbot to recreate malware strains. Given that such threads have been publicly identified, there could be more across the dark web. Less commonly discussed is the potential for ChatGPT itself to be hacked – if that did happen, threat actors could use it to disseminate misinformation.

ChatGPT scams

Whenever any new technology or platform emerges, new scams also emerge as cybercriminals look for ways to make money. ChatGPT has generated huge interest since its launch, so it’s not a surprise to see scams such as fake ChatGPT apps spreading malware that can steal your passwords and money. Sometimes these fake apps are promoted with messages of free, unlimited access at fast speeds with the latest new features and so on.

As always, the old adage holds true – if something sounds too good to be true, it probably is. Be cautious of ChatGPT offers promoted via social media or email, and be wary of apps claiming to be ChatGPT apps. To use ChatGPT on your phone, access it either through your mobile browser or via the official app.

ChatGPT can also help cybersecurity professionals

As one of the first examples of a publicly available AI with good language skills, ChatGPT has generated much discussion about both its opportunities and its potential risks. As with any new technology, it’s important to exercise caution. It’s easy to get caught up in the excitement and forget that you’re dealing with an online service that can be exploited or misused.

It’s also true that, while ChatGPT can assist threat actors in their pursuit of cybercrime, it can be used defensively too – cybersecurity professionals can add it to the tools in their arsenal.

Use a VPN to stay safe online

However you use ChatGPT, a key way to improve your online privacy is by using a VPN, or Virtual Private Network. A VPN encrypts your internet connection and hides your IP address – making it much harder for third parties to track your activity.

If you want to hide your IP address and browse safely while keeping your data safe, a VPN like Kaspersky VPN Secure Connection is a good option to consider. Kaspersky Secure Connection allows you to browse securely and anonymously online, at industry-leading speeds, and allows you to unlock global content without restrictions from anywhere.

For added protection, you could also consider using a password manager – which generates strong random passwords for you and stores them securely in a digital vault.
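As a sketch of what "generates strong random passwords" means in practice, the snippet below uses Python’s cryptographically secure `secrets` module – the same principle a password manager applies, though real products add vault encryption and per-site storage on top:

```python
import secrets
import string

def strong_password(length=16):
    """Generate a random password the way a password manager would:
    each character drawn independently, using a cryptographically
    secure source of randomness rather than a predictable one."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())
```

A 16-character password drawn from this roughly 94-character alphabet is far beyond practical brute-force range, which is why generated passwords beat memorable ones.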
