The Dangers of AI: 5 Things You Shouldn’t Tell ChatGPT

ChatGPT has made our lives so easy, right? Share anything and everything with the chatbot, and your query will be solved. We share, and the chatbot learns from it: a simple give and take. If only people knew the underlying consequences, they might think twice about the dangers of AI.

The truth is, oversharing with ChatGPT can cost us in the long run. Some AI researchers say we should be more discerning about what we tell these human-sounding tools. So before patients upload their blood work for analysis, customers type in their credit card information, or engineers paste snippets of unpublished code for debugging help, they should consider what they are signing up for.

Don’t Tell These 5 Things to ChatGPT EVER!

Like it or not, the go-to ChatGPT is a “privacy black hole.” The way it handles data is concerning, to say the least. Anything entered into a public chatbot may be stored, reviewed, and used to train future models, and you have little control over where it ends up. With this in mind, there are several things that should never be told to it, or to any other public cloud-based chatbot.

1- Identity Information

Anything personal — whether it’s your social security number, driver’s license, passport, date of birth, phone number, or address — should never be fed into an AI chatbot, unless you are prepared for it to end up somewhere anyone can access it.

Especially with the rise of agentic AI, chatbots have become capable of connecting to and using third-party services, and to do this they may need our login credentials. However, once this data has gone into a public chatbot, there’s very little control over what happens to it. This is not just a conspiracy theory: there have been cases of personal data entered by one user being exposed in responses to other users. Clearly, this is a privacy nightmare.
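If you do need to run personal text through a chatbot, one practical precaution is to scrub obvious identifiers first. Below is a minimal sketch in Python of that idea; the regular-expression patterns and the `redact` helper are illustrative assumptions, not an exhaustive or production-grade PII filter.

```python
import re

# Illustrative patterns for common PII formats (US-style SSN, 13-16 digit
# card numbers, email addresses, and phone numbers). Real PII detection
# needs far more coverage than this sketch provides.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII match with a [LABEL-REDACTED] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}-REDACTED]", text)
    return text

prompt = "My SSN is 123-45-6789, email me at jane.doe@example.com"
print(redact(prompt))
```

The point of the sketch is the workflow, not the patterns: anything you would not want echoed back to a stranger gets replaced with a placeholder before the prompt ever leaves your machine.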

Identity Theft by AI

2- Healthcare and Medical History

In the healthcare world, confidentiality is the number one priority. Most of the time this is for our own good: it protects us from embarrassment and discrimination. However, chatbots are not bound by the rules that protect health data.


Getting checked by a health professional can be a big hassle, and in those moments people turn to ChatGPT as the easier way out, often underestimating the consequences. This is particularly risky given recent updates that enable it to “remember” and even pull information together from different chats to understand users better. While that may seem beneficial at first, none of these functions comes with any privacy guarantee, so be mindful of the sensitivity of the information you enter.

3- Financial Account Details

It is very important to guard bank and investment account numbers; there is a reason banks warn you about sharing such information.

For similar reasons, it’s not a good idea to put data such as bank account or credit card numbers into AI chatbots. These should only ever be entered into secure systems used for e-commerce or online banking, which have built-in safeguards like encryption and automatic deletion of data once it has been processed. Chatbots have none of these safeguards. Once data goes in, there’s no way to know what will happen to it, and entering this highly sensitive information could leave you exposed to fraud, identity theft, phishing, and ransomware attacks.

Financial Account Theft

4- Proprietary Corporate Information 

Everyone has a duty of confidentiality to safeguard sensitive information they’re responsible for. An employee who uses a public chatbot for corporate purposes may be exposing client data and trade secrets, even when just drafting a simple email or resolving a client query.

Sharing business documents such as meeting notes, minutes, or transactional records can easily amount to a breach of confidentiality. In the corporate world this is considered utterly irresponsible and often has grave consequences. Such was the case at Samsung in 2023: the company banned ChatGPT after engineers pasted internal source code into the chatbot.

The moral of the story is that no matter how tempting it is, feeding client data into a public chatbot must be avoided at all costs. If it’s really necessary, use the enterprise version, which offers stronger data protections.

Proprietary Corporate Information

5- Unethical and Criminal Requests 

Using AI for illegal purposes isn’t only unethical and criminal, it’s also quite dumb. Why would you want a chatbot to hold a record of your illegal requests?

If anyone plans to use AI for this purpose, they must know that most AI chatbots have safeguards designed to prevent them from being used for unethical purposes. And if your question or request touches on activities that could be illegal, it’s possible you could find yourself in hot water. Many usage policies make it clear that illegal requests or seeking to use AI to carry out illegal activities could result in users being reported to authorities.

These laws can vary widely depending on where you are. For example, China’s AI laws forbid using AI to undermine state authority or social stability, and the EU AI Act states that “deepfake” images or videos that appear to be of real people but are, in fact, AI-generated must be clearly labelled. In the UK, the Online Safety Act makes it a criminal offense to share AI-generated explicit images without consent.

We all know that AI companies are hungry for data to improve their models, but even they are urging users not to enter sensitive information.

“Please don’t share any sensitive information in your conversations,” ChatGPT owner OpenAI urges. Google tells its Gemini users: “Don’t enter confidential information or any data you wouldn’t want a reviewer to see.”

Areeb Asif
I'm Areeb Asif, an SEO Content Writer with six months of experience in crafting engaging, optimized, and reader-friendly conversational content. I am passionate about writing daily news and informative articles that allow me and my audience to stay aware of the latest news about the world. Moreover, I like to stay updated with the latest SEO trends to ensure my content drives traffic and boosts online visibility. All my informative articles are focused on delivering compelling, well-researched, and keyword-optimized content.