ChatGPT gets a private mode for secret AI chats. Here’s how to use it

OpenAI just launched a new feature that makes it possible to disable your chat history when using ChatGPT, allowing you to keep your conversations more private.

Previously, every new chat would appear in a sidebar to the left, making it easy for anyone nearby to get a quick summary of how you’ve been using the AI for fun, schoolwork, or productivity. This can prove problematic when you’re discussing something you want to keep secret.

I tested ChatGPT's privacy option to disable history

A perfect example is when you ask ChatGPT for help with gift ideas, an excellent use for OpenAI’s chatbot. If the recipient likes to dig for clues, they won’t be hard to find if a ChatGPT window is left open in your browser.

I tested this new privacy feature by disabling chat history, then asking a mildly embarrassing question about faking a Windsor knot for a necktie. The option to disable history is in Settings, under Data Controls.

OpenAI also recently added an export option in the Data Controls section, another nod to privacy and personal control of your data. Disabling chat history and exporting your data are features that are available to both free users and subscribers.

When I clicked the big green button on the left to re-enable chat history, my embarrassing conversation revealing my lack of knot skills was nowhere to be seen. What a relief!

OpenAI notes that unsaved chats won’t be used to train its AI models; however, they are retained for 30 days. OpenAI says it will review these chats only when needed to check for abuse. After 30 days, unsaved chats are permanently deleted.

That means your chats aren’t entirely private, so you need to be aware that they might be read by OpenAI employees. This could be a concern for business use since proprietary information might accidentally be shared with ChatGPT.

OpenAI said it is working on a new ChatGPT Business subscription to give enterprise users and professionals more control over their data. There are already business-focused AIs such as JasperAI.

Alan Truly
Computing Writer