OpenAI recently unveiled a new “incognito” mode that prevents user chats from being added to its AI training sets.
The company says the feature will enhance user privacy by letting individuals choose which conversations are saved, potentially making its tools more viable for businesses that handle sensitive data.
Since ChatGPT entered the mainstream, OpenAI has faced intense scrutiny over how it gathers data and trains its AI models. GPT-4 is a large multimodal model, and like its predecessors it requires vast amounts of training data.
By default, ChatGPT users were actively training the AI: their conversations were folded into those large training sets.
This approach, however, posed a significant security risk.
ChatGPT’s proficiency at generating and debugging code has led some software companies to ban the platform outright. Any code pasted into a chat was available for OpenAI to use, exposing developers’ proprietary work, risking violations of data-security regulations, and leaving companies more susceptible to leaks and cyberattacks.
Privacy concerns also led Italy’s data-protection authority to ban ChatGPT, giving OpenAI until April 30th to change how it collects data.
In response, OpenAI introduced a feature that lets users exclude conversations from their chat history and from OpenAI’s training sets.
When chat history is disabled, conversations are retained for 30 days to monitor for abuse and are then permanently deleted. OpenAI also announced a forthcoming ChatGPT Business subscription, expected in the coming months, that will give professionals greater control over their data.