OpenAI Rolls Out New Privacy Options for ChatGPT Users
OpenAI has rolled out new privacy options for ChatGPT users, who can now disable their conversation history with a toggle in their account settings. Users can also opt out of having their ChatGPT interactions used to train the company's models, according to OpenAI. The change offers a measure of privacy protection for people who share sensitive information with the popular AI chatbot.
Since ChatGPT was made public, millions of users have tried it and other chatbots. This new breed of AI chatbot is already being used for everything from holiday planning to serving as an impromptu therapist, raising concerns not only about how these systems might be used, but also about how companies handle the prompts people type into them. According to OpenAI, its software strips out personally sensitive information submitted by users.
By default, OpenAI will continue to train its models on user data. Even with history disabled, it will retain conversations for 30 days before deleting them in order to detect abuse. OpenAI also announced that users will be able to export a copy of the data generated while using ChatGPT, including their conversations with the chatbot. In the coming months, the company plans to launch a business subscription plan that, by default, will not train on those customers' data.