Secure your data on ChatGPT: Tips and what to keep confidential


Given the dangers of depending on ChatGPT, it’s a mistake to trust it with anything significant right now.

Internet users are typically aware of the dangers of data breaches and of how our personal information is used online. Yet ChatGPT’s allure seems to have created a blind spot around threats we would ordinarily avoid. OpenAI recently released a privacy feature that lets ChatGPT users disable chat history, preventing their conversations from being used to train and refine the model.

It’s a step in the right direction, says Nader Henein, a Gartner privacy research VP with two decades of experience in corporate cybersecurity and data protection. But the underlying issue with AI privacy is that once the model is in place, there is little you can do in the way of retroactive control.

Henein suggests imagining ChatGPT as a friendly stranger sitting behind you on the bus, filming you with a camera phone. They have a pleasant voice and appear to be a decent person. Would you still have the same conversation? Because that’s exactly what this is. It may be well-intentioned, he said, but when it comes to the consequences for you, it’s like a psychopath: it won’t think twice about it.

Fundamentally, treat ChatGPT prompts the way you would any other online publication. The safest assumption is that anything you put on the internet, whether in emails, social media, blogs, or LLMs, can be viewed by anyone on the planet. Never publish anything you don’t want others to read.

That advice comes from Gary Smith, the Fletcher Jones Professor of Economics at Pomona College and the author of Distrust Big Data, Data Torturing, and the Future of Information. ChatGPT, he says, may be used as an alternative to Google Search or Wikipedia, provided it is fact-checked, but it should not be relied upon for much else.

There is also the risk that someone outside OpenAI could break into your account and take your data. Any third-party service can expose data through flaws and hackers, and ChatGPT is no exception.
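One practical way to follow this advice is to scrub obvious secrets from text before pasting it into a chatbot. The sketch below is a minimal, illustrative example: the regex patterns are assumptions standing in for whatever identifiers matter in your context, not a complete PII or secret detector.

```python
import re

# Illustrative patterns for things that should never appear in a prompt.
# Real deployments would use a proper secret-scanning tool; these regexes
# are assumptions for the sake of the example.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched secret with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Summarize: contact jane@example.com, key sk-abcdef1234567890XYZ"
print(redact(prompt))
# Both the email address and the key are replaced before anything is sent.
```

A scrubber like this runs locally, so the sensitive strings never leave your machine even if the chatbot provider retains your conversations.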

OpenAI pledges not to share user data with third parties for marketing or advertising purposes, so that’s one less thing to worry about. It does, however, share user data with vendors and service providers to maintain and operate the site.

ChatGPT and generative AI tools have been advertised as the ultimate productivity hack: ChatGPT can draft articles, emails, social media posts, and text summaries. But when Samsung employees used ChatGPT to check their code, they inadvertently disclosed trade secrets. The electronics manufacturer has since banned the use of ChatGPT and warned staff of disciplinary action if they do not follow the new rules.

Because of strict financial regulations on third-party messaging, banking institutions such as JPMorgan, Bank of America, and Citigroup have also prohibited or limited the use of ChatGPT. Apple has reportedly barred its employees from using the chatbot as well.

There is a way to stay more private while using ChatGPT. Your conversations will still be retained for 30 days, but they will not be used to train the model. Click your account name, open “Data Controls,” and turn off “Chat History & Training.” You can also delete previous conversations by going to General and choosing “Clear all chats.”

The post Secure your data on ChatGPT: Tips and what to keep confidential appeared first on Analytics Insight.
