ChatGPT is Always Right. It’s Only About not Having the Wrong Expectations


ChatGPT only calculates the probability of a good word match, and this process repeats.

ChatGPT currently has the fastest-growing user base on record, reaching 100 million active users and 13 million unique visitors in January. Its popularity is unsurprising given the range of tasks it can perform, from writing articles and making summaries to generating ideas and writing code. It clearly enjoyed a first-mover advantage among generative AI apps. However, the growing popularity comes with drawbacks: academic dishonesty, misinformation, helping cybercriminals do their job and, more importantly, an inability to reliably get the facts right are a few of the accusations it has to deal with. Does this mean the popular AI chatbot is not worth the applause it is getting? Microsoft is making multi-billion-dollar investments in ChatGPT in the form of cash and cloud computing facilities. Does the tech giant have the wrong expectations of the generative AI app? According to reports, Microsoft is about to release its new ChatGPT-powered search engine, Bing. Even Google is notorious for presenting misinformation in spite of having fact-checking panels in place. What must Microsoft's CEO be seeing in ChatGPT to integrate it into a search engine? My best guess is that the secret lies in how search engine suggestions work and how they improve with the keywords one types.

Generative AI Models Do not Have Facts

If we look into how the large language models GPT-2, GPT-3, and the recent GPT-3.5, the version behind ChatGPT, are developed, it becomes evident that they are not Wikipedias of a sort. They neither find answers objectively by going through a large library of references, nor do they make up stories by deliberately gathering words from tales and characters, because they are not designed to do either. ChatGPT's creator, OpenAI, itself mentioned in its blog that it trained the chatbot on "vast amounts of data from the internet written by humans." The same post also spells out the consequences of that design. "It is important to keep in mind that this is a direct result of the system's design (i.e. maximizing the similarity between outputs and the dataset the models were trained on) and that such outputs may be inaccurate, untruthful, and otherwise misleading at times," says the blog. Hence, coming up with wrong and false information is very much part of the design.

What is ChatGPT Capable of?

A chatbot like ChatGPT is a Transformer model built from neural networks that are trained to learn context and meaning by tracking relationships in sequential data, such as the words in a sentence. It calculates the probability of a good word match, drawing on the vast number of sentence sequences it has been trained on, and this process repeats. In other words, ChatGPT has information about word sequences and can therefore predict the most plausible next words. Looking at it as a mere data retrieval system is thus misleading oneself. Instead, it can be tamed into an improvisation tool along the lines of a human-in-the-loop model, by repeatedly asking it for answers and refining them. We humans are subjective creatures, and it is not difficult to find spaces where ChatGPT can spin relatable content and accuracy is not a crucial factor. ChatGPT can imagine better than it can serve you facts.
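The "predict the next word, then repeat" loop described above can be sketched with a toy bigram model. This is a drastic simplification, not ChatGPT's actual architecture or training data (a Transformer conditions on the whole context, not just the previous word, and the corpus below is an invented stand-in), but it shows the same core idea: estimate the probability of the next token from observed sequences, sample one, and repeat.

```python
from collections import defaultdict, Counter
import random

# Tiny invented corpus standing in for "vast amounts of internet text".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(word):
    """Return the estimated probability of each possible next word."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(start, length=6, seed=0):
    """Sample a next word from the probabilities, then repeat."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        probs = next_word_probs(out[-1])
        if not probs:  # no observed continuation
            break
        words, weights = zip(*probs.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(next_word_probs("the"))  # e.g. "cat" and "dog" are most probable
print(generate("the"))
```

Note that nothing here checks whether the generated sentence is true; the model only knows which words tend to follow which, which is exactly why fluency and factual accuracy are separate properties.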

The post ChatGPT is Always Right. It’s Only About not Having the Wrong Expectations appeared first on Analytics Insight.
