OpenAI Thinks ChatGPT Thinks

OpenAI’s newest model, o1, is claimed to be ‘reasoning’ and even ‘thinking’, but many are not buying it. Apart from well-known sceptics like Gary Marcus, this time Clem Delangue, CEO of Hugging Face, was also clearly unimpressed with the ‘thinking’ claim.

“Once again, an AI system is not ‘thinking’, it’s ‘processing’, ‘running predictions’,… just like Google or computers do,” said Delangue, referring to what he sees as OpenAI painting a false picture of what its newest model can achieve. “Giving the false impression that technology systems are human is just cheap snake oil and marketing to fool you into thinking it’s more clever than it is,” he added.

On the other hand, isn’t that exactly how thinking works? “Once again, human minds aren’t ‘thinking’ they are just executing a complex series of bio-chemical / bio-electrical computing operations at massive scale,” replied Phillip Rhodes.

How is o1 Thinking?

Sam Altman, the CEO of OpenAI, calls the launch “the beginning of a new paradigm: AI that can do general-purpose complex reasoning.” The new model quite literally takes some time to think before responding, as opposed to earlier OpenAI models, which start generating text as soon as they receive a prompt.

The model does this by producing a long internal chain of thought before responding to the user. This is also why the team has suggested not asking it generic questions: its reasoning capabilities are better suited to complex, PhD-level problems, which it can answer with PhD-level accuracy.
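For a sense of what this looks like in practice, here is a minimal sketch of prompting the model through OpenAI’s Python SDK. The model name and the prompt are assumptions for the example; at launch, o1 was only available to a limited set of API users.

```python
# Minimal sketch: prompting o1 via the OpenAI Python SDK (v1+).
# Assumes an OPENAI_API_KEY in the environment and API access to the model.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",  # launch-era model name; an assumption for this example
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

# Unlike earlier models, o1 generates a hidden chain of reasoning before the
# visible answer is produced, so the call takes noticeably longer to return.
print(response.choices[0].message.content)
```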

Apart from improvements in coding and maths, this reasoning capability is the special highlight of the release. Altman had long touted ‘reasoning’ and ‘thinking’ as the next frontier in his speeches, and it finally seems to have landed.

According to the ‘Learning to Reason with LLMs’ blog post, OpenAI’s reinforcement learning algorithm teaches the model to refine its chain of thought through a highly data-efficient training process.

Over time, o1’s performance improves with both additional training compute and additional thinking time at inference. This differs from traditional LLM pretraining, which focuses on scaling up model size rather than on getting more reasoning out of a smaller model.

Through reinforcement learning, o1 improves its reasoning skills by breaking down complex problems, correcting mistakes, and trying new approaches when needed. This greatly enhances its ability to handle complicated prompts that require more than just predicting the next word—it can backtrack and “think” through the task.

However, a key challenge is that the model’s reasoning process remains hidden from users, even though they are billed for it through what the API calls “reasoning tokens”.
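The charge for that hidden work shows up in the API’s usage accounting. A small sketch, continuing the example above; the completion_tokens_details field is how OpenAI’s API reported this for o1-series models at launch:

```python
# The hidden chain of thought is billed as "reasoning tokens", reported in
# the usage object even though the tokens themselves are never returned.
usage = response.usage
print("visible completion tokens:", usage.completion_tokens)
print("hidden reasoning tokens:", usage.completion_tokens_details.reasoning_tokens)
```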

o1 is explicitly instructed to not disclose the “hidden chain-of-thought” which is done using “reasoning tokens”. and to not let users trick it or “ask for step by step”.
It didn’t disclose it to me but it’s readable in the “thought” summary.
But yeah o1 seems to literally be… pic.twitter.com/mWeKz37X7G

— Lewis N Watson (@LewisNWatson) September 13, 2024

OpenAI has explained that hiding the reasoning steps is necessary for two main reasons. First, for safety and policy compliance, as the model needs freedom to process without exposing sensitive intermediary steps. Second, to maintain a competitive advantage by preventing other models from using their reasoning work. This hidden process allows OpenAI to monitor the model’s thought patterns without interfering with its internal reasoning.

Not for Everyone and Focus on Inference

As Jim Fan explained, this ‘Strawberry’ or o1 model marks a significant shift towards inference-time scaling in production, a concept that focuses on improving reasoning through search rather than just learning.

Reasoning doesn’t require large models. Many parameters in current models are dedicated to memorising facts for trivia-like benchmarks. Instead, reasoning can be handled by a smaller “reasoning core” that interacts with external tools, like browsers or code verifiers.
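To make the idea concrete, here is a hypothetical sketch of such a loop. Every name in it is illustrative, and none of it reflects o1’s actual internals:

```python
# Hypothetical sketch of a small "reasoning core" leaning on an external tool.
# propose() stands in for the model; the code verifier is the external tool.
import subprocess
import sys
from typing import Callable, Optional

def code_verifier(snippet: str) -> bool:
    """External tool: execute a candidate Python snippet, report success."""
    result = subprocess.run([sys.executable, "-c", snippet], capture_output=True)
    return result.returncode == 0

def reasoning_core(problem: str, propose: Callable[[str], str]) -> Optional[str]:
    """Small model proposes; tools verify; the core backtracks on failure."""
    for _ in range(5):                # bounded number of attempts
        candidate = propose(problem)  # the reasoning core's only job: propose
        if code_verifier(candidate):  # facts and execution are offloaded to tools
            return candidate
    return None                       # backtracked through every attempt
```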

This approach reduces the need for massive pre-training compute. A significant portion of compute is now dedicated to inference, rather than pre- or post-training. LLMs simulate various strategies, similar to how AlphaGo uses Monte Carlo Tree Search (MCTS). Over time, this leads to better solutions as the model converges on the best strategy.
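A toy version of that inference-time knob, with stand-in functions, since o1’s actual search procedure is not public:

```python
# Toy illustration of inference-time scaling: sample several reasoning paths
# and keep the best-scoring one. More samples = more inference compute.
import random

def sample_path(problem: str) -> tuple[str, float]:
    """Stand-in for one sampled chain of thought plus a self-assessed score."""
    answer = f"candidate answer {random.randint(0, 99)}"
    return answer, random.random()

def best_of_n(problem: str, n: int) -> str:
    """Expected answer quality rises with n, with no change to the model."""
    return max((sample_path(problem) for _ in range(n)), key=lambda c: c[1])[0]

print(best_of_n("Prove sqrt(2) is irrational", n=16))  # n is the compute knob
```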

This was also explained by Subbarao Kambhampati in his post.

My (pure) speculation about what OpenAI o1 might be doing
[Caveat: I don't know anything more about the internal workings of o1 than the handful of lines about what they are actually doing in that blog post–and on the face of it, it is not more informative than "It uses Python… pic.twitter.com/QgDjLycLif

— Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) (@rao2z) September 12, 2024

OpenAI likely discovered the benefits of inference scaling early on, while academic research has only recently caught up.

While effective in benchmarks, deploying o1 for real-world reasoning tasks presents challenges. Determining when to stop searching, defining reward functions, and managing compute costs for processes like code interpretation are complex issues that need to be solved for broader deployment.
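The stopping problem alone is non-trivial. A sketch of what a budgeted search loop might look like, with thresholds chosen purely for illustration:

```python
# Sketch: when should the search stop? Spend compute until the answer looks
# good enough or the token budget runs out. All numbers here are made up.
def search_with_budget(problem, sample_path, budget_tokens=10_000, good_enough=0.9):
    spent, best_answer, best_score = 0, None, float("-inf")
    while spent < budget_tokens:
        answer, score, cost = sample_path(problem)  # one reasoning path
        spent += cost                               # billing accrues either way
        if score > best_score:
            best_answer, best_score = answer, score
        if best_score >= good_enough:               # crude stopping rule
            break
    return best_answer
```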

o1 can act as a data flywheel: whenever the model produces a correct answer, the search trace becomes training data, complete with both positive and negative rewards. This process improves the reasoning core over time, much as AlphaGo’s value network refined itself through MCTS-generated data, and the data only grows more valuable as the loop repeats.
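In sketch form, the flywheel is just a growing buffer of rewarded traces; all names here are hypothetical:

```python
# Hypothetical data flywheel: every attempt, right or wrong, becomes a
# training example with a reward attached.
flywheel: list[dict] = []

def record_attempt(problem: str, chain: str, answer: str, correct: bool) -> None:
    flywheel.append({
        "problem": problem,
        "chain_of_thought": chain,
        "answer": answer,
        "reward": 1.0 if correct else -1.0,  # failures teach too
    })

# Periodically, the accumulated (problem, chain, reward) tuples fine-tune the
# reasoning core, the way MCTS self-play data refined AlphaGo's value network.
```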

So, perhaps, we can say that ChatGPT is now thinking. That is why it gets better the more time it spends on a problem, and why OpenAI doesn’t seem to care much about speed.
