Meta Launches Open Source Models, OpenAI Controls Them

Meta has been open sourcing almost all of its creations, and it has undoubtedly been good for the company. Ever since the release of LLaMA, Meta has been touted as the open source champion. Now, with Llama 2 and Code Llama, praise for the company from the developer ecosystem is at an all-time high. But it is quite interesting how little Meta seems to care about anything apart from gathering that praise.

Recently, Jason Wei from OpenAI posted on X, through his alternate account, that he had overheard at a Meta GenAI social event that the company plans to build Llama 3 and 4, with Llama 3 expected to be as good as GPT-4. “Wow, if Llama-3 is as good as GPT-4, will you guys still open source it?” someone asked, to which the person from Meta AI said that it would be, adding, “Sorry, alignment people.”

Overheard at a Meta GenAI social:
"We have compute to train Llama 3 and 4. The plan is for Llama-3 to be as good as GPT-4."
"Wow, if Llama-3 is as good as GPT-4, will you guys still open source it?"
"Yeah we will. Sorry alignment people."

— jason (@agikoala) August 25, 2023

The remark is a clear jab at the closed-door systems that OpenAI, Google, and many others have been building in a bid to make them more aligned. Whatever the case might be, Meta’s push to open source what many call one of the most “dooming” technologies of all time is a little concerning.

Is open source really that good?

Sam Altman has been asked several times about the “kill switch” he supposedly carries in his blue backpack to shut down OpenAI’s systems if they go rogue. In a recent interview, he laughed it off as just a joke. But that still does not kill the fear people have about AI systems getting out of control.

With AI systems getting better every day, the conversation has shifted away from OpenAI’s systems going rogue and towards open source models. Meta wants to open source a GPT-5-level model, which means, as pointed out in an X discussion, that there would be no kill switch for it. If a bad actor wants to take an open source model and weaponise it, there is no way to turn it off.

Moreover, much of the research into AI safety could become meaningless. Companies that have been trying to make AI systems aligned, honest, and ethical would have no say in what happens, since anyone would be able to fine-tune the open source models however they want. Arguably, this might be a little more dangerous than simply handing your data to OpenAI through ChatGPT.

Meta’s love for open source also does not appear to be well founded on its own beliefs. There is no proof that the company planned to open source LLaMA in the first place; it only happened after the model was leaked and hailed as a game changer by many developers.

Open source is still under control

Yann LeCun, the Meta AI chief, posted on X: “Once AI systems become more intelligent than humans, humans will still be the apex species.” AI doomers still disagree with this statement. But even if humans remain the “apex species”, one thing is certain: OpenAI’s GPT is going to remain the “apex model” for a very long time. And OpenAI knows it.

The hype around open source models outperforming closed-door ones would amount to nothing if every single model were not compared against GPT-3 and GPT-4.

Every model in the open source ecosystem measures itself against GPT’s capabilities, and on the HumanEval benchmark, which was created by OpenAI itself. Arguably, no one would even bat an eye at open source models if they were not compared against GPT-3 and GPT-4 on performance.

Furthermore, even if Meta releases an open source Llama 3 that is on par with GPT-4 in terms of capabilities, it would still be measured on HumanEval. And by the time that happens, OpenAI might have already released GPT-5 and created yet another evaluation benchmark for open source models to chase. There is no way open source would be able to escape this.

Adding to all of this is the fact that OpenAI, in partnership with Anthropic, Google, and Microsoft, launched the Frontier Model Forum to ensure the safe and responsible development of AI models. So if Llama and future models go rogue, they can be pulled down from Hugging Face and GitHub in a moment.

In May, Meta was not invited to the White House for its AI discussions, and it is not part of this forum either. The company is being left behind, perhaps voluntarily, and is trying to build an open source league of its own, one that, interestingly, is still being controlled by OpenAI and others. So Meta’s bid to be the good guy of AI through the open source community might not last for long.
