OpenAI Raises Alarm Over Open Source AI Dangers 

Most AI Doomers Have Never Trained An ML Model in Their Lives 

Imagine a world where an all-knowing entity, able to propagate unfathomable amounts of information (text, images, voices and videos), becomes sentient and resists every attempt to stop it.

Seems like a 1984-esque scenario, doesn’t it?

Jan Leike, ML researcher and alignment team lead at OpenAI, painted a doomsday picture in a tweet. He highlighted a significant challenge for humanity: collectively deciding not to open-source large language models (LLMs) that can propagate autonomously. Leike emphasised that if such LLMs spread, they could engage in all kinds of criminal activity that would be hard to contain, and assigning responsibility would be difficult.

He tweeted, “An important test for humanity will be whether we can collectively decide not to open source LLMs that can reliably survive and spread on their own.”

“Once spreading, LLMs will get up to all kinds of crime, it’ll be hard to catch all copies, and we’ll fight over who’s responsible.”

Is the Threat Real?

If LLMs and artificial intelligence were to go wrong and were able to fend off any attempt to stop them, they would truly be a menace to society at large.

A workshop to assess the potential impact of AI on future criminal activity brought together 31 experts from diverse backgrounds to categorise AI-related threats and gauge their severity over a 15-year period.

The discussions ranked audio/visual impersonation, driverless vehicles used as weapons, targeted phishing, disruption of AI-controlled systems, and large-scale blackmail as high-potential threats. Medium-level threats included military robots, data poisoning and autonomous attack drones, while bias exploitation and evading AI detection were classified as low-level threats.

While these dangers seem unlikely at present, there is a need to balance the doom and gloom while keeping an eye out for the ‘entity’.


But is that really so, or is it just a ploy to hamper the development of open-source models like LLaMA 2, which are stealing OpenAI’s thunder?

Clement Delangue, co-founder and CEO of Hugging Face, responded to Jan Leike’s tweet, pointing out that it could be read as using fear to undermine open-source practices.

Delangue then addressed Leike’s main point, noting uncertainty about what it even means for a technology to “survive and spread on its own.”

He went on to raise a pertinent question: is the act of creating such a technology itself the risk, or is open-sourcing it the problem? After all, if a technology is truly exceptional, it should find avenues for widespread adoption without relying solely on open-sourcing.

Another tweet in the ongoing discussion added a further perspective: even closed-source language models will eventually be exposed to leaks.

Open-sourcing LLMs does open the door to various manipulations, including deliberately “de-aligning” a model, that is, altering its intended behaviour. A comprehensive alignment process can make such tampering significantly harder, though it cannot rule it out entirely, and robust alignment itself remains an unsolved challenge.

OpenAI recently released a paper on superalignment, authored by Jan Leike and Ilya Sutskever, which addresses the challenge of ensuring that superintelligent AI systems remain aligned with human values and intent. Superintelligence, a hypothetical level of AI capability far surpassing human intelligence, holds the potential to revolutionise various sectors and address critical global issues. However, it also poses substantial risks, including the potential for human disempowerment or extinction. It seems the firm is employing this doomsday narrative to give its superalignment agenda a little nudge.

Two Sides of OpenAI

Nonetheless, there are others at OpenAI, like Andrej Karpathy, who actively contribute to the open-source ecosystem. He recently built a ‘baby Llama’ model based on Meta’s LLaMA 2, which has been widely celebrated among developers.

It seems there are two narratives within OpenAI itself: one camp wants to capitalise on the momentum from the release of ChatGPT, make moolah and remain closed-source, while the other genuinely wants to make a difference and has not strayed far from the company’s original open-source ethos, before it turned into a closed-source, for-profit company, in the words of one of its founders.
