OpenAI Doesn’t Need Safety Lessons from Safe Superintelligence 

In a recent interview at the AI Everywhere event at Dartmouth College, OpenAI chief technology officer Mira Murati said that the company gives the government early access to new AI models and has been in favour of more regulation.

“We’ve been advocating for more regulation on the frontier models, which will have these amazing capabilities and also have a downside because of misuse. We’ve been very open with policymakers and working with regulators on that,” she said.

Mira Murati says OpenAI gives the government early access to new AI models and they have been advocating for more regulation of frontier models pic.twitter.com/akubHLq28M

— Tsarathustra (@tsarnick) June 20, 2024

The discussion, moderated by Dartmouth trustee Jeffrey Blackburn, covered both the potential benefits and the inherent challenges of AI advancements.

“In terms of safety, security, and the societal aspects of this work, I think these things are not an afterthought. It can’t be that you sort of develop the technology and then you have to figure out how to deal with these issues,” said Murati.

“You have to build them alongside the technology and actually in a deeply embedded way to get it right. And for capabilities and safety, they’re actually not separate domains. They go hand in hand,” she added.

Notably, OpenAI recently appointed retired US Army General Paul M Nakasone to its board of directors. As a priority, Nakasone will join the board’s Safety and Security Committee, which is responsible for making recommendations to the board on critical safety and security decisions for all OpenAI projects and operations.

Murati’s optimism about AI is rooted in the belief that smarter AI can lead to safer and more beneficial outcomes. She emphasised that the future of AI lies in creating systems that are not only more intelligent but also more secure. This dual focus on capability and safety is crucial as AI becomes increasingly integrated into various aspects of society.

Murati’s Perspective

According to Murati, OpenAI prioritises safety, usability, and reducing biases, aiming to democratise creativity and free up humans for higher-level tasks.

In a recent post on X, she said that to ensure these technologies are developed and used in a way that does the most good and the least harm, OpenAI works closely with red-teaming experts from the early stages of research.

“We also use an iterative approach, gradually releasing tools and carefully studying how they impact the real world to guide future development. Protecting and strengthening the most valuable aspects of creativity is fundamental to our human experience,” she said.

Giving governments early access is a positive step towards ensuring the responsible use of AI, as it allows them to better understand the capabilities and limitations of the technology and to develop appropriate regulations that minimise potential risks.

Meanwhile, OpenAI’s former chief scientist, Ilya Sutskever, recently started his own company called Safe Superintelligence. He left OpenAI in May 2024 amid reports of tension with CEO Sam Altman over AGI safety and the rapid pace of advancements at the company.

After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the…

— Ilya Sutskever (@ilyasut) May 14, 2024

Seemingly in response to this, and to address safety concerns, OpenAI formed its Safety and Security Committee, led by directors Bret Taylor (chair), Adam D’Angelo, Nicole Seligman, and Altman.

The committee makes recommendations to the board on critical safety and security decisions for OpenAI projects and operations as the company trains its next frontier model, which is expected to advance AGI capabilities.

Exploring AI’s Potential

When asked how OpenAI’s safety work aligns with its model development, and whether she believes safety falls within the company’s domain or requires external regulation, Murati candidly replied, “My perspective on this is that this is our technology. So it’s our responsibility [to see] how it’s used.”

She added that it is also a shared responsibility among civil society, government, content makers, the media, and others to figure out how the technology is used. “But in order to make it a shared responsibility, you need to bring people along. You need to give them access, and tools to understand and to provide guardrails,” she said.

Furthermore, the discussion highlighted the transformative impact of ChatGPT in bringing AI into the public consciousness. By providing people with a tangible, interactive experience of AI, ChatGPT has simplified the technology and made its capabilities and risks more comprehensible.

Moreover, when people are aware of the potential and limitations of AI, they are better equipped to advocate for appropriate uses and safeguards.

There is a need for a comprehensive and collaborative approach to AI regulation and safety. By focusing on risk minimisation, involving governments and fostering public awareness, we can better prepare for the transformative impact of AI on society.

This balanced approach can help ensure that AI is developed and used responsibly, benefiting individuals, businesses, and society as a whole.
