A collective of AI scientists, including Turing Award recipients Geoffrey Hinton and Yoshua Bengio, along with Sam Altman of OpenAI and Demis Hassabis of Google DeepMind, has endorsed a concise statement from the Center for AI Safety.
The joint statement declares that mitigating the risk of extinction from AI should be treated as a global priority on par with other societal-scale risks, such as pandemics and nuclear war, adding to the steady stream of doomsday predictions revolving around AI.
This unprecedented collaboration brings together AI experts, along with philosophers, ethicists, legal scholars, economists, physicists, political scientists, pandemic scientists, nuclear scientists, and climate scientists, in recognising the risk of extinction from advanced AI systems as one of the most critical challenges facing the world. Many influential leaders have echoed this sentiment since the release of OpenAI's ChatGPT, a chorus that appears to be shaping public opinion: a recent poll found that 61 percent of Americans believe AI poses a threat to humanity's future.
The mounting concerns regarding the potential consequences of AI echo early discussions surrounding atomic energy, when J. Robert Oppenheimer, one of the pioneers of the atomic bomb, emphasised the need for international cooperation to prevent nuclear war.
Dan Hendrycks, the Director of the Center for AI Safety, highlights the importance of engaging in conversations similar to those held by nuclear scientists before the development of the atomic bomb. He stresses that addressing the negative impacts of AI, which are already being felt globally, is imperative, and that we must also anticipate the risks posed by more advanced AI systems.
Hendrycks also asserts that while we grapple with immediate AI risks, such as malicious use, misinformation, and disempowerment, the AI industry and governments worldwide must seriously confront the potential threat that future AI systems may pose to human existence. Effectively mitigating the risk of AI-induced extinction will necessitate collaborative global action.
There are sceptics who contend that AI technology is not yet developed enough to pose a threat of such magnitude. Their concerns centre on immediate issues, such as biased and inaccurate responses, rather than long-term risks.
The post The Peril of AI: Leading Scientists Sound Alarm on Extinction-Level Threat Comparable to Nuclear War appeared first on Analytics India Magazine.