Industry experts and technology leaders warned in an open statement that mitigating the risks posed by artificial intelligence should be a global priority, since the technology could cause human extinction.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement, released Tuesday, said.
Sam Altman, CEO of ChatGPT maker OpenAI, along with leaders from Microsoft and Google’s AI division DeepMind, backed and signed the brief statement from the Center for AI Safety.
Following the public release of the chatbot ChatGPT in November and its subsequent viral success, the technology has advanced rapidly in recent months.
Within two months of launch, it had attracted an estimated 100 million users. With its capacity to provide humanlike responses to user prompts, ChatGPT has astounded experts and the public alike, raising the prospect that AI could take over tasks currently performed by humans.
A “wide spectrum of important and urgent risks from AI” has been discussed more frequently of late, according to the statement released on Tuesday.
However, it acknowledged that it can be “difficult to voice concerns about some of advanced AI’s most severe risks,” and said it aimed to overcome that obstacle and open up the discussion.
Now that ChatGPT has been released, and with large corporations around the world racing to offer similar products and features, many more people are likely to become aware of AI and begin using it.
In March, Altman acknowledged that he is “a little bit scared” of artificial intelligence because he is concerned that authoritarian countries may develop the technology.
Elon Musk, the CEO of Tesla, and Eric Schmidt, the former CEO of Google, have also issued warnings about the dangers AI poses to civilization.