The Center for AI Safety in the United States and a group of experts have jointly warned that the development of AI technology poses a risk of human extinction and that measures must be taken to prevent such a scenario.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the Center for AI Safety said in a statement published on its website.
The document was signed by AI experts, as well as specialists in information technology, economics, mathematics, and philosophy. Among them are OpenAI CEO Sam Altman and one of the pioneers of AI, British scientist Geoffrey Hinton.
The initiators circulated the statement to raise awareness of the risks of AI, to spark discussion on the topic, and to highlight the growing number of experts and public figures who take the threats posed by advanced AI technology seriously.
The Associated Press recalls that earlier this year, Elon Musk, owner of Twitter, SpaceX, and Tesla, along with more than 1,000 researchers and technologists, signed a letter calling for a six-month pause in AI development programs because, in their words, such work “poses serious risks to society and humanity.”