A group of chief executives and scientists from companies including OpenAI and Google DeepMind has warned that the threat to humanity from fast-developing artificial intelligence rivals that of nuclear conflict and disease.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” said a statement published by the Center for AI Safety, a San Francisco-based non-profit organisation.
More than 350 AI executives, researchers and engineers, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic, signed the one-sentence statement.