AI could cause 'nuclear-level' catastrophe, third of experts say

More than one-third of researchers believe artificial intelligence (AI) could lead to a "nuclear-level catastrophe", according to a Stanford University survey, underscoring concerns in the sector about the risks posed by the rapidly advancing technology.

The survey is among the findings highlighted in the 2023 AI Index Report, released by the Stanford Institute for Human-Centered Artificial Intelligence, which explores the latest developments, risks and opportunities in the burgeoning field of AI.

"These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new," the report's authors say. "However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment."

The report, which was released earlier this month, comes amid growing calls for the regulation of AI following controversies ranging from a chatbot-linked suicide to deepfake videos of Ukrainian President Volodymyr Zelenskyy appearing to surrender to invading Russian forces.

Last month, Elon Musk and Apple co-founder Steve Wozniak were among 1,300 signatories of an open letter calling for a six-month pause on training AI systems more powerful than OpenAI's chatbot GPT-4, arguing that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable".