
Researcher Warns of 99.9% Probability of AI Catastrophe

AI security expert Roman Yampolskiy has put the probability of a catastrophic scenario for humanity within the next century at nearly 100%, citing the technology's rapid development. Other researchers offer far lower estimates.


Roman Yampolskiy, a researcher specializing in artificial intelligence security, warned that the technology's rapid development could threaten humanity's future, putting the probability of a catastrophic scenario within the next hundred years at 99.9%. He bases this warning on the observation that no current AI system offers sufficient safety guarantees, and he believes future generations of these systems are unlikely to solve those same fundamental problems. With this view he joins a list of experts concerned about the technology's long-term risks.

Yampolskiy lays out the core risks in his latest book, "Artificial Intelligence: Unexplainable, Unpredictable, Uncontrollable," including the difficulty of predicting AI behavior and of understanding the reasons behind its decisions. The book also addresses who controls the technology and the unintended consequences of its use, and includes philosophical discussions about whether AI can possess consciousness or a personality.

Other research offers less severe estimates. A joint study by the Universities of Oxford and Bonn, which surveyed more than 2,700 experts, put the probability of human extinction due to artificial intelligence at no more than 5%. Researcher Katja Grace explained that most experts acknowledge the risk; the disagreement lies in its precise scale.

Some prominent pioneers in the field reject the idea of total human extinction altogether, among them Google Brain co-founder Andrew Ng and researcher Yann LeCun, who argue that some of the more exaggerated warnings may serve purposes unrelated to science.

For his part, OpenAI CEO Sam Altman sparked widespread debate when he warned that AI could eliminate a large number of jobs he described as not "real work," predicting fundamental changes to the existing "social contract." Altman had said as early as 2015: "Most likely, artificial intelligence will lead to the end of the world, but before that, it will create great companies."