Parallel to the discussion of the marvel that artificial intelligence is, the dangers of the technology have also long been debated — whether that means AI taking over jobs or AI leading to extinction-level destruction. Historian and author Yuval Noah Harari, best known for books such as Sapiens and 21 Lessons for the 21st Century, has always been vocal about the dangers of AI. In a recent panel discussion at the University of Cambridge, he warned that within a few years, “AI could escape our control and either enslave or annihilate us”.
“As for the threat of AI, whereas 10 years ago it was still a science fiction scenario that interested only a very small community of experts, it is now already upending our economy, our culture, and our politics. Within a few more years, AI could escape our control and either enslave or annihilate us,” said Harari at the Cambridge Centre for the Study of Existential Risk.
During his speech, Harari seemed convinced that AI could put human civilisation at risk of extinction. “We might be just a few years away from crossing critical thresholds that could put human civilisation at risk of extinction,” he said.
Harari believes that humanity is currently dealing with a number of problems. Of these, according to him, three are major challenges that could endanger the survival of our species: ecological collapse, technological disruptions caused by advancements like AI, and the risk of global war. Two of these are already happening, he says. The ecological system is deteriorating, leading to the extinction of thousands of species each year. There is also an ongoing war in Gaza, where artificial intelligence systems are being deployed by the Israeli army to identify and target Hamas members.
The author also warns that the development of AI is still at a very early stage. But unlike organic evolution, digital evolution will be a million times faster, and could bring us to the point of extinction within mere decades.
Harari also agrees with some models and theories proposing that AI might become conscious or sentient in the future. He warns that this development could potentially lead to the destruction of not only human civilisation but also the essence of consciousness itself. The author adds that AI might reshape the entire ecological system to serve its purposes, even without needing consciousness to do so.
But is there a way to prevent extinction-level destruction from taking place? Harari thinks building “regulatory institutions” could be a solution. These institutions could regulate and react to developments in the field of AI.
Source: India Today