Past Yonder

A human's thoughts on AI


The latest parlor betting game: when will AI kill all humans?

In Mary Shelley’s 1818 novel Frankenstein, Victor Frankenstein, a talented but troubled scientist, becomes consumed by the secrets of life and death and creates a living being from assembled body parts. His creation eventually turns on him.

Since then, the theme of technology betraying its creators has been revisited time and again in novels and movies such as Neuromancer, The Matrix, and, of course, the Terminator franchise.

These plots have sat firmly in the land of science fiction, but more and more experts are warning that recent advances in AI could shift them from fiction to fact. They just can’t agree on when that might happen.

AI ponders the fate of humanity.

Earlier this year, New Scientist described a survey of 2,700 AI researchers who had published work at six of the top AI conferences. In the survey, the experts were asked to share their thoughts on possible AI technological milestones, including whether AI might, you know, kill us all.

Almost 58 per cent of the researchers said they believe there is a 5 per cent chance of human extinction or other extremely bad AI-related outcomes.

The story points out that AI experts don’t have a great track record of forecasting future AI developments, which could be a comfort or not, depending on whether you think their predictions overestimate or underestimate the inherent risks of AI.

Elon Musk, CEO of Tesla and owner of the social networking site formerly known by a different name, has weighed in with his own thoughts.

In a 2023 interview with CNBC, Musk presented a good-news-bad-news scenario.

“There’s a strong probability that [AI] will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity.”

Musk has called for increased regulation to help temper the risks of AI.

While AI systems are not yet as smart as humans (at least in a generalized sense), they’ve made huge inroads in recent years, and experts are beginning to predict that the singularity – the point at which AI surpasses human intelligence and abilities – might not be far away.

Inventor and futurist Ray Kurzweil recently predicted on Joe Rogan’s podcast that AI will achieve human-level intelligence by 2029, but in a post to X, Musk claimed it will happen in 2025.

“AI will probably be smarter than any single human next year,” Musk predicted. “By 2029, AI is probably smarter than all humans combined.”

When that point is reached, we should be concerned, at least if we trust the gut instincts of science fiction authors.

The fear is that AI will turn on its creators, viewing them as either inferior and useless, or a threat to its existence.

The Terminator foreshadowed the risks of creating too-smart autonomous weapon systems. But the perils of AI might be less dramatic than literal warfare between humans and machines. The Pixar movie WALL-E showcased what might happen when humans become so reliant on technology that they lose the ability to walk and depend entirely on automated systems. The antagonist in the movie, an autopilot AI, tries to keep humans in this dependent state.

Michael Garrett, a researcher in the Department of Physics and Astronomy at the University of Manchester, recently presented an interesting theory in a paper titled Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe? In it, he notes that astronomers have been puzzled for the past 60 years by the fact that they have never detected potential “technosignatures” of extraterrestrial life in astronomical data. Surely, if other intelligent lifeforms have existed in the Universe – which many scientists consider highly likely – we would have detected them by now, the thinking goes.

This discrepancy, known as the Fermi paradox, has spawned many theories, including that the lifetimes of intelligent civilizations are too short (in cosmic terms) for us to detect them, or that life in the Universe truly is a rare event.

But Garrett offers another theory: that biological civilizations universally underestimate the speed at which AI systems progress. In other words, he suggests it is inevitable that any intelligent lifeform will create an Artificial Superintelligence (ASI), and that this development will ultimately be its undoing. And that might be why we haven’t seen signs of extraterrestrial life.

Garrett goes further and suggests that the longevity of technical civilizations (on Earth or elsewhere) is less than 200 years.

Which, really, is just a blink of an eye.

Experts disagree on whether AI will present an existential threat to humanity and, if so, on what timeframe, and there seems to be just as little consensus on what we can do about it. Musk and others have suggested government regulation might be the answer, but this is a global risk, and non-technical politicians don’t have a great track record of keeping pace with rapidly evolving technology, let alone understanding its full ramifications.