AI represents a “risk of extinction”, open letter claims

Here’s a cheery start to your June. A bunch of the world’s leading AI researchers and engineers have signed a 22-word statement which argues that artificial intelligence represents a “risk of extinction”. 

An interesting follow-up, which is either good or bad news depending on how you look at it, is that the signatories include representatives from Google and OpenAI. In other words, the very people trying to commercialise artificial intelligence at an alarming pace. Hmmm.

Here’s the statement from the Center for AI Safety in full so you can plot your level of alarm accordingly:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The list of people who considered the statement uncontentious enough to put their name to it includes Sam Altman (CEO of ChatGPT-maker OpenAI), Demis Hassabis (CEO of Google DeepMind) and Dario Amodei (CEO of Anthropic).

How might AI go from chatbots making sandwich suggestions to a superintelligence ending humanity? Well, the Center for AI Safety has a number of suggestions, so you don’t have to imagine them (or ask ChatGPT to do it on your behalf):

  • Weaponisation. This is where your old-fashioned bad egg gets AI to do bad things, like discover new chemical weapons.
  • AI-generated misinformation. We’re kind of seeing this one already, but done by humans. Fake news and disinformation could destabilise democracies and “undermine collective decision-making” — and far more efficiently than your average Russian troll farm.
  • Enfeeblement. This is the WALL-E scenario, where humanity essentially downs tools and lets AI handle everything, to our own detriment. Skills are forgotten, and we’re in trouble when we need them again.
  • Proxy Gaming. This is where AIs are given a goal, but don’t understand that there are some things you don’t do in order to achieve it. It’s the old paper clip maximiser scenario, basically.

Those are just a few. More nightmare scenarios can be read here, if you don’t fancy sleeping tonight.

Of course, there are plenty of respectable experts who think these concerns are overblown and the products of minds that absorbed too much science fiction. That includes Professor Yann LeCun, one of the three so-called ‘godfathers of AI’.

However, it’s worth noting that the two other godfathers — Geoffrey Hinton and Yoshua Bengio — have signed the statement. 

And it’s also worth noting that the former, Hinton, used to think such fears were overblown too. He doesn’t anymore.

“You need to imagine something more intelligent than us by the same difference that we’re more intelligent than a frog,” he told The Guardian last month. “And it’s going to learn from the web, it’s going to have read every single book that’s ever been written on how to manipulate people, and also seen it in practice.”

He thinks what the paper euphemistically refers to as “crunch time” will come in the next five to 20 years. 

“But I wouldn’t rule out a year or two. And I still wouldn’t rule out 100 years – it’s just that my confidence that this wasn’t coming for quite a while has been shaken by the realisation that biological intelligence and digital intelligence are very different, and digital intelligence is probably much better.”

Happy dreams, readers.