Does artificial intelligence really pose an existential threat?

[Artwork: evil artificial intelligence]

We’ve all seen the films. We all love a good story about ‘evil AI’ – Skynet, The Matrix, Ultron, and the Cylons, to name but a few. Several high-profile people, such as Elon Musk and Stephen Hawking, have recently warned us of the potential dangers of artificial intelligence. Musk has gone as far as starting a company to address these concerns directly. But are these realistic views of the future of AI? Is Skynet a possibility?

In theory, yes, superintelligent AI is a possibility. The brain is simply a collection of electrically active units (neurons) that communicate via binary signals (action potentials). In fact, we mathematically model the behaviour of individual neurons using the exact same equations used to model electrical circuits. There is no reason why, in theory, we couldn’t build something sufficiently complex that it is self-aware. But the key word there is theory. In theory it is possible. In practice, however, I’m not so sure.
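To make that concrete, here is a minimal sketch of the textbook leaky integrate-and-fire neuron, in which the membrane is treated as exactly the RC circuit you would find in an electronics course. The parameter values are illustrative rather than measured.

```python
# A minimal sketch, assuming the textbook leaky integrate-and-fire model: the membrane
# obeys C dV/dt = -(V - V_rest)/R + I(t), the same ODE as an RC circuit, and a 'spike'
# is emitted whenever V crosses a threshold. Parameter values are illustrative only.

C = 200e-12        # membrane capacitance (farads)
R = 100e6          # membrane resistance (ohms)
V_rest = -70e-3    # resting potential (volts)
V_thresh = -50e-3  # spike threshold (volts)
I = 300e-12        # constant injected current (amps)

dt = 1e-4          # time step: 0.1 ms
V = V_rest
spike_times = []
for step in range(int(0.5 / dt)):          # simulate 500 ms
    V += (-(V - V_rest) / R + I) * dt / C  # Euler step of the RC-circuit equation
    if V >= V_thresh:                      # threshold crossed: record a 'spike'...
        spike_times.append(step * dt)
        V = V_rest                         # ...and reset the membrane potential

print(f"{len(spike_times)} spikes in 0.5 s")
```

Of course, a real neuron is vastly messier than this toy model, which is rather the point of what follows.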

Before going on we need some definitions. There are three broad classifications of AI: weak, strong, and superintelligent. Weak AI is what we currently have. It is good at one specialised task, such as playing Atari games or Go. Strong AI is roughly of human-level intelligence, though what exactly this means I will discuss below. Superintelligent AI is intelligence beyond our comprehension. It’s not good or evil; it’s just really, really smart. The term used to describe the moment when AI surpasses human intelligence is the technological singularity. Broadly speaking, this is where AI ‘runs away’, improving exponentially and resulting in unimaginable changes to human civilisation (for better or for worse!). Many think that the singularity is close. For example, Ray Kurzweil makes the bold claim that it will occur by 2045.

Is the singularity actually going to happen?

A very eloquent description of the ‘yes and it will happen soon’ camp has been written by Tim Urban, and Nick Bostrom has written a whole book on the topic. In a nutshell, the premise behind the impending singularity is exponential growth – technology is currently advancing at an accelerating rate and has been for decades. We can’t assume this will continue, but it certainly doesn’t look to be slowing down (but then neither did the dot-com bubble). I don’t necessarily oppose the suggestion that a superintelligent AI could happen. The main problem I have with arguments from people like Tim Urban and Ray Kurzweil is one of timescales. Assuming our current rate of advancement continues unabated, when will AI be on a par with human intelligence? Well, to measure the progress of AI relative to ourselves we need a reliable measure of human intelligence. We don’t have one. Most measures seem to focus on assigning an estimate of compute capacity to the brain, which, given our current level of understanding, is unreliable at best. For example, Ray Kurzweil gives an estimate of ten quadrillion calculations per second by erroneously considering a single action potential as equivalent to a calculation in the mathematical sense.
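For context, the sort of back-of-envelope arithmetic that produces a figure like ten quadrillion looks something like the following. This is my reconstruction with round numbers, not Kurzweil’s exact working.

```python
# My reconstruction, with round numbers, of the kind of arithmetic behind such estimates;
# it treats every synaptic event as one 'calculation', which is exactly the dubious step.
neurons = 1e11             # ~10^11 neurons in a human brain (rough consensus figure)
synapses_per_neuron = 1e3  # order-of-magnitude average connectivity (assumption)
firing_rate_hz = 100       # assumed average firing rate (assumption)

calculations_per_second = neurons * synapses_per_neuron * firing_rate_hz
print(f"{calculations_per_second:.0e} 'calculations' per second")  # 1e+16: ten quadrillion
```

The objection is not to the multiplication but to the premise: a synaptic event is not a ‘calculation’ in any well-defined sense.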

Now, the neuroscience community has a reasonably good description of a single mammalian action potential, and a decent understanding of which brain areas are involved in different high-level functions (movement, memory etc.). However, to suggest we can quantify a single calculation given our current level of understanding is ludicrous, and it may not even be possible given the highly stochastic nature of neuronal functioning. Intelligence isn’t simply down to the number of action potentials that occur; this simplistic description rapidly breaks down once you start to consider things like threshold potentials, synaptic vesicle release, or neural facilitation. To give an indication of how close we are to fully understanding neural computation, we still can’t simulate the brain of a C. elegans nematode, which comprises only 302 neurons (although there is an open-source project attempting this, which has even put its model into a Lego robot!). As an aside, a more reliable estimate of the brain’s computational capacity could be traversed edges per second, though this is still overly simplistic.
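As a rough illustration of the traversed-edges idea, you can count synapse traversals rather than ‘calculations’. The synapse count and mean firing rate below are order-of-magnitude assumptions, not measurements.

```python
# Order-of-magnitude assumptions only: a crude 'traversed edges per second' figure,
# counting synapse traversals (edges crossed by spikes) rather than 'calculations'.
synapses = 2e14          # rough synapse count for a human brain (assumption)
mean_firing_rate_hz = 1  # assumed average spike rate; real rates vary hugely by cell type

teps = synapses * mean_firing_rate_hz  # each spike traverses a neuron's outgoing edges
print(f"~{teps:.0e} traversed edges per second")
```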

So, given we can’t really assign a value to human intelligence, we have no way of estimating when AI will achieve it. However, this doesn’t stop people trying; a survey of expert AI researchers suggests that there is a 50% chance of strong AI being realised around 2050, and a 90% chance by 2075. I would be surprised if many of the AI researchers suggesting this were also trained neuroscientists (disclaimer: I’m going out on a limb with that last statement and basing it on my own experience, where I have rarely seen expertise in machine learning and neuroscience overlap – I’m of course open to corrections!).

The second issue I have with the ‘we’re doomed’ argument (which I won’t dwell on) is that it assumes weak, strong, and superintelligent AI all lie on the same exponential growth curve, which they clearly don’t; strong AI is not simply a progression of weak AI, it is a whole new paradigm. You can build the most bad-ass Go-playing AI ever, but there’s no way it will recognise a picture of a cat (we all know that cat recognition is a prerequisite to sentience…). I think we will go some way towards building more general-purpose AI systems by bolting multiple weak AI systems together in a decision-tree-like manner, as sketched below (I will delve into this in a future blog post), but this would be a long way from achieving sentience.
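Here is a toy sketch of what I mean: a hand-written router that dispatches each task to a narrow specialist. The specialist functions and routing rules are hypothetical placeholders, not real systems.

```python
# A toy sketch of 'bolting weak AI systems together': a hand-written router dispatching
# tasks to narrow specialists. Function names and rules are hypothetical placeholders.

def go_player(board):
    """Stand-in for a narrow Go engine."""
    return f"best move for {board}"

def cat_detector(image):
    """Stand-in for a narrow image classifier."""
    return f"cat detected in {image}"

def route(task_type, payload):
    """Decision-tree-style dispatch: each branch hands the task to one weak AI."""
    if task_type == "go":
        return go_player(payload)
    elif task_type == "image":
        return cat_detector(payload)
    else:
        raise ValueError(f"no specialist available for task type: {task_type}")

print(route("go", "an empty 19x19 board"))
print(route("image", "cat.jpg"))
```

Note that all of the ‘generality’ here lives in the routing rules a human wrote, not in the specialists themselves – which is exactly why this kind of glue falls well short of sentience.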

If the singularity does happen should we be worried?

As I mentioned at the start of this post, many people think we should be worried, or at least that we should be careful. Indeed, if/when AI does outsmart us, there is really no telling how quickly it would advance. Furthermore, Nick Bostrom makes a good argument that AI doesn’t have to be inherently evil to pose a threat, but could simply have an ill-defined optimisation function. In reality, we face far more immediate threats, such as the ramifications of climate change; our failure to fully embrace potentially population-saving technologies like genetically modified crops or vaccination programmes; and Donald Trump. AI has the potential to drive significant progress on at least two of these three examples, so we should certainly continue to develop it. Personally, I think it is far more likely that we see society collapse due to more immediate existential threats long before strong AI is ever realised. That said, there’s certainly no harm in planning ahead, just in case Skynet pops up on Twitter.

[Thanks to Michael Harrison for the artwork.]

EDIT: By happy coincidence, MIT Technology Review published an article on the same topic, on the same day, and arrived at much the same conclusion!
