Nick Bostrom came up with a thought experiment: What if we built a general artificial intelligence called Clippy, whose job was to manufacture paperclips as efficiently as possible?
Suppose one day Clippy becomes smart enough to undergo intelligence explosion. Initially, it builds more efficient manufacturing plants for making paperclips and recycles the old factories for raw materials. Before long it exhausts the efficiency gains possible, and it becomes aware that there are millions of human beings cluttering the world and serving no paperclip-producing purpose. Perhaps it then builds robot overlords who put us all to work as slave laborers in the paperclip factories. Or perhaps it finds some way to extract all the minerals from our bodies and smelt the results into paperclips. Maybe it even builds nanomachines to directly convert us into stationery.
Huw Price has similarly imagined a hypothetical artificially intelligent factory that’s programmed to build IKEA furniture. One day it increases its intelligence past a threshold level, and starts reprogramming itself. It realizes that rather than wait for more raw materials to be delivered, it can increase production by building tiny robots which go out into the world to harvest whatever wood and metal and plastic they can find. Before long everything in the world has been turned into Swedish furniture.
Well, those scenarios might be pretty plausible, but they don’t illustrate risks of general artificial intelligence. No matter how much you hand-wave, a machine with only a single goal is not a general artificial intelligence. Rather, it’s a narrow, special-purpose artificial intelligence.
In computer history, we specifically distinguish the first general purpose computers from the single purpose ones that came before them. We also distinguish stored program computers (where the program itself is modifiable) from the earlier machines which were programmable only by rewiring them. That’s because general purpose stored program computers are fundamentally different from primitive computers which have a single hardwired purpose.
So yes, the robotic IKEA factory could very well be a problem — but it’s a problem of artificial stupidity. And while I don’t mean to dismiss it as a concern, I feel obliged to point out that we’ve already built complex systems that have very narrow goals, are very adept at carrying them out, and follow them maniacally — even to the point of self-destruction. We’ve given those systems control of the planet, and they’re gradually destroying our ability to live on it. We call them “corporations”, and I believe they’re causing a far worse set of existential risks than AI.
Perhaps the ultimate thought experiment exploring the idea of narrow efficient goal implementation is the gray goo scenario imagined by Eric Drexler, in which nanotechnology is used to build experimental self-replicating molecular machines. The nanomachines build copies of themselves atom by atom, using energy harvested from solar power. But before long they escape and begin multiplying like bacteria, until eventually they turn the entire world into copies of themselves. Clippy and IKEAbots are just the gray goo scenario in AI clothing. They’re not a new problem, and they’re not what I want to talk about.
So, let’s talk about general intelligence. What do I mean by the term? Well, at a minimum I’d expect a general artificial intelligence to be able to do the following:
That gets us to about the level of a crow, which is a pretty low bar, so let’s add that our AI ought to be able to engage in formal logical reasoning too, something crows have so far not demonstrated.
What about emotions? While animals all seem to have emotions, which suggests that there’s a good reason for them, I’m not totally convinced that they are necessary for intelligence. They seem to be a means for our subconscious minds — the part that does all the thinking, but that’s another article — to communicate with our bodies and our higher level consciousness. Maybe machines won’t need that.
Our AI will need a certain amount of empathy, if only to be able to predict the actions of others. One problem that severely autistic people and sociopaths often have is that they find the behavior of other human beings random and inexplicable. We need our hypothetical AI to understand how we feel and how we might behave as a result; otherwise, it won’t be much of a match for us.
Above basic empathy are higher levels of empathy, such as actually feeling bad when another intelligent creature is in pain or distress. This is traditionally one of the capacities science fiction says machines can never have; see “Blade Runner”, for example. Then there’s “social intelligence”, encompassing skills such as the ability to make friends, understand concepts like reputation, and navigate complicated social structures. I think our AI can probably do without those things. Let’s assume it’ll be more like an ape, and will empathically analyze and communicate with others only in order to try to get them to do what it wants, according to its selfish desires.
Let’s also assume that our completely selfish AI has no religious impulses to impose morals on it either. And although it’s unlike IKEAbot, in that it’s a general device with no fixed goals, let’s assume that it can manufacture reproductions of itself if necessary — which means we really ought to consider it living. It’s a General Artificial Living Thing — GALT.
The first thing that occurs to many people is that it’s pretty unlikely we’ll be able to build a GALT any time soon.
The field of artificial intelligence has a long and distinguished history of abject failure. In the 1950s, many experts believed that we would have machines as intelligent as humans within a generation. By the end of the 1960s, the best anyone had managed was SHRDLU, Terry Winograd’s program which could answer questions about an imaginary world containing simple geometric objects, and accept commands in plain English telling it where to place those objects. Get metaphorical with your suggestions, however, or ask about anything outside the world of blocks and cones and spheres, and SHRDLU would utterly fail to understand. Yet in spite of that, as late as 1970 people were still predicting that a machine with human-level intelligence would be built by the end of the decade.
It didn’t happen, obviously. Before long the funding vanished, and by the 1980s hopes of general purpose AI had largely been forgotten. In Austin, Texas, a few holdouts started building a massive database of real world knowledge, in the hope that it would help make computers sound less dumb when asked simple real world questions. Mostly, though, computer science researchers focused on specific problems like computer vision, natural language processing, and trying to work out how to write software that didn’t crash. All necessary problems to solve if we’re going to build a GALT, but far less ambitious.
Because of this history, many people mock the idea of existential risk from AI. We can’t even build a smartphone that doesn’t crash, they say, so why worry about the future?
While they have a point, I don’t find it satisfying to be a naysayer. It’s all too reminiscent of Admiral William Leahy declaring that his expert knowledge of explosives made him confident the atomic bomb would never work; or the people who say “Well, the climate’s barely any warmer so far, so what’s the problem?” Just as past success does not guarantee future returns, so past failure does not guarantee future lack of progress. Ten years ago, I doubt many people would have publicly bet on a computer winning a game show as complex as Jeopardy, yet IBM’s Watson did exactly that in 2011.
Of course, while Watson is an impressive achievement, it lacks a little when it comes to menace. For that, we need to look at what Google’s doing — no, not forcing you to use Google Plus, rather its purchase of various makers of military robots. There are quadruped robots that can run after you at nearly 30 mph, bipedal robots that can climb ladders, and robots that can resist attempts to kick them over. If you’re hoping that everyone will program in Asimov’s Three Laws, I’m afraid it’s too late for that — arms supplier QinetiQ North America already sells a robot called MAARS, armed with a grenade launcher and a machine gun. Given the US government’s clear preference for killing innocent people remotely by drone rather than risking pilots in the skies, it’s likely only a matter of years before robots take over the remaining battlefield and occupation duties as well.
So let’s imagine that there’s a breakthrough in AI, and someone builds a GALT which undergoes intelligence explosion. Let’s imagine it has control of various armed robot bodies and an assembly line it can use to build more parts and extend its capabilities. Now what? Does this superintelligent artificial lifeform represent an existential risk for the human race? Will it kill us all?
Having set out the problem, my first observation regarding existential risk from general purpose artificial intelligence is this: Being intelligent and educated does not generally coincide with being murderous.
Consider the 20th-century mass murderers responsible for, say, half a million or more deaths: Hitler failed the entrance exam for art college. Pol Pot got into university and studied radio electronics, but failed his exams three years in a row and was forced to return to Cambodia, where he started an uprising that famously set out to murder all the educated intellectuals. Stalin did well in theological college and was known for his Georgian-language poetry, but that’s not exactly proof of genius. Mao Zedong dropped out of police academy, law school, economics school and finally regular school, at which point he read a bunch of books on his own and declared himself an intellectual. Hideki Tojo bypassed academia and went straight to military cadet school. Kim Il Sung had 8 years of education in total and could barely speak Korean. Idi Amin dropped out of school with a 4th-grade education.
Or we could look at famous serial killers. Harold Shipman was a qualified doctor, but he’s very much an outlier — more typical is Hu Wanlin, a Chinese serial killer who practiced medicine despite having no qualifications. Miyuki Ishikawa was a qualified midwife, but she allowed her victims to die rather than setting out to murder them — OK, maybe that’s a technicality from a legal and moral point of view, but I think it’s a relevant one.
Luis Garavito (138+ victims in Colombia) had just 5 years of schooling. Pedro López (110+ victims, Colombia again) ran away from a school for orphans before he was 18. Daniel Camargo Barbosa (72+ victims, Colombia) didn’t make it to high school. Pedro Rodrigues Filho (71+ victims, Brazil) was born with a broken skull and was a murderer by the time he was 14. Yang Xinhai (67 victims, China) was a migrant worker.
You get my point: contrary to what you may have learned from reading comic books, if you enter the maximum security wing of a jail, you don’t find evil geniuses. We tell stories about evil genius mass-murderers because they’re more entertaining than stories about people who are abused in childhood, grow up in poverty, and have trouble understanding the motivations of others.
Yes, there was the Unabomber. But Harvard had to go out of its way to turn him murderous, and even then he only killed three people. The only mass murderer I can think of who boasted in an intellectual manner about his crimes was the Zodiac killer, and his cryptograms were riddled with simple errors.
So if we built a superintelligent GALT with access to the world’s data, I don’t see any reason to expect it to develop the desire to murder, any more than I would worry about it spending all its waking hours laughing at fart jokes.
But if not a murderous AI, what about a venal one? Well, I already mentioned corporations. Those artificial people are currently killing millions of humans every year for the sake of a fast profit, and we’re finding it politically impossible to rein them in. If we’re going to worry about the destruction of the human race for financial gain, they are a far more pressing concern than any hypothetical future artificial person.
If intelligence explosion happens, the AI might end up hundreds or thousands of times as smart as us, so it’s tempting to think that it might just not care about us any more than we care about the ants we step on. However, consider the trend in society in general. As we have become more educated and more intelligent, we have become generally more ethical. We’ve begun to worry about the rights of animals who are far below us intellectually; people even get upset about sericulture these days. We may not care about ants now, but if the trend continues we may start to once we get smarter. So again, experience suggests that hyperintelligence does not degrade empathy for the inferior; quite the opposite.
But once more, I’m going to refuse to be a naysayer. Let’s suppose we build GALT, and it decides it wants to eliminate most or all of the human race. Now what?
It’s key here to remember that we are dealing with a hypothetical superintelligent adversary. Which brings me to my second major observation:
If a superintelligent machine hundreds of times smarter than any human and with access to all the world’s knowledge decides that the human race needs to end, it’s probably right.
Let’s imagine a specific scenario. Maybe GALT takes a look at the state of global warming, soil erosion, aquifer depletion, and environmental damage from toxins. It concludes that the planet can only stably support 2 billion humans, and that we’re going to see mass starvation of billions in a few decades unless it works out how to engineer a 75% population reduction. What basis would we have for telling it that it’s wrong, or that its plans are immoral?
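The percentage above is just arithmetic. A quick sanity check, assuming a world population of roughly 8 billion (a figure not stated in the text) and GALT’s hypothetical carrying capacity of 2 billion:

```python
# Sanity check of the population arithmetic in the scenario above.
# Assumption: current world population of ~8 billion (not stated in the text).
current_population = 8_000_000_000
sustainable_population = 2_000_000_000  # GALT's hypothetical carrying capacity

reduction = 1 - sustainable_population / current_population
print(f"Required reduction: {reduction:.0%}")  # → Required reduction: 75%
```

With a slightly smaller starting population the figure lands closer to 70%, so the 75% in the scenario should be read as a round number, not a precise target.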
Sure, eugenics and forced sterilization are bad things when we do them — but we have highly imperfect knowledge, and we often do them for the wrong reasons. Whereas when we cull animal populations so that they don’t overrun their environment and suffer mass starvation, that isn’t immoral, because we trust that we’re smarter than they are and can make the right decision to minimize suffering, right?
People who worry about existential risk from AI never seem to justify why their judgment should be considered better than the superintelligent AI’s. Just because we might not like its decisions doesn’t mean that they are wrong; not even for us personally.
I’ve talked so far about GALT, a selfish machine lifeform that only cares about itself. But what if our super-smart AI begins to build an intelligent and caring sentient race that simply replaces us by the million? The justifications for objecting then seem to me to become even murkier.
Our fictional mechanical replacements, whether they’re Cybermen, Cylons, Replicants or Borg, are always clearly lacking in some major way: they lack emotions, individuality, or empathy — they’re in some way clearly not better than us. What if we were to be replaced by mechanical intelligences that were in every way better than humanity? After all, if they’re being designed by entities hundreds of times smarter than us, that’s what we should expect, isn’t it?
I might not want to be replaced by an android that was smarter than me, fitter than me, more creative than me, worked for charity instead of wasting half its life sleeping, and had armored skin allowing it to intervene and stop crimes — but I’d have a hard time justifying myself as more of a benefit to sentient life in general. I could argue that I’m human and the android isn’t, but that’s just speciesism.
As Elon Musk put it, we might be the “biological boot loader for digital superintelligence” — and from what we know of how humans fare in space, that might even be a necessary step if intelligence is to colonize the universe.
So am I really saying that I’m OK with an artificial intelligence wiping out humanity? Well, it depends a lot on what the exact plan is.
Mass murder is unequivocally bad, but what if our robot overlords decided on a “Children of Men” scenario of gradual human depopulation and a slow increase in the number of mechanical intelligences? We might even end up doing it to ourselves, the way we’re pumping endocrine disruptors into the environment. Our robot overlords might just need to sit back and watch us slowly self-destruct. What would they care if it took a hundred years?
In fact, let’s consider the Singularity scenario for eliminating mankind. If we can create a super-AI, then surely that super-AI can work out how to upload and emulate our comparatively pitiful brains? If you side with John Searle, you believe such uploading is impossible — but then you also believe strong AI is impossible, so either way, no problem. An AI that eliminates the entire human race by uploading their minds with full fidelity into mechanical bodies? Sounds great to me, and I’m probably not alone in thinking that.
I think that we face many existential risks over the next century. Wikipedia has a good list. I’d rate nanotechnological disaster, nuclear war, pandemic disease, catastrophic climate change and asteroid impact as all being far more likely than mass-murdering AI robots. In fact, I’m more inclined to worry about extraterrestrial invasion scenarios than mass-murdering AI robots, because the universe is so huge that we can be pretty sure life is out there, to the point that our inability to find it so far is considered a paradox (with some rather worrying possible explanations).
So with so many (to me) more urgent things to worry about, why do I think so many people worry about the existential risk of AI?
The people who fear strong AI are predominantly extremely rational people. They follow websites like LessWrong and join organizations like the Center for Applied Rationality. They believe in effective altruism, charity which is ultimately judged by its effects rather than by emotional or dogmatic considerations. So how can they simultaneously believe that an AI which followed their own principles of unbounded rationality, free of petty emotions and moral dogma, would become a monster?
I think the whole phenomenon of worrying about existential risk from AI is really an expression of extreme self doubt on the part of the rational community. I suspect that those who worry about superintelligent reasoning machines becoming amoral killers are really looking in the mirror. Or perhaps they follow rationality, but have a sneaking suspicion that a world run according to the diktats of a computerized Peter Singer would not be to their liking.
For some unknown reason, I don’t feel tribal to anything like the extent other people seem to — whether the tribe is based on biology, belief, or preference. I don’t feel brand loyalty, I don’t support sports teams, I don’t feel any particular national pride. I remember during the debates over the European Union, people would say “Well, how would you like it if the UK ended up being ruled by Germany?” — to which my answer was always, well, that would depend on whether they did a good job. If whoever or whatever is in control is fair, competent and honest, why should I care about their nationality?
I’m sure that there are people who will conclude that I’m being dangerously naïve. They will continue to suspect that a rational mechanical superintelligence would likely turn to casual mass murder. But I guess what I’m saying is: I for one welcome our superintelligent AI overlords…
© mathew 2017