Why we should be worried about an artificial intelligence explosion

Elliot Olds
Nov 29, 2017 · 9 min read

François Chollet recently published an essay arguing that an artificial intelligence explosion (in which an AI rapidly becomes far more powerful than humans) is impossible. Below I quote passages from his essay and explain, after each one, why I found many of his arguments unconvincing.

Does “general intelligence” exist?

there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem.

According to this definition of general intelligence, not even humans have it. This raises the question: what good is this strict definition of generality?
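
Chollet doesn’t spell the theorem out, but the standard Wolpert and Macready (1997) statement makes clear just how strict this notion of generality is: for any two optimization algorithms a1 and a2, performance averaged over every possible objective function f is the same,

$$\sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2),$$

where d_m^y is the sequence of objective values observed after m evaluations. “All possible problems” here means every mathematically definable objective function, almost all of which are structureless noise with nothing to learn or exploit.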

Clearly humans are able to solve problems far more generally (and of higher complexity) than any other animal. Whatever kind of intelligence that humans have more of and mice have less of, it exists and is important.

We should worry about an AI that has far more of the type of intelligence that humans already have, even if someone reassures us that nothing can be truly generally intelligent.

The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.

The claim is that humans are not more generally intelligent than AlphaGo or octopuses. If you think about what exactly is involved in “the problem of being a human,” you’ll realize that it involves solving a much wider range of problems than playing Go extremely well.

Would we necessarily be separating intelligence from context when creating AIs?

What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days?

This is not analogous to how a superhuman AI would be created. In the given scenario a brain is “trained” (via evolution) for one environment, and then “tested” by putting it in a very different environment.

The creators of a superhuman AI would try to make its interface to the environments in which it learns as similar as possible to its eventual interface to our world.

What would happen if we were to put a human — brain and body — into an environment that does not feature human culture as we know it?

A superhuman AI would be trained within our culture. Things relevant to operating in our world (our language, history, psychology, etc.) would be fed to the AI in order to prepare it to achieve its goals.

Does high intelligence even matter that much?

If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence […] would take over the world — just as some people fear smarter-than-human AI will do.

It’s a mistake to infer that because people with IQs 70 points above average don’t take over the world, no entity could do so even if it were thousands of times smarter than the world’s smartest humans.

There is no evidence that a person with an IQ of 170 is in any way more likely to achieve a greater impact in their field than a person with an IQ of 130.

Stephen Hsu describes why this is false here.

The short version: if you look at all high-achieving people, you’ll find more with IQs of 130 than of 170. However, you need to control for the fact that there are far more people with IQs of 130 than with IQs of 170. For any particular individual, an IQ of 170 makes high achievement far more likely than an IQ of 130.
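
A toy calculation makes the base-rate point concrete. The population fractions below follow from the standard IQ distribution (mean 100, SD 15); the assumption that an IQ-170 person is 50 times more likely to do field-defining work is purely illustrative, not a measured figure:

```python
import math

MEAN, SD = 100, 15
WORLD_POP = 8e9  # rough world population, used only for scale

def fraction_above(iq):
    """Fraction of a normal(100, 15) population above a given IQ."""
    z = (iq - MEAN) / SD
    return 0.5 * math.erfc(z / math.sqrt(2))

n_130 = WORLD_POP * fraction_above(130)  # about 2.3% of people
n_170 = WORLD_POP * fraction_above(170)  # about 1.5 per million people

# Illustrative per-person odds of doing field-defining work
# (made up, chosen only to show the base-rate effect).
p_130, p_170 = 0.001, 0.05  # 50x higher odds at IQ 170

print(f"people above 130: {n_130:.3g}, above 170: {n_170:.3g}")
print(f"expected top achievers near IQ 130: {n_130 * p_130:.3g}")
print(f"expected top achievers near IQ 170: {n_170 * p_170:.3g}")
```

Even with a 50x per-person advantage for the IQ-170 group, the IQ-130 group still contributes far more high achievers in absolute terms, simply because there are vastly more of them.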

What form would superhuman AI take?

Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human.

An AI that takes over the world would likely not be a machine brain in a human body — instead it would initially exist on the Internet. It might start by copying itself to as many of the world’s computers as it could (without detection), use those computers to further improve its software, and eventually manufacture extensions of itself to take physical action in the world.

Does civilization grant humanity protection?

Most of our intelligence is not in our brain, it is externalized as our civilization.

An AI would have access to all of our civilization’s knowledge as well.

A conflict with a superhuman AI wouldn’t be human civilization vs. one AI on a standalone computer disconnected from civilization. It would be human civilization vs. a powerful intelligence who had access to all of our computing power, all of our knowledge, and all of our civilization’s other infrastructure.

The AI would likely not let humans know they were in any sort of power struggle until it had developed its capabilities enough to easily seize control.

It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence.

We can say all of human civilization will in some sense get credit when a superhuman AI is created, but this doesn’t change the fact that the architecture and goals of the AI may be disproportionately controlled by a single organization.

If DeepMind develops the first superhuman AI, the fact that they utilized knowledge and technology from the rest of humanity won’t magically prevent them from giving the AI goals that lead to catastrophe for humanity.

Is recursive improvement possible?

Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can. Answering “yes” would fly in the face of everything we know — again, remember that no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself.

This is similar to the earlier mistake of saying that because someone with an IQ of 170 can’t take over the world, no single entity can ever take over the world no matter how smart it is.

The fact that humans have never successfully performed brain surgery on themselves to boost their own intelligence should not give us much confidence that something smarter than humans, with a very different mental architecture, couldn’t improve its own intelligence far more directly and efficiently than we can.

Because humans and other animals all evolved from common ancestors, we can’t treat every animal’s failure to recursively self-improve as an “independent” failure. The brain architectures of humans and cows are probably far more similar to each other than human architecture will be to that of the first superhuman AI, so we should be cautious about extrapolating from what humans and cows can do to what a superhuman AI could do.

Should we expect scientific progress to continue linearly?

Can you do 2x more with the software on your computer than you could last year? Will you be able to do 2x more next year? Arguably, the usefulness of software has been improving at a measurably linear pace, while we have invested exponential efforts into producing it.

This is what we’d expect to see if the main thing preventing human civilization from becoming far more productive were the inherent limitations of human brains. It’s unclear whether this pattern of diminishing returns would persist if we created entities smarter than us with extremely different brain architectures.
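
A toy model (my framing, not Chollet’s) shows how “exponential effort, linear output” fits that picture: if usefulness U grows only logarithmically with cumulative effort E, then exponentially growing effort yields linearly growing usefulness.

$$U(E) = k \log E, \qquad E(t) = E_0 e^{rt} \Rightarrow U(t) = k \log E_0 + k r t$$

In such a model the linear trend reflects how inefficiently effort converts into output, not a ceiling on output itself; a system that converted effort more efficiently would turn the same exponential investment into much faster progress.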

For more detail about how fast the world might improve once human brains are no longer the productivity bottleneck, see this essay by Robin Hanson.

Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks.

AI won’t be able to self-improve forever, but it’s hard to know exactly how much it could improve before it runs into fundamental limits.

Imagine a woolly mammoth trying to assure another mammoth that nothing as intelligent as humans could ever arise to take over the world. “Even if they got a little bit smarter than chimps, there would be bottlenecks! That’s how complex systems work. Don’t worry.”

How do we know that the bottlenecks that self-improving AI will run into won’t be at least as far above our current intelligence as we are above that of mammoths?

modern scientific progress is measurably linear. […] We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well.

This is the essay’s strongest argument, but it should not comfort us much. Chollet gives three reasons why scientific progress has been linear, and a superhuman AI would be significantly less affected by each of them.

The next three sections analyze these reasons.

Will lack of low hanging fruit stop an AI from improving quickly?

Doing science in a given field gets exponentially harder over time — the founders of the field reap most of the low-hanging fruit, and achieving comparable impact later requires exponentially more effort.

Yes, but an AI could mitigate this by scaling up its effort much faster and more cheaply than humans can.

For humanity to add more scientists to a problem, those scientists need to be born, given lots of resources, and educated for about 20 years before they can start cutting-edge work on that problem.

For an AI to add more resources to a scientific problem, it just needs to construct more hardware and then copy existing software (itself) onto this hardware.
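
A back-of-the-envelope sketch of the difference, with every number an illustrative assumption rather than a forecast: humanity adds researchers through a decades-long training pipeline, while an AI that can copy itself adds capacity about as fast as it can acquire hardware.

```python
import math

# Toy comparison: how quickly can each side multiply its research capacity?
# Every number here is an illustrative assumption, not a forecast.

def doubling_time(growth_rate_per_year):
    """Years needed to double capacity at a constant yearly growth rate."""
    return math.log(2) / math.log(1 + growth_rate_per_year)

human_growth = 0.03  # assume the research workforce grows ~3% per year,
                     # throttled by a roughly 20-year education pipeline
ai_growth = 1.00     # assume the AI can double its hardware, and hence its
                     # copies of itself, about once a year (hypothetical)

print(f"human workforce doubling time: {doubling_time(human_growth):.1f} years")
print(f"AI capacity doubling time:     {doubling_time(ai_growth):.1f} years")

decade = 10
print(f"growth over a decade: humans x{(1 + human_growth) ** decade:.2f}, "
      f"AI x{(1 + ai_growth) ** decade:.0f}")
```

Under these made-up rates, the human research workforce grows by about a third in a decade while the AI’s capacity grows a thousandfold. The exact numbers don’t matter; the shape of the two curves does.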

Is communication overhead a big problem for an AI?

Sharing and cooperation between researchers gets exponentially more difficult as a field grows larger. It gets increasingly harder to keep up with the firehose of new publications. Remember that a network with N nodes has N * (N - 1) / 2 edges.

An AI will be able to configure its architecture to consume knowledge not as ~7 billion distinct entities that can only communicate imprecisely through language, but as one unified entity with efficient communication between its components.

Imagine if everyone in the world immediately knew anything that any other human learned, with the same clarity of the person who understood it best. How fast would humanity make progress?

Even if an AI still has significant communication overhead between its components, the amount of improvement possible above the current human situation could be enormous.
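
The quadratic growth Chollet points to is easy to make concrete, as is the contrast being drawn here. The “shared store” design below is my own illustration of what “one unified entity” could mean, not a description of any real system:

```python
def pairwise_channels(n):
    """Communication links if every researcher must talk to every other one."""
    return n * (n - 1) // 2

def shared_store_links(n):
    """Links if every component reads and writes a single shared knowledge store."""
    return n

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} participants: {pairwise_channels(n):>18,} pairwise channels "
          f"vs {shared_store_links(n):>9,} shared-store links")
```

Quadratic versus linear scaling is the gap being pointed at: whatever overhead a unified AI still pays, it doesn’t have to pay the all-pairs cost that a growing field of human researchers does.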

AI culture vs. human culture

As scientific knowledge expands, the time and effort that have to be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow.

A singular AI which can copy parts of itself onto new hardware does not need to invest further time and effort into education and training after one part of the AI learns something.
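
A small sketch of why “copy instead of retrain” matters (a toy model, not a claim about how a real superhuman AI would work): once one instance has learned a set of parameters, a second instance gains the same capability by copying them, with no additional training cost.

```python
import copy
import random

def train_linear_model(data, epochs=200, lr=0.05):
    """Fit y = w*x + b by gradient descent: the slow 'learning' step."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return {"w": w, "b": b}

def predict(params, x):
    return params["w"] * x + params["b"]

random.seed(0)
data = [(x / 10, 3 * (x / 10) + 1 + random.gauss(0, 0.1)) for x in range(20)]

learned = train_linear_model(data)  # costly: many passes over the data
clone = copy.deepcopy(learned)      # cheap: instant transfer of the skill

print(predict(learned, 5.0), predict(clone, 5.0))  # identical behavior
```

Humans have nothing like that copy step: every new physicist rebuilds their expertise from scratch over decades of education.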

This ties into the earlier discussion of civilization and culture. If human culture really is as important an asset as Chollet suggests, then considering what culture actually is, and how inefficiently it works, should make us more worried about how powerful a superhuman AI could be, not less.

Culture is how humanity transmits and preserves knowledge. Yet our system for getting knowledge from one human brain to another is horribly slow, inefficient, and lossy.

A unified superhuman AI would have its own system of integrating and using information throughout its vast pool of computational resources. This would essentially be a superhuman “AI culture” that would likely be far superior to human culture.

Conclusion

We should not be very confident about extrapolations from what we observe in humans to how things must work for any superhuman AI.

Recursive AI improvement isn’t a certainty, but we don’t know that it won’t happen — at least not based on the arguments presented in Chollet’s essay.

Even if superhuman AI reaches bottlenecks shortly after it starts improving itself and we don’t experience a full “intelligence explosion,” we should still be worried.

We know that some animals (like humans) are a lot smarter than other animals (like mice). If we ever created an AI which got to be more intelligent than us to the same degree that we’re more intelligent than mice, it could be a serious threat to our existence even without additional recursive improvement.

If you’d like to read more about the problem of AI safety, this site contains a good collection of links.

Follow me on Twitter.
