Thinking About the Future

The dismal state of futurism, and how we can make better predictions.

Part eight in a GOOD miniseries on the singularity by Michael Anissimov and Roko Mijic. New posts every Monday from November 16 to January 23.

Human beings are not very good at thinking about any abstract subject coherently, and the singularity is no exception. But if we don't make an effort to think more clearly about the singularity, we will predictably jump to incorrect conclusions about it, and this could be disastrous.

Many people hear about the concept of the singularity and reject it out of hand because it sounds silly. This is sometimes called "absurdity bias": rejecting an idea because of a gut reaction against its "silliness," even when the evidence supports it. Take Darwinian evolution by natural selection. Millions of Americans reject the idea that one of their great-great-…-grandparents was, in fact, a monkey because they think it sounds absurd. And it may sound absurd, but the truth about the universe often is.

Our absurdity heuristic, the part of our brain that sorts ideas for “silliness,” was honed on the plains of Africa tens of thousands of years ago when modern science did not exist. It is unsurprising that it misfires in the modern environment—and in discussions about the future of artificial intelligence.

Anthropocentric bias also affects debates about the singularity. People thinking and talking about artificial minds often assume that they will be just like human minds. Superintelligent AI is defined as any mind at all that can solve well-defined problems far better than a human or group of humans, but that does not imply that a super-smart AI would have romantic urges, selfishness, or the desire to be the alpha male in the tribe. Those are very specific extra properties of the human mind, over and above our ability to solve problems and predict the world.

Yet in many discussions of the singularity, people implicitly assume that superintelligent AI will have the human trait known as reciprocal altruism. I have often heard people say that we should treat our AIs well, because then they’ll treat us well in return. This is anthropocentric bias in action. It rears its head again when people object that it is impossible to build a benevolent superintelligent AI, because as soon as the AI is more powerful than us, it will change its mind about being nice to us. Robots, like humans, will be corrupted by power, they claim. In fact, there are many kinds of AI design for which this would not hold. You can read about human cognitive biases and their application to the singularity at the LessWrong wiki.

It gets worse, though. The entire genre of infotainment-based futurism that we see in print media and on the web routinely makes wrongheaded predictions about the future. To get eyeballs on the screen, contemporary futurists make bold, exciting claims that are provocative enough to cause some controversy, but simple enough to be understood without any real learning on the part of the reader.

These bold predictions are not even recorded and checked. Futurists get to say whatever they want about what will happen in 20 years’ time, but aren't likely to be held responsible if their predictions turn out to be garbage. This gives futurists little incentive to make accurate predictions. Instead, their incentive is to make exciting stories about a Bold and Fascinating Future. This is the futurism that gives rise to titillating nonsense like “Nanobots will make you immortal by 2040!” and “By 1960, family cars will fly!”
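The remedy is easy to sketch. Suppose each prediction were recorded with an explicit probability and scored once the outcome became known; the snippet below is a minimal illustration of that idea (the function name and record format are my own, not the workings of any existing prediction tracker), using the Brier score, the mean squared error between stated probabilities and actual outcomes.

```python
def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes.

    `predictions` is a list of (probability, outcome) pairs, where
    outcome is 1 if the predicted event happened and 0 if it did not.
    0.0 is a perfect score; always saying 50% earns 0.25.
    """
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# A futurist who confidently predicted three things that never happened
# scores far worse than someone who simply flipped a coin.
print(brier_score([(0.9, 0), (0.8, 0), (0.95, 0)]))  # ≈ 0.78
```

A public ledger of such scores would give futurists exactly the incentive they currently lack: bold claims that fail would cost reputation, and careful calibration would earn it.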

Sensible futurism, on the other hand, involves making probabilistic predictions about the future that are based upon (but not limited to) known science. Probability theory is a mathematically rigorous way to handle and compute with uncertainty. Instead of saying that a particular event will happen in exactly the year 2045, one gives a probability distribution that describes when it is likely to happen. For example, one might say that the probability of having more than 1,000,000 humans in space by year X is given by a normal distribution with mean 2050 and a standard deviation of 15 years. Probability theory can also be used to construct mathematical models, like The Uncertain Future, a web application designed by researchers at the Singularity Institute. This complex model allows one to input one's own estimates for when various innovations in AI will happen, and computes what one's beliefs imply.
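To make that concrete, here is a minimal sketch (my own illustration, not the Singularity Institute's model) of how such a statement cashes out: the cumulative distribution function of the mean-2050, SD-15 normal distribution turns any candidate year into a probability that the event has happened by then.

```python
import math

def prob_by_year(year, mean=2050.0, sd=15.0):
    """P(the event has happened by `year`), assuming the date it
    occurs is normally distributed with the given mean and SD."""
    # The normal CDF, expressed via the error function.
    return 0.5 * (1.0 + math.erf((year - mean) / (sd * math.sqrt(2.0))))

# Under the example distribution for >1,000,000 humans in space:
print(f"by 2045: {prob_by_year(2045):.0%}")  # ≈ 37%
print(f"by 2080: {prob_by_year(2080):.0%}")  # ≈ 98%
```

Note what the distribution buys you over a point prediction: rather than being simply right or wrong in 2045, the forecaster's stated uncertainty can itself be judged against how events unfold.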

The situation in futurism is fairly dismal, but it is slowly improving. The Long Now Foundation runs a site called Longbets.org, where in order to make a prediction about the future, you have to put your money where your mouth is. Takeonit.com is a collaborative project to at least record expert predictions and opinions, so that they can be assessed if and when the facts come to light. Last but by no means least, the LessWrong wiki attempts to lay out the truth about human irrationality and its application to futurism in an easily accessible form.

There is a strong natural human tendency to think in an emotionally motivated, unscientific way about the future. If we don’t keep this urge in check, the real future will ambush us.

Roko Mijic is a Cambridge University mathematics graduate and has worked in ultra-low-temperature engineering, pure mathematics, digital evolution, and artificial intelligence. In his spare time he blogs about the future of the human race and the philosophical foundations of ethics and human values.
