
Our Delicate Future: Handle with Care

Why we need to get serious about the threats that could wipe out humanity.


Part nine in a GOOD miniseries on the singularity by Michael Anissimov and Roko Mijic. New posts every Monday from November 16 to January 23.

In his book Reasons and Persons, the philosopher Derek Parfit asks us to compare two scenarios: a nuclear war that kills 99 percent of the world's existing population, and a nuclear war that kills 100 percent. The first outcome would be a horrible disaster, but the second would be an existential disaster: one that destroys the human race or irreversibly curtails our whole future.

Call the chance of such an event an "existential risk." The future promises a whole host of new ways the human race could be snuffed out, from AI gone wrong (see the previous installment of this series) to nanotechnology, synthetic biology, and engineered viruses.

What makes existential disasters worse than even widespread personal disasters like cancer, which has killed billions of people? The key difference is the future potential of the human race: if a disaster kills every person on the planet, there will be no one left to continue the species. Our species has come a long way since the dawn of history, and if we work to preserve our humanity, our civilization, and our values, we may go a long way yet. There is a whole universe out there, and it is huge beyond our wildest dreams: our own galaxy contains more solar systems than there are people on the planet, and our galaxy itself is merely a tiny mote of dust in the great sea of galaxies. If the human race can spread out and colonize a few dozen systems, it seems likely that it will also be able to colonize the entire reachable universe, which contains a whopping 1,000,000,000,000,000,000,000 stars and will last for perhaps 100,000,000,000 years.

The number of good human lives that could be lived in that time is simply too large to comfortably contemplate, and all of those lives currently hang in the balance between existence and nonexistence, like an innumerable audience of ghostly figures looking down anxiously at the early twenty-first century. If any of the existential disasters actually occurs, this future will be wiped out. In effect, the indirect death toll of an existential disaster is so large that every other disaster or humanitarian cause pales into insignificance by comparison.
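
To get a rough sense of just how large, here is a back-of-envelope calculation in Python. The star count and time horizon are the article's own figures; every other parameter is an illustrative assumption, not a real estimate, and the point is only that any remotely plausible choice of numbers yields an astronomical total.

# Back-of-envelope only: the first two figures come from the article above;
# the rest are illustrative assumptions.
STARS = 10**21              # stars in the reachable universe (article's figure)
YEARS = 10**11              # years that universe will last (article's figure)
SETTLED_FRACTION = 0.01     # assume only 1 in 100 star systems is ever settled
PEOPLE_PER_SYSTEM = 10**10  # assume an Earth-scale population per settled system
LIFESPAN_YEARS = 100        # assume century-long lives

potential_lives = (STARS * SETTLED_FRACTION * PEOPLE_PER_SYSTEM
                   * YEARS / LIFESPAN_YEARS)
print(f"{potential_lives:.0e}")  # about 1e38 lives on these assumptions

Even if every assumed parameter were cut by several orders of magnitude, the total would still dwarf the roughly one hundred billion humans estimated to have ever lived.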

Cosmologists, the physicists who study the origin and evolution of the universe, know this better than anyone else. One world-renowned cosmologist after another has spoken out about existential risks. Stephen Hawking sits on the board of the Bulletin of the Atomic Scientists and has warned that mistakes in constructing artificial intelligence could risk human extinction. Carl Sagan was active in efforts to avert nuclear war. Martin Rees, in his book Our Final Hour, has warned of the risks of bioterrorism and biowarfare.

It is a tall order to preserve not just human civilization but also our human values through the disruptive forces of time, technology, and Darwinian selection. The "fragility of human values" thesis claims that many desirable properties a possible future could have are delicate properties of human brains and culture, ones that could easily be disrupted by technological, economic, and geopolitical forces. We value laughter, friendship, children, sexuality, music, art, family, humor, and nature, among many other things. All of these exist only because the human brain happens to create them, and the human brain is also currently the most intelligent thinking machine in the world. Each day, women give birth to children whose brains contain that same magical combination of human value and useful intelligence.

But if we create entities that are more competitive and intelligent than humans, and that do not also value laughter, friendship, children, sexuality, music, art, family, humor, and nature, then there is a serious risk that humans will lose control of the future, and that these new, inhuman minds will remake the world as they see fit, or as emergent political and economic trends dictate: a grim universe of shuffling electrons and economic transactions, with no people and no joy. Such dark futures, in which human civilization gradually drifts away from human values without a "bang," also count as existential risks, and they are perhaps the most insidious and horrific to think about.

Existential risks are, from one point of view, the most important and pressing problem in the world, for they threaten humanity as a whole. Unfortunately, they are also grossly under-addressed in terms of research, funding, and effort. Gaverick Matheny writes, in his paper on reducing human extinction risks, that "A search of EconLit and the Social Sciences Citation Index suggests that virtually nothing has been written about the cost effectiveness of reducing human extinction risks," and Nick Bostrom and Anders Sandberg have noted, in a personal communication, that there are orders of magnitude more papers on coleoptera, the insect order comprising the beetles, than on "human extinction." Anyone can confirm this for themselves with a Google Scholar search: coleoptera gets 245,000 hits, while "human extinction" gets fewer than 1,200.
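
For the curious, that comparison can be scripted. Below is a minimal sketch in Python, assuming the Scholar results page still reports its count in the form "About N results"; Google Scholar has no official API and may block automated requests, so treat this as illustrative rather than robust.

import re
import requests

def scholar_hit_count(query):
    """Fetch a Google Scholar results page and parse the reported hit count."""
    resp = requests.get(
        "https://scholar.google.com/scholar",
        params={"q": query},
        headers={"User-Agent": "Mozilla/5.0"},  # Scholar rejects bare clients
        timeout=10,
    )
    # Assumes the page includes text like "About 245,000 results"
    match = re.search(r"About ([\d,]+) results", resp.text)
    return int(match.group(1).replace(",", "")) if match else None

for term in ("coleoptera", '"human extinction"'):
    print(term, scholar_hit_count(term))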

This means that small groups, or even individuals, can make a difference to the outcome by reading and understanding the subject, or by supporting research into understanding and avoiding these risks at places like Oxford University's Future of Humanity Institute.

The potential of the human race is virtually limitless, but first we have to survive into the next century.

Roko Mijic is a Cambridge University mathematics graduate and has worked in ultra-low-temperature engineering, pure mathematics, digital evolution, and artificial intelligence. In his spare time he blogs about the future of the human race and the philosophical foundations of ethics and human values.
