Singularity 101: What Is the Singularity?

Superhuman intelligence and the technological singularity.
Part one in a GOOD miniseries on the singularity by Michael Anissimov and Roko Mijic. New posts every Monday from November 16 to January 23.

Living to 1,000? Superhuman robots? Matrix-style virtual reality? These staples of science fiction may become a reality when (or, perhaps, if) the "singularity" happens.

The phrase "technological singularity" was coined by the mathematician and science fiction author Vernor Vinge in 1982. He proposed that the creation of smarter-than-human intelligence would greatly disrupt our ability to model the future, because predicting what smarter-than-human intelligences would do requires being that smart ourselves. He called this hypothetical event a "singularity," drawing a comparison to the way our models of physics break down when trying to predict phenomena beyond the event horizon of a black hole. Instead of a sudden rupture in the fabric of spacetime, you'd have a break in the fabric of human understanding.

Vinge's idea of a technological singularity resembles earlier ideas, such as WWII codebreaker I.J. Good's concept of an "intelligence explosion." Good wrote: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make." This concept has been explored in (mostly dystopian) science fiction films and novels, such as the Matrix and Terminator franchises.

More recently, a growing number of academics and technologists have begun looking at the singularity as a serious prospect for the coming century rather than a piece of science fiction esoterica. If human minds and brains are basically machines that operate according to physical law, they say, then it is only a matter of time before the principles of these machines are reverse-engineered and implemented on digital computers. Another possibility, analyzed in depth by the Future of Humanity Institute at Oxford University, is duplicating human intelligence in a computer by precisely simulating the way our brains process information. If we could implement human minds on computers, we could also speed them up, creating a sort of "weak superintelligence": minds not qualitatively smarter than human, but significantly faster.

It may be decades before the technology for smarter-than-human minds develops, but we should consider now what it would mean. If smarter-than-human entities do not value humanity, for example, they could cause our extinction. This suggests that advanced artificial intelligence research should be approached cautiously, with the necessity of human-friendly motivational architectures firmly in mind. It isn't too early to start thinking about this. Just this year, researchers at Cornell University built an artificial intelligence program that independently rediscovered laws of physics merely by observing the swinging of a pendulum. Researchers at Aberystwyth University in Wales and England's University of Cambridge built "Adam," an artificially intelligent robotic system that formulates its own scientific hypotheses and designs experiments to test them. Though these systems don't yet challenge human intelligence, rapid progress in the field suggests we should start considering the ramifications of the day when our robotic creations learn to think better than we do.
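To give a feel for what "rediscovering a law from data" can mean, here is a minimal sketch in Python. It is not the Cornell team's actual method (their system evolved free-form symbolic expressions with genetic programming); this toy version simply simulates a pendulum, then checks which of a few hand-written candidate expressions stays constant over the trajectory, i.e., behaves like a conserved quantity. All parameter values and candidate formulas here are illustrative assumptions, not from the original research.

```python
# Toy sketch of law discovery from pendulum data: simulate a pendulum,
# then score candidate expressions by how little they vary over time.
# A true conserved quantity (the energy, here) should score near zero.
import math

G, L = 9.81, 1.0          # gravity (m/s^2) and pendulum length (m) -- assumed values
DT, STEPS = 0.001, 20000  # integration step (s) and number of steps

# Simulate theta'' = -(G/L) * sin(theta) with semi-implicit Euler.
theta, omega = 1.2, 0.0
samples = []
for _ in range(STEPS):
    omega += -(G / L) * math.sin(theta) * DT
    theta += omega * DT
    samples.append((theta, omega))

# Candidate "laws": functions of the observed state (theta, omega).
candidates = {
    "theta + omega":              lambda th, om: th + om,
    "omega**2":                   lambda th, om: om * om,
    "0.5*omega**2 - (G/L)*cos(theta)":
                                  lambda th, om: 0.5 * om * om - (G / L) * math.cos(th),
}

# Rank candidates by variance across the trajectory (lower = more law-like).
for name, f in candidates.items():
    values = [f(th, om) for th, om in samples]
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    print(f"{name:34s} variance = {variance:.6f}")
```

Running this, only the energy-like expression stays essentially constant; the real research replaced the hand-written candidate list with an automated search over a vast space of symbolic expressions, which is what made the result notable.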

Michael Anissimov is a futurist and evangelist for friendly artificial intelligence. He writes a Technorati Top 100 Science blog, Accelerating Future. Michael currently serves as Media Director for the Singularity Institute for Artificial Intelligence (SIAI) and is a co-organizer of the annual Singularity Summit.
