<br/><h3>It's time for the world to get optimistic about AI again.</h3><br/><em>An encore post from <a href="http://www.good.is/series/singularity-101" target="_blank">GOOD's miniseries on the singularity</a> by Michael Anissimov and Roko Mijic. </em><br/><br/>Last year was one of the best years for artificial intelligence since the "AI winter" of the 1980s and early 1990s. The most notable achievement was Adam, the AI with robotic arms and lab equipment that can formulate hypotheses and run its own scientific experiments; in one example, Adam investigated the genetic expression of baker's yeast. Adam was named the 4th top scientific discovery of the year by <em>Time</em> magazine. Another major AI breakthrough in 2009 was Hod Lipson's program, which independently discovered the laws of physics by observing the swinging of a pendulum. The program is now available for anyone to download; just search for "Eureqa."<br/><br/>It's time for the world to get optimistic about artificial intelligence again. Instead of viewing the mind as a mystery, many of <span id="OBJ_PREFIX_DWT1032">today</span>'s cognitive scientists view it as fertile ground for the scientific method, producing thousands of papers each year that further elucidate the operations of the brain. MIT professor <span id="OBJ_PREFIX_DWT1033"><a href="http://edboyden.org/" target="_blank">Ed Boyden</a></span> is working on a technology that allows investigators to fire individual neurons on demand using light signals, an approach that could soon lead to "high-throughput circuit screening" of neural circuits, a tool that has long been needed to untangle the complexities of human intelligence.<br/><br/>Scientists studying the activation patterns of neurons have even discovered that cognitive systems seem to be laid out as approximations of Bayesian reasoning, a statistical method that has been a strong focus of artificial intelligence over the last decade. 
This study has even given rise to a new subfield of cognitive science, known as Bayesian cognitive science. The Bayesian methods used by Gmail to filter spam are comparable to the Bayesian processing used by our brains to identify faces or distinguish objects from the background in a visual scene. This surprising parallel suggests that we may have more of the basic toolset necessary for human-level AI than many people assume.<br/><br/>Futurist and inventor Ray Kurzweil has predicted the arrival of roughly human-level AI in 2029, based on what appears to be exponential growth in the availability of computing power and the resolution of brain-scanning devices. Many scientists agree with Kurzweil that brain scanning and simulation would allow scientists to build human-level artificial intelligence, even if we don't understand intelligence on an abstract level. There are already scientists working on simulating the hippocampus, a part of the brain responsible for memory, just by scanning it, interpreting the scans, and translating the pieces into code. The end result will be a hippocampal implant that could restore memory-formation abilities to victims of brain damage and age-related decay.<br/><br/>If Kurzweil is right, then the human race could be confronting the next intelligent species on the planet within 20 years. As we've discussed elsewhere in the <a href="http://www.good.is/series/singularity-101" target="_blank">Singularity 101</a> series, intelligence is the most powerful known force on the planet. Even if human-level AI is more than 20 years away, it could still be developed within the lifetimes of people alive <span id="OBJ_PREFIX_DWT1034">today</span>. Our continued survival and prosperity will depend on cooperation with it. Instead of adopting a confrontational attitude, common among a species like ours that evolved in mutually distrustful tribal societies, we must realize that AI is ours to make. 
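For readers curious what the Bayesian spam filtering mentioned above actually involves, here is a minimal naive Bayes sketch in Python. The training messages and word counts are invented for illustration, and real filters such as Gmail's are far more sophisticated, but the core idea is the same: compare the posterior probability of "spam" and "not spam" given the words in a message.

```python
import math
from collections import Counter

# Toy training data, invented purely for illustration.
spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting at noon", "project status update", "lunch at noon"]

def train(docs):
    """Count how often each word appears across a list of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = train(spam), train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(words, counts, total):
    """Sum of log P(word | class), with Laplace smoothing so that
    unseen words don't zero out the whole probability."""
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab))) for w in words
    )

def classify(message):
    """Label a message 'spam' or 'ham' by comparing posterior scores."""
    words = message.split()
    prior_spam = math.log(len(spam) / (len(spam) + len(ham)))
    prior_ham = math.log(len(ham) / (len(spam) + len(ham)))
    spam_score = prior_spam + log_likelihood(
        words, spam_counts, sum(spam_counts.values()))
    ham_score = prior_ham + log_likelihood(
        words, ham_counts, sum(ham_counts.values()))
    return "spam" if spam_score > ham_score else "ham"
```

With this toy model, `classify("free money")` returns `"spam"` while `classify("status meeting at noon")` returns `"ham"`. The brain's hypothesized Bayesian machinery works on richer evidence (edges, colors, motion) but follows the same logic of weighing prior expectations against incoming data.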
We should be careful to create it with our best qualities, such as compassion and moral complexity. This will not be easy.<br/><br/>As Bill Gates recently <span id="OBJ_PREFIX_DWT1035"><a href="http://www.huffingtonpost.com/bill-gates/why-we-need-innovation-no_b_430699.html" target="_blank">pointed out</a></span>, society has a strong bias towards short-term solutions that offer incremental improvements at best. To solve the big problems (poverty, war, resource depletion, and so on), we need more innovation than the planet's scientists and researchers can provide. Only 1 percent of the population goes into research. By creating a new intelligent species as our allies, we can blend the best of human and machine intelligence to create an "Intelligence Explosion," in which intelligence improves itself in an open-ended fashion instead of remaining essentially static, as our current <em>Homo sapiens</em> brains are. If we design AI properly and carefully, we will be gifted with explosions of wisdom and compassion as well.<br/><br/>Ensuring that future AIs act as our allies and not our competitors will require careful investigation and design that needs to begin <span id="OBJ_PREFIX_DWT1036">today</span>. The uncertainty is massive, but so are the potential benefits. If you're interested in creating a globally beneficial, technologically enabled future for humanity, get in contact with the <a href="http://www.singinst.org/" target="_blank">Singularity Institute</a> <span id="OBJ_PREFIX_DWT1037">today</span>, and help us in our quest for a positive singularity. Thank you for reading!<br/><br/><em>Michael Anissimov is a futurist and evangelist for friendly artificial intelligence. He writes a Technorati Top 100 Science blog, <a href="http://www.acceleratingfuture.com/michael/blog/" target="_blank">Accelerating Future</a>. 
Michael currently serves as Media Director for the Singularity Institute for Artificial Intelligence (SIAI) and is a co-organizer of the annual Singularity Summit.</em>