Singularity 101: Readers' Questions Answered

To wrap up our Singularity 101 series, we invited readers to ask the authors questions about the technological singularity. Roko Mijic picked a few to answer. His responses are below.

Josh Hibbard asks:
My main argument in response to the benefits/drawbacks of having an advanced AI integrated into all facets of our society for the betterment of humanity, and to possibly rid the world of poverty, illness, etc., is that I don't necessarily believe that the ills of the world are brought on or perpetuated by a lack of intelligence or technology. That is why I don't think programming an advanced AI with an ability to make decisions based on our own ethics will somehow grant it the ability to make an "honorable or ethical" decision when the logic it is drawing on is the reason for most of the problems that have adversely affected humanity for the past several millennia. So my prediction is that as artificial intelligence advances, it will likely be most effective in the field of medicine or environmental work, not politics or business.

Oliver Carefull asks:

As the human body becomes more and more augmented with artificial limbs, blood cells, and brains, I'm interested in the global social implications. Will the West's accelerated advancement of the body and mind lead to a greater gap between rich and poor globally? Sure, everyone likes the idea of being 10x smarter using a biological computer brain, but will we want to use that brain power to create equality and abolish poverty on planet Earth?

We currently have more than enough brain power to solve many of the world's ethical issues, yet we don't. Will new advancements change the way we think, or just how fast we think of it?


These questions both seem to be making the same point: namely, that smartness cannot solve the problem of the designers of a smart system being fundamentally selfish and nasty. In a sense this is a good point to make, because of the distinction between means and ends: intelligence is used for furthering a fixed set of ends. If the designer of a super-smart AI system coded that system to not care at all about the billion people starving in the developing world, then clearly those people wouldn't benefit.

But we in the developed world do care, at least to some extent, about the welfare of others. That's why we send some aid. Making problems such as world hunger easier to solve would in practice mean that the First World's limited desire to help would actually be enough to solve them. Think of it this way: if you could solve all world poverty for $300, you'd probably pay up and be done with it. More advanced technology in the hands of the benefactor simply lowers the cost of solving such problems.
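
To make the cost argument concrete, here is a toy sketch in Python (my own illustration, not from the article): the First World's altruism is modeled as a fixed budget, and technology as a multiplier that lowers the effective cost of fixing a problem. The dollar figures are made-up assumptions; the only point is that once the cost drops below the budget, the problem gets solved even though the amount of caring never changed.

```python
# Toy model (illustrative only): a fixed altruism budget meets a falling cost of
# solving a global problem as technology improves. The willingness to help never
# changes; only the price of helping does.

def problem_is_solved(altruism_budget: float, base_cost: float, tech_multiplier: float) -> bool:
    """Return True if the problem's cost, reduced by technology, fits within the budget."""
    effective_cost = base_cost / tech_multiplier
    return altruism_budget >= effective_cost

budget = 300.0                    # what the benefactors are actually willing to spend (hypothetical)
base_cost = 3_000_000_000_000.0   # made-up present-day cost of ending world poverty

for tech in (1, 1e6, 1e10, 1e12):  # increasingly advanced technology
    print(f"tech x{tech:g}: solved = {problem_is_solved(budget, base_cost, tech)}")
```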

There are caveats to this answer. If the technology of the 21st century gets out of control and wipes out the human race—as could happen with unfriendly AI—then those less fortunate people in the developing world will die too. If singularity technology such as AI is concentrated in the hands of some deliberately nasty group—white supremacists, for example—then they could use it to have their wicked ways with the developing world.

George Finch asks:

Also, what do you think about the thesis in The Lights in the Tunnel by Martin Ford; to put it simply, that the development of AI will eventually reduce jobs rather than produce other kinds of jobs, which is the present mantra?

If humanity developed an advanced superintelligence which was benevolent, a friendly AI, then it would reduce the number of jobs that humans needed to do to zero. Of course, this would be a good thing, because it would also increase the wealth available to humans to effectively infinity; you would be able to have whatever life you wanted (as long as it didn't hurt others) in exchange for nothing at all.

Other scenarios are possible, such as competition between humans and human brain emulations. In this case, the number of jobs for humans would probably fall quickly to almost none, but with suitable regulation and taxation of the corporations running the brain emulations, existing humans could be given fabulous amounts of wealth just for being real. Without such regulation, ordinary humans might fare badly and form an underclass, or even go extinct.

Andrew Price asks:

I'm really interested in this idea of a CEV algorithm, and I have a two-part question: 1) What if a CEV suggests a person do one thing but the person subjectively wants to do another? The CEV says "I've looked at your brain and you should be an architect" and I'm like "But I want to be a doctor." How do we reconcile? Or similarly, what if the CEV tells a society to do something other than what a voting public wants? 2) Related: if a CEV is just telling us what to do at every step, what does that mean for human autonomy?

So, if your brain decides that you want to be an architect, then, since your brain is what controls what your mouth says, you will say that you want to be an architect. However, extrapolated volition goes further than just looking at what your brain currently wants. It edits your brain in some unspecified way to make sure that you know all true relevant facts. For example, if the reason that you want to be a doctor is that you are convinced that doctors make more money than architects, but this is actually false, then your extrapolated volition might want you to be something else.
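
A minimal sketch of that gap, in Python, under deliberately crude assumptions (the career names, salary figures, and the "pick whatever pays more" rule are hypothetical illustrations, not part of CEV itself): the same decision rule is simply re-run after the false belief is replaced with the true fact.

```python
# Illustrative sketch: the "current volition" is what a decision rule outputs given the
# person's actual (possibly false) beliefs; the "extrapolated" answer is the output of
# the same rule once those beliefs are corrected. All names and numbers are made up.

def career_preference(believed_salaries: dict[str, int]) -> str:
    """Stand-in decision rule: pick the career believed to pay the most."""
    return max(believed_salaries, key=believed_salaries.get)

current_beliefs = {"doctor": 200_000, "architect": 90_000}   # what the person thinks is true
true_facts      = {"doctor": 110_000, "architect": 150_000}  # what is actually true

print("current volition:     ", career_preference(current_beliefs))  # -> doctor
print("extrapolated volition:", career_preference(true_facts))       # -> architect
```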

Coherent extrapolated volition of a group—a country, for example—can make you do things that you don't want because everyone else wants you to do those things. For example, you might want to have the freedom to use your country's flag to clean the toilet, but if most of the rest of the people included in the CEV want you not to do that, then you will be overruled. However, this is just the age-old argument about who gets control of the world around us, and when different people want different things, science cannot satisfy everyone. At best it can strike a compromise.
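
How a group-level answer overrules an individual can be sketched with a toy aggregation rule. The majority vote below is an assumption for illustration only; CEV does not specify any particular voting or aggregation scheme.

```python
# Illustrative sketch: aggregating conflicting individual volitions on a single
# yes/no question with a simple majority rule (an assumed rule, not part of CEV).

from collections import Counter

def group_volition(individual_volitions: list[str]) -> str:
    """Return the option preferred by the largest number of people."""
    return Counter(individual_volitions).most_common(1)[0][0]

# One person wants the freedom to desecrate the flag; nine others want that forbidden.
votes = ["allow"] + ["forbid"] * 9
print(group_volition(votes))  # -> forbid: the lone dissenter is overruled
```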

Lack of autonomy is often brought up as an objection to CEV. If the CEV plots out your entire life trajectory, and then edits your brain and the environment to set you off on that trajectory, then surely you have no autonomy any more? (CEV wouldn't need to tell you what to do, it could just edit everything, including possibly your brain, so that what it wanted happened.)

But the concept of autonomy in the sense of there being no fact of the matter about what will happen is just wrong: In a deterministic universe, there is only ever one thing that can happen, only ever one thing you can choose. And in a random universe, there is only ever one probability distribution over outcomes from a given point in time. When you deliberate and choose, it feels like you "could" choose any of many options, but actually you can't: your brain is a lawful physical system, and when you make a choice, there is one guaranteed outcome (or, in a probabilistic picture, one guaranteed distribution) that is settled before you verbalize your choice. It's just that you don't know what that outcome is, so it feels to you like anything could happen.

One could call this an illusion of autonomy, or one could simply define autonomy as having the freedom to choose. And this is something that you would have in a CEV scenario—you would have a freedom to choose—at least, with the caveat that the rest of the people included in the CEV could impose certain constraints on you.

Roko Mijic is a Cambridge University mathematics graduate who has worked in ultra-low-temperature engineering, pure mathematics, digital evolution, and artificial intelligence. In his spare time he blogs about the future of the human race and the philosophical foundations of ethics and human values.
