To wrap up our Singularity 101 series, we invited readers to ask the authors questions about the technological singularity. Roko Mijic picked a few to answer. His responses are below.

Josh Hibbard asks:

My main argument in response to the benefits/drawbacks of having an advanced AI integrated into all facets of our society for the betterment of humanity and to possibly rid the world of poverty, illness, etc. is that I don’t necessarily believe that the ills of the world are brought on or perpetuated by a lack of intelligence or technology. That is why I don’t think programming an advanced AI with an ability to make decisions based on our own ethics will somehow grant it the ability to make an “honorable or ethical” decision when the logic it is drawing on is the reason for most of the problems that have adversely affected humanity for the past several millennia. So my prediction is that as artificial intelligence advances, it will likely be most effective in the field of medicine or environmental work, not politics or business.

Oliver Carefull asks:

As the human body becomes more and more augmented with artificial limbs, blood cells, and brains, I’m interested in the global social implications. Will the West’s accelerated advancement of the body and mind lead to a greater gap between rich and poor globally? Sure, everyone likes the idea of being 10x smarter using a biological computer brain, but will we want to use that brain power to create equality and abolish poverty on planet Earth?

We have more than enough brain power currently to solve many of the world’s ethical issues, yet we don’t. Will new advancements change the way we think, or just how fast we think it?

These questions both seem to be making the same point, namely that smartness cannot solve the problem of a smart system’s designers being fundamentally selfish and nasty. Now, in a sense this is a good point to make, because of the distinction between means and ends: Intelligence is used for furthering a fixed set of ends. If the designer of a super-smart AI system coded that system to not care at all about the starving 1 billion in the developing world, then clearly those people wouldn’t benefit.

But we in the developed world do care at least to some extent about the welfare of others. That’s why we send some aid. The ability to more easily solve problems such as world hunger would in practice mean that the First World’s limited desire to solve them would actually be enough to do so. Think of it this way: If you could solve all world poverty for $300, you’d probably pay up and be done with it. More advanced technology in the hands of the benefactor will simply lower the cost of solving such problems.

There are caveats to this answer. If the technology of the 21st century gets out of control and wipes out the human race—as could happen with unfriendly AI—then those less fortunate people in the developing world will die too. If singularity technology such as AI is concentrated in the hands of some deliberately nasty group—white supremacists, for example—then they could use it to have their wicked ways with the developing world.

George Finch asks:

Also, what do you think about the thesis in The Lights in the Tunnel by Martin Ford? To put it simply, that the development of AI will eventually reduce jobs rather than producing other kinds of jobs, which is the present mantra?

If humanity developed an advanced superintelligence which was benevolent, a friendly AI, then it would reduce the number of jobs that humans needed to do to zero. Of course, this would be a good thing, because it would also increase the amount of wealth that humans had to effectively infinity; you would be able to have whatever life you wanted (as long as it didn’t hurt others) in exchange for nothing at all.

Other scenarios are possible, such as competition between humans and human brain emulations. In this case, jobs for humans would probably descend quickly to almost none, but with suitable regulation and taxing of the corporations that ran the brain emulations, existing humans could be given fabulous amounts of wealth just for being real. Without such regulation, ordinary humans might fare badly and form an underclass, or even go extinct.

Andrew Price asks:

I’m really interested in this idea of a CEV algorithm. And I have a two-part question: 1) What if a CEV suggests a person do one thing but the person subjectively wants to do another? The CEV says “I’ve looked at your brain and you should be an architect” and I’m like “But I want to be a doctor?” How do we reconcile? Or similarly, what if the CEV tells a society to do something other than what a voting public wants? 2) Related: If a CEV is just telling us what to do at every step what does that mean for human autonomy?

So, if your brain decides that you want to be an architect, then, since your brain is what controls what your mouth says, you will say that you want to be an architect. However, extrapolated volition goes further than just looking at what your brain currently wants. It edits your brain in some unspecified way to make sure that you know all true relevant facts. For example, if the reason that you want to be a doctor is that you are convinced that doctors make more money than architects, but this is actually false, then your extrapolated volition might want you to be something else.

Coherent extrapolated volition of a group—a country, for example—can make you do things that you don’t want because everyone else wants you to do those things. For example, you might want to have the freedom to use your country’s flag to clean the toilet, but if most of the rest of the people included in the CEV want you not to do that, then you will be overruled. However, this is just the age-old argument about who gets control of the world around us, and when different people want different things, science cannot satisfy everyone. At best it can strike a compromise.

Lack of autonomy is often brought up as an objection to CEV. If the CEV plots out your entire life trajectory, and then edits your brain and the environment to set you off on that trajectory, then surely you have no autonomy any more? (CEV wouldn’t need to tell you what to do, it could just edit everything, including possibly your brain, so that what it wanted happened.)

But the concept of autonomy in the sense of there being no fact of the matter about what will happen is just wrong: In a deterministic universe, there is only ever one thing that can happen, only ever one thing you can choose. And in a random universe, there is only ever one probability distribution over outcomes from a given point in time. When you deliberate and choose, it feels like you “could” choose any of many options, but actually you can’t: your brain is a lawful physical system, and when you make a choice, there is one guaranteed outcome (or, in a probabilistic picture, one guaranteed distribution) that is settled before you verbalize your choice. It’s just that you don’t know what that outcome is, so it feels to you like anything could happen.

One could call this an illusion of autonomy, or one could simply define autonomy as having the freedom to choose. And this is something that you would have in a CEV scenario—you would have a freedom to choose—at least, with the caveat that the rest of the people included in the CEV could impose certain constraints on you.

Roko Mijic is a Cambridge University mathematics graduate, and has worked in ultra low-temperature engineering, pure mathematics, digital evolution and artificial intelligence. In his spare time he blogs about the future of the human race and the philosophical foundations of ethics and human values.
