Not Worried About Artificial Intelligence? These Geniuses Think You Should Be

What do Bill Gates, Elon Musk, and Stephen Hawking have in common? They’re all worried about the dangers of A.I.

image via (cc) flickr user zen_warden

Ordinarily, if someone were to start lecturing me on the dangers of artificial intelligence, I’d smile, nod, and maybe mumble something about how Disney’s Wall-E was “still pretty great though,” before politely excusing myself and blocking the entire conversation from my memory. That said, when the person doing the talking is considered by many to be one of the smartest men on the planet... well, I’m a little more inclined to pay attention.


During a recent Reddit “ask me anything” session, Microsoft co-founder and mega-philanthropist Bill Gates was asked whether he considered machine super-intelligence to be an “existential threat.” Given that both Gates’ fame and fortune were made, in part, as a result of his deep understanding of computer systems, it’s safe to say his opinion on artificial intelligence carries a fair amount of weight. Gates began his answer optimistically, but ended on a surprisingly blunt note of caution, replying:

“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.”

Now, “concern” should probably not be misinterpreted as “everyone panic!” in a Skynet-inspired freak-out. There is considerable debate among artificial intelligence experts not only as to whether A.I. could ever pose a threat to humankind, but whether such A.I. is even truly possible to begin with (and if so, what form it would take). Still, Gates joins two other prominent and well-respected voices in calling for caution with regard to artificial intelligence. One of those voices belongs to Tesla and SpaceX founder Elon Musk, whom Gates mentioned in his Reddit answer. Musk, as QZ points out, falls a little closer to the freak-out camp. He’s invested ten million dollars to investigate possible negative consequences of A.I., considers it possibly more dangerous than atomic weapons, and even went so far as to name-check The Terminator while talking about the dangers posed by A.I. in an appearance on CNBC last summer:

“In the movie ‘Terminator,’ they didn't create A.I. to — they didn't expect, you know some sort of ‘Terminator’-like outcome. It is sort of like the ‘Monty Python’ thing: Nobody expects the Spanish inquisition. It’s just — you know, but you have to be careful.”

It’s a sentiment shared by famed astrophysicist Stephen Hawking, who, in a May 2014 article for The Independent co-written with MIT professor Max Tegmark, argued that while A.I. has the potential to be the crowning achievement of human scientific development, if left unchecked it also carries the potential for catastrophically unforeseen consequences. Hawking, urging more serious research into the development and implementation of A.I., writes:

If a superior alien civilization sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here—we'll leave the lights on"? Probably not—but this is more or less what is happening with AI.

Hyperbolic language, sure, but when three people famous for revolutionizing science and technology start speaking up, it’s probably smart to at least listen to what they have to say.
