
Can a Robot Create Something Beautiful?

A creative alternative to Alan Turing’s famous test for gauging a computer’s capacity for human-like intelligence

Photo courtesy of Weinstein Company

This week, one British genius will impersonate another in The Imitation Game, a newly released drama about mathematician Alan Turing. Turing was instrumental in cracking the Nazis’ Enigma code during World War II, helping the Allies win the war; he was later prosecuted for homosexuality (a criminal offense in the United Kingdom in the 1950s) and, sadly, died just before his 42nd birthday from cyanide poisoning. His emotional ups and downs will be brought to life by everyone’s favorite internet sex god, Benedict Cumberbatch, who has perfected the quizzical eyebrow as the star of BBC’s Sherlock.


Alan Turing statue. Photo courtesy of Wikimedia Commons

Besides his wartime achievements, Turing is also famous for coming up with what is now known as the Turing Test, which assesses a machine’s ability to think like a human. In the test, first proposed by Turing in 1950, a human asks two participants a set of written questions. One participant is human; the other is a machine. If the “judge” asking the questions can’t reliably tell which participant is human and which is not, then the machine has passed the assessment. This is what Turing called “the imitation game”; since then, different versions of the Turing Test have become the benchmark for determining machine intelligence.
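For readers who like to see the shape of the test spelled out, here is a minimal sketch of one round of the imitation game in Python. It is an illustration of the setup described above, not anything from Turing's paper; the callables `judge_guess`, `human_answer`, and `machine_answer` are hypothetical placeholders.

```python
import random

def imitation_game(judge_guess, human_answer, machine_answer, questions):
    """One round of the imitation game: True if the machine 'passes',
    i.e. the judge fails to identify the human respondent."""
    # Randomly hide the two respondents behind anonymous labels A and B.
    answerers = [("human", human_answer), ("machine", machine_answer)]
    random.shuffle(answerers)
    labels = dict(zip("AB", answerers))

    # Both respondents answer the same set of written questions.
    transcripts = {label: [(q, answer(q)) for q in questions]
                   for label, (_, answer) in labels.items()}

    # The judge reads both transcripts and names the label it believes is human.
    guess = judge_guess(transcripts)  # returns "A" or "B"
    truly_human = next(l for l, (who, _) in labels.items() if who == "human")

    # The machine passes this round if the judge guesses wrong; over many
    # rounds, passing means the judge does no better than chance.
    return guess != truly_human
```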

Now, a researcher at Georgia Tech has proposed an alternative: a test that assesses a computer’s capacity for human-level intelligence through its ability to create rather than converse.

Mark Riedl, a professor in Georgia Tech’s School of Interactive Computing, believes that creating certain types of art requires intelligence that only humans possess, leading him to wonder if there could be a better way to gauge whether a machine can replicate human thought. His Lovelace 2.0 Test of Artificial Creativity and Intelligence measures how well a machine can create a “creative artifact” from a subset of artistic genres set by a human evaluator. The machine passes if it meets the set criteria, but the final product does not actually have to resemble a Rembrandt.
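To make that concrete, here is a minimal sketch of the Lovelace 2.0 setup as the article describes it: a human evaluator picks a genre and a set of constraints, the machine produces an artifact, and it passes if the evaluator judges that every constraint is met. The function names are hypothetical, and the graded, repeat-with-more-constraints scoring in the second function is an assumption about how such a test could be quantified rather than a detail taken from this article.

```python
def lovelace_2_round(create_artifact, evaluator_accepts, genre, constraints):
    """One challenge round: True if the generated artifact satisfies all of
    the evaluator's constraints for the given genre."""
    artifact = create_artifact(genre, constraints)
    return all(evaluator_accepts(artifact, genre, c) for c in constraints)

def lovelace_2_score(create_artifact, evaluator_accepts, genre, constraint_sequence):
    """Assumed graded variant: repeat the challenge with progressively more
    constraints and count how many rounds the machine passes."""
    passed = 0
    for i in range(1, len(constraint_sequence) + 1):
        if lovelace_2_round(create_artifact, evaluator_accepts,
                            genre, constraint_sequence[:i]):
            passed += 1
        else:
            break
    return passed
```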

Mark Riedl, Ph.D. Photo courtesy of Georgia Tech

“It's important to note that Turing never meant for his test to be the official benchmark as to whether a machine or computer program can actually think like a human,” Riedl said. “And yet it has, and it has proven to be a weak measure because it relies on deception. This proposal suggests that a better measure would be a test that asks an artificial agent to create an artifact requiring a wide range of human-level intelligent capabilities.”

Riedl will present his paper at an Association for the Advancement of Artificial Intelligence workshop in January.
