Will the next great art movement be created by A.I.?
With most of the world’s pictures coursing through its servers, Google asked its machines to think for themselves and create their own images. The results are trippy, dreamlike, and surprisingly accurate representations of everyday objects.
How did they do it? Google’s programmers taught their Artificial Neural Networks (ANNs) what several everyday objects look like by showing them millions of photos. Google then asked the networks to generate pictures of those objects. What the ANNs came up with is like something out of a Salvador Dalí painting.
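The core trick can be sketched in a few lines. This is not Google’s actual code, just a toy illustration of the idea: keep a trained network’s weights fixed, and instead of updating the weights, repeatedly nudge the *input pixels* in the direction that raises the network’s confidence for a chosen object (gradient ascent on the image). Here a single linear scoring layer with a made-up `target_pattern` stands in for a real trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained network: one linear scoring layer whose
# weights encode what the object "looks like" (a fixed random
# pattern here, purely for illustration).
target_pattern = rng.normal(size=(8, 8))

def score(img):
    # The network's "confidence" that img shows the object.
    return float(np.sum(img * target_pattern))

# Start from faint noise and climb the score's gradient with
# respect to the pixels, leaving the "weights" untouched.
img = rng.normal(scale=0.01, size=(8, 8))
initial = score(img)
for _ in range(100):
    grad = target_pattern   # d(score)/d(img) for a linear layer
    img += 0.1 * grad       # gradient-ascent step on the pixels

print(score(img) > initial)
```

After the loop, the image has drifted toward whatever the network associates with the object, which is exactly why the real results look dreamlike: the pictures show the network’s learned idea of a thing, not any photo it was shown.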
No doubt we’ll see these pictures hanging on a dorm room wall at some point.
In some cases, the ANNs didn’t deliver exactly what Google’s programmers were looking for. In the case below, when asked to create a barbell, the ANNs assumed that an arm was attached to it, because most of the training photos included one.
Isn’t this one just beautiful?