Hey, peeps, I found another site called Psychic VR Lab that lets you feed images to Google’s Deep Dream algorithm. Take a look:

The Hagia Sophia went to the dogs rather quickly.

The demons that dogged Rasputin finally come to light?

Again, I noticed, as have others, that images of dogs keep appearing, almost all the time. "Why is this?" I wondered. Well, it turns out there's a very clear and simple reason:

A neural network's ability to recognize what's in an image comes from being trained on an initial data set. In Deep Dream's case, that data set comes from ImageNet, a database of 14 million human-labeled images built by researchers at Stanford and Princeton. But Google didn't use the whole database. Instead, they used a smaller subset of ImageNet released in 2012 for use in a contest… a subset which contained a "fine-grained classification of 120 dog sub-classes."
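Just as a back-of-the-envelope sketch of why that matters (the 1,000-class figure is the standard size of the 2012 contest subset; the ~120 dog breeds is the number quoted above):

```python
# Back-of-the-envelope: the 2012 ImageNet contest subset has 1,000 classes,
# roughly 120 of which are fine-grained dog breeds.
total_classes = 1000
dog_classes = 120

share = dog_classes / total_classes
print(f"{share:.0%} of the network's vocabulary is dog breeds")  # prints "12% ..."
# So when the network "dreams," about one concept in eight that it can
# reach for is some kind of dog.
```

No wonder everything goes to the dogs.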

If I understand this correctly, it means that Deep Dream sees what we've told it to see. We're getting better at training computers to recognize images, but we haven't gotten to the point of being able to teach them how to teach themselves. Yet.

So, Google has a program called Deep Dream, which basically feeds visual input into a neural network and lets the A.I. translate what it sees. The results can be arresting. Thanks to Wired, I found a site called Dreamscope which allows anyone to play with this neural networking setup. Here's an example of what I was able to do:

I fed this image through the “Trippy” filter seven times and the “Self-Transforming Machine Elves” filter three times.
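For the curious, here's a minimal toy sketch of the trick underneath all this, as it's usually described: run an image through a network, then nudge the image itself (gradient ascent) so the network's activations get stronger, and repeat, much like running a filter seven times. This toy stands in a single random linear + ReLU "layer" for a real trained network, so it's an illustration of the mechanism, not Google's actual code:

```python
import numpy as np

# Toy "Deep Dream": amplify whatever a layer already responds to by
# doing gradient ascent on the INPUT image, not on the weights.
rng = np.random.default_rng(0)
W = rng.standard_normal((32, 64))   # stand-in for one trained layer's weights
image = rng.standard_normal(64)     # stand-in for a flattened input image

def activation_energy(x):
    """How strongly the layer responds to x (the quantity we amplify)."""
    a = np.maximum(W @ x, 0.0)      # ReLU activations
    return 0.5 * np.sum(a ** 2)

def grad_wrt_input(x):
    """Gradient of the energy with respect to the input image."""
    a = np.maximum(W @ x, 0.0)
    return W.T @ a                  # d/dx of 0.5 * sum(relu(W @ x) ** 2)

before = activation_energy(image)
for _ in range(10):                 # each pass is like reapplying a filter
    image = image + 0.001 * grad_wrt_input(image)
after = activation_energy(image)

print(after > before)               # the "dream" gets stronger each pass
```

Each pass exaggerates whatever the layer was already half-seeing, which is exactly why a network trained mostly on dog breeds keeps hallucinating dogs.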