By Graham Templeton on June 8, 2017
When you’re an A.I. researcher at Google, even your days off are filled with neural nets. Mike Tyka is a Google scientist who helped create the company’s DeepDream project, but this week he posted details of a personal project that could someday make DeepDream seem primitive. That famous program works by essentially blending together elements of existing pictures and then modifying the collage; Tyka’s new approach takes a much more difficult and potentially rewarding path: teaching an A.I. to create all-new portraits from scratch.
“I don’t mind if the results are not necessarily realistic but fine texture is important no matter what even if it’s surreal but [high-resolution] texture,” Tyka commented Tuesday on his blog.
How It Works
The approach uses “generative adversarial networks” (GANs) to refine the A.I.’s abilities over time. A GAN pits two neural networks against each other: a generator draws a picture from scratch (the generative part), while a discriminator tries to tell whether a given picture is real or A.I.-generated (the adversarial part). The system trends toward better and better looking portraits over time, as the generator learns to trick the discriminator into misidentifying its creations as real. Each time that happens, the discriminator learns from its mistake and gets better at picking out fakes in the future. In this way, the generative and adversarial halves of the system progress together, each one continually driving evolution in the other.
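To make that feedback loop concrete, here is a deliberately tiny sketch of the idea in plain Python. This is not Tyka’s actual setup (which works on images with deep networks); it is a one-dimensional toy where the “real data” is just numbers near an assumed constant, the generator is a single learnable offset, and the discriminator is a one-parameter logistic classifier. The names (`REAL_MEAN`, `g`, `w`, `b`) and all hyperparameters are illustrative choices, not anything from the article.

```python
import math
import random

random.seed(0)

REAL_MEAN = 0.7  # the "real" data distribution: samples near 0.7 (toy assumption)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Discriminator D(x) = sigmoid(w*x + b): outputs the estimated
# probability that x came from the real data rather than the generator.
w, b = 0.0, 0.0

# Generator G(eps) = g + eps: a single learnable offset plus noise.
g = 0.0

lr = 0.05
for step in range(5000):
    real = REAL_MEAN + random.gauss(0.0, 0.05)
    fake = g + random.gauss(0.0, 0.05)

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e. get better at telling real samples from generated ones.
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr * ((1.0 - d_real) * real - d_fake * fake)
    b += lr * ((1.0 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake), i.e. nudge g so the
    # discriminator misclassifies the generated sample as real.
    d_fake = sigmoid(w * fake + b)
    g += lr * (1.0 - d_fake) * w

# After training, g should sit near REAL_MEAN: the generator's outputs
# have become hard to distinguish from the real data.
```

The alternating updates are the whole trick: the discriminator’s improvement creates the gradient signal the generator climbs, and the generator’s improvement forces the discriminator to keep sharpening, which is the mutual evolution described above. Real portrait GANs replace these two scalar models with deep convolutional networks, but the training loop has the same shape.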

This artwork represents what it would be like for an A.I. to watch Bob Ross on LSD (once someone invents digital drugs). It shows some of the unreasonable effectiveness and strange inner workings of deep learning systems. The unique characteristics of the human voice are learned and generated, alongside the hallucinations of a system trying to find images that are not there.