
For Google, improving image recognition means building artificial neural networks: software that can recognize images, learn from them and, eventually, generate new images in the likeness of the originals. It's one way the future of search could improve how results are served, and it may give creative advertising agencies an alternative perspective.

Artificial Neural Networks, a Google research project, relies on software modeled on the way biological brains learn. A network is trained by being shown millions of example images, with researchers gradually adjusting its parameters until it gives the desired result, such as correctly recognizing an image.
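The adjustment loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not Google's actual system: a single artificial "neuron" whose parameters are nudged, example by example, until its output matches the desired label.

```python
import math
import random

def predict(weights, bias, features):
    """One artificial neuron: weighted sum of inputs, squashed to (0, 1)."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, steps=2000, lr=0.5):
    """Gradually adjust the parameters until predictions approach the labels."""
    random.seed(0)
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(steps):
        features, label = random.choice(examples)
        out = predict(weights, bias, features)
        error = out - label                   # how far off the prediction is
        for i, x in enumerate(features):
            weights[i] -= lr * error * x      # nudge each parameter
        bias -= lr * error
    return weights, bias

# Toy stand-ins for images: two-pixel inputs labeled 1 if bright, 0 if dark.
data = [([1.0, 0.9], 1), ([0.8, 1.0], 1), ([0.1, 0.0], 0), ([0.0, 0.2], 0)]
w, b = train(data)
```

After training, the neuron scores bright inputs high and dark inputs low; a real network repeats this adjustment across millions of parameters and images.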

If an image is identified incorrectly, researchers adjust the
artificial neurons to help the network reach the correct conclusion. The network consists of between 10 and 30 layers of these "artificial neurons," each feeding its output to the next, until the final layer produces the recognition decision.
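The layer-to-layer flow can be sketched as follows. This is an illustrative toy, with three tiny hand-set layers standing in for the 10 to 30 layers the researchers describe: each layer's outputs become the next layer's inputs, and the final layer's output is the decision.

```python
import math

def layer(inputs, weights, biases):
    """One layer: each output neuron sums all inputs, then squashes to (0, 1)."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))
    return outputs

def forward(image, network):
    """Pass the image through each layer in turn; the last layer 'decides'."""
    activation = image
    for weights, biases in network:
        activation = layer(activation, weights, biases)
    return activation

# Three illustrative layers with fixed, hand-picked parameters.
net = [
    ([[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]),   # contrast-like features
    ([[2.0, 2.0], [-2.0, -2.0]], [-1.0, 1.0]),  # combinations of features
    ([[3.0, -3.0]], [0.0]),                     # final decision neuron
]
score = forward([0.9, 0.1], net)[0]
```

In a real network the early layers would operate on raw pixels and the later layers on increasingly abstract features, as the researchers describe below.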

One remaining challenge is understanding exactly what goes on at each layer. "We know that after training, each layer progressively
extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows," wrote a team of Google software researchers in a blog post. "For example, the first layer maybe looks for edges or corners. Intermediate layers interpret
the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations — these neurons activate in response to very
complex things such as entire buildings or trees."

The researchers "were surprised to find that neural networks that were trained to discriminate between different kinds of images have quite a bit
of the information needed to generate images too." In other words, the technology does not rely on the signals once thought necessary to recreate an image.
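The discriminate-to-generate finding can be illustrated by running a trained classifier "in reverse": hold its parameters fixed and adjust the input instead, by gradient ascent, until the class score rises. The sketch below is a hypothetical single-neuron stand-in for a full network, purely to show the mechanism.

```python
import math

# Fixed, "already trained" parameters of a toy three-pixel classifier.
WEIGHTS = [2.0, -1.5, 0.5]
BIAS = -0.2

def classify(pixels):
    """Score how strongly the input resembles the target class."""
    z = sum(w * p for w, p in zip(WEIGHTS, pixels)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def dream(pixels, steps=100, lr=0.1):
    """Nudge each pixel in the direction that raises the class score."""
    pixels = list(pixels)
    for _ in range(steps):
        out = classify(pixels)
        grad_z = out * (1.0 - out)                     # derivative of the squash
        for i, w in enumerate(WEIGHTS):
            pixels[i] += lr * grad_z * w               # ascend the class score
            pixels[i] = min(1.0, max(0.0, pixels[i]))  # keep a valid pixel range
    return pixels

start = [0.5, 0.5, 0.5]
generated = dream(start)
```

Starting from a neutral gray input, the loop pushes the pixels toward whatever pattern the classifier scores highest, which is the sense in which a discriminative network contains the information needed to generate images.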

The
researchers used an image of dumbbells as an example. In this case, the network failed to completely identify the dumbbells because the picture lacked the muscular weightlifter that most
would associate with the exercise equipment.

Researchers said the techniques help them better understand and visualize how neural networks carry out difficult classification tasks,
improve network architecture, and check what a network has learned during training. They also suggest the techniques might serve as a tool for artists, or perhaps creative advertising agencies, offering a way to
remix visual concepts or even shed a little light on the roots of the creative advertising process in general.