Self Reflection

Participants stand before a screen and camera setup and experience different versions of reality in real time. Created with Roee Shenberg, Ben Feher, and Gal Bracha.

Deep neural networks for sight are built from many stacked layers; each layer's input is the previous (lower) layer's output. The lowest layer is the input image, and the highest layer has a neuron for each category of object the network was trained to see: e.g. a "dog" neuron that fires if the input image contains a dog.
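As a rough sketch of that stacking, here is a toy network in PyTorch. The layer sizes and the 1000-category output are illustrative assumptions, not the network used in the piece:

```python
import torch
import torch.nn as nn

layers = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # lowest layer: reads the input image
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # each layer consumes the one below it
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1000),  # highest layer: one neuron per object category (e.g. "dog")
)

image = torch.randn(1, 3, 224, 224)   # a stand-in for a camera frame
scores = layers(image)                # scores[0, k] fires for category k
```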

The idea is to choose a target image whose style you wish to copy. We take an input image and modify it using the seeing neural network: we change the image in a way that makes its lower layers as similar as possible to those of the target image, while preserving the higher layers. The result logically contains the same content, but the physical components of the image take on the target's style. (Doing this fast enough requires another non-trivial step: training a separate neural network to perform the process faster.)
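A hedged sketch of that optimization, in the spirit of Gatys et al.'s neural style transfer, using a pretrained VGG19 from a recent torchvision. The layer indices chosen for "lower" (style) and "higher" (content) features, and the weights and step counts, are illustrative assumptions, not the exhibit's exact setup:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# A frozen "seeing" network; we optimize the image, not the network.
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = [0, 5, 10]   # lower layers: textures, strokes, colour statistics
CONTENT_LAYER = 21          # higher layer: the logical content of the scene

def extract(x, indices):
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in indices:
            feats[i] = x
        if i == max(indices):
            break
    return feats

def gram(f):
    # Channel-by-channel correlations: the usual summary of "style".
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def transfer(content_img, style_img, steps=300, style_weight=1e5):
    # Both inputs: (1, 3, H, W) tensors, preprocessed with ImageNet mean/std.
    wanted = set(STYLE_LAYERS + [CONTENT_LAYER])
    style_grams = {i: gram(f) for i, f in extract(style_img, wanted).items()
                   if i in STYLE_LAYERS}
    content_feat = extract(content_img, wanted)[CONTENT_LAYER]

    img = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):
        feats = extract(img, wanted)
        # Preserve the higher-level content...
        loss = F.mse_loss(feats[CONTENT_LAYER], content_feat)
        # ...while pushing lower-level statistics toward the style target.
        for i in STYLE_LAYERS:
            loss = loss + style_weight * F.mse_loss(gram(feats[i]), style_grams[i])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return img.detach()
```

This iterative version takes hundreds of steps per image; the "non-trivial step" mentioned above replaces it with a single feed-forward network trained to mimic its output in one pass.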

The impressive thing here is that this is entirely based on an artificial sight mechanism that learned by itself, and its machine perception ended up similar enough to our own that we can in fact call it perception. (This holds at the neuroscience level too: lower-layer neurons resemble the simple cells of the V1 region of the visual cortex, and there is even evidence of the network naturally learning to perceive shape and texture separately, much as the visual cortex does.)

Principles of training:
This is not a filter: the application uses state-of-the-art technology that, until now, was considered impossible to run live.
Executing a feed-forward neural network on every frame requires a dedicated computer equipped with a cutting-edge graphics processing unit (GPU), along with several libraries and tools working together to achieve the necessary throughput.
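A minimal sketch of that live execution path. The two-layer network here is a placeholder for the real feed-forward stylization model, and the specific settings (GPU device, disabling gradients) are typical ways to reach per-frame throughput rather than a description of the exhibit's actual pipeline:

```python
import torch
import torch.nn as nn

# Placeholder for the trained feed-forward stylization network.
net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=9, padding=4), nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=9, padding=4), nn.Sigmoid(),
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = net.to(device).eval()

@torch.no_grad()  # no gradients at inference time: less memory, more frames/sec
def stylize_frame(frame_uint8):
    # frame_uint8: (H, W, 3) uint8 tensor straight from the camera
    x = frame_uint8.permute(2, 0, 1).float().div(255).unsqueeze(0).to(device)
    y = net(x).clamp(0, 1)
    return (y.squeeze(0).permute(1, 2, 0) * 255).byte().cpu()
```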