Generating Glitches

A disclaimer before you read this post: some of the images contained within are NSFW (Not Safe for Work), and the opinions expressed were given some thought, though not a whole lot.

Glitch art has become quite popular lately. Roughly, glitch art is the aestheticisation of errors. I got interested in glitch art when my hard drive broke a few years ago and I accidentally made some interesting images.

Since then I have been writing software to specifically achieve effects that somehow look glitchy, or more generally represent a distortion of the data present in an image.

Such explicit glitching of images was widely popularised a few years ago by, among others, Kim Asendorf who made glitch art by sorting the pixels of input images. Currently, there is an active community of people making glitch art on Reddit.

I think the original intent of glitch art was some sort of commentary on the pervasiveness of digital and electronic media, or at least a hijacking of that weird sense of discomfort one gets when a seamless system, one that is taken for granted, suddenly breaks. Still, there is no reason why an artificially created glitch cannot be just as interesting. It is certainly more likely to be aesthetically pleasing than the “happy accidents” of pure glitches. In the end, it is intention that makes glitches art. I have intentionally made some images, and I will briefly explain the process below.

I have been using Processing to make art for a few years now. It is a nice, simple ecosystem for making generative art. I got started with this particular project by wondering if I could display a 2D image in 3D space by moving the brighter parts of the image forward and the darker parts backwards. To do this I break an image up into blocks and colour these blocks with the average colour of the pixels underneath. I then move each block forward depending on its average brightness. The blocks can be drawn just as squares floating in space, or, as in some of the following images, as solid boxes. Here you can see one of my first attempts at this:
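The block-averaging step can be sketched in plain Python. The original sketches are written in Processing, so the function names, the grayscale simplification, and the exact brightness-to-depth mapping below are my own assumptions, not the author's code:

```python
# Sketch of the block-displacement idea. An "image" here is a
# 2D list of grayscale values in [0, 255]; a real sketch would
# average the R, G, B channels separately.

def block_average(img, bx, by, size):
    """Average brightness of the size x size block at (bx, by)."""
    total = 0
    for y in range(by, by + size):
        for x in range(bx, bx + size):
            total += img[y][x]
    return total / (size * size)

def blocks_with_depth(img, size, max_depth=100.0):
    """Return (x, y, z, brightness) for each block: brighter
    blocks get a larger z, pushing them toward the viewer."""
    h, w = len(img), len(img[0])
    out = []
    for by in range(0, h - size + 1, size):
        for bx in range(0, w - size + 1, size):
            b = block_average(img, bx, by, size)
            z = (b / 255.0) * max_depth  # brightness -> depth
            out.append((bx, by, z, b))
    return out
```

In Processing, each resulting tuple would then be drawn with `rect()` or `box()` at the computed depth, filled with the block's average colour.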

The images can be viewed from different angles:

The technique can sometimes enhance the detail of an image, as in this image of the night sky:

Though, it can sometimes have quite a distorting effect, as in this image of a rainbow lorikeet:

In general, the distortion occurs whenever the brightness of parts of an image does not correspond to our expectations about the depths of the things in the image. More images from this set can be seen here.

Naturally, I could not look at the images I was generating without thinking of ways to break the process I was using to represent them in 3D space. The process I decided on was to move the blocks of an image laterally, depending on the brightness gradients in their local part of the image. So, parts of an image that are relatively uniform in brightness remain undistorted, while areas where the brightness changes quickly are distorted.
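One way to realise this lateral displacement is sketched below, again in plain Python rather than Processing. The gradient estimate (comparing a block's average brightness with its right and lower neighbours) and the `strength` scaling are my own assumptions about how such an effect might be implemented:

```python
def block_average(img, bx, by, size):
    """Average brightness of the size x size block at (bx, by)."""
    total = 0
    for y in range(by, by + size):
        for x in range(bx, bx + size):
            total += img[y][x]
    return total / (size * size)

def displaced_blocks(img, size, strength=0.1):
    """Shift each block laterally in proportion to the local
    brightness gradient: flat regions stay put, while blocks
    near sharp brightness changes move the most."""
    h, w = len(img), len(img[0])
    out = []
    # Stop one block early so the neighbour lookups stay in bounds.
    for by in range(0, h - 2 * size + 1, size):
        for bx in range(0, w - 2 * size + 1, size):
            here = block_average(img, bx, by, size)
            gx = block_average(img, bx + size, by, size) - here
            gy = block_average(img, bx, by + size, size) - here
            out.append((bx + gx * strength, by + gy * strength))
    return out
```

With `strength = 0`, this degrades to the undistorted 3D representation; increasing it exaggerates the smearing around edges.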

Following are a series of images that explore this technique. The full gallery can be found here. I will post a follow-up blog post in the near future where I take these approaches further and distort videos. I will also present some related approaches I have developed which distort an image on the basis of the variance of parts of it.

The following images are made from a found photo. Unfortunately I have no way of knowing who the model or artist is.