The Year of the Glitch project is finishing up its second month and is just 5 followers away from 1000. It would be super slick if we reached that 1000 follower milestone today, leap day. So if you’re not already following, do so now!

The color of each pixel is the result of applying a series of operations to a value and iterating those operations over the entire image. If you have a look at the code, you can see the operations are pretty much arbitrarily chosen and the outcome is difficult to know in advance.
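To make that concrete, here’s a minimal sketch in plain Java of the general idea, an arbitrary chain of operations iterated across an image, with each result feeding the next pixel. The specific operations (a multiply-add and some bit mixing) are my own stand-ins, not the ones in the actual sketch:

```java
public class PixelOps {
    // Apply a small chain of arbitrarily chosen operations to a value.
    // These particular ops are illustrative, not the project's actual ones.
    static int opChain(int v) {
        v = (v * 31 + 7) & 0xFF; // arbitrary multiply-add, clamped to a byte
        v = v ^ (v >> 3);        // arbitrary bit mixing
        return v;
    }

    public static void main(String[] args) {
        int[] pixels = new int[16];
        // Iterate the operations over the whole image, feeding each
        // result into the next pixel so small changes propagate.
        int v = 1;
        for (int i = 0; i < pixels.length; i++) {
            v = opChain(v);
            pixels[i] = v;
        }
        System.out.println(pixels[0] + " " + pixels[pixels.length - 1]);
    }
}
```

Because each pixel depends on the one before it, it’s hard to predict the final image without just running the chain, which is exactly the appeal.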

Allow me to explain further: feedback requires an input source and an output destination connected so that the output is fed back into the input. In this case, the source is one set of pixels and the destination is another set of pixels onto which we map the source, applying some transformations along the way. Here the transformations are displacement and hue rotation. The displacement is controlled by a flocking algorithm, the one that appears in the Processing examples written by Daniel Shiffman.
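Stripped of the flocking part, the feedback core might be sketched like this in plain Java. The fixed displacement offset and the channel-cycling stand-in for hue rotation are my assumptions for illustration, not the sketch’s actual code:

```java
public class FeedbackSketch {
    // One feedback pass: read from the previous frame (source), write
    // displaced, "hue-rotated" pixels into the next frame (destination).
    static int[] feedbackPass(int[] src, int w, int h, int dx, int dy) {
        int[] dst = new int[src.length];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // Displacement: sample the source at an offset, wrapping at edges.
                // (In the real sketch the offset comes from the flocking agents.)
                int sx = ((x + dx) % w + w) % w;
                int sy = ((y + dy) % h + h) % h;
                int c = src[sy * w + sx];
                // Crude stand-in for hue rotation: cycle the R, G, B channels.
                int r = (c >> 16) & 0xFF, g = (c >> 8) & 0xFF, b = c & 0xFF;
                dst[y * w + x] = (b << 16) | (r << 8) | g;
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        int w = 2, h = 2;
        int[] frame = {0xFF0000, 0x00FF00, 0x0000FF, 0xFFFFFF};
        // Feed the output back in as the next input: that loop is the feedback.
        for (int i = 0; i < 3; i++) frame = feedbackPass(frame, w, h, 1, 0);
        System.out.printf("%06X%n", frame[0]);
    }
}
```

The important part is the last loop: the destination of one pass becomes the source of the next, so the transformations compound over time.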

At present, I have a lot of work to do to clean things up and make some intuitive affordances, but here’s the sketch code in progress:

If it looks kinda like datamoshing, that’s because the very same process is at work when you remove i-frames and repeatedly copy and paste a p-frame. Sorry about the jargon, but basically I’ve coded a process that datamoshing exploits: a feature of video rendering on the decoding side that applies vector-based displacement to the data sitting in the frame buffer.
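A toy model of that decoder-side behavior, assuming a single uniform motion vector rather than real per-block vectors: with the i-frame gone, the same p-frame’s motion vectors get applied to the stale buffer again and again, so the image keeps smearing in the direction of motion.

```java
public class MoshSim {
    // Apply one motion-vector displacement to the frame buffer: each
    // pixel is pulled from the location the vector points back to.
    // (Real codecs use one vector per block; one global vector suffices here.)
    static int[] applyMotion(int[] buf, int w, int h, int vx, int vy) {
        int[] out = new int[buf.length];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sx = Math.min(Math.max(x - vx, 0), w - 1); // clamp at edges
                int sy = Math.min(Math.max(y - vy, 0), h - 1);
                out[y * w + x] = buf[sy * w + sx];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] buf = {1, 2, 3, 4}; // a 4x1 "frame" of pixel values
        // Repeatedly pasting the same p-frame = repeatedly applying its
        // vectors to whatever is already in the buffer.
        for (int i = 0; i < 3; i++) buf = applyMotion(buf, 4, 1, 1, 0);
        System.out.println(java.util.Arrays.toString(buf));
    }
}
```

With no i-frame arriving to reset the buffer, the displaced copy is all the decoder has to work with, which is where the characteristic smear comes from.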