I just created a video for the first time in quite a while for my YouTube account. It came about when I was experimenting with algorithmic representations of a source image using the Processing programming language. The image I happened to be working with was “The Umbrellas”, a painting by Pierre-Auguste Renoir.

As I worked with variations on the algorithm, it struck me that an animation could be interesting. The basics of the algorithm were to first create a grid of equally spaced points and assign each an x,y location. The key variable at this stage was the amount of spacing between grid locations; for this video I set the grid spacing to 10 pixels.
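The grid setup can be sketched as follows. This is plain Java rather than a Processing sketch, and the names are illustrative, not taken from the original program:

```java
// Builds the evenly spaced x,y grid described above.
public class GridDemo {
    static final int SPACING = 10; // pixels between grid points

    // Returns the x,y locations for a grid covering width x height.
    static int[][] buildGrid(int width, int height) {
        int cols = width / SPACING;
        int rows = height / SPACING;
        int[][] points = new int[cols * rows][2];
        int i = 0;
        for (int row = 0; row < rows; row++) {
            for (int col = 0; col < cols; col++) {
                points[i][0] = col * SPACING; // x
                points[i][1] = row * SPACING; // y
                i++;
            }
        }
        return points;
    }

    public static void main(String[] args) {
        int[][] pts = buildGrid(100, 50);
        System.out.println(pts.length); // 10 columns * 5 rows = 50 points
    }
}
```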

Next was to get the color value from Renoir’s “The Umbrellas” image for each of the x,y locations in the grid. To draw, I decided to use the line() function. The variables for each line would be color, starting angle, and length. For the line’s color, I simply used the color value taken from the image at that location. For the starting angle of rotation, I opted to use the hue of the corresponding pixel, which required converting the RGB color value into its HSB (hue, saturation, brightness) equivalent. The length of each line was determined by the brightness of the pixel at that location. This introduced another variable: the maximum line length. Brightness values of 0 to 255 had to be translated into a range of lengths from some minimum value to some maximum value. The standard way of doing this in Processing is the map() function, but I never use map() because it is an inefficient way to translate numbers from one scale into another. For the video, I simply divided the pixel brightness by 12 – meaning the longest a line could be was 21 pixels.
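The per-pixel math might look like the sketch below. It uses java.awt.Color in place of Processing’s hue() and brightness() functions (which return 0–255 by default); the method names are my own, not from the original sketch:

```java
import java.awt.Color;

// Derives each line's angle and length from a pixel's color, as described above.
public class LineParams {
    // Starting angle from the pixel's hue, scaled to a full rotation in degrees.
    static float angleFromPixel(int rgb) {
        float[] hsb = Color.RGBtoHSB((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF, null);
        return hsb[0] * 360f; // RGBtoHSB returns hue in 0..1
    }

    // Length from brightness: brightness 0-255 divided by 12,
    // so the longest possible line is 255 / 12 = 21 pixels.
    static int lengthFromPixel(int rgb) {
        float[] hsb = Color.RGBtoHSB((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF, null);
        int brightness = Math.round(hsb[2] * 255f); // rescale 0..1 to 0..255
        return brightness / 12;
    }

    public static void main(String[] args) {
        System.out.println(lengthFromPixel(0xFFFFFF)); // white -> 21
        System.out.println(lengthFromPixel(0x000000)); // black -> 0
    }
}
```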

To add some variability, I added a variable for the Z axis and used a Perlin noise field to control each location’s Z coordinate. The result is that the distance of each line from the camera varies somewhat, which enhances the perception of depth in the image.
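Processing’s built-in noise() provides the Perlin noise field. As a stand-in for readers outside Processing, here is a minimal value-noise implementation (not true Perlin noise, but smooth in the same way) mapped onto a Z coordinate; the scale factor and depth range are arbitrary choices for illustration:

```java
// A smooth noise field used to assign each grid point a Z offset.
public class NoiseZ {
    // Deterministic pseudo-random value in [0,1) at an integer lattice point.
    static double lattice(int x, int y) {
        int h = x * 374761393 + y * 668265263;
        h = (h ^ (h >>> 13)) * 1274126177;
        return ((h ^ (h >>> 16)) & 0x7FFFFFFF) / (double) 0x80000000;
    }

    // Smoothly interpolated noise in [0,1) at a real-valued point.
    static double noise(double x, double y) {
        int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
        double tx = x - x0, ty = y - y0;
        // smoothstep easing between lattice values
        tx = tx * tx * (3 - 2 * tx);
        ty = ty * ty * (3 - 2 * ty);
        double a = lattice(x0, y0), b = lattice(x0 + 1, y0);
        double c = lattice(x0, y0 + 1), d = lattice(x0 + 1, y0 + 1);
        double top = a + (b - a) * tx;
        double bottom = c + (d - c) * tx;
        return top + (bottom - top) * ty;
    }

    // Z offset for a grid point: noise scaled into a chosen depth range.
    static double zOffset(int gridX, int gridY, double maxDepth) {
        return noise(gridX * 0.05, gridY * 0.05) * maxDepth;
    }

    public static void main(String[] args) {
        double z = NoiseZ.zOffset(3, 7, 40.0);
        System.out.println(z >= 0 && z < 40.0); // stays within the depth range
    }
}
```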

Animating the image required changing one or more of the variables associated with each grid point over time. To keep things simple, I added a global variable that incremented each line’s rotation angle equally between frames. This created a uniform rotation for all the lines.
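The uniform rotation amounts to a single per-frame increment shared by every line. A minimal sketch, with an assumed step size (the original value is not given):

```java
// Uniform rotation: the same global increment is added to every line's angle each frame.
public class Rotation {
    static final float ROTATION_STEP = 0.02f; // radians added per frame (illustrative value)

    // Angle of a line at a given frame, from its starting angle.
    static float angleAtFrame(float startAngle, int frameCount) {
        return startAngle + ROTATION_STEP * frameCount;
    }

    public static void main(String[] args) {
        // Every line advances by the same amount, so relative angles are preserved.
        float a = angleAtFrame(0.0f, 100);
        float b = angleAtFrame(1.0f, 100);
        System.out.println(b - a); // 1.0
    }
}
```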

I then added a time variable to alter the Perlin noise field values over time and updated each line’s Z coordinate between frames. The main question was how wide a range of Z values to allow. For comparison, below is an illustration of what you would see if a substantially greater range of Z coordinate values were allowed.

To create the output, I used the Processing saveFrame() feature to write each frame of the movie to a tiff file. Separately, I had used Audacity to create a narration soundtrack for the video. Once I knew how long my audio track was, I simply dropped a variable into my Processing program that indicated the frameCount at which to stop generating image frames.
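A frame budget like this is just the audio length times the frame rate. A small sketch of the arithmetic, using an assumed 30 fps (the actual frame rate isn’t stated above):

```java
// Computes the frameCount at which to stop saving frames,
// given the narration length and the sketch's frame rate.
public class FrameBudget {
    static int stopFrame(double audioSeconds, int fps) {
        // Round up so the video is never shorter than the audio.
        return (int) Math.ceil(audioSeconds * fps);
    }

    public static void main(String[] args) {
        // e.g. the 3000 frames mentioned below would correspond
        // to 100 seconds of narration at 30 fps.
        System.out.println(stopFrame(100.0, 30)); // 3000
    }
}
```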

While I have previously used tools like FFMPEG to create videos, this time I decided to use Processing’s Movie Maker tool. After confirming that my tiff and mp3 files were fine, I started up Movie Maker. I specified the input sources for my files, went with the default compression setting (Animation), and clicked the “Create Movie” button. I then monitored the dialog window as the program progressed through the 3000 images used to create the video. The program ended without error.

But when I went to view the video using VLC, all I got was a black screen and horribly garbled and spotty audio. I had no idea what had gone wrong. Rather than resorting to one of my other tools, I opted to give Movie Maker another try. The only difference was that this time I selected the JPEG option for compression. The dialog proceeded as before and again ended without error. This time the video and audio were fine, except for a narrow strip of color along the video’s edge. For purposes of this video, that is something I can live with.

Unfortunately, the image quality of the video has suffered due to YouTube’s overly enthusiastic compression. Whereas my original video upload was a 3.1 gigabyte file, YouTube compressed it down to a mere 29 megabytes (you can’t throw away that much information without losing quality). While I do understand the need to economize on bandwidth, such economies can be achieved in part by viewing the video at one of the lower resolution settings.

You can view the video on YouTube at: https://youtu.be/CNE0j1LXIJ0. Give the video a watch, let me know what you think, and share it if you like it.