Posts Tagged ‘algorithmic’

I just created a video for the first time in quite a while for my YouTube account. It came about when I was experimenting with algorithmic representations of a source image using the Processing programming language. The image I happened to be working with was “The Umbrellas”, a painting by Pierre-Auguste Renoir.

As I worked with variations on the algorithm, it struck me that an animation could be interesting. The basics of the algorithm were to first create a grid of equally spaced points and assign each an x,y location. The key variable at this stage was the amount of spacing between adjacent grid locations. For this video I set the grid spacing to 10 pixels.
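In plain Java terms (the actual sketch was written in Processing, and the names below are mine, not the original's), the grid setup might look like this:

```java
import java.util.ArrayList;
import java.util.List;

public class GridDemo {
    // Spacing between grid points, as used for the video (10 pixels).
    static final int GRID_SPACING = 10;

    // Build the list of equally spaced x,y sample locations for an image
    // of the given dimensions.
    static List<int[]> buildGrid(int width, int height) {
        List<int[]> points = new ArrayList<>();
        for (int y = 0; y < height; y += GRID_SPACING) {
            for (int x = 0; x < width; x += GRID_SPACING) {
                points.add(new int[] { x, y });
            }
        }
        return points;
    }

    public static void main(String[] args) {
        // A 100 x 50 pixel image yields 10 columns x 5 rows = 50 grid points.
        System.out.println(buildGrid(100, 50).size());
    }
}
```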

Next was to get the color value from Renoir’s “The Umbrellas” image for each of the x,y locations in the grid. To draw, I had decided to use the line() function. The variables for each line would be color, starting angle, and length. For the line’s color, I simply used the color value taken from the image at that location. For the starting angle of rotation, I opted to use the hue of the corresponding pixel. Getting the hue required converting the RGB color value into its HSB (hue, saturation, brightness) equivalent. The length of each line was determined by the brightness of the pixel at that location. This introduced another variable: the maximum line length. Brightness values of 0 to 255 had to be translated into a range of lengths from some minimum value to some maximum value. The standard way of doing this in Processing is the map() function, but I never use map() because I find it an inefficient way to translate numbers from one range to another. For the video, I simply divided the pixel brightness by 12, meaning the longest a line could be was 21 pixels.
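Here is a minimal Java sketch of that per-pixel logic, using java.awt.Color.RGBtoHSB in place of Processing's hue() and brightness() functions; the method name is illustrative, not from the original sketch:

```java
import java.awt.Color;

public class LineParams {
    // Derive the drawing parameters for one grid point from its pixel color.
    // Returns { hueAngleDegrees, lineLength }.
    static float[] paramsForPixel(int r, int g, int b) {
        // Convert RGB to HSB; Java returns each component in the range 0..1.
        float[] hsb = Color.RGBtoHSB(r, g, b, null);
        float angle = hsb[0] * 360f;          // hue as a rotation angle in degrees
        float brightness255 = hsb[2] * 255f;  // rescale brightness to 0..255
        // Dividing by 12 caps the line length at 255 / 12 = 21.25 pixels,
        // avoiding a call to map().
        float length = brightness255 / 12f;
        return new float[] { angle, length };
    }

    public static void main(String[] args) {
        float[] p = paramsForPixel(255, 255, 255); // pure white pixel
        System.out.println(p[0] + " " + p[1]);     // angle 0.0, maximum length
    }
}
```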

To add some variability, I added a variable for the Z axis and used a Perlin noise field to control each location’s Z coordinate. The result is that the distance of each line from the camera varies somewhat, which enhances the perception of depth in the image.
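Java has no built-in Perlin noise, so the sketch below uses a simple interpolated value-noise as a stand-in for Processing's noise() function. It is not the original code, only an illustration of deriving a smooth Z offset from each grid coordinate:

```java
public class ZField {
    // A deterministic value-noise stand-in for Processing's noise():
    // pseudo-random values on integer lattice points, smoothly interpolated
    // in between. Not true Perlin gradient noise, but it produces a
    // similarly smooth field suitable for offsetting each point's Z.
    static float lattice(int x, int y) {
        // Hash the lattice coordinates into a repeatable value in 0..1.
        long h = x * 374761393L + y * 668265263L;
        h = (h ^ (h >>> 13)) * 1274126177L;
        return ((h ^ (h >>> 16)) & 0xFFFF) / 65535f;
    }

    static float smooth(float t) { return t * t * (3 - 2 * t); } // smoothstep

    static float noise2(float x, float y) {
        int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
        float fx = smooth(x - x0), fy = smooth(y - y0);
        float top = lattice(x0, y0) * (1 - fx) + lattice(x0 + 1, y0) * fx;
        float bot = lattice(x0, y0 + 1) * (1 - fx) + lattice(x0 + 1, y0 + 1) * fx;
        return top * (1 - fy) + bot * fy; // value in 0..1
    }

    // Map a grid location to a Z offset in [-maxZ/2, +maxZ/2]. The scale
    // parameter controls how gradually the field varies across the grid.
    static float zOffset(int gx, int gy, float scale, float maxZ) {
        return (noise2(gx * scale, gy * scale) - 0.5f) * maxZ;
    }
}
```

Increasing maxZ widens the range of camera distances, which is exactly the variable discussed below when animating the field.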

To animate the image required changing one or more of the variables associated with each grid point over time. Keeping things simple, I added a global variable that would equally increment each line’s rotation angle between frames. This created a uniform rotation for all the lines.

I then added a time variable to alter the Perlin noise field values over time and updated each line’s Z coordinate between frames. The main issue here was with respect to how much I wanted the range of Z values to vary. For comparison, below is an illustration of what you would see if a substantially greater range of Z coordinate values was allowed.
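Stripped of all drawing code, the per-frame update amounts to something like the following Java sketch; the step sizes are illustrative, not the values I actually used:

```java
public class AnimationState {
    float globalAngle = 0f;  // shared rotation added to every line's angle
    float noiseTime = 0f;    // extra noise dimension, advanced per frame

    static final float ANGLE_STEP = 0.02f; // radians per frame (illustrative)
    static final float TIME_STEP  = 0.01f; // Z-field drift speed (illustrative)

    // Advance the animation by one frame. In a Processing sketch this would
    // run at the top of draw(); each line is then drawn with
    // rotation = hueAngle + globalAngle and z = noise(x, y, noiseTime).
    void step() {
        globalAngle += ANGLE_STEP;
        noiseTime += TIME_STEP;
    }
}
```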

To create the output, I used the Processing saveFrame() feature to write each frame of the movie to a tiff file. Separately I had used Audacity to create a narration soundtrack for the video. Once I knew how long my audio track was, I simply dropped a variable into my Processing program which indicated the frameCount at which to stop generating image frames.
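The bookkeeping here is simple arithmetic. In Java-like form (the method name is mine), the stopping frame follows from the audio length and frame rate; in the Processing sketch itself, the check would be something like `if (frameCount > stop) exit(); else saveFrame("frames/####.tif");`:

```java
public class FrameBudget {
    // Given the narration length in seconds and the movie's frame rate,
    // compute the frameCount at which to stop saving frames.
    static int stopFrame(double audioSeconds, int fps) {
        return (int) Math.ceil(audioSeconds * fps);
    }

    public static void main(String[] args) {
        // For example, a 100-second narration at 30 fps needs 3000 frames.
        System.out.println(stopFrame(100.0, 30));
    }
}
```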

While I have previously used tools like FFMPEG to create videos, this time I decided to use Processing’s Movie Maker tool. After confirming that my tiff and mp3 files were fine, I started up Movie Maker, specified the input sources for my files, kept the default “Animation” compression setting, and clicked the “Create Movie” button. I then monitored the dialog window as the program progressed through the 3000 images used to create the video. The program ended without error.

But when I went to view the video using VLC, all I got was a black screen and horribly garbled and spotty audio. I had no idea what had gone wrong. Rather than resorting to one of my other tools, I opted to give Movie Maker another try. The only difference was that this time I selected the JPEG option for compression. The dialog proceeded as before and again ended without error. This time the video and audio were fine, except for a narrow strip of color along the video’s edge. For purposes of this video, that is something I can live with.

Unfortunately the image quality of the video has suffered due to YouTube’s overly enthusiastic image compression. Whereas my original video upload was a 3.1 gigabyte file, YouTube compressed it down to a mere 29 megabytes (you can’t throw away that much information without losing quality). While I do understand the need to economize on bandwidth, such economies can be achieved in part by viewing the video at one of the lower resolution settings.

You can view the video on YouTube at: https://youtu.be/CNE0j1LXIJ0. Give the video a watch, let me know what you think, and share it if you like it.

I write this as a creative coder dismayed by my own lack of foresight in keeping a record of some recent coding failures. It was only a week ago that I wrote an article about glitch art – Glitch Art or Not Glitch Art. You would think that with having just written about deliberately capitalizing on failure that I would be more attentive to my own coding failures. But alas no.

I’ve used the artwork titled Linear Moon shown above to illustrate this story. I created this art using a brand new program I had just finished writing. I knew that I’d written a similar program in the past but did not have the patience to go looking for it (yes, my hard drives are just that cluttered – even with files being organized by directory). Instead I decided that starting fresh would be the best way to go.

My early versions of this new program featured some mathematical logic mistakes with respect to what I wanted to accomplish. If I had been wiser I would have kept these mistakes for later evaluation with respect to their artistic merit. But no, I was in hot pursuit of the right program – the program that would generate a picture that matched the one in my head. It was only when my internal visualization of what I wanted to achieve matched what I saw on the screen that I ceased twiddling with my code and began experimenting with different parameter values to create Linear Moon.

Abstract From Line Segments Algorithmic Art Fail

There was both good and bad in the fact that every run of the program wrote its final result to a file, leaving me a visual record of every failed image. The good was being able to go back and look over these failures. The bad was seeing that a number of them had artistic value and knowing that I had failed to keep a copy of the version of the program that produced each one. One example of an early failure is Abstract From Line Segments, shown above, created from a painted version of The Beatles Abbey Road album cover art.

In contrast, the correct version of that same input image is shown below and accurately reflects the look I was going for. Between the two images were a number of program variations where I experimented with my program’s math and logic. These variations produced a range of visual results.

A successful interpretation of a painting of The Beatles Abbey Road album cover art

After the challenge of successfully creating the linear/line segment effect that I wanted, adding a coloring option was fairly straightforward. The only challenges associated with adding color were sampling the colors from the source image and manipulating them. An example of an initial color experiment is shown below, using a portrait of SpaceX CEO Elon Musk.

Elon Musk Algorithmic Portrait, color version

There is one big difference between a program that works correctly and a program that leads to erroneous results: it is quite easy to recreate a program that works correctly but exceedingly difficult to recreate a specific set of errors.

My advice to all creative coders out there is this: slow down a little bit, take a look at your failures, and ask yourself “is this an error worth keeping?”

About Linear Moon Algorithmic Art

Linear Moon is the first work of art I’ve formally created using my new program. The original is 30 by 30 inches printed at 300 ppi (pixels per inch). To provide a better idea of what the image looks like at actual size, below is an excerpt that features Tycho Crater. Note that its size on your device screen will vary due to the different pixel densities of different screens.

Tycho Crater actual size detail from Linear Moon Algorithmic Art

While I have not yet added Linear Moon to my web site, I have made it available as merchandise on Redbubble.

One of the projects I undertook over the Thanksgiving holidays was to create a new series of abstract algorithmic artworks. The first of these artworks that I’ve made available on Redbubble and Crated is the piece Euclidean Chaos.

The Euclidean in the title is a reference to Euclidean geometry, the mathematical system described by the Greek mathematician Euclid in his textbook on geometry, titled simply Elements, written sometime around 300 B.C. The fundamental "space" in Euclidean geometry is the plane. The chaotic aspect of Euclidean Chaos lies in what appears visually to be a countless number of intersecting planes, which together constitute the artwork.

My analogy for this artwork is the cosmological concept of the multiverse or parallel universes – a system wherein there exists an infinity of non-interacting universes, each unaware of the other’s existence.

I hope you like Euclidean Chaos and will visit its pages on Redbubble and Crated (by clicking the buttons above) to see the variety of art product offerings available for this artwork.

Computational Synthesis is a work of digital art I completed a few days ago which combines elements of algorithmic art and generative art with continual input from the artist. In creating this artwork I had a clear idea, visually and aesthetically, of what I wanted to create, but I had given no thought to a title. After completing the piece, I turned to social media: I posted the artwork in a few places and asked for suggestions as to a title. Some suggested titles were:

Abstract Structure

Digital City

Discreet Time

Constructor Theory

Shifting Perspectives

Cityscape, Sky View

Aerial View Of Cyberscape

Monolith Metastasis

Fragmentation

While I did not use any of these titles, I do owe a thanks to the people who suggested them as they served as input to my thought process. Giving a title to a work of art can lead the observer in a certain direction when they are viewing the artwork. In choosing a title, I had to determine how well the title fit with what I was trying to say artistically. And therein lay my chief problem in coming up with a title.

I finally decided on Computational Synthesis as the title. Typically when one thinks of computational creativity, it is more in terms of the "machine" itself being the creator with the source of its creativity being within the framework of its design. In the case of this artwork, the computational component refers to my use of computational methods to produce a particular aesthetic style while synthesis points to the fact that I, the artist, was an equal partner in the creative process.

I created this artwork using an evolved version of a program I created and wrote about in Artistic Creativity and the Evolution of an Idea. For comparison, take a look at a previous artwork I created using an earlier version of this program:

Following are links to the open edition version of Computational Synthesis on Redbubble and Crated, as well as a link to my contact page if you are interested in the availability of the limited edition print version of this artwork.

In closing, the question I ask myself is this: am I satisfied with the state of the program I used to create this artwork, or do I want to continue to explore evolutionary pathways? I have no answer at the moment, but ultimately that answer may well depend on whether or not I have a Eureka moment.

Over the last several days I’ve created a number of new works of algorithmic art. One of these pieces is Tunnel Vision – shown above. After creating this particular artwork I began to wonder if the orientation I had used in its creation would actually be the orientation that other people would find to be the most aesthetically appealing. To get an idea of what that answer might be I posted the image below to several art groups and asked people to identify which of the four orientations they found to be the most aesthetically pleasing.

The Four Artwork Orientation Choices

While early voting had A as the overwhelming preference, by the time voting was effectively over, D had emerged as a close runner-up. With respect to the two portrait-oriented choices, it is easy to see why D was clearly preferred to C, as that is also the choice I find more aesthetically pleasing. With respect to the two landscape-oriented choices, option A was clearly preferred over option B. Again, I agree.

Abstract Art Orientation Survey Results

Taking a step back, you can see in the survey results that there is almost a 50-50 split between people selecting a landscape orientation versus a portrait orientation. So the real challenge is choosing between options A and D with the core question being does this artwork work better as a portrait-oriented artwork or as a landscape-oriented artwork? Given the symmetry of this piece, I think the answer to this question is really one of personal taste.

Creating Tunnel Vision

In creating Tunnel Vision, I was working with a program that is a descendant of a very simple spirograph program I had written for a class I taught on using Processing to create digital spirographs and harmonographs. The image below is an example of the type of output that original spirograph program created.

Original spirograph program output
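For reference, the core of a basic spirograph program is a pair of parametric (hypotrochoid) equations. A minimal Java version might look like the following; the parameter names R, r, and d are the conventional ones, not necessarily those from my class:

```java
public class Spirograph {
    // Classic hypotrochoid equations, the usual starting point for a
    // spirograph program: a point at distance d from the center of a
    // circle of radius r rolling inside a circle of radius R.
    //   x(t) = (R - r) cos(t) + d cos(((R - r) / r) t)
    //   y(t) = (R - r) sin(t) - d sin(((R - r) / r) t)
    static double[] point(double R, double r, double d, double t) {
        double k = (R - r) / r;
        double x = (R - r) * Math.cos(t) + d * Math.cos(k * t);
        double y = (R - r) * Math.sin(t) - d * Math.sin(k * t);
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // At t = 0 the curve starts at ((R - r) + d, 0).
        double[] p = point(100, 30, 50, 0);
        System.out.println(p[0] + ", " + p[1]); // 120.0, 0.0
    }
}
```

Drawing many small line segments between successive values of t traces out the curve; varying R, r, and d produces the different figures.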

Over a period of time I gradually enhanced and expanded that program along several separate aesthetic lines of evolution. Tunnel Vision is the result of one of those evolutionary lines.

And My Aesthetic Vote Is…

When I created Tunnel Vision, I did so with the orientation of the canvas corresponding to option A. And it was with that landscape orientation in mind that I modified various parameters to create a work that satisfied my personal aesthetic. Fortunately for me the survey results served as a confirmation of the creative choices I had made.

Open Edition Prints

Open edition prints of Tunnel Vision are available from the following art print sites:

I’ve just completed a video project titled Swimming Eye. This was yet another accidental project on my part as I was not planning on creating a video. Rather I was experimenting with using Processing to create an algorithmic painting program.

In experimenting with applying Perlin noise to a gridded particle field to create a large algorithmic paintbrush, I was struck by the nature of the ensuing motion. It was similar to that of a liquid surface in motion. The impression it made on me was that of a living painting: not a static image, but an image with a life of its own.

My original idea of creating some rather unusual digital paintings using this methodology was replaced with the idea of creating a video. The image used as illustration above is representative of my original idea. It was created by stacking several individual movie frames together in Photoshop and using different layer blend modes to merge the individual images together.

Previously I wrote about using Windows Live Movie Maker to create a YouTube video (see Portrait Art Video Project). However, I found that Movie Maker was not capable of turning jpeg images into a real movie: with Movie Maker, an image must remain on display for at least one second. This is fine if you want to use images to create a video slide show, but it does not work for creating an animation. To translate my 1400 images into a movie, I wanted each image (frame) to display for 1/30th of a second (that is, 30 frames per second), which works out to roughly 47 seconds of video.

I tried using Avidemux, but it crashed repeatedly. Searching further, I came across FFMPEG, a command-line utility. It worked. With the basic video created, my next step was to come up with a soundtrack, because I really didn’t want to create a silent movie.

Searching opsound.org, I located a public domain song that met my needs (thanks to Free Sound Collective for making their music available for use). I used Audacity to create a sound clip of the necessary time length. I used Movie Maker to add the mp3 to the video created by FFMPEG.