The following image is a crop from the segmented panorama previously posted. The computer is still extracting pixels from the original image using these segmented regions; the current count exceeds 50,000.
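The per-region extraction described above can be sketched as follows. This is a minimal illustration, not the actual code: it assumes the segmentation is stored as an integer label image the same size as the panorama, and the names `pano` and `labels` are placeholders.

```python
import numpy as np

def extract_region_pixels(pano, labels):
    """Return a dict mapping each region id to its (num_pixels, channels) array."""
    regions = {}
    for region_id in np.unique(labels):
        mask = labels == region_id      # boolean mask for this region
        regions[region_id] = pano[mask] # gather that region's pixels
    return regions

# Tiny synthetic example: a 2x2 image split into two regions.
pano = np.array([[[255, 0, 0], [255, 0, 0]],
                 [[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)
labels = np.array([[0, 0], [1, 1]])
regions = extract_region_pixels(pano, labels)
print(len(regions))      # 2
print(regions[0].shape)  # (2, 3)
```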

I’m working on a public art commission for the City of Vancouver, and I thought it would be a good opportunity to revisit the Self-Organized Landscapes series. For this project, the idea is to combine a Self-Organized Landscape and a panoramic photograph such that a photo-realistic and readable image dissolves into abstraction, as informed by a self-organizing map (SOM). The first step was to shoot a ‘straight’ photographic proxy image. The following is a pano constructed using Hugin from ~45 exposures, with a full resolution of ~30,000 pixels across.

Since randomizing the order of samples in the clustering process worked so well, I went back to not filtering out large regions before clustering. The results are more interesting as stills, but too literal and unstable in video, so I’ve abandoned this line of exploration. Tweaking the clustering features has certainly helped emphasize the aspect ratio, and I’ve increased the weight of the area feature in the hope of increasing the diversity of percept sizes.
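Feature weighting of the kind mentioned above amounts to scaling columns of the feature matrix before clustering. A minimal sketch, assuming the percept features are rows of a NumPy array and that the last column is region area (the column layout is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.random((200, 4))            # e.g. [x, y, hue, area] per percept

weights = np.array([1.0, 1.0, 1.0, 3.0])   # boost the area feature
weighted = features * weights              # scale columns before clustering

# The boosted column now contributes more to Euclidean distances,
# so a k-means run on `weighted` will split clusters along area more readily.
print(weighted[:, 3].std() / features[:, 3].std())  # ≈ 3.0
```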

Watching (Blade Runner) (Work in Progress) is one channel of what is envisioned as a two-channel generative video installation that was the focus of my tenure as a Banff Artist in Residence. Two seven-minute sequences were exhibited as part of the Open Studios in the Project Space at the Walter Phillips Gallery at the Banff Centre in February 2016. These two sequences use different clips from Ridley Scott’s Blade Runner and show the development of the work through the residency, as documented on the production blog.

In the most recent collages I was interested in the range of aesthetic results from different relative scales of the constituent parts. To explore this, I rendered one frame for each scale setting, producing a sequence that transitions smoothly between the two extremes of percept scale.
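The parameter sweep above is a simple loop over evenly spaced scale values. A sketch, where `render_frame` stands in for the real collage renderer and the scale range is illustrative:

```python
import numpy as np

def render_frame(scale):
    # placeholder for the actual collage renderer
    return f"frame@scale={scale:.2f}"

scales = np.linspace(0.1, 2.0, 8)   # 8 steps between the two extremes
frames = [render_frame(s) for s in scales]
print(len(frames))  # 8
```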

Due to the final push to get the video of the second clip ready for the Open Studios event, I did not have a chance to create any collages from those percepts. The following are a few explorations, some of which involve sorting the percepts according to their features, such as region area, hue, or saturation.
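Sorting percepts by a feature before layout is a one-line operation once each percept carries its features. A sketch, where the `Percept` fields are assumptions based on the features named above:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    area: float
    hue: float
    saturation: float

percepts = [Percept(120, 0.6, 0.9),
            Percept(40, 0.1, 0.3),
            Percept(300, 0.9, 0.5)]

by_area = sorted(percepts, key=lambda p: p.area)       # smallest first
by_hue = sorted(percepts, key=lambda p: p.hue)         # around the hue scale
print([p.area for p in by_area])  # [40, 120, 300]
```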

I wanted to increase the size of the rendered percepts so there would be less empty space between clusters, moving a little away from the pointillist aesthetic; I did not have time to try this for the Open Studio. The following are the same images as in the previous post.

After I changed the clustering code so that samples were randomly shuffled before training, I was excited to see the following results. Unfortunately, the whole clip shows the same colour variation due to a bug in my program: regions were being represented using dissimilar clusters. This is fixed now, but it will be a challenge to get the new clip ready for the Open Studio tomorrow.
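The shuffle-before-training change can be sketched as a single shared permutation applied to the sample rows and any parallel metadata, so the two stay aligned. Names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
samples = np.arange(10).reshape(10, 1).astype(float)  # stand-in feature rows
ids = np.arange(10)                                   # parallel metadata

perm = rng.permutation(len(samples))  # one shared permutation
samples, ids = samples[perm], ids[perm]

# Rows and their ids remain aligned after the shuffle.
print((samples[:, 0] == ids).all())  # True
```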

An image is a reference to some aspect of the world which contains within its own structure and in terms of its own structure a reference to the act of cognition which generated it. It must say, not that the world is like this, but that it was recognized to have been like this by the image-maker, who leaves behind this record: not of the world, but of the act.

(Harold Cohen, What is an image? 1979)

I think of subjectivity and reality as mutually constructive. Cognitive mechanisms impose structure on reality in the form of imaginary categories that inform predictions and inferences. At the same time, the complexity of reality constantly challenges understanding. Cognition determines the most essential and salient properties of reality, but those properties are context dependent. Is the quintessential an objectively observable property of reality or a projection of imagination?

This series of works shows the results of statistically oriented machine learning and computer vision algorithms attempting to understand popular cinematic depictions of Artificial Intelligence. The machine’s understanding is manifest in its ability to recognize, and eventually predict, the structure of the films it watches. The images produced are the result of both the system’s projection of imaginary structure and the structure of the films themselves.

As the clip I first chose ended up (a) looking really monochromatic and (b) not having content that refers very strongly to the central essence of Blade Runner, I decided to start working on a different clip. This new clip introduces the main plot line and the Replicants, so its content is a stronger proxy for the whole film, and it also contains a lot of colour variation. Unfortunately, this clip ended up (after a couple of days of processing) just as monochromatic as the first one. The following images show selected original frames on the top and their corresponding reconstructions below.

The following is the result of using 5000 clusters rather than 1000. I find it too readable now, so I’m running k-means again with 3000 clusters in the hope of striking a balance. Despite the readability of the reconstructed sound, the resulting clusters had a mean correlation of only ~0.8 (where 1 would be perfect).
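A correlation check like the one quoted above can be computed by correlating each sound frame with its assigned cluster centroid and averaging. This is a hypothetical sketch of that measurement (the k-means step is replaced by fixed assignments, and all names are illustrative):

```python
import numpy as np

def mean_centroid_correlation(frames, centroids, assignments):
    """Average Pearson correlation between frames and their centroids."""
    corrs = []
    for frame, idx in zip(frames, assignments):
        c = np.corrcoef(frame, centroids[idx])[0, 1]
        corrs.append(c)
    return float(np.mean(corrs))

# Degenerate example: centroids identical to frames give a perfect score.
frames = np.array([[0., 1., 2.], [2., 1., 0.]])
centroids = frames.copy()
assignments = [0, 1]
print(mean_centroid_correlation(frames, centroids, assignments))  # 1.0
```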

While waiting for my 75,000-percept rendering to compute, I’ve returned to audio. I ended up saving the real and imaginary spectra separately, thus not dealing with any signal transformation. This is the first time I’ve heard the results, and it’s quite striking how well they reproduce the quality of the sounds (voice) without any of the specificity (words). In this case I used 1000 clusters, so 1000 sounds represent 10,000 seconds of audio. I’m now running k-means again with 5000 clusters in the hope that the sound will be more linguistically readable.
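Storing real and imaginary spectra separately means a cluster centroid is just a flat feature vector that can be recombined and inverted back to audio, with no separate phase handling. A minimal sketch of that round trip, assuming fixed-length frames (the frame size is an assumption):

```python
import numpy as np

def frame_to_features(frame):
    """Flatten an audio frame's spectrum into [real parts, imaginary parts]."""
    spec = np.fft.rfft(frame)
    return np.concatenate([spec.real, spec.imag])

def features_to_frame(feat, frame_len):
    """Recombine a feature vector into a complex spectrum and invert it."""
    n = frame_len // 2 + 1
    spec = feat[:n] + 1j * feat[n:]
    return np.fft.irfft(spec, n=frame_len)

frame = np.sin(2 * np.pi * 5 * np.arange(64) / 64)  # toy 64-sample frame
feat = frame_to_features(frame)
recon = features_to_frame(feat, 64)
print(np.allclose(frame, recon))  # True
```

The round trip is lossless for a single frame; in the clustering setting, a centroid averages many such feature vectors, which is what softens specific words while preserving the general vocal quality.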