Clement Valla and John Cayley's Hapax Phaenomena is featured this month on The Download.

Certificate of Authenticity, Hapax Phaenomena (2011)

In Hapax Phaenomena and other projects such as Google Earth Sites, you refer to your art objects as artifacts or curios. Do you see yourself as an observer documenting an endangered technological curiosity?

Yes. These things will all disappear, and probably soon, in the name of progress. These artifacts are atypical ephemera, often the accidental products of various internet algorithms. There is very little direct human hand in them. The purpose in collecting them, though, is not simply preservation. It's more about framing them, allowing them to be seen, and showing a kind of bizarre byproduct of these super-functioning, useful systems, such as Google.

When did you first notice the glitch in Google Earth? What inspired you to begin capturing these surreal moments?

It was accidental. I was Google-Earthing a location in China, and I noticed that a striking number of buildings looked like they were upside down. I could tell there were two competing visual inputs here: the 3D model and the mapping of the satellite photography, and they didn't match up. The computer is doing exactly what it's supposed to do, but the depth cues of the aerials (the perspective, the shadows and lighting) were not aligning with the depth cues of the 3D earth model. I figured this was not a unique situation in Google Earth, and I started looking at obvious situations where the depth cues would be off: bridges, tall skyscrapers, canyons. Soon I noticed the photos being updated; the aerial photographs would be 'flatter' (taken from less of an angle), or the shadows below bridges would be more muted. Google Earth is a constantly changing, dynamic system, so I had to capture these specific moments as still images.

Could you talk a bit about your reasoning behind using Amazon's Mechanical Turk in "A Sequence of Lines Consecutively Traced by Five Hundred Individuals"?

It's a project about iteration and recursion. Mechanical Turk is typically set up to get a massive number of different responses to a single topic or question, like a poll. But I was interested in doing something recursive, where each person's output is recycled back into the task, so the process is a feedback loop. In this way there was no expectation of a particular output; I simply created the structure and let it run. The project is similar to the childhood game of telephone (in which one person whispers a message to another, who whispers it to another, and so forth). The message gets corrupted or distorted through the endless recycling of the same piece of data. In a grander sense, I think it's really about how a copy can be a unique object. Copies made in an absence of context can create something new and unexpected, and that's a nontraditional framework for the idea of what a copy is. The thing about Mechanical Turk is that it's absolutely the place where an output has no context. The workers are most often dissociated from the purpose of what they're doing. It's possibly a questionable ethic, and I think this is a product of any machine age, in which tasks are broken down and only a few have privileged information about the overarching purpose. The copy is free to deviate wildly from its origin because there is no idea of the 'original.'
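The feedback loop described above can be sketched in a few lines of Python. This is a hypothetical simulation, not the project's actual code: the `trace` function stands in for a human worker's imperfect tracing (modeled here as small random jitter on each point), where in the real piece each iteration was performed by a Mechanical Turk worker.

```python
import random

def trace(points, jitter=0.5, rng=random):
    """Simulate one worker tracing a line: each point is copied
    with a small random error (a stand-in for human imprecision)."""
    return [(x + rng.uniform(-jitter, jitter),
             y + rng.uniform(-jitter, jitter)) for x, y in points]

def telephone(points, iterations=500, seed=0):
    """Feed each tracing back in as the next iteration's input,
    like the children's game of telephone."""
    rng = random.Random(seed)
    history = [points]
    for _ in range(iterations):
        points = trace(points, rng=rng)
        history.append(points)
    return history

# Start with a simple two-point line and recycle it 500 times.
history = telephone([(0.0, 0.0), (100.0, 0.0)], iterations=500)
```

Because each copy is made only from the previous copy, with no reference back to the original, the small errors accumulate into a random walk, and the final line can drift far from where it started.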

Your work often addresses ideas of human versus computer-aided interpretation. In your observations, what differences and similarities exist between the two?

At this moment I think human and computer activity is hardly distinguishable. I am interested in the moments where the typical distinction is blurred or even inverted. Take Amazon's Mechanical Turk, billed as artificial artificial intelligence. Amazon took the name from an 18th-century automaton that successfully beat humans at chess. A figurine that looked like a seated Turk sat behind a huge contraption filled with gears and levers, which would whir and smoke as it played. It was eventually revealed that the machine was operated by a human hidden inside. We see here an example of a machine using a human to accomplish its task; I like to think of it as the machine outsourcing to a human. With Amazon's Mechanical Turk, they built a system whereby computer programs can query humans, get responses, and react accordingly: human-aided computation as opposed to computer-aided design. With Hapax Phaenomena, I have collected unique word and image combinations that are created by a human individual at the source and then put together by Google's algorithms as the result of a search. But sometimes the source is actually created by automated machines for commercial purposes. It gets confusing to parse out what the humans are doing versus what the algorithms are doing. I think Mechanical Turk, just the fact that it exists, first awakened me to that reality. There is something similar going on in the projects I have done with the Chinese oil painting factories; the paintings are 'factory-made' copies, produced with a consistent and expected output, but their whole value derives from the human hand that created them. The visible mark of the hand is what distinguishes these images from other reproduction processes.

Age:

32

Location:

Brooklyn

How long have you been working creatively with technology? How did you start?

About five years, strictly speaking, but before that I started tinkering with programming while working as an architect. Digital tools were really appearing all over the architecture workplace at that time (3D modeling and such), and as I began using them, I found it interesting that they could be customized through programming. I was interested in all the possibility that programming allowed: so much systematic thinking, procedure, iteration, algorithm. Also, I figured, if I learned how to program, I could begin to have insight into how the digital tools I used were made, and how they could be unmade, tampered with, and altered. Some of my projects now require a lot of programming and are technologically challenging to realize; others require almost no technological expertise - they're simply screenshots from freely available software. But learning how to code, and how software systems operate, has provided me with a framework through which to look at digital technologies.

Describe your experience with the tools you use. How did you start using them?

My main tool is the algorithm, and this tends to be how I tap into the core processes in my work: iteration, looping, recursion, copying, chance, recombination. By programming computers, I can create my own tools or modify existing ones to suit my needs. Beyond that, I use whatever best suits the project I am working on.

Where did you go to school? What did you study?

I received a BA from Columbia University, where my major was architecture, and then an MFA from RISD in the Digital + Media department.

What traditional media do you use, if any? Do you think your work with traditional media relates to your work with technology?

I don't paint, but I have made paintings, and I'm not a printmaker but I have made prints. At the core of my work is an interest in process and algorithm. Beyond that, each project requires its own media - the output is varied and comes out of the process.

Are you involved in other creative or social activities (i.e. music, writing, activism, community organizing)?

No.

What do you do for a living or what occupations have you held previously? Do you think this work relates to your art practice in a significant way?

I used to work as a designer in architecture offices. I now teach at RISD. I also freelance as designer and programmer. I don't make any separation between these different activities - I constantly let them feed into one another.

Who are your key artistic influences?

They are changing all the time, depending on what I am working on and looking at. Right now, I'm obsessing over Natalie Jeremijenko, Jodi, Cyprien Gaillard, Allan McCollum, Matt Mullican and Tauba Auerbach. Last month it was Manfred Mohr, Takeshi Murata, Rem Koolhaas, Lust and Stephen Wolfram.

Have you collaborated with anyone in the art community on a project? With whom, and on what?

Yes. Recently I've collaborated with John Cayley on the Hapax Phaenomena project.

Do you actively study art history?

Yes.

Do you read art criticism, philosophy, or critical theory? If so, which authors inspire you?

Again, this is an ever-changing list. I usually have a big pile of photocopies and books lying around, but I never get through it. Recent good reads (or viewings) include some Peter Halley essays, Brian Massumi, Rem Koolhaas, Adam Curtis, Jonathan Lethem, Douglas Hofstadter, David Foster Wallace, Manuel De Landa, Umberto Eco, Claire Bishop, and the Critical Art Ensemble. Also radicalart.info - this site is amazing.

Are there any issues around the production of, or the display/exhibition of new media art that you are concerned about?

The thing I try to avoid is the 'wow' factor that often accompanies new media art. I'm very wary of the impressive novelty of a new piece of technology for its own sake. I almost feel that a technology has to be slightly outdated or at least second-generation technology before it can turn into interesting art. Before that, the novelty, the 'progress' embedded in new media is its principal message, and this tends to drown out everything else.