Work by Ethan Hartman


Category Archives: HCI

This is a feedback loop device built into a wah pedal enclosure, which allows intuitive control over your feedback-noise performances, especially if you’re already using your hands for other things (say, playing guitar or keyboards, or even twiddling with other knobs). As you can see, it is useful for making a wide variety of high-pitched screaming sounds, and for bringing out the unexpected from effects pedals and other devices.

It was a fairly simple project: I used a schematic from Beavis Audio Research, though I replaced the 500k pot with the 100k I found in the wah pedal.

Feedback loop controllers like this one do make very interesting things happen — however, always keep in mind that you’ll need the feedback loop you’re controlling to add gain. If you have less than unity gain, you’ll just find that the loop gets very quiet when you open it up all the way.
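To make the unity-gain point concrete, here’s a tiny Python sketch (not the pedal circuit itself, just an illustration): each trip around the loop multiplies the signal level by the loop gain, so anything below 1.0 decays toward silence while anything above it builds into feedback.

```python
# Why loop gain matters: each pass through the loop multiplies the
# signal by `gain`. Below unity the level decays toward silence;
# above unity it grows until something in the chain clips.

def loop_level(gain, passes, start=1.0):
    """Signal level after `passes` trips around the feedback loop."""
    level = start
    for _ in range(passes):
        level *= gain
    return level

print(loop_level(0.9, 50))   # sub-unity gain: fades away
print(loop_level(1.1, 50))   # above unity: builds into screaming feedback
```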

This is a demo for a monome 40h/64 patch I’ve been working on in ChucK. It’s a tool for granularizing the sound input through your computer’s soundcard.

Basically, there is a set of eight filter banks, one for each row of monome keys. If you don’t touch anything, the sound just plays through without much modification. If you do hit some keys, however, the live sound will be turned off and instead, slices of delayed sound will be played through that filter bank. You can also add in random grains, too.
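The grain-slicing idea can be sketched in a few lines of Python (the actual patch is in ChucK; the buffer length and grain size here are made-up values): a pressed key swaps the live signal for a slice pulled out of a rolling delay buffer, and a random offset stands in for the "random grains" option.

```python
# Illustrative sketch, not the actual ChucK patch: pulling a grain
# from a delay buffer, the way a pressed monome key replaces live
# sound with a slice of recently recorded audio.
import random

def take_grain(delay_buffer, grain_len, offset=None):
    """Copy `grain_len` samples out of the delay buffer.
    With no offset given, pick one at random ("random grains")."""
    if offset is None:
        offset = random.randrange(0, len(delay_buffer) - grain_len)
    return delay_buffer[offset:offset + grain_len]

buffer = [float(i) for i in range(44100)]  # one second of dummy samples
grain = take_grain(buffer, 512, offset=1000)
print(len(grain))  # a 512-sample slice, ready for that row's filter bank
```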

The cool thing about it (I think) is that it’s a modeless interaction — that means a button press will always do the same thing. I wanted to make a monome app that was simple and intuitive, and didn’t have a row of mode-changing keys taking up an eighth of the surface. Obviously, you give up a lot of flexibility in favor of simplicity by doing this, but I’m pleased with the results.

The captions are really fast in the video, though, so feel free to pause. Also, there is a nasty clicking problem which I’m going to work on in the next version.

For this project, I used some video that Brett and I shot when we were driving out to California several years ago. I wrote the music to go along with it, and the process seemed to fix the memories in a new way.

Most of the audio was composed beforehand, but the video was performed live using a patch written in Puredata and Gem.

This is really documentation of an ephemeral project, as opposed to a document in itself. The project culminated in a performance where I mixed the video live using the midi keyboard seen in the opening frames; I also hooked my rig up to a small television and made viewers crowd around it instead of using a larger projection screen. As you can imagine, the result was a quite different experience from the one you see here.

For a class assignment on synesthesia, I decided to build something to turn a video input into sound. This was in some ways a culmination of several interactions I created with the clear purpose of being counter-intuitive. In this case, the video camera provides the material to be translated into sound, but it’s set up such that only moving sections of video will result in nonzero samples: the video output being scanned is seen on the television in the background. At the same time, a Wii remote is used to control the rate at which the pixels are being scanned, which adds some gross control of pitch. The two simultaneous interactions are sometimes at odds with each other.
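The two mechanisms can be sketched together in Python (an assumed illustration, not the original patch): frame differencing makes static pixels emit zeros, and a scan-step parameter stands in for the wiimote’s gross pitch control, since scanning fewer pixels per frame shortens the scan period.

```python
# Sketch of the video-to-sound mapping: only moving pixels yield
# nonzero samples, and a larger scan step skips pixels, shortening
# the scan period and raising the perceived pitch.

def frame_to_samples(prev_frame, cur_frame, scan_step=1):
    """One sample per scanned pixel: zero where the image is static,
    the brightness change where it moved."""
    samples = []
    for i in range(0, len(cur_frame), scan_step):
        diff = cur_frame[i] - prev_frame[i]
        samples.append(diff if diff != 0 else 0.0)
    return samples

still = [0.5] * 8
moving = [0.5, 0.5, 0.9, 0.5, 0.5, 0.1, 0.5, 0.5]
print(frame_to_samples(still, moving))         # nonzero only where pixels moved
print(len(frame_to_samples(still, moving, 2))) # coarser scan -> fewer samples
```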

I set this up for my classmates and let them try it out, which added the element of physical performance. Afterwards, my professor asked me, “Where is the piece?” or, in other words, where should we be looking? At the screen or at the person gesticulating wildly? I didn’t have an answer: later I realized that I was pleased with the ambiguity.

This video was created as documentation of a final project for a class on human-computer interaction. It gives a fairly clear idea of the interactions, and hints at some of the musical material that Jeff, one of my collaborators and a fantastic DJ, was able to make with it. Jeff’s ability to actually use the pendulums to create some compelling music was the reason I felt this project was a success.

This is a piece that I did at the end of my first semester at CCRMA. While it may seem a bit goofy to some, once the elements were there, it all seemed quite obvious to me.

This video is about five minutes of me controlling a feedback loop with the tilt sensor in an Apple laptop; I’ll make the code available as well, here. It’s written in ChucK, a free language designed for audio work. It’s a bit buggy, but it can do a lot, including synthesis, wave file manipulation, and interaction with other devices via OSC or MIDI.

I know I should have smashed the laptop or set it on fire at the end, but I’m saving that for my first stadium show.