Category: Assignment-09-Documentation

Soul Searching is a game about soul searching, delivering the metaphor through maze navigation.

The project spawned from a story idea I had long ago, of a person whose soul was shattered into multiple pieces. The original form was lost and is now trying to return to the person, collecting its fragments along the way. The gameplay focuses on maze navigation, but you can’t see the entire maze at one time.

I took much inspiration from the games I’ve played: Dear Esther, Sense of Connectedness, Thomas Was Alone, etc. I wanted to create something familiar (maze solving) while presenting it in an unfamiliar way.

On the technical side, I took much from algorithms published online and from programs people had already written on OpenProcessing; there’s practically an entire culture out there that obsesses over maze generation: figuring out the best ways to generate mazes, making creative games with them, and diverging from traditional mazes.

Unfortunately, the program is suffering from a big bug at the moment, which I’ve been trying to track down for the past several days. It’s slow going, and I feel really bad that I don’t have much to show because of that stupid bug. Needless to say, I’ll keep working on this throughout the break until I finish it, because I’ve spent too much time on it to stop. Proper documentation will go up once it’s done. Sorry!

The Rainbox is a box which produces the sound of rain when a user is nearby.

This project is a simple Arduino setup that involves the use of a servo motor and a rainstick. The Arduino is connected to a flex sensor that is intended to be hidden under a pillow or mattress. When a person lies down on the resting place, the servo turns 180 degrees, and the rainstick attached to it turns with it. This causes the rainstick to simulate the sound of rain for a few seconds until all the beads in the stick reach the bottom. The servo then turns back 180 degrees, and the process repeats until the user leaves the resting spot or until thirty minutes have passed since the flex sensor was first activated. All of this is enclosed within a box, along with a blue LED which emits a soft glow through a hole in the box.
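The control logic described here can be sketched as a couple of plain functions. This is a sketch only: the flex-sensor threshold, the 30-minute cutoff, and the angle values are assumptions, and the actual servo and `analogRead` calls are omitted.

```cpp
#include <cassert>

// Assumed constants, not measured from the actual build.
const unsigned long SESSION_MS = 30UL * 60UL * 1000UL; // 30-minute cutoff
const int FLEX_THRESHOLD = 600; // hypothetical analogRead level for "occupied"

// Decide whether the servo should run another rain cycle:
// someone is on the flex sensor and the session hasn't timed out.
bool shouldPour(int flexReading, unsigned long now, unsigned long firstActivated) {
    bool occupied = flexReading > FLEX_THRESHOLD;
    bool withinSession = (now - firstActivated) < SESSION_MS;
    return occupied && withinSession;
}

// The servo alternates between 0 and 180 degrees each cycle,
// tipping the rainstick end over end.
int nextServoAngle(int currentAngle) {
    return (currentAngle == 0) ? 180 : 0;
}
```

In the real sketch, the Arduino loop would call `shouldPour` with the latest `analogRead` value and `millis()`, then write `nextServoAngle` to the servo once per cycle.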
The concept behind this project arose as a response to one of my long-standing personal issues – the inability to sleep in silence. Maybe it is a symptom of a generation that grew up on television, but the lack of any sensory input used to be very unsettling to me, and it would cause my mind to wander into uncomfortable and frightening places. The sound of rain and the glow of a muted television often helped me through these moments. The Rainbox was designed to substitute for all of this.
All in all, I ended up creating a working prototype, but it is nowhere near a form that I would want to present in public. The box was intended to make the rainstick echo for a richer sound, but it ended up just being bulky. Having the setup more exposed yet pleasing to look at is a goal for this project. Tighter documentation would also help a lot in presentation. The light is something to experiment with as well, as people thought it would be more distracting than comforting.

Overview
The ‘Voice Box’ is a musical instrument (of sorts) that receives audio input from the microphone and performs real-time pitch changes with a custom glove-controller. It can be used as both a personal listening device and a means of communication: the user has the option to either speak directly into the microphone and have their altered voice projected from the speaker, or plug in headsets and listen to the distorted noises of the world around them.

Inspiration / Critical Reflection
The project was inspired by a number of things that were not necessarily related to each other. Initially, when I wanted to make simple piano gloves, I was inspired by my habit of tapping on tables and chairs, which I developed as a result of not having ready access to a piano. To give this habit a form, I decided to create a portable instrument that would let other people listen to the sounds I hear in my head. But I soon discovered that a number of people have made instruments like these in the past, so instead of a personal project it turned into a re-implementation of what has already been done countless times. I therefore decided to refocus my scope of inspiration in an effort to create something more novel. When I stumbled across Adafruit’s Wave Shield and Voice Changer project, I immediately had my heart set on making a device that distorted voices in some way. I was initially aiming to create gloves that let a person autotune their voice in real time and sound like Imogen Heap, but given my limited time and my lack of understanding of how sound frequencies work, I had to keep things relatively simple. Thus, instead of a real-time autotuner, I built a real-time pitch-shifter.

The Voice Box surprisingly became a device with some personal value as well, as its concept revolves around the difficulty of understanding others and their difficulty understanding me. As I was testing the final product, I became engrossed in puppeteering other people’s voices and speaking in voices that were hardly decipherable – and it was then that I realized these gloves had created a wall between myself and society. Using these gloves turned into a very self-reflective experience, as it caused me to exhibit strange control-freak behaviors and made me think about why I was able to extract so much enjoyment out of exercising power over others.

Technical Details
Electrodes are placed around the joints of each of my fingers so that whenever I bend one of them, the electrodes make contact – triggering a switch that creates the voice pitch-shifting effect. Essentially the electrodes behave like normal momentary switches, but they were specifically designed to function without having to make contact with an external surface or object. This allows for ease of use and enables the user to make the more natural gestures common to playing keyboard instruments and typing.

Some technical hurdles I had to overcome: Although using electrodes seems to be a conceptually simple idea, they were surprisingly difficult to implement properly. I initially only had a pull-up resistor for each finger (to prevent short circuiting), but when I tested it out I noticed that the Arduino was not correctly interpreting the digital input data; namely, when the electrodes made contact with each other the input was read as 1’s, but when they were separated the input was just a jumbled mess of 0’s and 1’s. To overcome this issue I had to add pull-down resistors to explicitly make the ‘open’ and ‘closed’ states distinct. But however annoying the resistor handling was, I think the greatest technical hurdle I overcame was getting the pitch shifting to actually work. Adafruit’s original voice changer project uses a potentiometer to make pitch shifts, and because that is an analog input it is not possible to change your voice in real-time (running two analog inputs concurrently is beyond the capacity of an Arduino). So I theorized that while it’s not possible to dynamically change pitch using an analog input, it could technically be possible with multiple digital inputs. Luckily my theory was correct, and making things work just required some simple modifications to Adafruit’s original code.
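The finger-switch logic can be sketched roughly like this. The pin count, the shift factors, and the first-closed-finger-wins rule are all illustrative assumptions; the real device modifies Adafruit’s voice changer code rather than using this table.

```cpp
#include <cassert>

const int NUM_FINGERS = 4; // hypothetical number of electrode switches

// With pull-down resistors in place, a closed electrode reads a clean
// HIGH (1) and an open one reads a stable LOW (0) instead of noise.
float pitchFactorFor(const int fingerStates[NUM_FINGERS]) {
    // One shift factor per finger; the first closed finger wins.
    const float factors[NUM_FINGERS] = {0.5f, 0.75f, 1.25f, 2.0f};
    for (int i = 0; i < NUM_FINGERS; ++i) {
        if (fingerStates[i] == 1) return factors[i];
    }
    return 1.0f; // no finger bent: pass the voice through unshifted
}
```

Because each finger is a digital input, the selection can change instantly mid-phrase, which is what makes the real-time control possible where a single analog potentiometer was not.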

Images
(Sorry for not using Fritzing – there are too many parts to the device and I felt it would be much easier for me to show what’s going on with photos)

A music-loving friend of mine once told me he missed seeing the stars at night after coming to Pittsburgh. This project began as an idea for a present for that friend. I liked the idea of a portable, personal set of stars that could be charmed to life by playing music. The stars react to new notes being played, and the aurora appears at a certain volume and after a certain duration of continuous music. (This may not seem very obvious at the beginning of the video because I wasn’t playing the notes hard enough. Also, pardon my rustiness on piano – I haven’t really played in 2 years.)

The creation of this project was a long and arduous process for me. My initial idea was to have a box filled with blue origami stars (http://fc04.deviantart.net/fs25/f/2008/072/f/e/Straw_Stars_by_Miraka.jpg), with white LEDs hidden inside white origami stars scattered around in the box. However, I quickly ran out of material for making the blue origami stars, so I replaced them with black cardstock and tissue paper. The end result still adheres to my original idea in terms of visuals and functionality. The final product has the white LEDs hidden inside white origami stars; you just can’t tell clearly because they are now covered by black tissue paper. The white origami stars spread the light of the white LEDs a little, and if you look carefully, the spread is in the shape of 5-pointed stars. I also wanted more white LED stars, but was limited by the number of PWM pins on the board (and later, space for the wires).

I also wanted to learn how to use the FFT library to implement more accurate frequency measurement, for picking out very roughly which notes are being played. It turned out that this is actually quite difficult due to harmonics, and it was hard to understand how to use the library, partly due to poor documentation, so I ended up working with code from Adafruit for frequency analysis. A lot of testing was done to make it better suited to piano music. After getting the stars to work the way I wanted them to, I reflected on how I could make the piece more interesting and visually appealing. The easy answer was “colors”, so I tried to implement something that resembles auroras. The source of the auroras is a number of LEDs. The ideal way to do this would be to use an LED strip (like this one: http://www.adafruit.com/products/306), but since this was late into the project, I didn’t have time to get one.
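As a side note on the note-detection step: the standard equal-temperament conversion from a detected fundamental frequency to the nearest note looks like this. This is textbook math, not the project’s code, and it sidesteps the harmonics problem described above.

```cpp
#include <cassert>
#include <cmath>

// Convert a detected fundamental frequency (Hz) to the nearest MIDI
// note number, using the equal-temperament reference A4 = 440 Hz = 69.
int freqToMidiNote(double freqHz) {
    return (int)std::lround(69.0 + 12.0 * std::log2(freqHz / 440.0));
}
```

Harmonics break this in practice because a piano note's spectrum peaks at integer multiples of the fundamental, so the loudest FFT bin is often an octave (or more) above the key actually pressed.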

Physically putting this together was also very hard and time-consuming. I had a lot of trouble getting the connections for all the LEDs to work. I had to basically tear my project apart several times because the conductive copper tape wasn’t effective for LEDs, or wires broke, or solder wasn’t strong enough, etc. In the end my breadboard had almost every single slot filled. Then more things fell apart as I was trying to get everything to fit inside a small box. I didn’t realize all those wires would take up so much space.

Weird, but useful tidbits I’ve learned about Arduino:
– variables with mismatched types won’t raise an error while compiling, but will cause strange behavior at runtime
– an error when uploading a program to the Mega board can sometimes be fixed by unplugging a few pins
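The first tidbit can be demonstrated on any C++ compiler (Arduino’s `byte` is modeled here as `uint8_t`): the assignment compiles without error, but the value silently wraps around.

```cpp
#include <cassert>
#include <cstdint>

// Assigning a value that doesn't fit in the target type compiles fine
// but is truncated modulo 256 at runtime - no warning on many setups.
uint8_t toByte(int value) {
    uint8_t b = value; // silent narrowing: 300 becomes 44
    return b;
}
```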

In the end, I was fairly satisfied with the final product. The stars worked almost as well as I hoped they would. I just wish I were able to show off the craftsmanship that went into this project more. If I can muster enough energy, I’ll replace the RGB LEDs with an RGB strip. It would be difficult though, because I’d literally have to tear my project apart again, both physically and code-wise. I enjoy watching it while someone else is playing the piano. Too bad I can’t really watch it while playing at the same time, since I have to watch the keyboard, haha.

[I just realized I accidentally named this the same as that famous van Gogh piece. Ugh. Need better naming skills.]

Arousal vs. Time: a seismometer for arousal, as measured by facial expressions.

Overview

One way to infer inner emotional states without access to a person’s thoughts is to observe their facial expressions. As the name suggests, Arousal vs. Time is a visualization of excitement levels over time. The more you deviate from your resting expression, the more excited you are presumed to be. An interesting context for this tool is in everyday social interactions. Watching the seismometer while talking to a friend can generate insights into the nature of that relationship. It might reveal which person tends to lead the conversation, or who is the more introverted of the two. Watching a conversation unfold in this visual manner is both soothing and unsettling.
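One plausible way to turn “deviation from your resting expression” into a number is the distance between the current facial landmark positions and a calibrated resting pose. The Euclidean metric below is an illustrative assumption, not necessarily what the piece itself computes from the ofxFaceTracker data.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Arousal as the Euclidean distance between the current flattened
// landmark coordinates and a baseline captured at rest.
// Reads 0.0 when the face matches its resting pose exactly.
double arousal(const std::vector<double>& resting,
               const std::vector<double>& current) {
    double sum = 0.0;
    for (size_t i = 0; i < resting.size(); ++i) {
        double d = current[i] - resting[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}
```

In practice the raw value would be smoothed over a few frames before feeding the seismometer trace, so blinks don't register as spikes.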

Inspiration

Arousal vs. Time is the latest iteration in a series of studies. After receiving useful feedback on my last foray into face tracking, I decided to rework the piece to include sound, two styrofoam heads, and text for clarity. Daito Manabe’s and Kyle McDonald’s face-related projects – “Face Instrument”, “Happy Things” – informed the sensibility of this work.

“Face Instrument” – Daito Manabe

“Happy Things” – Kyle McDonald

Implementation

A casual conversation between myself and a friend was recorded on video and in XML files. I wrote the two software components of this artwork – the seismometer and the playback mechanism – in openFrameworks 0.8. I used the following three addons:

ofxXmlSettings – for recording and playing back face data

ofxMtlMapping2D – projection mapping

ofxFaceTracker – tracking facial expressions

The set

The projection mapping on the styrofoam heads was carried out on two laptops with two pico projectors. I stored facial data in XML files, and recorded video and audio with an HD video camera and an audio recorder.

The audio file was manipulated in Ableton Live to obscure the content of the conversation. I used chroma keying in Adobe Premiere to remove the background of the video, such that the graphs would seem to emerge from behind the heads, and not from some unseen bounding box. Finally, the materials – a video file, two XML files, and an audio file – were brought together in a second “player” application, also built in openFrameworks.

Reflection

Regarding a conceptual impetus for this project, I keep thinking back to a point Professor Ali Momeni made when I showed an earlier version of this project during critique. He questioned not my craft, but my language: the fact that I used the word “disingenuous” to describe my project. I still don’t have a satisfying response to this, just more speculation.

Am I trying to critique self-quantification by proposing an alienating use of face tracking? Or am I making a sincere attempt to learn something about social interaction through technology? The ambivalence I feel toward the idea of self-quantification leads me to believe that it is worthwhile territory for me to continue to explore.

Overview

I made a projection of virtual butterflies which will come land on you (well, your projected silhouette) if you hold still, and will fly away if you move.

Inspiration

This semester, a friend of mine successfully lobbied for the creation of a “Mindfulness Room” to be created in one of the dorms on campus. The room is meant to be a place where students go to relax, meditate, and, as the name implies, be more mindful.

For my final project, I wanted to create something that was for a particular place, and so I chose the Mindfulness Room. Having tried to meditate in the past, I know it can be very challenging to clear your mind and sit entirely still for very long. So, the core of this project was to make something that would make you want to be still (and that would also fit in with the overall look and feel of the room.)

Technical Aspects

Some of the technical hurdles in this project:

Capturing a silhouette from a Kinect cam image. I tried to DIY this initially, which didn’t go well. Instead, I ended up finding this tutorial about integrating a Kinect and PBox2D. I fixed the tutorial code so that it would run in the most recent version of Processing and with the most recent version of the SimpleOpenNI library.

Dealing with janky parts of those libraries (e.g., jitteriness in the blobDetection library, fussiness of SimpleOpenNI). Using the libraries made my project possible, but I also couldn’t fix some things about them. I did, however, manage to improve blob detection from the Kinect cam image by filtering out all non-blue pixels (the Kinect highlights a User in blue).
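The blue-pixel filter can be sketched as a simple per-pixel test. The dominance margin here is a made-up value; the real threshold would be tuned against the blue tint SimpleOpenNI applies to the tracked user.

```cpp
#include <cassert>

// Keep a pixel only if its blue channel clearly dominates red and
// green, since SimpleOpenNI highlights the tracked user in blue.
bool isUserPixel(int r, int g, int b) {
    const int margin = 40; // hypothetical dominance threshold
    return b > r + margin && b > g + margin;
}
```

Running this test over the cam image before blob detection discards background noise that would otherwise show up as spurious blobs.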

Trying to simulate butterflies flying – with physics. Trying to simulate a whimsical flight path using forces in PBox2D had only okay results. I think it would be easier to create their paths in vanilla Processing or with another library (though that might make collision detection far more challenging).

Finding a computationally cheap way to do motion tracking. When I tried simple motion tracking, my program ate all my computer’s memory and still didn’t run. I ended up taking the Kinect/SimpleOpenNI provided “Center of Mass” and using that to track motion, which worked pretty well for my purposes.
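The center-of-mass approach boils down to comparing one tracked point across frames instead of differencing every pixel. A minimal sketch, with a hypothetical stillness threshold:

```cpp
#include <cassert>
#include <cmath>

// The user counts as "still" if the Kinect-provided center of mass
// moved less than a small threshold since the previous frame.
bool isStill(double prevX, double prevY, double x, double y) {
    const double threshold = 5.0; // made-up max movement, in tracker units
    double dx = x - prevX;
    double dy = y - prevY;
    return std::sqrt(dx * dx + dy * dy) < threshold;
}
```

One comparison per frame is vastly cheaper than per-pixel frame differencing, which is why this approach ran where the naive motion tracking did not.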

Critical Reflection

As I worked on this project, I was unsure throughout that all the pieces (butterflies, kinect, etc.) would come together and/or work well. I think they came together fairly well in the end. Even though the project right now doesn’t live up to what I imagined in my head at the beginning, it still does what I essentially wanted it to do—making you want to stay still.

When people saw the project, their general response was “that’s really cool”, which was rewarding. Also, the person in charge of the Mindfulness Room liked it enough that she wanted me to figure out how to make it work there long-term. (That could be really logistically difficult in terms of setup and security, because the room is always open and unsupervised, and drilling into the walls to mount things isn’t allowed.)

So, though there’s a list of things I think should be better about this project (see below), I think I managed to execute my concept simply, and to execute it well given that simplicity.

Things that could be better about this:

Butterflies’ visual appeal. Ideally, the wings would be hinged-together PBox2D objects. And antennae/other details would add a lot.

Butterflies’ movement. Could be more butterfly-like.

Attraction to person should probably be more gradual/a few butterflies at a time.

Code cleanliness: not good.

Ragged edge of person’s silhouette should be smooth.

Better capture of user. Sometimes the Kinect refuses to recognize a person as a User, or stops tracking it. This could have to do with how I treat the cam image, or placement, or lighting, or just be part of how I was doing Kinect/SimpleOpenNI. After talking with Golan, I think ditching OpenNI altogether and doing thresholding on the depth image would work best.

Video

Code

Inspired conceptually by websites like Kitten War and classic games like “Would You Rather?” and technically by projects like Post-Circuit Board by the Graffiti Research Lab, This or That is an electronic voting poster that allows passersby to vote on two different options as chosen by other strangers.

The poster consists of a voting button and a seven-segment display on each side, as well as a reset button, all of which are controlled by a single ATTiny84. There are no wires on the poster besides the alligator clips connecting to the power; all the traces were made with copper tape.

While we are becoming more interconnected digitally, electronics are becoming more and more personal: our laptops and cellphones are not devices that are meant to be shared physically, and we even get physically anxious when they’re out of our reach for too long. This or That is a “public” electronic; its charm and fun come from its communal usage.

Older iteration (I’ve since learned the art of making pretty traces!).

At one point I traded my coin cell battery in for a sweet LiPo battery that someone had lying around.
chargin’ a poster whaaat

Code & Circuits

DIAGRAM:

I have a .ai file (which still needs cleaning) containing both the poster text and lightly drawn traces that you can use to create a poster of your own, so bear with me! Here’s a Fritzing diagram for now:

MATERIALS:

Adafruit’s 7-Segment LED Displays x2 (these aren’t actually soldered directly onto the poster – instead I used header pins on the poster so I could reuse the displays in other projects if I wanted)

Arduinolin is a project designed to investigate the evolution of material possessions in response to electronic trends: it takes a traditional object and recreates it as a new object that is not only a modern, electrified version of the original but also extends it using the capabilities of digital media.

Overview

I decided upon a violin as my “traditional object” of choice, mainly because a stringed instrument seemed a reasonable fit: the ability to reprogram the touch sensors on the gloves to play different pitches when activated supports my concept. I immediately researched the evolution of the violin and the electronic violin, which is all documented in this previous blog post.

Inspiration

The concept was inspired by a conversation I had with my father one night. I was on the phone with him and he was talking about an app he had just purchased for his iPhone. What caught my attention was that he had actually paid over $3.00 for it – I am not in the habit of downloading an application unless it is free. That set me thinking: how many apps have you purchased? How much money have you spent on virtual material? Is it worth it? How will highly valued items that gain value as they age be transferred into the electronic world, and will that transfer ever be successful?

There is also an ecological argument accompanying the evolutionary argument which entails comparing the carbon footprint of apps and actual instruments, and how this could all eventually be handled by a single piece of technology.

Technical Aspects

The bulk of the work in this project was in figuring out how I was going to wire capacitive touch sensors and make them individually responsive to human touch. I had considered using something like a pressure sensor, which would return a different value depending on where along the strip pressure was applied, but turned this down in favor of materials that could easily be translated onto conductive fabric, with the goal of making the final product wearable. In retrospect, I greatly regret not pursuing the first option, which would have made for a much smoother transition between instruments and a greater degree of musicality.

Additionally, I was unsure of how I was going to wire everything together onto the glove. The palm of the hand alone features twenty-four wires, all interwoven into the glove itself using conductive thread. In the end I used jumper cables to transfer data from the individual pins to the glove, touching the end of each jumper cable to the thread. From there, all the Arduino does is loop through each pin, look up the note that corresponds to that pin in an array, and play the note if the pin is touched. I also have some “fun” touchpads at the moment which loop through a series of notes.
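The pin-to-note lookup described above can be sketched as follows. The pad count and the frequencies (violin open-string pitches G3, D4, A4, E5) are illustrative choices, not the project’s actual mapping.

```cpp
#include <cassert>

const int NUM_PADS = 4; // hypothetical number of touch pads
// One fixed pitch per pad, here the open strings of a violin in Hz.
const float NOTE_HZ[NUM_PADS] = {196.0f, 293.66f, 440.0f, 659.25f};

// Return the frequency to play for a touched pad, or 0 for no sound.
float noteForPad(int pad) {
    if (pad < 0 || pad >= NUM_PADS) return 0.0f;
    return NOTE_HZ[pad];
}
```

On the Arduino side, the loop would scan each pin and call `tone`-style output with the looked-up frequency whenever that pad's thread makes contact.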

Critical Reflection

There are many existing versions of this project, ranging in degrees of professionalism from hobbyists who work out of their garages to commercially marketed products (typically designed to help one learn an instrument). I see few limits to the extent to which I could improve upon my project, at least in theory. However, there are two things I would like to improve upon more than anything else.

Firstly, I am greatly irritated by the fact that although I have two gloves, the left hand representing the violin and the right hand representing the bow, the right hand does not actually need to do anything for the violin to work. Although in principle placing an accelerometer on the right glove would not be difficult, convincing the accelerometer to communicate with the Arduino might have been something of a challenge, perhaps involving a wireless shield.

Secondly, I am not impressed by the sound quality at all. I understand that it is very possible to synthesize sounds using MaxMSP, which would be a far more rewarding result than the current buzzes provided by the piezo element. It would also be very rewarding to have a proper headphone jack to output the audio. I enjoy the personal experience my current edition supplies, but would certainly enhance it wherever possible. (Sticking a piezo buzzer in one’s hat does not necessarily result in the best audio.)

This crafty measuring device is meant to draw attention to the daily usage of revolving doors at Carnegie Mellon’s University Center building. It logs the time, proximity, and rpm data, but also incites a little competitive spirit on its free voltage.

This project revisited our previous class assignment that utilized seven segment displays to capture an interesting measurement. In my original idea, I wanted to choose a unique and fun way to portray numbers, and what better way to do that than with rankings? The reason I chose revolving doors as my subject matter was more or less because I was interested in the calculations involved with an accelerometer.

But as I developed my idea in this assignment, I wanted to convey more useful information about my subjects, the revolving doors. The research changed its direction from “interesting calculations” to bringing attention to those mundane doors that we pass through without a second thought. And I have to thank Maddy Varner and Golan Levin for reminding me that an extra seven segment display and data logging shield were just the things I needed to accomplish this.

That said, the actual wiring of all these new devices, as well as figuring out their libraries, was the most technical aspect of the project. Through this process, I came to understand JUST HOW INVALUABLE neat soldering can be. But in the end, the effort was definitely worth it. (See below for the Fritzing diagram and code.) I had some technical difficulties along the way (I seem to have jinxed technology a lot this semester), but I find the data that came from my 7 hours of installation really valuable. The animated GIF below shows the plotted data from the data logging shield, and there are clear patterns of usage for these doors. (Click the image.)
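For reference, the rpm figure itself is a simple conversion once you can time one full revolution of the door; the peak-detection step on the accelerometer data is omitted here.

```cpp
#include <cassert>

// Convert the measured period of one door revolution (in milliseconds,
// e.g. between successive accelerometer peaks) into revolutions per minute.
int rpmFromPeriodMs(unsigned long periodMs) {
    if (periodMs == 0) return 0; // guard against division by zero
    return (int)(60000UL / periodMs);
}
```

A 1.5-second revolution works out to 40 rpm, which matches the recorded high score below.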

Facts and Figures:
124 people used the door in those 7 hours
The highest score was 40 rpm
There were 4 notable mishaps

Of course, I won’t forget to address the most exciting – and hazardous – part of this project: the participants. I may have underestimated the competitive spirit of college students, because I felt fear watching some of them. My installation time was cut short because the UC staff asked me not to display the high-score portion, and I personally thought things could get worse, since it is finals week. On a side note, I was extremely happy with how the magic arm stabilized the box. The whole contraption was incredibly sturdy.

In conclusion, I am very satisfied with this project. Although video editing is not my strong point, I did enjoy watching over my project and seeing people have fun and express genuine interest.

Supported in part by a microgrant from the Frank-Ratchye Fund For Art at the Frontier
URL: bit.ly/revolving-games

A fish that learns, via machine learning, what kinds of melodies the user likes, and plays them.

I was inspired by my research professor’s project “Simstudent”, in which a human student walks a computer Simstudent through the steps of algebra problems, from which the Simstudent will learn via machine learning. While I was testing it, I found I greatly enjoyed teaching and watching my Simstudent succeed on the problems that once baffled it. Thus, I started off looking for ways to use machine learning as the backend of my project. Music seemed like a good idea, so I went for it, despite having zero experience.

I used Weka’s implementation of the ADTree algorithm as my backbone. I represented a melody as an array of 10 notes, each limited to 7 pitches, as recommended by Professor Richard Randall. The user can rate a melody, played by either the fish or the user, as favorable or unfavorable, and the fish learns from these ratings. For the frontend, I settled on a fish, because fish don’t seem like creatures likely to play music, so I implemented it as such.
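One plausible way to hand a 10-note, 7-pitch melody to a classifier is to flatten it into a one-hot feature vector. This encoding is an assumption for illustration; the exact attributes fed to Weka’s ADTree aren’t documented here (Weka can also consume nominal attributes directly).

```cpp
#include <cassert>
#include <vector>

const int MELODY_LEN = 10;  // melody length used in the project
const int NUM_PITCHES = 7;  // each note is one of 7 pitches

// One-hot encode a melody: 70 binary features, one per (position, pitch)
// pair. notes[i] must be a pitch index in [0, 6].
std::vector<int> encodeMelody(const int notes[MELODY_LEN]) {
    std::vector<int> features(MELODY_LEN * NUM_PITCHES, 0);
    for (int i = 0; i < MELODY_LEN; ++i) {
        features[i * NUM_PITCHES + notes[i]] = 1;
    }
    return features;
}
```

Each user rating then becomes one labeled training instance (the 70 features plus a favorable/unfavorable class) for the tree learner.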

In hindsight, music was probably not the best domain for me. I do not have the skills to hear the musical structures in the melodies that were created, and that, together with the fact that whether a melody is good or bad is highly subjective to each user, made me unable to statistically confirm whether the algorithm is robust. I did, however, have a musically gifted friend play around with it, and after around 15 trials he claimed that the fish had picked up on a structure he had played. I also managed to train my fish to know that it must play the last note at a low pitch; if that counts as a good melody in my heart, then I am successful.