Category: Your Work

Interactions with the final version of my final project are presented below:

Zen is a zen-garden simulation that invites users to slow down and relax during the hectic time of final exams and projects. It does so by presenting an outline of the user in a field of flowers (thank you Lama for making such beautiful plants!) and allowing one to wade through it. If one moves too much, the color changes from green to yellow to red, which affects the growth of the plants in the garden.

Yellow state – “slow down”

Red state – “take a break”

In the red state, the plants do not grow, and touching them with one’s hands or feet causes them to wither. It is only when the user slows down and reaches the green state that they get to experience the reward – planting their own flowers for everyone to see!
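The green/yellow/red logic described above can be sketched as a simple threshold check on how much the user moves between frames. This is an illustrative reconstruction, not the project's actual code; the class name, thresholds, and movement metric are all hypothetical.

```java
// Hypothetical sketch of the three calm states (green/yellow/red).
// "movement" stands in for some per-frame measure of how much the
// Kinect silhouette changed; the thresholds are illustrative.
class CalmState {
    static final int GREEN = 0, YELLOW = 1, RED = 2;

    static int stateFor(float movement, float yellowThreshold, float redThreshold) {
        if (movement >= redThreshold) return RED;      // "take a break"
        if (movement >= yellowThreshold) return YELLOW; // "slow down"
        return GREEN;                                   // calm: rewards unlock
    }

    // Plants only grow while the user is in the green state.
    static boolean plantsGrow(int state) {
        return state == GREEN;
    }
}
```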

If one stays calm for a short while, pink lotus plants spawn in the regions occupied by what the Kinect detector sees as people. These can be picked up by people’s hands, and planted in the garden.

If one stays calm for a slightly longer while, a purple plant sprouts in one of the person’s hands; the person can plant those as well! As can be seen from the video, these plants are easier to plant (because the person does not need to actively pick up the plant with their hand).

I am very happy that I was able to implement most of the changes requested during user testing! Planting was a big part of the challenge, and I had to rework the majority of the code to make it possible; after making it work for the purple plants, however, adding the lotus plants was very easy.

Additional signifiers were added to tell people what they should try doing with their plants – this helped explain the interaction better, but I am afraid people still did not have enough patience to wait and see what happens. Also, markings were added to the floor that showed people precisely where they should stand in order to be seen by the Kinect camera (blue area), and precisely where they should plant their flowers (green area with a plant symbol). This proved fairly intuitive.

Unfortunately, I did not have enough time to implement the wind-like effects that would bend the flowers with people's movement. To make the interaction more intuitive, I added functionality that shrinks flowers when they come in contact with a user's hand. This is not exactly the interaction one would intuitively expect, but it did engage the users and showed them that there is something to be done with the visualization, that it is not just a static arrangement of flowers.

I struggled with interference from people behind my detection area. The other visualization was too close, and the Kinect mistakenly detected its users as mine. This was a problem because the visualization beyond my detection area had a very long interaction, so people, once detected, would not be un-detected unless the Kinect was forced to forget them (by me blocking their body with mine). The problem was alleviated somewhat by code tweaks that ensured only one person was tracked by the Kinect at a time (which was difficult because the library code I was using did not work as expected), but the visualization still required constant supervision on my part, which is obviously not ideal. I realized too late that I should have requested a blanket or a screen to prevent background people from interfering with the visualization…

Nevertheless, I am pleased with the end result, I think people liked it much more than they did during user testing, and I think they appreciated the ability to leave their mark for others to see in the visualization.

The code is presented below. It is the longest I have written for this class, surpassing even the CM Visualizations project.

This final project was inspired by La Monte Young's Dream House (hence the name); however, I not only wanted to play with how sound travels through space, as La Monte Young did, but I also wanted whoever enters the space of my project to be able to control sound throughout it. The only way I could think of to make this happen was to build a room (the Dream Box), sound-isolate it as much as I could, or at least block as many external sounds as possible in the given circumstances, and then find a way for the visitor of the box to move sound through the space. The main purpose of sound-isolating the room was to make sure it was easy to hear how the sound played in the Dream Box moves within the space as one swipes the wall. Swiping the wall is not quite real, though: I faked it with an ultrasonic rangefinder embedded in one part of the wall, so that depending on the measured distance, the sound would travel from speaker to speaker.

Making this final come to life was a long journey that consisted of two parts:

1) Making a wooden box (a.k.a. the Dream Box)

2) Programming Processing so that it knows which speaker to send the MP3 file to, depending on the value from the ultrasonic rangefinder
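Step 2, the distance-to-speaker mapping, can be sketched as dividing the rangefinder's range into one zone per speaker. This is a hypothetical reconstruction: the class name, the equal-zone split, and the 200cm range used below are illustrative, not the values actually tuned for the Dream Box wall.

```java
// Illustrative sketch: map an ultrasonic rangefinder reading to one of
// four ceiling speakers by splitting the swipe range into equal zones.
class SpeakerRouter {
    static final int NUM_SPEAKERS = 4;

    // distanceCm: reading from the ultrasonic rangefinder
    // maxCm: the largest distance expected along the wall
    static int speakerFor(float distanceCm, float maxCm) {
        if (distanceCm < 0) distanceCm = 0;
        if (distanceCm > maxCm) distanceCm = maxCm;
        int idx = (int) (distanceCm / maxCm * NUM_SPEAKERS);
        return Math.min(idx, NUM_SPEAKERS - 1); // clamp the far edge
    }
}
```

In the real sketch this index would pick which output of the audio interface receives the sound, so that swiping along the wall moves the sound from speaker to speaker.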

While making the room (or the box) out of wood was time-consuming and required a lot of planning and calculating the right dimensions of the foam and wood I had to cut, it did not cause me much of a problem, in the sense that it all went as planned.

The initial idea for the walls was to take 25mm plywood and simply attach a couple of pieces together; however, this proved to be very unsafe and extremely heavy. Knowing that, the plan became to build the walls the way theater flats are built. I took 1×4 inch stick lumber and made three frames (I used the Arts Center wall as the fourth wall), which I later simply skinned with 6mm luan plywood. The dimensions of the room, which turned out to be a cube, were 244x244x244cm, which seemed quite small on paper; however, it felt really large once I saw it in real life. At some point I even doubted whether I needed to keep working on it, or whether making something that big would be a waste of time. But, as the saying goes, go big or go home, and I'm not home yet :)

These are the flats I built.

I thought that the coding part of this project would be a lot simpler and would not cause me any problems. So first I made a working prototype with an Arduino: I soldered four speakers and had the Arduino decide which speaker should make noise, depending on the value I was getting from the analog sensor. However, after spending hours making this prototype work, it was pointed out to me that it is nearly worthless to fill a nice big room with an annoying buzz. And that was totally right: why would I want the visitor of the Dream Box to experience an annoying sound when they are supposed to enjoy their time in the room playing with the directionality of sound, rather than get tired of a random tone being played?

Unfortunately for me, I could not play anything but a tone using the Arduino alone. An MP3 shield was not an option either, since I had four outputs (speakers), and an MP3 shield is only capable of playing through two.

So I had to use Processing for this, which was fine; I never expected anything to go wrong. However, it felt like everything that could go wrong went wrong. The first problem was that my computer did not have enough outputs for four speakers either, so I had to find an audio interface with four or more outputs. Once I had that, I did not know how to tell Processing to play through that exact device. And once there was a way, Processing thought the audio interface was merely a stereo speaker. I did not know how to tell Processing that this device it thought was a speaker actually has 14 different outputs, so I had to teach it to see all of those separate outputs. If not for Aaron, I would never have figured this out: he provided me with code that lets Processing choose which specific output on the audio interface to play a sound from. I thought the struggle would end there; however, it had just begun, because the Beads library was a little confusing. And once I figured out how to make it all work the way I wanted with one sound, I did not stop there. I thought one sound might be too boring, so I decided to add more sounds, giving the visitor of the Dream Box more freedom when choosing both the direction and the type of sound. Making it work with four different sounds was really hard, but eventually the battle was won.

There was another problem along the way: I was going to use an infrared rangefinder at first, but its values were too inaccurate and unpredictable at times, so I had to change to the ultrasonic rangefinder, which is a bit harder to program.

Another reason I decided to do the framing this way was so that I could place the rangefinder flat on the inside of the wall and still have some space between the 6mm luan and the layer of Styrofoam I was going to place on top of it.

The last step, after I made sure everything worked the way it should, was to sound-isolate the room. I used 10cm-thick Styrofoam along all the walls to provide sound isolation. The top two pieces of Styrofoam were covered with a layer of 15mm plywood to keep the whole Dream Box in place. The plywood also had four round holes, one at each corner of the roof, to let me put the speakers through it. To make the sound come from the ceiling once one is inside the box, I carved out slots for the speakers to sit in, so that they would face downwards. I must say that cutting Styrofoam with a handsaw is not as easy as it looks.

This is a top view of the way the speakers were fitted into the ceiling.

And this is a top view of half of the roof, showing how the speakers are connected. I had to solder extensions for the speakers as well.

So that the Styrofoam would not look ugly, and also to create the feeling of a Dream Box, I coated the entire interior of the box with red fabric, which made it even more similar to the Dream House.

This is how it looks on the inside:

The only comment I had from the people who went through the experience in the Dream Box was that the button was more attractive to them than the arrow; probably the swipe indication was not clear enough, or maybe it is simply in human nature that pressing buttons feels so pleasing. I think working a little more on the button design and the swipe signifier design would fix this issue. Other than that, I saw and heard only positive feedback, and I ended up really proud of what I built in a relatively short period of time, as well as very happy with how the piece was received by the people who walked into it.

As a follow-up to my first computer vision assignment, and as a way to fulfill my desire of seeing a life-size Totoro, I decided to create a projected image of him that people could interact with!

As a brief recap, this was the project that inspired my final:

Overview of Totoro’s interactions:

Through the PS3Eye and Blob Detection libraries, as well as infrared LEDs attached to the interactive objects, specific movements and interactions toggled different aspects on screen. This installation had two modes. The first consists of using an umbrella to try to protect Totoro from the rain. Through two LEDs attached to either side of the umbrella, the program tracks its location and stops the rain in those locations. As the umbrella gets closer to Totoro, he gets happier, and finally, once it is directly in front of him, he growls and smiles widely. The second mode consists of wearing a glove to pet Totoro. Totoro's eyes follow the user's glove and, if stroked on his belly, Totoro gets happier and growls as well. Although these are seemingly simple interactions, linking all the components (switching between modes, accurately tracking the umbrella and the glove, toggling the rain on and off, moving Totoro's eyes, and toggling sound and animation) was a lengthy and time-consuming, although extremely enjoyable, process.

The process for this piece was divided into three sections:

The design: adjusting and making the background and animation frames

The code: writing the program and adjusting the processing – IR camera link

The hardware: attaching IR LEDs to the umbrella and the glove

The design

For the project’s visuals, I adjusted both the background and Totoro’s expressions.

Here is a screenshot of the original image from the movie:

There were two issues with this background image. The first was that the girls in the scene, although iconic and the main characters of the movie, were superfluous. Although their colors added a lot to the appeal of the image, leaving them in would not only take attention away from Totoro, but would also give the impression that the girls were interactive as well. The second issue was the rain in the image. Since my rain was created through Processing, the drawn rain would over-saturate the image and would also give the sense that the rain was stopping rather than disappearing when someone hovered over an area with the umbrella, since the coded rain would stop but the drawn rain would still be there. Thus, I took it upon myself to overuse the stamp tool in Photoshop and get rid of the rain. This all led to the following final background image:

Actual background (the eyes had to be left blank for the ellipses in the code to move)

For the animation frames, I compiled Totoro’s smile in other scenes and added them to the umbrella scene, since in this whole part of the movie, there are no actual shots from afar where Totoro changes his expression.

For instance, this is the original scene where I got his smile from:

I made the eyes and mouth transparent and then adjusted them to the scene I wanted to use for my project, while trimming everything to 7 frames:

Once the animation was done, it was just a matter of setting up the boundaries as to where the specific frames would be shown.

Here is a sample of the locations where the frames change for the glove code:

The Code

The code was much more complicated than I thought it would be. In summary, I manually change the modes through my keyboard. Depending on each mode, the code checks for the number of “blobs” that are detected on screen. To make the tracking accurate though, I adjusted the brightness threshold as well. When in “umbrella mode”, the code waits for there to be two blobs on screen. Once this occurs, it saves their coordinates and compares them to establish a minimum and a maximum point for the umbrella. Then, it uses these minimum and maximum values to make the rain’s alpha value transparent if it spawns between these locations. For the glove mode, the code checks for only one blob on screen. Once detected, it saves its coordinates. Then, depending on where the coordinates are, it moves Totoro’s pupils accordingly and shifts between animation frames.
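The umbrella-mode logic described above can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual code; the class and method names, and the convention of using an alpha of 0 for hidden raindrops, are assumptions.

```java
// Hypothetical sketch of umbrella mode: two tracked blobs give the
// umbrella's span, and raindrops spawning inside that span are made
// transparent.
class UmbrellaMode {
    // Given the x-coordinates of the two tracked LEDs, return {min, max}.
    static float[] umbrellaSpan(float blob1X, float blob2X) {
        return new float[] { Math.min(blob1X, blob2X), Math.max(blob1X, blob2X) };
    }

    // Alpha for a raindrop spawning at dropX: 0 (invisible) under the
    // umbrella, 255 (fully opaque) elsewhere.
    static int rainAlpha(float dropX, float minX, float maxX) {
        return (dropX >= minX && dropX <= maxX) ? 0 : 255;
    }
}
```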

Finally, once the logic of the code was functioning, I attached the infrared LEDs to the umbrella and the glove. I 3D printed battery holders for my two 3V batteries and made switches so I could save the battery life for the exhibition. Then, for the umbrella, I attached all the wires with tape. For the glove, my friend Nikki Joaquin sewed all the components together due to my lack of ability. (thank you Nikki <3) Although it seemed quite simple, setting up all the hardware was one of the most time-consuming tasks. At first, Nahil and I had not thought about 3D printing the battery holders. Instead, I had just taped everything up, which made it extremely difficult to attach the wires to the batteries and place them on the umbrella without any of the components moving out of place. I had also only thought about using one LED on either side of the umbrella and one on the hand. However, due to the directional aspect of the LEDs, I ended up making an extra set and adjusting their angles slightly so the blob tracking could be more accurate.

Sewed components. I could have covered them with a film but the buttons were more accessible this way.

The battery holders were attached with a lot of electrical tape to ensure they would not fall off. As seen in the image, the LEDs were slightly shifted to allow for a wider range.

Challenges and future improvements

This whole process was overall quite challenging. However, by dividing everything into the three sections described earlier and doing everything little by little, I was able to finish Totoro on time. The biggest challenge was definitely the coding. I had to get familiarized with the way the IR camera and the IR LEDs worked, and had to adjust the Blob Detection code to fit the interactions I wanted to create as well. Initially, I wrote the code so that the program would automatically recognize the number of blobs in the camera's frame and, with that, identify which mode it was in. However, this made the code extremely unreliable, which is why I chose to change modes manually through keys on my computer. Overall, thanks to the help of Aaron, Craig, Nahil, James and María Laura, the code is now fully functional and as bug free as possible (I hope). The visuals and the hardware were also quite time consuming, but were more mechanical, which provided for good breaks once I got tired of writing the code.

Overall, the whole process of making Totoro come to life was a truly gratifying one. Although it was extremely time consuming and frustrating at times, it was all worth it once I saw how excited people got over seeing a huge Totoro and realizing they could (even in the most minimal of ways) interact with him. Some people even told me that rubbing Totoro's belly was just what they needed for finals week 😀 In the end, I am still in awe at how much all of us have been able to accomplish in this class. I would never have guessed, especially at the beginning of the semester, that I would ever make a project like this one. Overall, regardless of the times of Sunday stress when certain projects didn't work out like I envisioned them to, this class has been one of the most rewarding I have taken. Thank you so much, everyone, for being a part of it 😀

At the exhibition, I was so caught up helping people out with the umbrella and the glove that I totally forgot to take videos of people interacting with Totoro. Here are some photos from the exhibition (thank you Craig, James, and Aaron!)

For my final project I created an opportunity for people to jump around different places on Earth (and off Earth, for that matter) in less than a second. With the help of computer vision and a green screen behind them, people were able to see themselves in Rome, on a beach in Thailand, or on the International Space Station (ISS). To navigate between these places, all you have to do is move a figure of a person around a map and place it on one of the three locations. That location then appears on the screen, and so does the person interacting with the project, because they are being filmed. In addition, there is a small carpet on the floor to step on. When you start walking or running on it, the background starts moving as well, depending on how fast you move.

Creating this project was challenging from the first day. I started by connecting two pressure sensors to the Arduino and reading the time between presses; that way it is possible to know how long a person's step takes. Then I set up serial communication to send this data to Processing. In addition to the pressure sensors, there are also three LEDs connected to the Arduino, and it sends a different number to Processing depending on which LED is lit. Each LED corresponds to a certain place on the map.

For the interactive map, I got a box, cut three holes, added an LED next to each hole, designed the surface, and added another layer of cardboard inside so there would be a bottom for the holes. Two strips of conductive copper tape run to each of the holes: one strip is connected to power and the other to ground. Therefore, whenever something conductive is placed in a hole, it closes the circuit and the LED next to that hole lights up. A number is assigned to each LED, and this number is sent to Processing, so it knows which location the figure has been placed on.
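The hole-sensing logic amounts to checking which circuit is closed and emitting a location number. Here is a minimal sketch with hypothetical names; the real project does this on the Arduino and sends the number over serial.

```java
// Illustrative sketch of the map box: each hole closes a circuit when
// the conductive figure is placed in it; the matching LED lights up and
// a location number is sent to Processing.
class MapBox {
    // holeClosed[i] is true when the figure sits in hole i.
    // Returns 1-3 for the three locations, or 0 when the figure is
    // not placed anywhere.
    static int locationCode(boolean[] holeClosed) {
        for (int i = 0; i < holeClosed.length; i++) {
            if (holeClosed[i]) return i + 1; // e.g. 1 = Rome, 2 = Thailand, 3 = ISS
        }
        return 0;
    }
}
```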

the box from the outside
the box from the inside

To make the figure, I went to the Engineering Design Studio to use their laser cutter and cut it out of 7mm-thick clear acrylic. The figure is a traveler with a backpack and a round bottom. To make the bottom conductive, I first tried taping some copper tape to it, but the figure lacked weight and didn't properly press down on the copper tape strips when placed in the hole. So I had to be creative, and that's how I decided to stick three coins to the bottom to give the figure some weight as well as make the bottom more conductive (now I know that euros are more conductive than dirhams or dollars).

a two euro coin on the bottom of the figure

When the figure is placed somewhere on the map, the appropriate LED lights up and a number is sent to Processing. In Processing, I loaded three videos, one from each of the three places, and display the appropriate video for each place. For example, when the figure is placed on Rome, the Arduino recognizes it and sends a '1' to Processing, which is then set to display a video of Rome. To actually play the video, the person interacting with my project needs to start moving on the carpet. The Arduino measures the time between footsteps and, again, sends these values to Processing. I map the incoming time value in Processing and play the video according to how fast the person is walking: it slows down when a person walks very slowly, plays normally when the speed is normal, and speeds up when a person runs. However, if the steps are longer than the maximum value in the map function (1.2 seconds), the video just plays at the slowest mapped speed. If there is no movement for a little while, the video stops, and it restarts when movement is detected again. Therefore, people interacting with my project get the impression that they are actually seeing the background as they would when moving at different speeds.
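The speed mapping described above can be sketched as a clamped linear map, in the spirit of Processing's map() function. Only the 1.2-second cap comes from the write-up; the 0.2-second lower bound and the 0.5x to 2.0x playback range are illustrative assumptions.

```java
// Illustrative sketch: time between footsteps -> video playback rate.
class StepSpeed {
    // Linear re-mapping, like Processing's map() function.
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (v - inLo) / (inHi - inLo) * (outHi - outLo);
    }

    // stepSeconds: time between two footsteps. Shorter steps (running)
    // give a faster playback rate; steps longer than 1.2 s are clamped
    // to the slowest rate.
    static float playbackRate(float stepSeconds) {
        if (stepSeconds > 1.2f) stepSeconds = 1.2f;
        if (stepSeconds < 0.2f) stepSeconds = 0.2f;
        // 0.2 s between steps -> 2.0x, 1.2 s -> 0.5x (illustrative range)
        return map(stepSeconds, 0.2f, 1.2f, 2.0f, 0.5f);
    }
}
```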

the whole setup: people walking on the carpet
pressure sensors on the back of the carpet

The person interacting with my project sees himself or herself in one of the places thanks to the green screen behind them. The camera on the computer in front of them films them and the green screen, substituting all of the green pixels with a video from the place where the figure is located.
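The green-pixel substitution can be sketched per pixel as follows. The greenness test, threshold values, and names are illustrative assumptions, not the project's actual code.

```java
// Illustrative per-pixel chroma key: any camera pixel that is "green
// enough" is replaced by the corresponding pixel of the location video.
class ChromaKey {
    // Simple greenness test on 0-255 RGB components (thresholds are
    // illustrative; real ones would be tuned to the screen's lighting).
    static boolean isGreen(int r, int g, int b) {
        return g > 100 && g > r + 40 && g > b + 40;
    }

    // Returns the composited pixel: video where the camera saw green,
    // camera otherwise. Pixels are packed 0xRRGGBB.
    static int composite(int cameraPixel, int videoPixel) {
        int r = (cameraPixel >> 16) & 0xFF;
        int g = (cameraPixel >> 8) & 0xFF;
        int b = cameraPixel & 0xFF;
        return isGreen(r, g, b) ? videoPixel : cameraPixel;
    }
}
```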

Whenever the person is not placed on any of the locations, this is the photo that shows up on the screen:

The IM show, where we displayed our projects to the public, was an incredible and positively overwhelming experience. For the show I had two screens: one was the computer in front of the person, where they saw themselves, while the other was turned toward the public. I was really happy to have the second screen because it drew more attention to my project: people could see other people interacting with it. I was surprised by people's interest in interacting with my project, and observing their reactions was extremely rewarding. The night flew by in a second for me, but I tried to capture some moments from it.

Goffredo was really happy to be in Rome!!

Here I have a short time-lapse of people interacting with my project:

And these are some of my favorite moments filmed at the IM show. I have more footage, though, and as soon as the exam period ends, I'll make a video about the whole project and post it here as well! Overall, I have learned a lot, not only while making this project but throughout the whole semester. The IM show was a memorable way to wind up the semester. Huge thanks to Aaron for the help, and to the class for the feedback received along the way!

Atmanna, or Wish, was inspired by my interest in creating an art piece that mimics a motion found in nature. I really wanted to work on an art piece instead of a game or another application, because this class has given me an interest in satisfying motions that produce aesthetically pleasing visuals. The first piece I worked on that attempted to capture the compositional beauty of nature was the generative art of leaves in Processing.

At first I wanted to create an art piece that also allowed the user to make a wish, with a speech-to-text mechanism that would make their wish appear on screen. Ultimately the concept changed along the way; read on to find out how.

Concepts of Movement

I created several different ideas in order to think about how to mimic the movements of a dandelion in Processing. I wanted to utilize my knowledge of object oriented programming as well as particle systems to create a beautiful effect. Here are some of the ideas that I came up with:

This was the first idea I had in mind: creating the dandelion with random shape particles. This idea was well suited to movements with the mouse, and I really liked it, but I felt it was too abstract to be immediately recognized as a dandelion, and the motion wasn't exactly what I had in mind in terms of the real movement of dandelions.

This concept came about when I was trying to play with lines and nodes instead, like Dan Shiffman's fractal tree videos. I played a lot with motion in this concept, but ultimately I didn't like the look and feel of the lines and nodes for a dandelion.

I finally decided to use vector graphics created in Illustrator because I had more control over what I wanted the piece to look like. I created different frames of a dandelion animation in Illustrator and imported them into an array of images in Processing to loop through.

The particles in my particle system were composed of an image of a dandelion seed that I also imported into Processing, in a 'Seed' class, and I played with different movements. I decided to make the seeds flow upwards because that made the most sense spatially on the screen for me. Again, referencing Dan Shiffman's Nature of Code book really helped in this phase, letting me add and play with different physical forces to create the desired effect.
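The upward drift can be sketched as a tiny force-based particle update in the Nature of Code style. The field names and force values are illustrative, not the project's actual Seed class.

```java
// Illustrative sketch of a dandelion seed drifting upward: a constant
// lift plus a little horizontal wander, integrated into position.
class Seed {
    float x, y;   // position (y grows downward, as on a Processing canvas)
    float vx, vy; // velocity

    Seed(float x, float y) {
        this.x = x;
        this.y = y;
    }

    // Apply forces, then integrate velocity into position.
    void update(float lift, float wander) {
        vy -= lift;   // negative y is "up" in screen coordinates
        vx += wander; // slight sideways drift
        x += vx;
        y += vy;
    }
}
```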

I knew I wanted to use the physical action of blowing, but without a wind sensor I had to think of an alternative method. I decided to use a SparkFun Sound Detector that we had available in the lab, which can read different sound levels. The act of blowing on a microphone produces characteristic levels of sound that I was able to explore using the Serial Plotter in the Arduino IDE. I used these serial values to trigger motions in the particle system in the Processing sketch.
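Turning the raw serial values into a "blow" trigger can be sketched as smoothing plus a threshold. The smoothing factor and threshold below are illustrative assumptions; the real values came from watching the Serial Plotter.

```java
// Illustrative sketch: exponential smoothing over the Sound Detector's
// serial values, so brief room noise doesn't trigger the animation but
// a sustained loud signal (a blow) does.
class BlowDetector {
    float smoothed = 0;

    // raw: latest level from the sensor; alpha: smoothing factor in
    // (0, 1]; threshold: level the smoothed signal must exceed.
    boolean update(float raw, float alpha, float threshold) {
        smoothed = alpha * raw + (1 - alpha) * smoothed;
        return smoothed > threshold;
    }
}
```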

User Testing

When I did my user testing, I did not yet have a physical dandelion that people could blow on. Some people liked this because it did not take away from the on screen experience and aesthetics that were happening. Others wished that they did have it.

At the time of the user testing, the animation also was not as clear or smooth as I would have liked, and people noticed that as well. I also thought about what story was being presented to the user as they interacted with the piece, and I didn't have a set narrative. I considered using what was being said, but ultimately the idea for me was to let the user make their wish and keep the act as simple as it organically is in real life. See the user testing post for more of the notes and improvements I wanted to work on.

I ultimately did have a physical dandelion for people to blow on, but it was difficult to decide on the medium it should be made from. I used straws for the green stem, which made the wiring easier to work with, and cotton balls for the top of the dandelion. It was difficult to embed the microphone in a way that still allowed it to work but made sense for the user to blow on and interact with.

During the Show

I think my project stood out as one of the 'calmer' projects: there was a lot of light, big screens, and sound around, so it was a different experience for people to pause, reflect, and take a moment to make a wish. People ultimately really enjoyed the experience, and I especially liked that many people stopped by because it was such a simple concept that didn't take too much time to engage with, but still created the impact I wanted it to.

One thing I wish I had done was incorporate an element of sound or background music, but it was loud in the room anyway, so it wouldn't have created the exact ambiance I wanted to achieve. One of my favorite comments was that this would be a good business model for an 'alternative stress ball' to keep on your desk and use to take a moment to breathe and reflect.

I realized a bit too late that the setup, exactly where the dandelion and screen were positioned, was not perfect. Sometimes, as users were blowing, they missed what was happening on the screen. I think it might have been better or more immersive if I had made the dandelion something you could pick up, and/or projected the art instead of showing it on my laptop screen. All in all, I really enjoyed presenting my work and having people play with it. Some people even came by several times to make more than one wish!

Limitations + Future Improvements

I didn’t use the right medium to create the physical dandelion – the cotton was really fragile and the straws were not particularly stable as people were blowing on the dandelion itself.

During the show I realized the loud room caused some sound interference that the microphone detected, triggering the animation unintentionally. Even though I did user testing, each space is unique, and I probably need to add a calibration function. Thanks to Aaron for showing me the sound-smoothing function that saved me!

I would like to figure out an elegant solution for speech-to-text in Processing.

I’m thinking of 3D printing or using some mesh material to create the dandelion instead of cotton balls.

The final project for this class was one of the most fulfilling projects that I have worked on all semester. Having undergone 10 years of training in Carnatic music and having sorely missed practicing it for the last three years, this project was a fun way to reconnect with it. Based on Carnatic ragas, my project was designed as an eight-key keyboard that could play a variety of ragas (five, in this case). A raga, for context, is a particular combination of notes that songs are composed in. For example, if raga A contains the notes Do-Re-Mi and Mi-Do-Re, then a song composed in raga A will only contain those notes, and in that order. I think the Western equivalent of this is a key, but I'm not sure. Ragas are generally divided into two kinds: Melakartha ragas and Janya ragas. Essentially, Melakartha ragas contain all eight notes, which can be sung in any order. Janya ragas (derived from Melakartha ragas) generally have more rigid rules. I used five Melakartha ragas in my machine. If anyone is interested in learning more about how this system works, here are some resources:

Building this project took a lot more work than I envisioned, but I am extremely happy with the way it turned out. Here is a brief breakdown of how I made the Raga Machine over the last two weeks.

PROTOTYPE STAGES:

My first prototype of the project involved no Arduino at all. It simply involved keys on the keyboard (from 'z' to ',') that played one note each. I wanted to take it slow (since we had two weeks to work on it) and see what elements I could get up and working just in Processing.

In the second prototype, I included Arduino buttons. I had eight buttons, and each button played a single note whose pitch changed according to which raga was being played. For both of these prototypes, the raga was changed based on the mouse position.
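One way the pitch-per-raga idea can be sketched is to store each raga as eight semitone offsets from a tonic and convert them to frequencies with equal temperament. This is a hypothetical reconstruction: the offsets shown (the major scale, which corresponds to the Melakartha raga Shankarabharanam) and the 220 Hz tonic are illustrative, not the project's actual values.

```java
// Illustrative sketch of an eight-key raga keyboard: each raga is a
// list of eight semitone offsets from a tonic, and equal temperament
// converts an offset to a frequency in Hz.
class RagaKeyboard {
    // Major scale offsets (Shankarabharanam), including the upper tonic.
    static final int[] SHANKARABHARANAM = {0, 2, 4, 5, 7, 9, 11, 12};

    // Equal-temperament frequency for the given key in the given raga.
    static double frequencyFor(int key, int[] ragaSemitones, double tonicHz) {
        return tonicHz * Math.pow(2, ragaSemitones[key] / 12.0);
    }
}
```

Switching ragas then just means swapping in a different offset array while the eight physical buttons stay the same.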

The visuals for each corresponding raga were something that I had continually struggled with envisioning. For a while, I had each button draw an ellipse of a certain size, with the raga determining the color. It looked something like this:

I then changed the visuals to draw various henna/mandala patterns instead of ellipses, but still in random positions with size and color being controlled the same way as before. I was still unclear on what the best course of action was, but luckily, it was time for user testing.

USER TESTING:

Here are the notes I made after my user testing session:

“Even though my project is still missing its decoration component as well as refined visuals, having my friend (who didn’t know what my project was about) come test it out gave rise to some incredibly useful feedback. Here are some of the things I learned and plan to work on:

My project is about Carnatic music, but not many people at this school are aware of what that is, and even if they are, they would probably not be able to tell that the project is in fact based on the raga system. My user felt that without context, the project was confusing and vague.

To this end, she suggested making an informational page at the beginning of the project, helping people situate the project in some sort of context.

My user also suggested that I label the parts of the musical instrument, but on understanding that I was planning to create a keyboard-like structure for the instrument, she thought that it might need less labelling than she originally thought.

She also thought that my visuals were random and confusing, and suggested that the visuals have more to do with the physical input that the user is doing. In line with that, I have changed my visuals to reflect the exact position of the key that the user is pressing at any given moment.”

This feedback turned out to be extremely helpful.

FINAL PRODUCT:

The final product, based on the feedback that I had received both during the user testing session and on the day before the IM show, included vastly different visuals and a pretty, compact box within which the Arduino lay. The keys were made out of popsicle sticks, which then pressed down on the buttons. I thought they would be a drawback but it turned out that a lot of people had fun with them — the idea that they were creating music by pressing down popsicle sticks was entertaining and interesting to them. I also had to add another piece of cardboard atop the structure so that the keys wouldn’t come off the structure.

In addition to these changes, I also added an info page at the beginning, which GREATLY helped during the show to contextualize the entire project. The visuals, too, were labelled with what raga was being played at that moment, which also helped greatly to contextualize the project. Here are two videos, one documenting the info page, and the other documenting a young boy playing the instrument:

In hindsight, I think that one of the main components that the project was missing was a signifier. Although most people intuitively understood that the popsicle sticks were meant to function as keys, others would try to pull them out or play them as they would an actual piano. The problem with this was that the popsicle sticks would generate the best sound when pressed where the black dot was, but that wasn’t clear enough of a signifier to make people press that spot immediately. I must say, though, that the visuals helped. Because I changed the ellipses to reflect the position of the key that was being played at the moment, people were much more easily able to grasp what exactly was going on.

All in all, I received extremely good feedback from the visitors at the IM show, and an unexpected number of people were truly interested in learning more about Carnatic music. That, to me, is surely the biggest accomplishment of the project.

As always, major thanks to Nahil and James and Aaron for all the help!

My final project was born out of two motivations. I wanted to play with the concept of cult of personality, and I wanted to do some sort of projection mapping. I thus decided to make an image that couldn’t be vandalized.

In terms of technical implementation, the project has three main components. The first is an infrared camera (a PS3Eye), which I use to track the position of an infrared LED attached to an object resembling a spray can. The second is the projection: both the equipment used to set it up and the adjustments needed to make it work within the spatial constraints. Finally, there is a set of images that are triggered depending on the position of the infrared LED on the canvas; these are perceived by the user as an animation.

IR LED, Camera & Blob Detection

A PImage variable ‘cam’ (640×480) is created to retain whatever is captured by the PS3Eye.

A PImage ‘adjustedCam’ (width*height) is created to retain what is being captured in ‘cam’ but in a larger size.

A smaller PImage ‘img’ (80×60) is created to enable the Blob Detection. It is not drawn in the processing sketch but runs in the background. It adjusts the size of ‘adjustedCam’ to effectively restrict what the IR camera can see to the area being projected. This allows a blob to be drawn in the same place as where the IR LED is turned on.

Setting the coordinates.

A circuit connected to an IR LED is built into a Pringles can adapted to resemble a spray can. I used a weight to resemble the sensation of holding a spray can, and a ping pong ball to mimic the sound.

Spray can circuit and design.

I use Blob Detection — a form of pixel manipulation that sorts bright from non-bright pixels — to track the position of the IR LED over the canvas. The presence of a blob, which indicates that the light is on, triggers a drawing at the position of the light.
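A minimal sketch of the bright-pixel idea (not the actual Blob Detection library code the project uses): threshold a grayscale frame and average the coordinates of the bright pixels to find the blob's center, i.e. the IR LED's position.

```java
// Illustrative blob finder: scan a grayscale frame, collect pixels above
// a brightness threshold, and average their coordinates to get the blob
// center. All names here are my own, not the library's.
public class BlobFinder {
    // pixels: grayscale values 0-255, row-major, for a w x h frame.
    // Returns {cx, cy}, or null if no pixel exceeds the threshold.
    static double[] center(int[] pixels, int w, int h, int threshold) {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (pixels[y * w + x] > threshold) {
                    sumX += x; sumY += y; count++;
                }
            }
        }
        if (count == 0) return null;  // light is off: no blob
        return new double[]{(double) sumX / count, (double) sumY / count};
    }
}
```

Running this on the small 80×60 ‘img’ rather than the full frame is what keeps the detection cheap.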

Projection Setup
This was the most time-consuming aspect of the project: setting up in the space and adjusting the projector’s elevation over the ground and its distance from the wooden canvas. I used two film-set stands to hold the wooden frame.

Projection setup in the IM lab, with the wooden frame.

Animation
There are two components of the animation: what happens when the user ‘sprays’ inside the painting and what happens when they don’t.
When they are spraying outside the painting, the painting’s character follows the position of the spray can with his eyes. This I do by mapping the position of two ellipses drawn in the eyes of the character to the position of the blob.

When spraying happens inside the portrait, different frames get triggered depending on the general position of the blob.
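Both behaviors come down to remapping the blob's coordinates. Here is an illustrative sketch; the function names, pupil-offset range, and number of bands are my assumptions, not the project's actual values:

```java
// Illustrative mappings for the two animation behaviors.
// `map` mirrors Processing's map() function.
public class CanAnimation {
    static double map(double v, double inLo, double inHi,
                      double outLo, double outHi) {
        return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    // Eye-following: blob x over the full canvas -> a few pixels of
    // pupil travel (the -5..5 px range is an assumption).
    static double pupilOffsetX(double blobX, double canvasW) {
        return map(blobX, 0, canvasW, -5, 5);
    }

    // Frame-triggering inside the portrait: split the portrait into
    // `cols` vertical bands and pick a frame by the blob's x position.
    static int frameFor(double blobX, double portraitW, int cols) {
        int band = (int) (blobX / portraitW * cols);
        return Math.min(band, cols - 1);  // clamp the right edge
    }
}
```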

Notes from user testing
My user testing pointed me toward the following things, which I implemented in the final project.

Add weight to the spray can and protect the circuit because people will want to shake the can — allow them to have that experience.

Allow the users to change the color of the spray paint.

Make the character in the painting duck.

IM Showcase

Here are some pictures of the IM showcase and the accumulated paintings that resulted from people interacting with my piece.

I must start off by admitting that time, though endless (as my high school Calculus professor used to say, “there’s more time than life”), is often insufficient. That was my experience these past couple of weeks. There is so much I wanted to do for this project that I couldn’t implement not because of technical difficulties, but because of time.

Thus, my biggest takeaway is this: a project that one is excited about could go on forever. I was thrilled to carry out my ideas for this final assignment, because I’m fascinated by the story that inspired it. It was fruitful in the end: I’m proud of what I made. But I could have continued to work on it more, adding more features and fixing others, and refining the “craftsmanship.” I’m glad this is the case, though. It means that this project motivated and inspired me, in a way few projects throughout the semester had.

Inspiration

The story on which I based my work is titled “La luz es como el agua,” or “Light is like water” in English, written by world-famous Colombian author Gabriel García Márquez in 1978. I learned about the text from a friend who read it in high school, and purchased the book where this short story is featured (Doce cuentos peregrinos, or Twelve Pilgrim Stories) last summer.

Throughout the semester, I wanted to work with a track from Pirates of the Caribbean for my final project. However, having used it for one of our weekly assignments, I began to consider other possibilities, and when I remembered García Márquez’s story, it made perfect sense to use it.

This link contains two edited (abridged) versions of the story: one in English, translated by myself, and the other in Spanish. The story was shortened specifically for this project, but the complete text can easily be found online in both Spanish and English.

What I love about García Márquez’s writing is its richness in imagery. His descriptions very easily make the stories come to life for the reader, and thus (it seems to me) there’s a lot to work with if one wishes to depict his narrations.

Two aspects of this story made it particularly adequate for an interactive media project. Firstly, the text deals with electricity: it tells the story of two brothers who cause light (electricity) to “flow” like water, ending on a tragic note. I’ve been interested in working with neopixels ever since our “Stupid Pet Trick” assignment, and thought that they could be used in this project to literally show light around a house, and to make it appear as water.

Secondly, the story allows for interactivity in a fantastic way. The title of the text comes from the narrator’s confession that he once told the two brothers that light is like water. Thus, the narrator, who tells most of the story in third person, reveals to the reader his role and direct impact on the events that unfold. I wanted the user to be directly implicated in the story’s events as well, having them “cause” said events.

Process, USER TESTING, & Improvements

I was quite lousy regarding the documentation of this project. I took no photographs of the process, the user testing stage, or of its exhibition during the Interactive Media Spring Showcase.

As the following images (taken after the project was exhibited) show, the “main” component of the project is a house I built mostly out of cardboard. The house can be divided into two sections: the top level (or the fifth floor, according to the story) and the bottom level (or the first floor, in my interpretation a basement). The top level contains the setting of the story, filled with LEDs, and the roof of the building, which has two servo motors hidden inside it. The bottom level is full of wires that connect the top level’s components to power, as well as to Processing and Arduino through a RedBoard.

I now include a video of the final project, to serve as the frame of reference for the explanation of the process. This video shows the interaction in English.

I mentioned that the video’s interaction is in English because, as the starting page of the Processing sketch shows, there is also a version of the project in Spanish, which the user can opt for. To me, it was important to include a version of the experience guided by the original story, given that I could never accomplish an accurate imitation of García Márquez’s style in a translation. Because I’m such a big fan of his writing, I wanted his words to be available to Spanish-speaking users.

On a related note, I asked a fellow classmate to help me with the project because I imagined that his voice, specifically, would make the narration of the story much richer than if I had done it myself. Not only is he a great speaker, but he is also Colombian; thus, I thought, it becomes easier to imagine García Márquez himself reciting the text. Perfect casting (thank you, Sebastián!).

In terms of the structure, the bottom level was made by cutting a cardboard box and covering it with black adhesive material (I’m not sure whether to call it paper, plastic…). I cut a small rectangle on one of its sides to let the cables of the RedBoard, neopixel strips, and a small light out, so that I could connect them all to an external power source.

The aforementioned small light was used to illuminate the four wires that the user has to connect. I chose to use a breadboard and breadboard wires for the user interaction for a couple of reasons. In the first place, because the story deals with electric circuits, I wanted the user to have the experience of messing with the house’s actual circuits. I originally left the entire “basement” open, such that the user could easily see not only the four wires they had to manipulate, but also the ones that they didn’t have to use (the ones connected to the RedBoard and a second breadboard). I added the four LEDs that are associated with these wires so that they could act not only as indicators for my own code (of whether or not the wires are connected), but also as indicators for the user.

However, during user testing, my first user expressed that it was confusing to know which wires to connect and disconnect, given that there were so many. To the user, it wasn’t clear what was expected of them. Following his advice, I added a piece of transparent acrylic (hence, it still allows some visibility) to completely separate the four wire-LED pairs and the rest of the circuit.

I also incorporated written instructions in the Processing sketch, right before the narration begins. In them, I tell the user that they must pay attention to the narration (audio) and both the screen and the house (visual). In this way, they know they must be aware of all these components throughout their experience.

The same user also suggested not having written instructions at all, making the computer screen go black after the title clip, with the instructions transmitted through audio. He thought that this would make the experience with the house more immersive, and to separate the narration from the instructions, I could read the latter out loud myself. I recorded these instructions, and was willing to make these changes. But… time. This is definitely an improvement that I would have liked to try, even though I did have one concern: what if the user didn’t understand the instructions right away? Another user who tested the project was slightly confused at the beginning as to where the wires should be connected, but she figured it out after reading the instructions a second time. I think the solution would have been to loop the audio instructions as long as the required task has not been completed.

Another advantage of using a breadboard as the interface is that, by covering up most of the board with tape and leaving one of the positive rails uncovered, I ask the user to connect the wires “along the red line” and don’t have to worry about where exactly they’ll connect them, or in which order, given that all the openings in the rail act the same.

Regarding decorative elements, the bottom level has a large number 47, in reference to the story’s building number, and a set of stairs and floors on one of its sides. I did this because the story mentions that the brothers and their parents live on the fifth floor of their building. Therefore, there’s the bottom level, three floors in between, and the top level, all connected by stairs.

These decorations (as well as the ones in the top level) were very successful during user testing. People appreciated these small details, even though the building’s floor number is not revealed until the very end of the narration.

Stairs and floors and stairs and floors and stairs and floors and stairs and floors…

The top level was more complex. I built the box myself, because it needed double walls. The neopixel strips that simulate the water-electricity are glued to the outer walls, and the inner walls are covered with translucent paper that allows the user to see the light of the neopixels while diffusing it a bit. The same was done with the ceiling.

There are two small openings in the ceiling that go through the cardboard and the paper. I cut a straw to get two small pieces that I glued to these openings. Each of the two servo motors has a wire tied to its arm (which has openings itself, facilitating the process of tying the wire), which moves up and down through the straw when the servo rotates. This mechanism allows the up and down movement of the boat.

The decorative elements inside the top level were printed out on paper and made sturdy by gluing them to cardboard and wooden sticks that go through the cardboard floor. For the four lamps, wires attached to yellow LEDs also go through the cardboard, so that each of the lamps turns on and off in response to the user’s actions. Additionally, there are two other pieces of furniture inside the house: a grand piano and a bar with a wine bottle. These are also referenced in the story.

I decided to make the “flooding” neopixels blue until the very end of the story, when they become yellow and “end” the metaphor of light as water. If one googles this story, the image results mostly show yellow waves and currents, and in the story, the children’s adventures are very explicitly described as occurring in the light, and not water (though water-related terms are constantly used in the text). The advantage of using actual light to depict García Márquez’s water-electricity is that no matter its color, the light is still (actual, physical) light. Thus, in my opinion, the metaphor becomes stronger with lights in blue, like water. I also made them randomly change to different shades of blue with every loop in the Arduino code, to resemble shimmering water.

This is a compilation of Google Images results for “la luz es como el agua”; all of them show the water colored yellow, or alternatively, the light shaped as waves.
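The shimmering could be sketched like this (written in Java for illustration rather than my actual Arduino code; the color ranges are assumptions):

```java
import java.util.Random;

// Illustrative shimmer logic: keep blue at full strength and vary green
// a little on every loop, so the strip flickers through nearby shades
// of blue. The exact ranges are assumptions, not the Arduino values.
public class Shimmer {
    static final Random rng = new Random();

    // Returns {r, g, b} for one shimmering-water frame.
    static int[] waterShade() {
        int g = 40 + rng.nextInt(80);   // 40-119: small green variation
        return new int[]{0, g, 255};    // blue stays dominant
    }
}
```

On the Arduino side, the equivalent would be recomputing a shade like this each time through loop() and pushing it to the strip.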

There was one thing I knew I wanted my final project to be the moment I started thinking about it: I wanted it to be cute. Other things I knew were a) I wanted the project to be Processing-heavy while physical-computing-light, and b) I wanted it to be a game. I really wanted to practice coding more, and I also particularly enjoyed the week in class when we made a game (I made “Blobby”).

After more in-depth brainstorming, and class suggestions, I came up with the following idea for a game: the user would be a bird, and by flapping his/her wings, would fly the bird around various environments. Games would be hidden/placed around the environments, and the bird could go around playing those games. Later, I fleshed out the idea further: the bird had to play the games in order to earn “seeds” (points) to feed her babies that were to hatch soon. Once the bird reached a certain amount of seeds, the game would end, the user would win, and the babies would hatch and fly happily around with the mother bird on the screen.

The process of making the game looked like this:

a) Planning: I had to plan all the various games, decide what types of visuals they would need, and think about how to convey all the instructions of the game to the user. It was critical that I do this to avoid wasting time in the next steps.

b) Design via Illustrator: I spent many many hours before even starting the code creating all the visual elements of the games. I created the bird(s) myself in Illustrator (making various frames so the bird would look like it was flapping its wings), found various free Illustrator elements online that I would then have to adapt quite a lot to fit my vision, and had to make multiple versions of everything to create the blinking or movement effects that I wanted. Overall, this was extremely time-consuming (especially since I had to redo quite a lot of it later since I had to make sure all the sizes of the files were consistent and appropriate for Processing), but it was a crucial step since the visual aspect of the game was important for it to be successful.

[Below are just a few of the Illustrator elements I created or edited.]

c) Coding: The coding aspect of the project was undoubtedly the hardest part. Not only did I have to create the code for five different minigames, but I had to create the code for the overarching/big-picture aspects of the game. These include things like: how, and how long, to display instructions; where to start and end the game; how to let the user find different games; whether the user can play each game more than once; how to let the user move from one environment to the next; whether the user should be allowed to move to another game without finishing the first one; and on and on. I didn’t realize how complicated this process was going to be until I was rather far along in the project, when I started to appreciate the way that the seemingly unimportant questions in a game (like the issues I just listed) actually make or break it completely.

However, one thing I am very proud of and would like to note is the following: I did know that, at the very least, the code was going to be rather lengthy, and thus I spoke with a computer science friend who helped me map out a plan for the code. It was through this that I got the following idea: instead of writing completely different code for each minigame, I could use classes to reuse the same code over and over, adapting it for each game. This made perfect sense for my game, since each game has an “other” (whether it is a coconut, a fish, etc.), and in each one some sort of overlap needs to be detected. Thus, I used classes to my advantage, and believe that as a result the code is far simpler, cleaner, and shorter than it would have been otherwise. This experience demonstrated to me that planning before coding is crucial to avoid headaches and wasted time.
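The reusable-class idea might look something like this (the names are illustrative, and a circle-overlap test stands in for whatever overlap check each minigame actually performs):

```java
// Illustrative sketch: every minigame has some "other" (coconut, fish,
// ...) that the bird can overlap with, so one class covers them all.
public class Other {
    double x, y, r;  // position and collision radius of the object

    Other(double x, double y, double r) {
        this.x = x; this.y = y; this.r = r;
    }

    // True when this object overlaps the bird (also modeled as a circle):
    // the centers are closer together than the sum of the radii.
    boolean overlaps(double birdX, double birdY, double birdR) {
        double dx = x - birdX, dy = y - birdY;
        return Math.sqrt(dx * dx + dy * dy) < r + birdR;
    }
}
```

Each minigame can then instantiate `Other` with its own sprite and size while sharing all of the overlap and movement logic.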

[The Processing code for the game can be found here, and the Arduino code can be found here.]

d) Physical computing: The physical aspect of the game actually turned out to be rather straightforward and reliable — which is something I cannot say about most of my experiences with sensors in this class. I used an accelerometer, which measures (as the name suggests) acceleration/the rate of movement. I wanted to use it to measure the user’s flapping, and it was perfect because the user truly had to flap to make the bird on the screen move; moving really slowly would not work. All I really needed to do was read the x, y, and z coordinates from the accelerometer and measure the difference between that reading and the previous one (to see if the user was flapping). I found an equation to calculate this online: sqrt[(x2 – x1)^2 + (y2 – y1)^2 + (z2 – z1)^2].

In regard to how I used the accelerometer exactly, what I did was this: I soldered each of the two accelerometers to six-foot-long cables, which were plugged into the Arduino/bus board. (Each accelerometer needed six cables, so that means I soldered twelve six-foot cables in total.) Then I hid the bus board and Arduino in a pretty box, covered the cables in pretty, flowery tape, zip-tied the ends of the accelerometers to a pair of black gloves, and hot-glued tons of pink feathers onto the gloves. Overall, I am very happy with how it all looked: the green/pink theme of the box/cables/gloves fit perfectly with the green/pink theme on the screen.
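That equation, written as a small function (the threshold value below is an assumption for illustration, not the one I tuned for the game):

```java
// Flap detection: the magnitude of the change between two consecutive
// accelerometer readings, per the formula sqrt(dx^2 + dy^2 + dz^2).
public class FlapDetector {
    static double delta(double x1, double y1, double z1,
                        double x2, double y2, double z2) {
        double dx = x2 - x1, dy = y2 - y1, dz = z2 - z1;
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    // A flap registers only when the change is large enough --
    // slow movement stays below the threshold.
    static boolean isFlap(double delta, double threshold) {
        return delta > threshold;
    }
}
```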

[Below are pictures of the end product. Note how in the picture on the left, Craig’s pink wig makes an appearance.]

The IM Show:

I was really happy with how the game was received at the IM show — especially when a girl came to play the game, lost the first time, asked to play again, then won, and then jumped up and down out of excitement, took a picture of the winning screen, and then gave me a hug. 😛 While not everybody was as excited as she was, a lot of people found the game really cute, rather fun (particularly because of the wings/gloves), and overall a nice idea. I do, however, think that it might not have been the best suited interaction for the show, since the game is rather long if it is played all the way through, and most people want to only spend a short time at each interaction so that they can get through all of them. This means that a lot of people would stop playing part way through the game. In the end, still, most people seemed to really like it, and I was incredibly happy to share the game with others.

[Here are two extra photos, one of Luize posing after playing the game and winning the high score, and another of the feathers that were sacrificed during the IM show.]

As ubiquitous as technology is nowadays, it remains a fairly opaque aspect of my life. I often feel I do not have the tools to understand how different systems work, and end up relying on others to tell me what things do, or what I can do with them. My interest in Interactive Media stemmed from a desire to further understand technology both in technical terms and in terms of its sociopolitical dimension. I am not able to put forth a satisfactory definition of computing—I can only say, for example, that it largely entails the use of recurring mathematical operations to perform a host of different tasks. However, I think this has been enough to have a profound impact on my life. I have been able to demystify a lot of the technology around me, and grow more aware of how it operates and how much it can actually do. I do not see technology as the solution to all the world’s problems, and I also do not think technology by itself has the ability to impact our lives negatively. Instead, engaging with interactive media and art through software has reaffirmed my belief that we should all strive to understand technology in order to make it work for (most of) us instead of against us. That we must fight the urge to leave it to others to deal with the technicalities, because those technicalities (and the biases that are built into them) can affect our lives in profound ways.