
Category: Unit 2.2

I just handed in my Portfolio of Evidence for this Physical Computing unit. Here is the demonstration video I made, with Betty’s hands scrolling through my prototype ↓↓↓

I used this song made by my musician friend Sima Kim, and tweaked it a little to demonstrate the type of effects I want to produce. Indeed, I’m working towards having the sound evolve with the speed of the gesture. So I still have some work to do on the MAX/MSP patch, but it should be a lot of fun. Ever since I discovered MAX/MSP, I’ve wanted to find the time to actually compose music with it. Should make for a cool summer homework!

First, though, I should spend the next few weeks finishing the object’s design, as a priority.

We had the Project Final Crit this morning, with Rania Svaronou and Riccie Janus from IBM present again. We organized it as P2P feedback, as you can see below. Pretty cool to see everyone’s projects going through their last iterations!

Here is my (5th) prototype ↓↓↓

(I wish I had taken a self-explanatory picture before I glued everything, instead of the long paragraph coming up 😅)

I made a very DIY case to keep the foil secured: a plastic sheet for the touch surface and a colored paper sheet to hide it. I’m considering simply using a colored plastic sheet for the last version, as I don’t need to see behind the scenes that much anymore.

Compared to the 4th prototype, I didn’t use copper tape but switched back to foil to get bigger strips: I cut them around 3 cm wide, compared to 5 mm for the tape. I also left around 3 cm of space between each strip, whereas the copper strips were placed too close together and confused the MPR121. And I used only 3 strips instead of the 6 I previously had – I think that’s plenty, considering the interactions I actually need from them, which aren’t that many.

The gesture is pretty simple, as instructed: the person has to hold the first strip, then slide across the two other strips. I noticed the foil strips sometimes went “off” or got confused with one another despite the space between them, forcing me to restart the circuit. It didn’t happen before – I’m not sure whether foil is less stable than copper, or if it’s simply down to the strip format? Either way: I need bigger strips!

The technical part didn’t change much from the 4th prototype: I used the same wiring and code on the Arduino side, and I simplified the MAX/MSP patch. Note: the first strip is wired to pin 0, the second strip to pin 6, and the third to pin 11.

While pin 0 didn’t change, I used select to bang each time pin 6 is detected, plus counter to bang once it has effectively counted from 6 to 11. Both select and counter are linked to timer, to know how many milliseconds pass between the finger hitting the second strip (the first bang) and the third strip (the second bang). Then I linked that to a gain function: the faster the gesture, the lower the volume.
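To make that mapping concrete, here is a rough C++ sketch of the timing-to-volume logic. The actual version lives in the MAX/MSP patch as select, counter and timer objects feeding a gain; the millisecond bounds below are hypothetical placeholders, not my patch’s values.

```cpp
#include <algorithm>

// Map the elapsed time between the second and third strip (in ms)
// to a gain value in [0.0, 1.0]: the faster the swipe, the lower
// the volume. fastMs and slowMs are made-up tuning values.
double speedToGain(long elapsedMs) {
    const long fastMs = 100;   // at or below this, volume is muted
    const long slowMs = 1000;  // at or above this, full volume
    long clamped = std::min(std::max(elapsedMs, fastMs), slowMs);
    return static_cast<double>(clamped - fastMs) / (slowMs - fastMs);
}
```

The clamping keeps the gain in [0, 1] however fast or slow the swipe is, so an extreme gesture never produces a value the gain function can’t handle.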

(Here is Pipe interacting with my prototype, you can also see the title I’m settling on: LET’S DO IT RIGHT, LET’S DO IT SLOW.)

I wrote down the main feedback I got, plus my thoughts on it:

Audrey: “When moving fast, not aware of the reaction or the idea ‘slow down’.”

Agreed – the sound effect definitely needs to be more obvious than gain, otherwise it looks like the device is broken. I re-linked it to a feedback function right away, so it distorts the sound instead.

Rania: “Loves the idea. Thinking from a UX perspective, better to use a vertical scroll instead. Match the speed of the gesture to the content, and that’s all it needs.”

It was great to see the idea understood quickly, with straightforward advice. Plus, the vertical scroll definitely comes across as more familiar, matching the infinite scrolling we do on our social apps.

Gareth: “Loves the concept, it definitely gets through: that’s the most important part, the technical part comes later. Mentioned psychological studies on the scroll gesture, and the dissatisfaction we get from it through our never-ending feeds. Doesn’t think the scroll needs to be vertical.”

Interesting thoughts – and also related to what I’m looking at for my FMP. Maybe the gesture could work both ways, depending on how people prefer to handle the object – deciding between horizontal and vertical?

Stephanie: “Advised a strong reminder of the context of the Slow Movement – a more high-tech approach with the phone, and the use of fabrics to tone that approach down.”

I’m not into the phone direction, but I get where she was coming from, and it actually gave me an idea: maybe I can ask people to put their smartphones beside my touch pad, so that the action feels like they are substituting my device for their smartphones?

Nicolas: “Something is happening: a trust relationship with the object. The content needs an evolution now: for example, if you scroll just right to reach a good volume, the next step would be maintaining a good sound effect? The gesture is good as it is: one hand rests while holding, while the other hand scrolls. The last step is the object design – also think about where I want this object to be used. On the question of fabrics, it could be filled up with cotton and such: take inspiration from toy stores, and look up kinetic sound.”

I’m digging that “evolution” idea. It’s definitely a home object, acting as a substitute for the smartphone, as I just ideated. To be honest, I don’t think I will use any fabrics except silky ones: 1/ I want a slick touch reminiscent of the screen; 2/ I don’t want my object’s design to be playful. Since I view it as therapy for the infinite scrolling gesture – aka it won’t be a toy – my aim is definitely an adult (teenagers included) audience.

The object’s design will also definitely shape the gesture – I mentioned the wave idea to Nicolas. In my previous blog post, I mentioned that I ordered a plastic ball to prototype with its wavy shape; well, I don’t know where my package is – hence the flat prototype…

Now that I’m looking into kinetic sound, my prototyping process is taking me further into the sound part – which is why I think I might drop the light part; I don’t think it would add much to the interaction. I will still consider it for my final sketches, more as a bonus aesthetic touch. I’m still thinking about those flashes you see when you close your eyes after looking at lights. Well, it will depend on the shape, but the object would need to be transparent at least where the light comes through, hiding the strips would be extra work – and I’d have to make sure the MPR121 would still be reliable at the distance I’d need.

Though I got my concept across – which I’m feeling pretty relieved about – I still have a few mostly technical steps left: the object design, and the sound part of the MAX/MSP patch.

It might be better to hand in the sketches for the PE, and aim for an actual delivery with Ars Electronica as the objective (I didn’t mention it before, but the class is going to Ars this September, and I’m bringing this Social Things project in my suitcase).

Here is the 4th prototype, where things are finally starting to come together ↓↓↓

Wiring: I wired both the MPR121 and the RGB LED to a prototype shield plus a small breadboard, to minimize the size of the circuit. I wired the RGB LED as usual – look at my previous post and/or directly at this tutorial. I wired the MPR121 as instructed on Sparkfun, to 6 strips of copper tape.

(Note: the MPR121 from Sparkfun has been discontinued, but you can find the same on Adafruit)

Code: I first used the original library, but I wasn’t able to change the threshold so that the tape would still work through a plastic sheet, as you can see on top. I asked Gareth and he advised me to use Bare Conductive’s library – indeed, it was pretty easy to change the values there. Here is the code with my values, plus the RGB LED part implemented ↓↓↓

Time to re-create the scrolling gesture: the thumb has to hold the first tape while the other fingers scroll down across the rest of the tapes. This one-hand gesture is pretty similar to what you do on your trackpad or your smartphone.

I spent quite some time in MAX/MSP figuring out how to make sure the fingers pass through all the tapes – you could cheat by holding a single tape and it would still work. After trying out select, clocker and such, I used counter, and it only counts after the full action, from the moment I hit the first tape to the last one! Still, I need to figure out how to measure the speed of that action.

I asked Nicolas for advice and we tried some things, such as thresh or select combined with timer. It didn’t quite work the way I wanted – aka no cheating allowed – but it gave me insight into how I can make it work for the next prototype. Hint: I’m thinking of using counter with timer.
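That counter-with-timer idea can be sketched as a tiny state machine. This is a host-side C++ model of the behavior I’m after – the real thing will live in the MAX/MSP patch: the swipe only counts when the pads fire in strict order, and the duration of a completed gesture is recorded.

```cpp
// Models the "no cheating" rule: the swipe only completes if the
// pads fire in strict order (0 -> 1 -> 2), and the detector reports
// how long the full gesture took in milliseconds.
struct ScrollDetector {
    int nextPad = 0;      // which pad we expect to be touched next
    long startMs = 0;     // timestamp of the first pad touch
    long elapsedMs = -1;  // duration of the last completed gesture

    // Call with the index of the pad that was just touched and the
    // current time in ms. Returns true when a full, in-order swipe
    // has just been completed.
    bool onTouch(int pad, long nowMs) {
        if (pad != nextPad) {   // out of order: reset, no cheating
            nextPad = 0;
            return false;
        }
        if (pad == 0) startMs = nowMs;
        if (pad == 2) {         // last pad: gesture complete
            elapsedMs = nowMs - startMs;
            nextPad = 0;
            return true;
        }
        nextPad++;
        return false;
    }
};
```

Holding a single tape, or jumping straight to the last one, never triggers the completion – which is exactly the cheat I want to rule out.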

For now, the only working part is using select on the first tape to activate the sound (I chose an ambient track made by my friend Sima Kim in his debut days), through a comb function whose effects I intend to make full use of.

It’s a bit messy, but here is the actual MAX/MSP patch ↓↓↓

Also, this is the MAX30100, the heart rate sensor I intended to use for the other, resting hand ↓↓↓

I decided not to use it anymore – not because it didn’t work, for the record… After discussing it with Nicolas and saying my aim was to parody tracking technology, he said that my point wouldn’t come across, as it would only be perceived as technologically intrusive – in fact, exactly why I wanted to get away from tracking data in the first place. Well, I tried; it’s out for good now!

The light also doesn’t have any real use for now. I’m still struggling to sketch an object design that makes the most of it while hiding all the wires. I’m thinking of a wavy kind of shape, though. I ordered this plastic ball to use one half of it to cover the LED and envision the wavy part, since I can mock up the flat part – next prototype, if it gets safely delivered.

Reminder: my prototyping process has led me down an unexpected path – that of re-creating a touch pad. I did think of simply buying one, but I wanted to get away from both the aesthetics and the shape, which are pretty much determined by manufacturing and standardization – starting with the Apollo Computer in 1982.

Be it touch pads or touch screens – they both work with capacitive sensing.

It is very easy to get started with this kind of prototype on an Arduino. Below, I used foil, but you can use pretty much any conductive material.

Wiring: I connected one wire and one 1 MΩ resistor to digital pins 3 and 4 respectively, directly on the Arduino. Both were attached to crocodile clips holding a sheet of foil. I used a 1 MΩ resistor so that it only responds to direct touch, but you can use a higher-value resistor and it will respond from a few inches away.

Code: With it, I used the Capacitive Sensing library, which is great to get things working quickly ↓↓↓

Using the Serial Monitor, I could see the numbers going up when I rested my finger on it, when I held it long enough, and when I pressed with more than one finger. It is very straightforward, but it seems to need stable conditions to work. There is copper tape in the studio I can use instead of foil – it seems to be better in terms of stability.

It also might be better to switch to the MPR121, as I would be able to use separate strips of foil in an easy and stable way. I didn’t use it before because I thought the MPR121 only had an on/off state, but according to Nicolas I just need to time those states.

I also started using MAX/MSP with Arduino; here is a simple patch visualizing the data ↓↓↓

I’m actually thinking of using MAX/MSP to trigger sounds. More on that in my next prototype!

OK – I have to admit I spent the last three weeks a bit lost. Despite trying different kinds of sensors, none of them did the job for me: it was about that precise interaction I wanted to pin down. After going through a slump I named inert-eraction, I had this “Eureka!” moment I shall still doubt every 15 minutes during any future brainstorm. Nevertheless, I found the missing element I was looking for in my meditation device ↓↓↓

Yes, the scroll gesture – be it with the mouse, the touch pad, or the smartphone. I’m not sure exactly how I ideated it – maybe when I was scrolling down myself and thought “I’m not moving much, am I?” – but I already had precise thoughts: I wanted an actual pause of movement without any physical sensors. The digital gestures match that! Why not use this gesture as the main interaction of my object?

Plus, the way we relentlessly stare at screens almost makes me think of a trance. For example, how many of us have binge-watched a series without noticing the hours passing by? Surfing the Internet might then be a kind of non-conscious meditative state. The relationship with the perception of time is very interesting here, and the Slow Movement indeed encourages time mindfulness – I’m taking back the time lost in scroll-trance by scroll-meditating. Here, it’s all about the means!

Thus, I need a touch pad to scroll on. When I explained my concept to Nicolas in this morning’s tutorial, he encouraged me to create a low-tech touch pad. He mentioned that I could use conductive fabric, but advised me to first try a DIY version using foil.

We talked about my thoughts on how these gestures relate to types of cognitive and psychological responses, and how my device could end up creating another type of gesture. So far, I’m only aware of the research project and book Curious Rituals by Nicolas Nova, Katherine Miyake, Nancy Kwon and Walton Chiu.

I found books that are more or less related, though: The Best Interface is No Interface by Golden Krishna, and Irresistible by Adam Alter (this article from the Guardian is a good review of it). Each gives a different insight into the gestures we use with our digital devices: the first is about getting past the interface by designing better interactions, and the latter about how the interface gets us addicted. Exactly the contrary of what I want to accomplish here, by taking the gesture down to another type of interaction.

Although I now know what I want my device to be, I still don’t know its output: sound, light, or both? I’ve mentioned before that my work is pretty influenced by James Turrell, and my wish to create an immersive experience using light.

I wired it to the Arduino and just read its values with the Serial Monitor – nothing special, but here it is in case someone wants the code (I hope the picture is clear enough for the wiring part; I don’t have any diagrams to share, sorry!) ↓↓↓

While I surely didn’t make full use of them, the results I got weren’t satisfying. In fact, I don’t enjoy handling these – even though I did say I wanted the user to fully (inter)act with the sensors, aka with gestures. Both can actually be manipulated with small moves, but in-air gestures aren’t something I envision as specifically meditative. It would have been great if my intention was to make a wearable device, but my idea of meditation actually equals a pause of movement – when you immerse yourself in, and face, your mind.

I’m still lacking that specific interaction to go along with my meditation concept, and I have difficulty seeing its output: I’m hesitating between a screen with generated visuals, or lights.

The advantage of this class’s fast-paced schedule, despite the irony of my Slow theme, is that even though I feel – and might well be right – that I’m ideating without any clear plan, I still have to get something out there.

If I don’t want to be literal – aka not doing anything with the phone or the notion of time – how could my object possibly refer to the Slow Movement? Isn’t that why the examples I found are so literal?

No more doubts, here goes my first prototype! Led by my contextual and knowledge research, I finally opted for a meditation device. The value of mindfulness advocated by the Slow Movement isn’t far from spirituality, after all; I did find in my ethnographic research that some of my interviewees had activities such as meditation, while others were into walking or biking. It varies, but you get the idea: any mind-freeing activity.

For that, I assembled a DIY GSR sensor – aka taping down two wires to foil using Velcro – reacting through an LED with three states: NO light when it isn’t used, GREEN when the person’s stress level is detected as normal, and RED when the person is presumably stressed. It certainly won’t work as such, but I tried to convey my main idea at this point: an object gaining its meaningfulness solely from the input of its user, and otherwise utterly useless.

Wiring: First, I followed Adafruit’s RGB LED tutorial. Then, for the GSR part, I connected one of its two wires to ground through the breadboard, and the other one to an analog pin through a 330 Ω resistor on the breadboard.

Code: It’s super simple – read the pin’s output and set the LED colors with if statements ↓↓↓
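For reference, the three-state logic boils down to something like this – a plain C++ sketch of it. The cutoff values are hypothetical placeholders: a real GSR reading would need calibration per person.

```cpp
// Map a raw analog reading (0-1023 on an Arduino analog pin) from
// the DIY GSR sensor to one of the three LED states described above.
enum class LedState { None, Green, Red };

LedState classify(int reading) {
    if (reading < 50) return LedState::None;    // nobody touching
    if (reading < 600) return LedState::Green;  // normal stress level
    return LedState::Red;                       // presumably stressed
}
```

On the Arduino side, each state would then just drive the RGB LED pins accordingly.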

I don’t think my object should be thought of in terms of utility, but I feel it lacks both character and content in its meditative aspect. You don’t need such a device to meditate, after all.

Well, I did find a project by KP Kaiser that makes it somewhat useful; read his blog posts here and there. He used other sensors besides GSR, such as heart rate and skin temperature, making it actually track your meditation level, and linked it to an app. It’s pretty high-tech; and for these sensors to work fully, it’s better for the user to actually wear them.

From now on, I’m thinking I might take a low-tech approach in order to focus solely on the (inter)action. I don’t want the user to wear any sensors but to actually use them. I might get an idea while iterating on the prototype, so I just borrowed other sensors to try them out – let’s see how it goes.

We had to give a first presentation in the presence of alumna Rania Svaronou and her colleague Riccie Janus, both working at IBM. In this 5-minute presentation, I presented the Slow Movement and the main sub-movement I’m interested in, Slow Design, with its main principles: crafting meaningful engagement to bring sustainability.

Plus, the video made for the paper Slow Design for Meaningful Interactions (2013) by Barbara Grosse-Hering, Jon Mason, Dzmitry Aliakseyeu and Conny Bakker is pretty good for rapidly understanding what lies behind Slow Design.

Particularly the last part: “It’s about slowing interaction down at the right moment!”, which reminds me of what Carl Honoré – who, as a reminder, is the one who popularized the Slow Movement – wrote: “The Slow Movement is not about doing everything at a snail’s pace. […] On the contrary, the movement is made up of people like you and me, people who want to live better in a fast-paced, modern world. That is why the Slow philosophy can be summed up in a single word: balance. Be fast when it makes sense to be fast, and be slow when slowness is called for. Seek to live at what musicians call the tempo giusto – the right speed.”

One last reference before I go into my presentation: this talk given by William Odom shows various examples of what Slow Interaction Design is:

I also presented two decisions, despite having a prototyping idea: I don’t want to do anything with an app or a wearable device, as I mentioned there are already good options out there, plus I actually want to get off the screen to craft a tangible object. For example, I’m pretty fond of these vintage calendars – I own several of them back at my parents’ place. They act like they should – that is, telling you what day it is – and I also bizarrely enjoy the fact that you have to turn the handles to literally switch to another day.

Hence, there are interactions and gestures I miss in the digitalized world, and I’m trying to ideate to fill those gaps. Still, I’m thinking it might be too literal to even refer to the notion of time in my object. As I go on with my contextual research, I mostly find projects focused on the notion of time. Here are examples from the Slow Tech exhibit curated by Wallpaper and Protein at the London Design Week. Nicolas also referred me to the Slow Watch project, and Betty linked me to the pretty similar Hidden Time Watch project. Pure counter-reaction: I want time out of my object.

One piece of feedback from the presentation I particularly retain is: “It’s not about the technology coming to us, but us going to technology”, a pretty good reminder of my previous references. Quoting Nina Simon in The Participatory Museum – “Imagine looking at an object not for its artistic or historical significance but for its ability to spark conversation” – I’m thinking my object might actually go down that path.

After the Collaborative Unit, time to dig into the Physical Computing Unit with the Social Things brief: “Using the research that you have done on a tribe, you will start designing a tangible or a wearable object for or with that tribe. The goal is to create a meaningful object using physical computing as an agency. By meaningful, we mean that it will communicate the value(s) of your tribe.”

Just to recap, my research was on the Slow Movement. I had already gathered ideas during the last part of my ethnographic research, where I asked my interviewees the following: “Do you think it is possible to find a balance with technology rather than having disconnected moments?” One answer from Trine Grönlund, who is behind the Go Slow initiative, particularly stuck in my mind: “Absolutely. I think the answer is very much in technology and particular in the “interface”. Today everything builds upon distraction – you are looking for one thing but you are constantly being lured away to other things. Imagine the day we decide to build sites/apps/games in a way that spreads compassion. Imagine if the more time we spend in front of the screen the more compassionate and mindful we become.”

If I highlighted this last sentence, it’s because I immediately had a flashback to one of my past projects: YOU HAVE TO FACE ME, which I produced last year for the Festival Les Chambres Numériques, where I asked the audience to literally face the screen in order to trigger its contents. Well, I didn’t know back then that I was crafting an interaction I now classify as slow! And even though I still don’t know what kind of object I want to make, I’m pretty interested in the same contrast I explored back then: the effortless technology versus the mindful effort of the audience. Where is the balance? Wouldn’t an interaction that requires time end up frustrating an audience in search of efficiency? Then again, isn’t it about the notion of slow being adjusted to the right moment to intervene?

There are also other keywords I retain here: interface – distraction – compassion. There are definitely links to craft between these, as I don’t believe digital detox is a solution – nor does it fall within the values of the Slow Movement. Trine also referred me to a talk given by Rohan Gunatillake at Wisdom 2 Europe – here is his own summary, wisely named Redesigning not Retreating. He explains his vision of balancing mindfulness with technology, which he puts into practice with his studio Mindfulness Everywhere, creator of meditative apps such as buddhify and Sleepfulness. A pretty good starting point for my contextual research!

In the same vein, I also found the Time Well Spent initiative – its founder Tristan Harris gave a pretty good talk that I recommend watching to understand his vision of designing technology that values our time, versus the attention-seeking character of technology.

For the homework, I had to use something I hadn’t tried before, so I chose the servo motor. Setting it up was pretty easy following this tutorial, and the servo swept back and forth without any trouble.

I added a potentiometer to control the servo motor, and it still works pretty well. On top of that, I added an LED that is also controlled by the potentiometer, to make it fade. However, I feel that the inputs of the two components are facing off and somehow cancel the fade effect…? The potentiometer still lights up the LED at the end of its turn, though.
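Both outputs are scaled from the same potentiometer reading (0–1023 on an Arduino analog pin): the servo expects an angle in 0–180, and the LED a PWM value in 0–255. This re-implements Arduino’s map() formula on the host, just to show the two scalings side by side:

```cpp
// Same formula as Arduino's map(): linear rescaling with integer math.
long remap(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// One potentiometer reading drives both outputs in their own ranges.
long potToServoAngle(long pot) { return remap(pot, 0, 1023, 0, 180); }
long potToLedPwm(long pot)     { return remap(pot, 0, 1023, 0, 255); }
```

On the Arduino, the angle would go to Servo::write() and the PWM value to analogWrite() on the LED pin.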

On another note, this module seems pretty similar to the stepper motor, but it is not as precise — the stepper motor is defined by steps and you can choose the angle of each step, while the servo motor turns in one go. Hence it seems to offer fewer possibilities, but maybe I just wasn’t able to make full use of it.