I strongly believe in R&D and in the importance of academic research, so I’m quite happy to share a little preview of my contribution to the experimental work carried out by Parma University’s Neuroscience Department (yes, the rockstars who discovered mirror neurons!).
Here’s a short video demoing a software tool to be used in experiments on the perception of self. I’ll write more about it as the research progresses; for now, let’s just say the idea is to measure precisely how much a face needs to change before you stop recognizing it.

A few weeks ago I met a young designer interested in digitally augmented mirrors: in particular, he wanted to mess with people’s faces. Since this is the kind of stuff I have some experience with, we ordered a couple of drinks and brainstormed about how he could do this and that.
I ended up writing a little demo showing how to easily change parts of a person’s face in real time and, since I think it could be helpful for other people too, I wanted to share it and quickly explain how it works.
Basically, I track the user’s face with Jason Saragih’s FaceTracker library and create a mesh that can be overlaid on the lower part of the tracker’s face mesh; I can then use this “partial mesh” to create a UV map from the user’s mouth expression, or to blend a saved mouth expression into the live feed.

You can find the source code on my GitHub, and here’s a video showing how it works:

Yesterday Marco Tempest was on TED’s stage again, but this time he was not alone; he performed together with our new creation (creature?): EDI, the Magic Robot!

EDI is a heavily customised Baxter robot: we created a set of custom manipulators for him, a few top-secret hardware attachments (you know, for magic 😉 ), and a complete “software brain” that enables him to display a personality, to train with humans, and to learn from them.

A final note: as you can imagine, giving life to EDI was an exciting but complex job, so it’s not surprising that Marco, David (our robotics guru, borrowed directly from MIT’s Media Lab) and I were a little apprehensive about this first live performance. Given the common belief that machines are cold and feelingless, it might surprise you that EDI was anxious too: if you don’t believe me, have a look at the following behind-the-scenes video 😉

Today I was writing a function to save a specific configuration file from an OF application, and I noticed that ofSystemSaveDialog(), the function commonly used to open a save dialog, does not let you specify a default save path.

Since I wanted my files saved in a specific location, I quickly wrote a custom function that includes a path argument; it’s super easy and Mac-only (Objective-C++), but I thought someone might find it useful, so here it is:
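To give a rough idea of what such a wrapper can look like, here is a sketch built on Cocoa’s NSSavePanel. The function name and signature below are my own, modeled on OF’s ofSystemSaveDialog(); they are not necessarily the published code:

```objc
// Sketch only (Mac, Objective-C++): a save dialog with a default directory.
// ofSystemSaveDialog() takes a default file name and a message, but no path;
// this hypothetical variant adds one via NSSavePanel's directoryURL.
#import <Cocoa/Cocoa.h>
#include <string>

std::string systemSaveDialogWithPath(const std::string& defaultPath,
                                     const std::string& defaultName) {
    std::string result;
    NSSavePanel* panel = [NSSavePanel savePanel];
    [panel setDirectoryURL:
        [NSURL fileURLWithPath:
            [NSString stringWithUTF8String:defaultPath.c_str()]]];
    [panel setNameFieldStringValue:
        [NSString stringWithUTF8String:defaultName.c_str()]];
    if ([panel runModal] == NSModalResponseOK) {
        result = [[[panel URL] path] UTF8String];
    }
    return result;  // empty string if the user cancelled
}
```

Since this is Objective-C++, the file containing it needs the .mm extension so the compiler accepts the Cocoa calls.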

Recently a client bought a Kinect to use with an openFrameworks app I wrote for them; we were doing some standard depth tracking, so we expected a smooth ride, but a few seconds after the Kinect was plugged in, the application froze.
To keep it short: the Kinect model 1473 (the one you’ll find in shops these days) ships with a new firmware that auto-disconnects the camera after a few seconds, causing a freeze whenever you plug it into a computer and try to use it with libfreenect. This of course means that most creative coding toolkits are affected by the problem: I ran into it with ofxKinect, but it will also happen with the libfreenect-based CinderBlock, Processing library, etc.

Luckily Theo Watson has already come up with a solution: you can find a fixed libfreenect here or, if you’re using OF, you can update to the latest version on GitHub.
The fix also works with the Kinect for Windows and, of course, it doesn’t break compatibility with the older 1414 Kinects.
Finally, if you don’t know the model of your Kinect, this picture shows how to check it:

I know it’s been a while since my last post, but I’ve been really busy on many, many projects.
Anyway, I just wanted to quickly mention a brand-new interactive digital signage tool I developed for the folks at EasySoft: it’s essentially an augmented-reality jukebox where you load a client’s media assets (logos, video clips, …), select an interaction model (a computer vision algorithm plus a particle system style), and watch people play on your LED wall of choice.

We just had a Christmas-themed test run at Stazione Termini, Rome’s main station, and people seemed to enjoy it.