Saturday, January 23, 2010

Anthony Atala describes his research on building and regenerating organs for use in transplants.

This TED talk is powerful, fascinating, and delivered with great enthusiasm.
Here is the original link. http://www.ted.com/talks/anthony_atala_growing_organs_engineering_tissue.html

These topics, and the current state of research in these fields, sound like science fiction, but they are real.

People will be able to deposit healthy cells, taken as biopsies from important organs, in cell banks, to be preserved for possible later use.
When needed, the saved cells could be used to rebuild failing organs for transplantation.

A Windows XP PC with a webcam. I tried on Windows Vista but without success; Windows 7 probably does not work either.

An Arduino board.

Two servos.

Some wires

One toy laser (optional, for a more accurate calibration).

Building:
Building is exactly the same as in the LaserGun project.
In place of the laser you can put a toy figure's face or a puppet face, so that when the servos move you will see the puppet face turning and tilting.

The Software
The Arduino board software is exactly the same as in the LaserGun project.

PC software requires some additional components, but starts from the same base.
I followed the instructions I found here http://ubaa.net/shared/processing/opencv/ which is the main site for OpenCV integration with Processing.

OpenCV library: download it, and be sure NOT to download the wrong version; we need version 1.0. I suggest installing it in c:\opencv10 or another simple path without spaces in the name. I do not recommend installing it in the default location, which probably depends on your Windows system language and may contain spaces (as in "Program Files"). I also suggest using an all-lowercase name. During installation, answer yes when prompted to alter the PATH by adding the c:\opencv10\bin folder.
After installation, I suggest editing the system PATH so that this folder comes at the beginning of the path, as shown in this picture (you can get there by right-clicking on My Computer, then Properties, then the Advanced tab, then the Environment Variables button).

A reboot is not needed.

OpenCV Processing library: this interfaces your Processing environment with OpenCV. You can download it from here, expand it, and put its contents into the "libraries" folder in your Processing installation root. In my case I put it in c:\inst\processing-1.0.9\processing-1.0.9\libraries

Optionally, you can also install the Processing OpenCV examples. Get them from http://ubaa.net/shared/processing/opencv/download/opencv_examples.zip (5 MB)
and copy the opencv-examples folder accordingly into C:\inst\processing-1.0.9\processing-1.0.9\examples

Checking the environment installation
When starting Processing you should be able to open and run the example code. If you do not see the camera feed, it is probably because your OS is not XP (I had a pitch-black camera feed on my Vista laptop).
For the face detection demos to work, you need to copy the proper Haar recognizer data file into the sketch folder. Get the data files from C:\openCV10\data\haarcascades

void setup() {
  println(Arduino.list());
  // IMPORTANT! This code will not work if you do not write the correct
  // id of the serial interface in the next line (in my case 2)
  arduino = new Arduino(this, Arduino.list()[2], 57600);
  arduino.analogWrite(9, initialservox);
  arduino.analogWrite(10, initialservoy);
  arduino.analogWrite(11, laseroff); // laser off
  size(maxx, maxy);
}

Calibration
As in the LaserGun project, it is important to calibrate the system properly so that it works decently.
To minimize errors, try to keep the position of the laser and the position of the webcam as close as possible.
Calibration is performed by pressing the right mouse button, then right-clicking on the screen where you see the pointer. Repeat for the two points requested.

Multiple faces:
Currently, the OpenCV library can detect more than one face in the scene, but the detected faces are not always reported in the same order. If you present two faces to the current system, it gets confused; more accurate movement detection and tracking over time would be needed.
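One simple way to get the tracking over time that is missing here would be to match each new detection to the nearest face centroid from the previous frame. This is only an illustrative sketch, not part of the project code; the function name and the distance threshold are my own assumptions.

```python
def match_faces(prev_centroids, new_centroids, max_dist=80.0):
    """Greedy nearest-centroid matching: for each previously seen face,
    return the index of the closest new detection (or None if no new
    detection is within max_dist pixels)."""
    assignments = []
    used = set()
    for px, py in prev_centroids:
        best, best_d = None, max_dist
        for j, (nx, ny) in enumerate(new_centroids):
            if j in used:
                continue
            d = ((px - nx) ** 2 + (py - ny) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
        assignments.append(best)
    return assignments
```

With two faces in the scene, this keeps each servo target locked to "its" face even when the detector reports the faces in a different order from one frame to the next.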

Caution
This code could be dangerous if used improperly. Never play with the laser by pointing it at people's eyes or faces.
Be smart, and always think before doing.

Friday, January 15, 2010

See this nice video by brusspup on youtube to quickly understand the concept.

I wrote a software tool to produce the picture and the related mask needed to see the animation.
The code is written in Processing.

My software basically takes as input a number of images that are to be considered the frames of the animation to be built. You can change a parameter to define how many images you want to use. Typically you can go with 4, and 6 is probably the maximum; otherwise the animation is too dark, because the final effect reduces the image brightness considerably.

If you use 4 frames, only 1/4 of the pixel columns are visible at a given time, reducing overall brightness to 25% of the original.
If you use 6 frames, the final brightness drops to about 17% of the original.
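These percentages follow directly from the duty cycle of the mask: with n frames, only 1 column out of every n is transparent, so brightness is 1/n. A quick check of the arithmetic:

```python
def visible_brightness(frames):
    """With an n-frame barrier-grid animation, only 1 of every n pixel
    columns shows through the mask, so brightness drops to 1/n."""
    return 1.0 / frames

print(round(visible_brightness(4) * 100))  # 25 (percent)
print(round(visible_brightness(6) * 100))  # 17 (percent)
```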

As source pictures, it is best to use high-contrast images, for example dark shapes on a white background. I tried with photos taken from my webcam, but the results were quite poor.

Some simple parametrization is needed in the source to adapt it to your input image sequence and output resolution.

Printing the mask transparency and the multi-frame picture
Another tricky problem can be printing the mask bitmap. I used a standard laser printer and printed on A4-sized transparencies. Printers usually perform dithering and anti-aliasing, introducing their own "improvements" to the printed data, but for this print job we do not want any halftoning.

I performed some tests and was not satisfied with any of the normal printing results from standard applications. I resorted to Adobe Photoshop and performed image scaling by multiplying the original size of the image by an integer (I multiplied my original size by 3 and kept the proportions).
It is critical that, when scaling, you multiply the image size by an integer, so that even spacing between the resulting pixel columns is preserved (this way the scaling algorithm does not need to introduce new columns via interpolation).
In the resample options of the Image/Image-Size menu, I then selected "Nearest Neighbor". This option produces no dithering or halftoning when resizing.
If someone knows how to obtain the same result without using Photoshop, please let me know.
(july 2010 note: paint.net has a similar option which is working fine)
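To see why nearest-neighbor resampling by an integer factor is safe here, note that it simply repeats each source pixel, so no in-between gray values are invented. A pure-Python illustration on a single row of pixels (this is what the operation does, not Photoshop's actual code):

```python
def upscale_row(row, factor):
    """Nearest-neighbor upscaling by an integer factor: each source
    pixel is repeated `factor` times, so column boundaries stay exact
    and no new values are introduced by interpolation."""
    out = []
    for px in row:
        out.extend([px] * factor)
    return out

# One mask row for 4 frames: 1 transparent (1) and 3 opaque (0) columns,
# scaled by 3 -- still only pure black and white, no halftone pixels.
print(upscale_row([1, 0, 0, 0], 3))  # [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```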

Of course, you need to scale the picture following exactly the same rules. The exact size proportion between pixel column widths must be preserved and must be the same in the mask and in the image.

And here is a corresponding zoomed-in detail of individual pixels of the mask for 6 animation frames. You can see 1 transparent column and 5 opaque columns.
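The construction described above can be sketched in a few lines: column j of the composite picture is taken from frame j mod n, and the mask is transparent only at columns whose index mod n equals the current phase. This is an illustrative pure-Python sketch working on lists of columns (the actual tool is a Processing sketch operating on pixels):

```python
def build_composite(frames):
    """Interleave n frames column by column: output column j comes
    from frame j % n. Each frame is a list of pixel columns."""
    n = len(frames)
    width = len(frames[0])
    return [frames[j % n][j] for j in range(width)]

def build_mask(width, n, phase=0):
    """Mask with 1 transparent column ('T') out of every n ('O' = opaque).
    Sliding the mask sideways by one column changes the phase, i.e.
    which of the n frames is visible."""
    return ['T' if j % n == phase else 'O' for j in range(width)]
```

For example, with 3 frames the mask at phase 0 exposes columns 0, 3, 6..., all of which come from frame 0 of the composite; sliding the mask one column to the right exposes frame 1, and so on, producing the animation.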

Code
To use this code you need a Processing development environment. You can download and install it from the Processing web site; it is open source and multiplatform (Windows/Linux/Mac).
Then create a new sketch, paste the following code, and save the new project.
You then have to put the pictures you want to create the animation from into the sketch folder, naming each file with a name ending in a progressive digit: 0, 1, 2, 3... See the source code for the details.

NOTE on copyright (added on July 27, 2010, after receiving a request from the trademark owner, resulting in the removal of every occurrence of the words "scanimation" and "scanimations" in relation to my work):

Scanimation® is a federally registered trademark owned by Eye Think, Inc. and bearing U.S. Registration No. 2,614,549. The mark was federally registered in the United States on September 3, 2002.
http://www.eyethinkinc.com/

Monday, January 11, 2010

If you are a Maker (it seems it is not politically correct to say Hacker), or a DIY fan, here is a project that I developed over the weekend, playing with an Arduino board and Processing.

I built this hack with my kids: Luca and Giulio, and we had lots of fun.

Our Laser Gun works using two servos connected to an Arduino board; the Arduino is connected to the PC via USB. The whole thing is controlled with mouse movements on the PC, over a window showing a picture taken from the gun's position. The laser normally works in a low-intensity "pointing mode", and becomes much more powerful when fired.
The Arduino microcontroller and the PC communicate via a serial USB interface. A custom Processing program allows mouse interaction and provides guidance.

x-servo (the lower one, moving its head in the horizontal left-right plane): its yellow control wire is connected to Arduino digital pin 9

y-servo (the upper one, moving its head in the vertical top-down plane): its yellow control wire is connected to Arduino digital pin 10

laser pointer negative is connected to ground

laser pointer positive is connected to Arduino digital pin 11 (the two laser intensities are managed by controlling it, improperly, like a servo, but it works)

Software:
The project requires two pieces of software: one written in the Arduino language (based on Wiring), which runs on the ATmega328 microcontroller of the Arduino 2009 board, and a Processing sketch (Processing programs are called sketches), which runs on the PC.
The two programs communicate through a standard serial protocol, called Firmata, which is implemented in libraries on both the Arduino side and the PC Processing side.
I am currently using Microsoft Windows Vista Ultimate 64-bit. This is not the best platform for development, and I do not recommend such a setup; a better choice would be Microsoft XP 32-bit.
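For the curious, the Firmata messages that carry the servo and laser values over the serial link are compact three-byte frames. A rough sketch of the encoding, following the standard Firmata ANALOG_MESSAGE framing (the function name is mine):

```python
def encode_analog_message(pin, value):
    """Standard Firmata ANALOG_MESSAGE frame: a command byte 0xE0 ORed
    with the pin number, then the 14-bit value split into two 7-bit
    bytes, least significant first."""
    return bytes([0xE0 | (pin & 0x0F), value & 0x7F, (value >> 7) & 0x7F])

# e.g. the equivalent of "arduino.analogWrite(9, 90)" on the wire:
print(encode_analog_message(9, 90).hex())  # e95a00
```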

On the PC, I installed the standard Arduino development environment (currently I run arduino-0017), with no additions. This allows you to code, edit, compile, download to the Arduino board, and test programs written in Wiring.
The Arduino development environment is based on a version of Processing.

On the PC I also installed the latest Processing environment (1.0.9 in my case). An add-on library is needed to support the Firmata serial protocol. This library zip file ( http://www.arduino.cc/playground/uploads/Interfacing/processing-arduino-0017.zip ) has to be expanded, and the three included directories (examples, libraries, src) must be copied to the following folder: \libraries\arduino

Here is the software to be downloaded to the Arduino controller. It is the standard Firmata "servo" template (you can find it among the included examples), with a very simple addition for managing the laser connected on pin 11.

Note: unfortunately, it seems that Blogger does not like the "<" and ">" characters inside the listings.
The first two lines of the listing are actually #include <Firmata.h> and #include <Servo.h>.
Additionally, I uploaded the code also on this posterous entry, and on these two scribd entries: 1 and 2.

I am using a small font, so as not to cause unwanted line breaks.
The Processing code will have to be adapted to your environment: define the correct identifier for the USB serial interface (see the beginning of the setup procedure), and set the proper name of the picture you want to display on screen while operating the laser (also in the setup procedure). I suggest using a picture resized to 800x600, taken from the place where the laser will be put, looking towards the target.
The picture file has to be put in the Processing sketch directory in which you save the project. To open that directory, select the Sketch menu in the Processing environment, then "Show Sketch Folder".

void setup() {
  println(Arduino.list());
  // IMPORTANT! This code will not work if you do not write the correct
  // id of the serial interface in the next line (in my case 3)
  arduino = new Arduino(this, Arduino.list()[3], 57600);
  arduino.analogWrite(9, initialservox);
  arduino.analogWrite(10, initialservoy);
  arduino.analogWrite(11, laseroff); // laser off
  // put in the next line the file name of your ambient picture
  // taken placing the camera in the place where the lasergun is
  // so to have a correct perspective on the screen

Features:
Once you have loaded the software onto the Arduino board, it will run automatically at boot, so you will not need to load it again unless you decide to change it.
Operating the Laser Gun just requires connecting the USB cable to the PC and launching the Processing environment. The Firmata library is designed to "bring outside" of the board all of its features, allowing complete control from the PC.

Once initialization has completed, you will see a window with the picture you placed in the sketch folder. If you move the mouse, the laser gun should follow your mouse movements.
When you press the left mouse button, the laser intensity will grow, to represent "fire". I will probably add some audio features, because the thing is too silent now :-).

You will soon notice a mismatch between the point you aim at on the screen and the actual position of the laser. This happens because you need to calibrate the system.
Calibrating means that the servo min and max boundaries have to be redefined to match the space in which you use the LaserGun.

Calibration is performed by clicking the right button, and requires you to point with the mouse at the place on the screen corresponding to the actual spot where you see the laser in your room. When positioned, right-click again, and repeat for a second point that the system will ask for. After this procedure has been completed, you should have a (reasonable) correspondence between what you aim at and what you kill :-)
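The two calibration points are enough to define a linear mapping from screen coordinates to servo values on each axis. A sketch of the idea (the names are mine and the actual sketch code differs, but the math is the same):

```python
def make_axis_map(screen_a, servo_a, screen_b, servo_b):
    """Given two (screen coordinate -> servo value) reference pairs,
    return a function that linearly interpolates between them. One
    such map is needed for x (pan) and one for y (tilt)."""
    scale = (servo_b - servo_a) / float(screen_b - screen_a)
    def to_servo(screen):
        return servo_a + (screen - screen_a) * scale
    return to_servo

# Example: screen x 100 hits servo value 60, screen x 700 hits 120.
x_map = make_axis_map(100, 60, 700, 120)
print(round(x_map(400), 3))  # 90.0 (halfway on screen -> halfway between bounds)
```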

Safety precautions
Be careful not to point the laser at the eyes of people or animals.
If you use more powerful servos, or a more powerful laser, it is better to power the system via an external power supply, and not through USB.

Sunday, January 3, 2010

What is the need for local country politics in a completely connected world?

We really need to rethink politics, countries, borders, and social differentiation.

Current trends in online interactions are gradually and progressively changing the traditional sense of nation.
People influence each other through new communication systems and tools.

If we examine all the usual bonds keeping a country together, we see that all of them are gradually loosening.

Religion: It is gradually reducing its importance. Current religious terrorism is helping a lot in accelerating the process.

Culture: Younger people do not perceive strong local cultural ties. Dialects are gradually disappearing. Most art crosses borders.

Language: De facto net language is English. Period.

History: Historical reasons are less perceived in a globally accelerating present. Young people are more interested in the future. (Are they to blame?)

Geography and Borders: Easy travel is gradually reducing geography's role as a fence. Besides, many communities do not need to be localized in a physical place to prosper and evolve.

Currency: Money transactions will shift to "plastic money", i.e. credit cards, or some other currency not bound to a local political environment (Linden dollars? whuffies? PayPal or eBay credits?). Strangely, efforts to build an independent "internet currency" have not succeeded, probably because bank lobbies were unable to find a suitable agreement.

So, the traditional "we are together" because "we belong" to the "same nation" is not going to work anymore.
Our identities are being revealed on the net, and we allow it without complaint. We share personal information more easily online than with a policeman.

Along with the evaporation of borders, our identities are progressively mixing and melting, and social media are becoming the main drivers of education and community building.

We quickly need to evolve the traditional meaning of politics.
We need young politicians, and newer ideas for an up-to-date political science.

Consensus will no longer be built with broadcasting, but by leveraging active online communities and promoting autonomous critical thinking.

Feedback cycles will be much faster. Traditional elections are slow and expensive.

Saturday, January 2, 2010

I was thinking about a solar photovoltaic array, in which the panels must be kept oriented towards the sun to maximize the collected energy. I thought about possible sun-following rotating designs; then I considered sunflowers, which do exactly the same thing.

Many plants actually turn their leaves to the sun to maximize photosynthetic reactions.

So maybe it could be possible to develop a genetically engineered plant that could be connected to the electrical grid through a ground electrode in the roots, and with an aerial "electrical vine" entwined around a metal wire that could also sustain the plant itself.

The underground electrode could be grown by the plant itself towards a specific buried substance that attracts the electric roots.

Probably there are many plant species that could be used for this.

A simpler idea is to mount light photovoltaic panels on the leaves, without covering them completely, and have the plant's movement orient them properly (this would probably be very easy to develop). We need to find the most robust sun-orienting plants.