But Professor Tsukamoto of Kobe University and his team had a much wilder idea about the use of similar flexible piano devices. They came up with the idea of a wearable piano, showed it off at a wearable computing fashion show in Japan last year, and presented a paper about it at a conference last month.

[wearable piano]

One suggested next step is a wearable orchestra made up of many other wearable musical instruments. I'd think of it more as a tool for participatory improvisation jam sessions.

Pioneer has developed a "floating interface" which allows users to manipulate 3D graphics.

Using a 3D lens, the system creates a 3D image from an image displayed on a 15-inch LCD. Spatial sensors are installed around the projected image, and users can see the 3D image with the naked eye; no special glasses are needed. The sensors detect the positions of the user's fingers, and specially developed software computes and renders the images in real time. Users can thus draw in the air or manipulate windows.

For visual feedback, the system applies image processing that focuses on lighting effects such as objects' shadows and contrast, which the company says produces a stronger "psychological" 3D effect. Also, when users push a window with their fingers, the window deforms, which the company claims contributes to a "realistic user interface."
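The push-to-deform behavior can be caricatured in a few lines. This is only a toy sketch, assuming the sensors report a fingertip position in the same coordinate space as the floating image; the function name and the falloff formula are my own illustration, not Pioneer's implementation.

```python
import math

def deform_window(vertices, finger, radius=0.1, depth=0.05):
    """Push window vertices back along z when they fall within
    `radius` of the sensed fingertip (x, y) position."""
    out = []
    for (x, y, z) in vertices:
        d = math.dist((x, y), (finger[0], finger[1]))
        if d < radius:
            # push harder near the fingertip, fading to zero at the edge
            z -= depth * (1 - d / radius)
        out.append((x, y, z))
    return out

# a few window-mesh vertices; the finger hovers over the first one
flat = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.5, 0.5, 0.0)]
print(deform_window(flat, (0.0, 0.0, 0.2)))
```

A real system would run this per frame against the sensor stream and re-render the deformed mesh in the floating image.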

"fundamental principle (how it works)"

The architecture of the system is simple and would be suitable for mass production. The company claims that it is easy to create content for the system, and is considering applications for museums and events.

The floating interface will be exhibited at CEATEC JAPAN 2005, which takes place at Makuhari Messe from October 4 through 8.

Joseph Jacobson and his team at MIT have developed miniature robots that can self-assemble using parts that float randomly in their environments. The robots also know when something is amiss and can correct their own mistakes.

Sequence of self assembly by the miniature robots

The robots come in two colors, yellow (Y) and green (G), and float around on a cushion of air. Each robot is programmed to latch onto a green robot on one side and a yellow robot on the other to form 5-robot strings such as YGGYY or GYYGG.

The robots also have a built-in mechanism to correct any errors they might make. They can check the color of their neighboring blocks and will unlatch themselves if the sequence is not correct.
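The latch-then-check loop described above can be sketched as a small simulation. This is a rough caricature under my own assumptions (a single chain growing from one end, robots drawn at random), not the MIT team's control code; the target string is one of the examples from the article.

```python
import random

TARGET = "YGGYY"  # one of the 5-robot sequences mentioned above

def try_latch(chain, pool):
    """A randomly drifting robot latches onto the end of the chain."""
    chain.append(random.choice(pool))

def check_and_unlatch(chain):
    """Each robot checks its neighbor's color and unlatches itself
    if it doesn't match the programmed sequence."""
    while chain and chain[-1] != TARGET[len(chain) - 1]:
        chain.pop()  # error detected: wrong color, let go

def assemble(seed=0):
    random.seed(seed)
    pool = ["Y", "G"]  # yellow and green robots floating on air
    chain = []
    while len(chain) < len(TARGET):
        try_latch(chain, pool)
        check_and_unlatch(chain)
    return "".join(chain)

print(assemble())  # -> "YGGYY"
```

Because every wrong latch is immediately undone, the chain can only ever grow along the programmed sequence, which is the essence of the error-correction mechanism.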

We've already reported on some projects from Okude Lab at Keio University, but I shouldn't forget to mention a couple of interesting projects whose documentation is available only in Japanese.

Memorylane, by Ryo Sanpei, Sadamitsu Azuma, Akiyuki Kayama, Yumiko Yoshimoto, and Naohito Okude, is a picture frame for digital photos, which allows users to draw on digital photos and exchange them with friends. The demo video illustrates a cute episode of a boy and a girl making up using memorylane.

[memorylane]

In my opinion, this device would be great for kids who love Rakugao, the art of drawing on people's (photographed) faces. But, of course, there would be a variety of other uses.

The other project is called Okitagami, by Itsuki Shibata, Sho Hashimoto, Shingo Kaneyama, Tatsuya Matsumoto, and Naohito Okude. Letters are arguably a much richer medium for communication than email. The subtlety of handwriting and the richness of the context in which we read and write letters are what we may be losing in exchange for the convenience and speed of digital communication. Okitagami is another attempt to bring together the best of both letters and email. The demo video is available here.

Teddy is one of Igarashi's best-known works; it allows users to create 3D models just by drawing freeform strokes. Such a 3D authoring method could allow anyone to create 3D objects and effectively support the creative process of making 3D characters and objects. Look what kids made using Magical Sketch 2.
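The core intuition behind stroke-to-3D inflation can be caricatured very simply: points inside the drawn silhouette are raised in proportion to their distance from the stroke. Teddy's actual algorithm (based on triangulating the silhouette) is far more involved; the function below is only my own illustrative sketch of the idea.

```python
import math

def inflate(silhouette, samples):
    """Raise each interior sample point by (roughly) its distance to
    the nearest point on the drawn stroke."""
    out = []
    for p in samples:
        d = min(math.dist(p, s) for s in silhouette)
        out.append((p[0], p[1], math.sqrt(d)))  # sqrt gives a rounded profile
    return out

# a circular stroke of 16 points; the center should be lifted highest
circle = [(math.cos(t / 16 * 2 * math.pi), math.sin(t / 16 * 2 * math.pi))
          for t in range(16)]
print(inflate(circle, [(0.0, 0.0)]))
```

Drawing a round blob thus yields a balloon-like shape, which is why Teddy models have their characteristic plush-toy look.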

Let's say you just finished drawing a man using Teddy and want to paint him pink and dress him up. Igarashi's other software tools can help you (see Chameleon and Sweater). If you think he should dance, please visit Squirrel. Also, Chateau is a tool that can be used to make buildings and houses.

I'd hope to see his ideas and tools integrated into a software package I can buy and use.

Nukunukukey is a key-shaped information appliance that provides awareness about one's home through sound, light and heat.

[Prototype Nukunukukey.]

Sensors installed in a house detect the presence and the location of people and transmit the sensed information to a nukunukukey through the wireless internet infrastructure. The key has a micro controller, an LED and a Peltier device.

[Internal electronic components of a nukunukukey. From left to right: a micro controller, an LED, and a Peltier device.]

The Peltier device produces 3 different levels of heat according to how many people are present in a house. The LED emits light differently according to people's locations in the house -- whether they are in a living room, a kitchen, or a dining room. An apron-shaped device was also conceptualized for sending yes/no questions (such as "Will you have dinner at home?" or "Are you coming back home today?") to a nukunukukey.
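The sensed-state-to-output mapping described above might be sketched as follows. The level names, thresholds, and LED patterns here are my own assumptions for illustration; the published prototype only specifies that there are three heat levels and location-dependent light.

```python
# Hypothetical 3-level Peltier drive, keyed by people present (0, 1, 2+)
HEAT_LEVELS = {0: "off", 1: "warm", 2: "warmer"}

# Hypothetical room-to-LED mapping for the three rooms the article names
LED_PATTERNS = {
    "living room": "slow blink",
    "kitchen": "fast blink",
    "dining room": "steady",
}

def key_state(num_people, rooms_occupied):
    """Translate the sensed home state into Peltier heat and LED output."""
    heat = HEAT_LEVELS[min(num_people, 2)]  # clamp to the 3 supported levels
    leds = [LED_PATTERNS.get(room, "off") for room in rooms_occupied]
    return heat, leds

print(key_state(2, ["kitchen"]))  # -> ('warmer', ['fast blink'])
```

On the actual device this logic would run on the key's microcontroller, fed by the sensed information arriving over the wireless link.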

Nukunukukey is designed to increase the sense of family and "make people want to go home." The creators (Yumi Ohgaki, Itsuki Shibata, Kazuhiro Kuroda, Atsunobu Kobayashi and Naohito Okude) raise the issue of diminishing communication among family members and relate this phenomenon to the increased use of mobile phones and the Internet. Nukunukukey is an interesting device that might further transform the way we communicate with our family members.