
“Should I stay or should I go?” That seems to be the only concern of the robot featured in a demo movie made by researchers at Brown University. It takes its gestured orders in a way that is easily associated with the way soldiers on patrol would gesture ‘stop’ or ‘go’ to the next soldier in the line. Or at least that is how it looks to a man of my limited experience. And having DARPA as a main sponsor also helps the association.

A first impression

It looks like they have done quite a good job with the robot, given the current state of gesture recognition. I especially like that people don’t have to wear sensors. This is achieved in part by using a depth camera (the CSEM Swiss Ranger). Besides that, the recognition of individuals still seems to be a bit shaky, since you appear to have to show your face quite clearly before it sees who you are (but then again, given current face recognition technology that is no surprise either, and they have actually done a nice job of getting it up and running in the first place).

Milford School pupils were inspired by ‘Strictly Come Dancing’ to design costumes for Femisapiens and then program dance routines for them using Go-Robo. Facilities supplied by eLC South Nottingham.

Will this be the future of girlie robots? Femisapien is definitely a cute robot from Wowwee with its endearing kisses (here). And with a little software and some creativity you can use Femisapien as your Barbie dress-up doll 🙂

Although they obviously spent a lot of time and energy on creating this robot, I can’t imagine that it will ever be a good dancer if it merely follows the motions, if it can only be led. There will inevitably be a short lag that prevents real synchrony of movement, which is exactly what you want to achieve during dancing. But then again, most people don’t get in full synchrony with each other either…

Please watch the gestures that Elmo makes. There are only a few basic gestures, but they are well connected to the speech. Gestures are often ambiguous and get their specific meaning through their interaction with speech. The same is true to some extent for words (their meaning sometimes relies on the accompanying gestures). In any case, by combining speech and gestures you get a very lively impression. This is what is lacking in my opinion in some of the RC-controlled robots, like the i-Sobot and the MechRC (here). They can do a couple of gestures, but without speech they are restricted to emblematic gestures that can be understood without any words. Add to this that context also does not play a role, and you get a very poor repertoire of gestures. To function properly, gestures need context, and gestures need words even more.

It should be noted that this entire episode is scripted. I do not know enough about Elmo Live but I would guess that all his stories and jokes are preprogrammed chunks.

Bringing the robotic apocalypse one step closer, inventor Dr Jim Wyatt shows off the MechRC, a dancing, fighting, football-playing robot simple enough to be programmed by a child and the bane of many a cat’s life.

I think the general idea of MechRC is quite similar to that of Tomy’s i-Sobot. Both are small humanoids that have a big range of preprogrammed movements and programming options through the PC.

i-Sobot introduction in 2007

There is quite a price difference between the two little ones. i-Sobot is currently available on Amazon for $79, which is ridiculously little, while the MechRC costs £399.00 to preorder (here). But then again, i-Sobot started around $300 as well in 2007 (prices were lowered dramatically just before Christmas this year). And a Dutch or Flemish version of i-Sobot (here) still costs €378,99. It is likely that the MechRC will also drop in price after the first year or so, making them more comparable.

As far as functionality goes, at first glance the major difference is that the MechRC lacks voice control, and the i-Sobot can’t be programmed on your PC (just macros of predefined actions). For the i-Sobot, programming solutions have been made, for example Robodance, which also has a great featured article about controlling the robot with a Wii remote. It is a rather geeky solution, however, one that requires good computer skills (according to the Robodance creator), while the GUI to program the MechRC appears quite usable, again at first glance.

Neither of the robots has anything remotely resembling gesture recognition, but they can of course produce gestures. Both have a set of preprogrammed gestures that you can create macros with. Yet the MechRC seems to offer enough direct control over the movements that it should be possible to program your own gestures. Time-consuming perhaps, and at best you would end up with an expanded repertoire of gestures to make macros with, but it might be interesting for some gesture fanatics like myself 🙂
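To make the two programming styles concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the function names, gesture names, and joint names are my own inventions for illustration, not the actual i-Sobot or MechRC interfaces. It only shows the idea of chaining preprogrammed gestures into a macro versus keyframing joint angles to author a new gesture.

```python
# Hypothetical sketch: two ways of programming a small humanoid.
# The robot interface, gesture names, and joint names are made up for
# illustration; they do not reflect the real i-Sobot or MechRC APIs.

import time

PREPROGRAMMED = {"wave", "bow", "cheer"}          # assumed factory gestures

def play_gesture(name):
    """Trigger one of the robot's built-in gestures (stub)."""
    assert name in PREPROGRAMMED
    print(f"playing built-in gesture: {name}")

def run_macro(gestures, pause=0.5):
    """i-Sobot style: a macro is just a sequence of predefined actions."""
    for g in gestures:
        play_gesture(g)
        time.sleep(pause)

def set_joints(angles):
    """Send target angles (degrees) to individual servos (stub)."""
    print("joint targets:", angles)

def play_keyframes(frames, dt=0.3):
    """MechRC style: author a new gesture as a series of joint keyframes."""
    for angles in frames:
        set_joints(angles)
        time.sleep(dt)

if __name__ == "__main__":
    run_macro(["wave", "bow", "cheer"])
    # A home-made 'shrug': shoulders up, pause, shoulders down.
    play_keyframes([
        {"l_shoulder": 40, "r_shoulder": 40},
        {"l_shoulder": 70, "r_shoulder": 70},
        {"l_shoulder": 40, "r_shoulder": 40},
    ])
```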

I read a news item about robots on the Dutch news site nu.nl (here) about the ethics of letting robots take care of people, especially kids and elderly people. The news item was based on this article in ScienceDaily. Basically it is a warning by ‘Top robotics expert Professor Noel Sharkey’. I looked him up and he appears to be a man to get in contact with. He has, for example, called for a code of conduct for the use of robots in warfare (here).

Noel Sharkey is a writer, broadcaster, and academic. He is professor of AI and Robotics and professor of public engagement at the University of Sheffield, and currently holds a senior media fellowship from the Engineering and Physical Sciences Research Council. His main interest at the moment is in ethical issues surrounding the application of emerging technologies.

I wholeheartedly agree with his views so far. He has a good grip on the current capabilities of machine vision and AI, neither of which I would trust when it comes to making important decisions about human life. At least when it comes to applications of speech and gesture recognition, with which I have had a lot of experience, they simply make too many errors, they make unpredictable errors, and they have lousy error recovery and error handling strategies. So far, I only see evidence that these observations generalize to just about any application of machine vision, when it concerns the important stuff.

It reminds me of an anecdote Arend Harteveld (may he rest in peace, see here) once told me: Some engineers once built a neural network to automatically spot tanks in pictures of various environments. As usual with such NNs, they are trained with a set of pictures with negative examples (no tank in the picture) and positive examples (a tank in the picture). After having gone through the training the NN was tested on a separate set of pictures to see how it would perform. And by golly, it did a perfect job. Even if nothing but the barrel of the tank’s gun stuck out of the bushes, it would spot it. And if there wasn’t a tank in the picture the NN never made a mistake. I bet the generals were enthusiastic. A while later it occurred to someone else that there appeared to be a pattern to the pictures: the pictures with the tanks were all shot on a fairly sunny day (both in the training and testing pictures) and the pictures without tanks were taken on a fairly dreary day. The NN was not spotting tanks, it was just looking at the sky…
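To show how easily this can happen, here is a tiny self-contained toy in Python (my own construction with synthetic data, not the original tank experiment). The label is confounded with overall brightness during training, so a simple classifier scores almost perfectly, and then collapses as soon as the lighting is swapped.

```python
# Toy illustration of the 'tank detector' failure: the label is confounded
# with image brightness, so a classifier can score perfectly without ever
# looking at the tank. Synthetic data only; not the original experiment.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_image(tank, sunny, size=16):
    """An 'image' is just a flat pixel vector: background + optional blob."""
    base = 0.7 if sunny else 0.3                      # sky brightness
    img = base + 0.05 * rng.standard_normal(size * size)
    if tank:                                          # a small bright 'tank'
        img[:12] += 0.4
    return img

# Confounded training set: every tank photo is sunny, every empty one dreary.
X_train = np.array([make_image(t, sunny=t) for t in [True, False] * 200])
y_train = np.array([True, False] * 200)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# De-confounded test set: tanks on dreary days, empty scenes on sunny days.
X_test = np.array([make_image(t, sunny=not t) for t in [True, False] * 200])
y_test = np.array([True, False] * 200)

print("accuracy with the confound:", clf.score(X_train, y_train))  # near perfect
print("accuracy, lighting swapped :", clf.score(X_test, y_test))   # collapses
```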

There is a fun company called Crabfu, which is basically one guy called I-Wei. He creates great steam-powered robots, 3D art and animation, and all sorts of robots with cute motion control (swatchbots). The funny thing about his swatchbots is that he uses direct control of the actuators that create the movement; see for example this video of his R/C Tortoise:

It does remind me a bit of a tortoise

There is more complete coverage of his work, and an interview with him, by the Discovery Channel:

Can someone please give him a job?

So, instead of just being able to steer the robot ‘forward’, you need to work out on your R/C transmitter how to move the individual limbs, as if you are learning to walk all over again. Would people like to get down to this basic level of motion control? Would it feel funny to get your bot to go where you want it to? Maybe. At the very least, his robots do make a cute impression.
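As a rough illustration of the difference, here is a short sketch contrasting a conventional high-level ‘drive forward’ command with mapping the R/C sticks straight onto individual servos. The channel and servo names are invented; I have no idea how I-Wei actually wires his transmitters.

```python
# Hypothetical sketch contrasting two R/C control styles.
# Channel and servo names are invented; this is not Crabfu's actual setup.

def drive(command, speed):
    """Conventional high-level control: the robot works out the gait itself."""
    print(f"high-level command: {command} at speed {speed}")

def map_sticks_to_servos(channels):
    """Crabfu-style direct control: each stick axis moves one actuator,
    so 'walking forward' is something the operator has to perform."""
    return {
        "front_left_leg":  90 + 45 * channels["left_stick_y"],
        "front_right_leg": 90 + 45 * channels["right_stick_y"],
        "rear_left_leg":   90 + 45 * channels["left_stick_x"],
        "rear_right_leg":  90 + 45 * channels["right_stick_x"],
    }

if __name__ == "__main__":
    drive("forward", speed=0.5)
    # The operator has to alternate the sticks themselves to produce a gait:
    print(map_sticks_to_servos({"left_stick_y": 1.0, "right_stick_y": -1.0,
                                "left_stick_x": 0.0, "right_stick_x": 0.0}))
```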

Some of his robots almost feel a bit vulnerable or helpless, because they have such trouble moving forward. It reminded me of Hall Object (or Dibbes), the ‘gezellige robot’ (roughly, ‘convivial robot’) that was built to live in the hall of the NPS/Vara building and endear the people who worked there.

It is altogether fitting that Paro has come to Oegstgeest. Oegstgeest is a small and very old town near the coast that rose to fame as the setting of the novel ‘Return to Oegstgeest’ by Jan Wolkers. In the novel Wolkers writes a lot about his love for animals, both the cuddly ones and the less cuddly ones. It makes me wonder what Wolkers, may he rest in peace, would have had to say about Paro…

There is a good deal of thinking behind Paro. For example, the creators at AIST chose the form of a baby harp seal, and not of a cat or dog, because people will not compare Paro to their experience with a real seal (since they probably will not have had a real experience with a live baby seal). Robot cats tend to be perceived as less fun and less cuddly than real cats. I know from personal experience that many people, especially kids, are quite fond of baby seals. We once went to Pieterburen, home of the world’s foremost Seal Rehabilitation and Research Centre. Even though the kids were not allowed to touch any real baby seals, they came to love them just by looking at those big eyes and that innocent appearance. And now, with Paro, you can actually touch and even cuddle them without smelling fishy for a week. I guess all signs are ‘go’ for entering a loving ‘mental commitment’, which is at least what Paro is intended for, according to its own homepage…

“Mental Commitment Robots” are developed to interact with human beings and to make them feel emotional attachment to the robots. Rather than using objective measures, these robots trigger more subjective evaluations, evoking psychological impressions such as “cuteness” and comfort. Mental Commitment Robots are designed to provide 3 types of effects: psychological, such as relaxation and motivation, physiological, such as improvement in vital signs, and social effects such as instigating communication among inpatients and caregivers.

Rather grand claims for a robot that hardly does anything, but so far there have been reports in the news (e.g. here, here, or here) that it does have such positive effects to some extent. Yet Paro only has a few basic sensors (light, touch, microphone, orientation/posture, and temperature). He can only open or close his eyes, move his head and paws a bit, and ‘purr’ or ‘cry’. The solution, as always, comes from allowing the power of suggestion to work its magic. Minimalistic functionality leaves room to project feelings, moods, even personality onto a robot.
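Just to make that minimalism tangible, here is a small sketch of the kind of sensor-to-behaviour mapping such a robot could get by with. It is my own guess at the general idea, based on the sensor list above, and not Paro’s actual control software.

```python
# Hypothetical sketch of a minimal 'mental commitment robot' control step.
# The sensors and behaviours mirror the small repertoire described above
# (light, touch, sound, posture, temperature -> eyes, head/paw motion, sounds),
# but this is a guess at the general idea, not Paro's actual software.

def choose_behaviour(sensors):
    """Map a few coarse sensor readings onto a few simple behaviours."""
    actions = []
    if sensors["light"] < 0.2:
        actions.append("close_eyes")           # dark room: doze off
    else:
        actions.append("open_eyes")
    if sensors["stroked"]:
        actions.extend(["turn_head_to_touch", "purr", "wiggle_paws"])
    elif sensors["held_upside_down"]:
        actions.append("cry")                   # posture sensor says distress
    elif sensors["sound_level"] > 0.7:
        actions.append("turn_head_to_sound")
    return actions

if __name__ == "__main__":
    print(choose_behaviour({"light": 0.8, "stroked": True,
                            "held_upside_down": False, "sound_level": 0.3}))
```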

Wu Yulu is apparently a Chinese guy who has built robots on his own. This fella, named La Yang Che (a translation, anyone?), can actually walk and sort of talk. The visionary design of Yulu manages to capture a hitherto disregarded aspect of human face-to-face interaction: the flapping of the ears!

While the lip synchronization only distracts from the message, it is clear to see that the flapping of the ears, rhythmically accompanying the spoken words, beats out the tempo and thereby diminishes the cognitive effort needed for speech perception. At the same time, the rolling or darting of the eyes seems to serve merely to enhance the overall aesthetic experience.