Haptic and Tactile

Wednesday, 08 August 2007

Like many universities, EPFL has an innovation park for start-up companies: there I visited Force Dimension, a company that has exploited the delta robot invented by Reymond Clavel to create a haptic device. In this kind of system your hand doesn't explore the virtual world directly (as with the virtual reality workstation I mentioned previously). Instead, your interaction with the virtual world is mediated through some kind of instrument. For instance, in the image on the right, you can think of the black sphere I'm holding as the handle of some kind of short, fairly blunt tool. I can use this to probe the virtual landscape. If I hit a solid object, I feel force feedback from the robot arm. The system also creates vibrations that allow me to feel textures and friction as I move around.
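
For the technically curious, the core of this kind of probe rendering is simple enough to sketch in a few lines. Below is my own toy illustration of penalty-based force rendering, the standard textbook approach for point probes; the stiffness value and the sphere scene are made up for illustration, and none of this is Force Dimension's actual code.

```python
import numpy as np

def probe_force(tool_pos, sphere_center, sphere_radius, stiffness=2000.0):
    """Penalty force (N) on a point probe penetrating a rigid virtual sphere.

    If the tool tip is inside the sphere, push it out along the surface
    normal with a force proportional to penetration depth (a linear spring).
    """
    offset = tool_pos - sphere_center
    dist = np.linalg.norm(offset)
    if dist >= sphere_radius or dist == 0.0:
        return np.zeros(3)                  # no contact (or degenerate case)
    normal = offset / dist                  # outward normal at the contact
    penetration = sphere_radius - dist      # how deep the tip has sunk in
    return stiffness * penetration * normal

# One cycle of the (typically ~1 kHz) haptic loop: tool tip 2 mm inside a
# 5 cm sphere at the origin, so we expect a 4 N push along +y.
force = probe_force(np.array([0.0, 0.048, 0.0]), np.zeros(3), 0.05)
print(force)    # [0. 4. 0.]
```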

Using the Force Dimension device was a lot of fun, and pretty realistic: the forces imparted were very large. The experience reminded me of when, at MIT, I got to feel one of the very early Phantom devices invented by Thomas Massie (a modern version is shown left, but the device I remember looked much the same). At the time I got to use the system, maybe 1993 or 1994, he'd just recently set up SensAble Technologies: a company that has since become very successful. Force Dimension claims that the force their device imparts is much greater than the Phantom's, but I couldn't possibly compare after more than a decade.

Anyway, I can remember being very impressed trying Massie's device, which was a memorable experience in more ways than one. Not only was it the first haptic device I ever tried but, with the early prototype I used, there seemed to be no special protocol for starting a session. I put my finger inside a thimble (the tool through which I would feel the virtual world), and they ran the program. Apparently, because of the position at which my hand happened to be, I 'materialized' into the virtual world deep inside a solid object. Because of this, the force-feedback response was to use every newton available to push me out of there, pretty much instantly. After I got over the shock (it was pretty violent!) and Thomas had reset the system, I very much enjoyed using it to feel objects and textures.

Nothing's perfect, but both of these devices are pretty impressive. I only wish they were not so bulky.

Finally, before I get off this subject for a while, I wanted to say that probably the most impressive haptic device I've ever used was designed by Allison Okamura of Johns Hopkins University. One of her goals is to develop systems that allow surgeons either to practice surgery or to teleoperate while getting good sensory feedback. So she designed some haptic scissors (right) for doing what surgeons do: cut. When I used this device, I got the chance to feel what it was like to cut through skin, organs, and blood vessels. It was extremely realistic, and the fact that the tool was not just probing but really interacting with the virtual world made it particularly compelling. Again, the system works entirely through vibration and force feedback.

Captions

Top: Me trying a Force Dimension haptic device.

Middle: A current model of the Phantom device from SensAble Technologies.

Bottom: Allison Okamura with her haptic scissors for telesurgery and surgical simulation.

Friday, 20 July 2007

I'm interested in different ways of displaying information to our bodies, and particularly to our skin. So, in my June visits to the Washington DC area and to Switzerland (Zurich and Lausanne), I made a point of trying to see as many people working with tactile and haptic displays as possible. I had the opportunity to try three very different devices, which made me realize just how difficult a problem this is.

The first of the three, shown right, was at Johns Hopkins University in Baltimore, where I met with Steve Hsiao and Takashi Yoshioka, both members of the Mind/Brain Institute there. This stimulator is designed to present varying shapes and textures to the fingertips. If you click on the image, you'll see that this was a very large and expensive piece of equipment, with individual motors controlling each one of 400 pins. Not exactly portable. Plus, it's designed to show force images, rather than vibration images, so although you do get some sense of a texture or shape, it doesn't quite feel real.
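
To make the idea of a 'force image' concrete, here's a toy sketch of how one might drive such a pin array from a greyscale image. The array size, the per-pin force ceiling, and the function names are all my own assumptions for illustration, not details of the Johns Hopkins stimulator.

```python
import numpy as np

MAX_FORCE_N = 0.5   # assumed per-pin force ceiling, in newtons

def image_to_pin_forces(image: np.ndarray) -> np.ndarray:
    """Map a 20x20 greyscale (uint8) 'force image' to per-pin forces.

    Each of the 400 pins has its own motor; pixel intensity 0-255 scales
    linearly into that pin's commanded force.
    """
    assert image.shape == (20, 20)
    return (image.astype(float) / 255.0) * MAX_FORCE_N

# A simple raised ridge across the fingertip:
img = np.zeros((20, 20), dtype=np.uint8)
img[8:12, :] = 255                    # bright band = maximum force
forces = image_to_pin_forces(img)     # each entry goes to one pin's motor
```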

Of course, it's designed for research purposes, to see how we deal with stimuli, not as a practical display. And it's also relatively new: the demo they did with me was fairly rudimentary. So I'm not criticizing it at all. I'm just showing how much machinery it takes to apply a decent amount of force to a very small area in an arbitrary way.

The second system, right, was also large. Renaud Ott helped me into the Haptic Workstation when I visited him at the EPFL Virtual Reality Laboratory. As you may be able to see from the picture, the system basically consists of three elements. First you put on gloves, which purely sense the movement of your hands. Then you put little rings just under the first and second knuckle of each finger. (You'll have to click to get the bigger version to see these, but you can see one pretty clearly on my left thumb.) These provide the force feedback to the hand itself. Finally, the long arms both sense your position in 3D space and apply force. The screen shows the world you're interacting with.

This was interesting: unlike anything I'd tried before. But not wholly satisfying. (Again, that's not meant as a criticism; it's a hard problem!) First, the fact that there was no way of putting pressure on the palm or fingertips was a big issue for me in terms of trying to grasp the items on the virtual table. From what I can tell, we have two major ways of grasping things (I'm sure there are more, but these are the most obvious to me). One is to put your flat hand on the object and, when you feel it on your palm, curl your fingers to conform to it. Another is to use the tips of your thumb and forefinger to grab something. With this workstation, neither of these is possible, because you don't feel the pressure in the right place. So it didn't feel real.

Another issue was the fact that the display is so soft. It's not strong enough to push you around, so, for instance, you couldn't bang your hand on the table: you would just bounce into or through it. This was actually one of the problems the team were working on: trying to find a way so that the difference between what you felt and what you saw didn't confuse you. (Also, a tiny thing: you'll see that all the objects on the table are round, except for a box that I'd pushed to the floor. Anyway, they kept rolling off. Annoying!)

I guess the problem here is that the task does not seem suited to the display, something that will no doubt come up again later.

I'll get on to the third display in the next post.

Figures

Top: This tactile stimulator developed at Johns Hopkins University has servo motors that control the force exerted by each pin.

Bottom: Using the Haptic Workstation I try, generally unsuccessfully, to pick up objects from the virtual table.

Tuesday, 03 April 2007

When I first heard the story about the Feelspace or North belt, one of the things that excited me the most was that it demonstrated how we learn from all the stimuli we are regularly in contact with, even those that don't seem very special. We learn what does and doesn't 'feel right', from the swish of our winter coats to the sound of our shoes on the ground to the vibration of our cars. We don't just do passive sensing, but active: we interact with the world in routine ways and can tell something about our environment by the way it reacts to us.

Another thing that I thought was very interesting was that it showed how we extend ourselves with technology already. One of the things that occurred to me in the lecture where I first heard Peter König talk about this work was that taxi drivers might use their car as a kind of big North belt, and improve their sense of direction (in the car) by associating vibrations, the feeling of turns etc. with their position. So, for instance, if you put a London cabbie in a car, blindfolded, you might expect him or her to have a much better idea of where they were after five or ten minutes than your average driver. (London cabbies are good because they have to do the Knowledge, i.e. learn central London like the back of their hands, and they've been studied by neuroscientists who found they tend to have larger-than-normal hippocampi. See Maguire et al., PNAS 2000, if you want more.)

Thinking about both of these, one thing seems critical: the laws of physics. Not understanding them, but internalising the bits that relate to our everyday interactions with the environment. My argument is simply that the closer the artificial stimuli that we feed into the body fit with what we are expecting from this physical framework, the more quickly and easily we can digest that information. This is what I think an 'intuitive' display is.

So for instance, when I was at Charles Spence's lab in Oxford, we discussed his findings that the skin wasn't a great means of getting information into the body. It can barely sense two separate stimuli at the same time, he said, never mind three. Interesting, but what kind of information is skin supposed to be passing on? What should it be good at? We don't use our skin to 'see' shapes or to count; we generally use it to monitor things that are moving on our bodies: normal, benign things like hair and clothes, and less benign things like plants or animals that could do us damage.

Interestingly, Spence said he didn't have a model for what skin should be good at. As an experimental psychologist, his job is just to measure what it can and can't do and to try to find interactions that are useful. To me, this is unsatisfactory: without a model, the state-space of possible 'useful interactions' is enormous, and the chances of happening on the best ones are limited. Only with a clearer idea of what is going on can we start engineering more intuitive interfaces.

Wednesday, 28 March 2007

One of my colleagues told me at lunch that he'd seen an IEEE Spectrum article saying that the Wicab tongue display was one of the 'loser' technologies of 2007, so I thought I'd take a quick break from the idea of intuitive displays to deal with this. I was very disappointed with the article, in that none of the 'experts' quoted seemed to have any specific knowledge of the device, how it was perceived by users, what it felt like to use etc. Also, it was not noted that these people were, essentially, competitors. (I have no axe to grind except that I think it was bad journalism).

I have no idea whether the project will succeed or not (something that will probably have more to do with business model and marketing than technological excellence), but it's very early days and if the company can, as they say they intend, turn the device into something that can be worn as a plate in the mouth, I think it at least has a chance. I won't reiterate why, as it's in my previous blog on the subject, detailing my first-hand experience with the device.

Instead, I thought I'd share a story about another use for the technology that Spectrum didn't mention, one that suggests there is a lot of work to be done in understanding how this new kind of device works with our brain. I was going to get onto this today anyway, but this gives me additional motivation!

Yuri Danilov, Director of Clinical Research at Wicab, told me about Cheryl, a woman whose vestibular system was severely damaged due to a bad reaction to an antibiotic. She was given the 'balance' version of the display, one that fed her information from a gyroscope. The tongue signal was a bit like a 2D spirit level, with a moving 'image' of a circle that had to be kept centered for the patient to stay balanced.
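
If you're curious how such a 'spirit level' mapping might work, here's a toy sketch: tilt measured by an orientation sensor moves a 'bubble' around a small electrode grid, and the wearer tries to keep it centred. The grid size, tilt range, and mapping are my guesses for illustration, not Wicab's actual design.

```python
GRID = 12            # electrodes per side (my assumption)
MAX_TILT_DEG = 10.0  # tilt that pushes the bubble to the edge (my assumption)

def bubble_position(pitch_deg: float, roll_deg: float) -> tuple[int, int]:
    """Map pitch/roll in degrees to a (row, column) on the electrode grid."""
    def axis(angle_deg: float) -> int:
        frac = max(-1.0, min(1.0, angle_deg / MAX_TILT_DEG))  # clamp [-1, 1]
        return round((frac + 1.0) / 2.0 * (GRID - 1))
    return axis(pitch_deg), axis(roll_deg)

print(bubble_position(0.0, 0.0))    # (6, 6): bubble centred, you're upright
print(bubble_position(10.0, -3.0))  # (11, 4): pitched hard forward, rolled left
```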

Yuri explained that they let her use the balance display and found that the effect would last for a little while after she removed the device. Five minutes online would give her a couple of unaided, functional minutes. But when they pushed her exposure time up to 20 minutes, she was able to walk around normally for almost five hours, even able to ride a bike and skip rope. According to Danilov, they eventually settled down to a regular regime of 20 minutes every two or three days. “As far as Cheryl was concerned,” he says, “she was cured.”

But she wasn’t. She felt good, though, and she stopped using the display. “Three weeks later, she came back and was almost as bad as she’d been at the very beginning,” says Danilov. Happily, after two days of re-training she got it all back. But now she’s hooked.

Danilov and his colleagues don’t really know why their tongue display helps—they haven’t yet had the chance to do the brain imaging studies that might tell them. But they have some ideas. They think the device is providing positive feedback for Cheryl’s senses that are working, like sight, and they suspect that her broken vestibular system is sending out a small amount of good information along with lots of nonsense: the tongue display is providing positive reinforcement (feedback) for the good information and helping to filter out the bad. In other words, the gyroscope is enhancing Cheryl’s senses until they can act as an optimized filter for her sense of balance. But without regular feedback, that filter breaks down.

Maybe the tongue display will be a loser for giving people back their sight, but I think the Spectrum team are losers for judging it so incredibly early in its development.

Friday, 23 March 2007

After the last couple of posts I got some interesting comments about familiarity being important to intuition. Roger Attrill pointed out that Adobe users would find Photoshop intuitive (while others wouldn't), and Bob Salmon pointed out that musicians might find software based on some kind of musical-score interface natural to use in a way that the rest of us wouldn't.

For the kind of sensory interfaces I'm interested in, however, I think you'd want something that is cross-cultural, something that relies on being human, maybe, or on a naive understanding of the physics of the world, but not otherwise requiring training.

What I'm getting at might be considered a wider definition of a term I learned for the first time only last year: symbology. When I visited Tom Schnell's lab, I talked to him and to Todd Macuda about the various ways that, mainly through visual displays, they try to make jobs easier for pilots. One of the examples they showed me was a 'road' displayed over the landscape that you were flying over—a road you were supposed to keep to—or a set of rectangles appearing that you aimed to fly through. This, it seemed to me, was very easy to understand: anyone, from any culture, can understand the concept of a path or a tunnel.

I was also shown some more traditional displays: the kind you're more likely to see in cockpits today. These consist of artificial horizons and dials etc. as shown in the picture, and anyone who has ever tried to use them would agree that they require a lot of training: their connection with the real, physical world is not at all obvious to the lay person.

The three haptic displays I used, on the other hand, all had inherently physical properties. With the vibro-tactile suit to aid spatial orientation, there was almost a direct physical meaning to the location of buzzing. For instance, if you were in a small room with your arms out and leaned forward, your chest might touch the wall (chest vibrators buzz when you lean forward). If you leaned sideways, your hand might touch the wall (wrist vibrators buzz when you lean sideways), and then more of your arm would make contact (elbow/shoulder buzzers start to go). So there is a physical meaning to the signal, relating to the physics of the world.

Photo: Read-outs from the cockpit of the OPL experimental plane.

The version of the tongue-display designed to help with balance feels like having a kind of two-dimensional spirit level on your tongue (and it really works too, more on this later): another physical interface. Likewise for the North (or Feelspace) belt discussed in the Wired article.

Tuesday, 20 March 2007

During my recent research into the world of sensory interfaces, I had at least a dozen discussions, maybe more, about what the word intuitive means. I can't remotely claim to have cracked this: on the contrary, it seems to me that people are still at the stage of figuring out the right questions to ask. It's true that human-machine interfaces have been well studied and there's a vast literature on the subject. However, though sensory interfaces that allow you to interact with the real world (rather than a computer world) will certainly have many things in common with virtual displays, I think there will be some differences too.

Anyway, the first part of my definition is that the more intuitive something is, the less training it should require to be understood. Obvious, yes, but sometimes stating the obvious can be useful...

So, if we are looking for an interaction that requires as little training as possible, what does that mean? From talking to two people in particular—Paul Bach-y-Rita from the University of Wisconsin at Madison and Terry Sanger from Stanford—complexity came out as an important issue.

Bach-y-Rita had been working on early tactile-to-vision systems that involved using a dentist's chair with an array of pins that could be used to 'show' various kinds of pictures to the back of the person sitting in it. In one study, he said, they scrambled the picture order: essentially, they shuffled the pixels around. However, they kept the shuffle constant during training, so the picture was scrambled in the same way every time. What they found was that the person could still learn to 'see', but it took a long time to do it.

Sanger's research is about giving dyspraxic kids (children who have difficulty controlling their movements due to various medical conditions) a means to get physical feedback from their own bodies. Specifically, he uses a vibro-tactile device to feed back how a particular muscle is moving: the stronger the muscle movement, the faster or stronger the vibration. This means they can learn to 'feel' what it's like to perform a task well, despite the fact that this feedback isn't coming directly from the muscle. It also means that, unlike with other feedback systems that beep, they don't get startled (the startle response is often a problem for such children).
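
The mapping itself can be very simple. Here's a minimal sketch of the general idea as I understood it: rectify and smooth the muscle signal, then scale it into a vibration amplitude. All the names and constants are illustrative assumptions, not values from Sanger's actual device.

```python
def vibration_amplitude(muscle_samples: list[float],
                        gain: float = 0.8, ceiling: float = 1.0) -> float:
    """Map recent muscle-activity samples to a vibration amplitude in [0, 1].

    Rectify (activity is a magnitude, not a sign), average to smooth out
    noise, then scale: stronger movement means stronger vibration, and the
    output ramps smoothly rather than beeping, so there's nothing to
    trigger a startle response.
    """
    envelope = sum(abs(s) for s in muscle_samples) / len(muscle_samples)
    return min(ceiling, gain * envelope)

print(vibration_amplitude([0.1, -0.2, 0.15]))  # ~0.12: gentle buzz
print(vibration_amplitude([0.9, -1.1, 1.0]))   # 0.8: strong movement, strong buzz
```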

I'll probably say more about Terry's work later, but during our discussions he mentioned that he had been involved with an experiment where the movement of the subject's four fingers was used to control twenty lines on a screen. Where there was a clear relationship between the finger movements and the lines, control was easy. When the relationship was more complicated, only one person could do it at all, even with significant training: and she figured it out mathematically!
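
A toy way to see the difference: model both conditions as a matrix mapping four finger positions to twenty lines. When each line simply copies one finger, the matrix is sparse and obvious; when every line mixes all four fingers, predicting the display means mentally inverting a dense matrix. This is my own illustration of the contrast, not the actual experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
fingers = np.array([0.2, 0.0, 0.7, 0.1])    # four finger positions

# 'Clear' relationship: each of the 20 lines copies one finger, so the
# 20x4 mapping matrix has a single 1 in every row.
clear_map = np.tile(np.eye(4), (5, 1))

# 'Complicated' relationship: every line is a dense mixture of all four
# fingers; to control any one line you must reason about the whole matrix.
mixed_map = rng.normal(size=(20, 4))

lines_clear = clear_map @ fingers   # easy: each finger moves 'its' lines
lines_mixed = mixed_map @ fingers   # hard: every finger moves every line
```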

The problem is that complexity (in the common usage sense, not the mathematical sense) is a relative concept: something is more or less complicated than something else. In the next post I'll talk about what the something else might be.

Friday, 16 March 2007

One aspiration of those trying to feed information into the brain through the senses is to make the process intuitive. Roughly, we know what this means: we want the information to be self-explanatory, to require no further thought, to immediately and naturally provoke the behavior we intended. But what exactly does this mean in practice? How does it break down?

During my research for my Wired feature on feeding information through the senses (April 2007 issue), this is one of the questions I really wanted to get to the heart of. Among the people I discussed the issue with was Charles Spence, an experimental psychologist from the University of Oxford here in the UK. Spence is particularly interested in creating devices that will make cars safer by warning drivers about hazards, waking them up, etc. In particular, one of the things his team is looking at is whether they can build some kind of vibrating device into the seat belt that can give us a "tap on the shoulder," telling us where to look if there's some hazard we're not paying attention to.

Obviously, what he wants is this warning to be intuitive.

But, as he told me after my visit to the lab, there is no functional definition of intuitive in experimental psychology. Instead, most of what his team measures relates to attention. For instance, in the experiment pictured, I have to respond to the two lights on the table by pressing buttons, and to two vibrating arm bands using floor pedals. The idea is to see whether I get distracted or fooled because, say, light and buzz come together, or one buzz is stronger than the other.

Because they have no definition of intuitive, no theory of how information is most easily 'digested' by the brain, all they can do is measure how quickly and accurately the person does what's required of them. If they do this using a lot of different types of warning system they can then decide that the one that gave the best results is the most intuitive. Great. But how does that help you to make the system more intuitive? Engineers want design rules, and I think the place to start is to look at three things: data complexity, the information having a physical meaning, and relevance. I'll explain in more detail in the next post.

Photo: My chance to be a guinea pig in a University of Oxford experiment. I react to lights on the table and buzzes through the bands on my arms by pressing buttons with my finger and pedals with my feet.

Saturday, 03 March 2007

Stefan Marti is a research scientist with Samsung in San Jose. When he was a student, he had the experience of walking around with a broken vibrating pager that gave him an unexpected extra sense: for certain kinds of electromagnetic fields. From the buzzing in his pocket, he knew when people were making popcorn in the microwave, when there was a wireless router nearby and, most interestingly, when a phone—whether his or someone else’s—was about to ring.

The latter turned out to be a nice feature. Sensing an incoming call, he learned not to start a new sentence, and this experience ended up inspiring his doctoral work. His research involved making phones smart enough to figure out whether or not they should ring at all, and to take the appropriate action (passing on messages etc.) depending on the circumstances. Now he’s working on long-term projects for Samsung that will involve making phones interact with people in more-intelligent/less-disruptive ways.

For Stefan, and the other engineers out there working on products to help me ‘see’ the world through technology, I have a wish list…

First, I want to know when people are lying. A pair of sunglasses that have—displayed discreetly on the inside—a thermal image of the person I’m looking at, and maybe also with some beating property that tells me about heart rate, would do the trick. I know the display technology exists to do it (maybe one of those micromechanical scanners for each eye?), and I’ve seen single-chip cameras that can image in the infrared. So thermal can’t be too much of a stretch. If I personally am not a big enough market, the FBI will almost certainly be interested.

Second, I love to walk, and want to constantly know which way is north when I wander around a new city: I’m fed up with trying to use the sun as a guide when I’m navigating with a map. OK, so it rises in the east and sets in the west. Vaguely. But really it’s all over the place depending on the time of year (I have an office that faces west, so I know). By the way, the gizmo needs to be invisible under a T-shirt: no bomb-like boxes or wires sticking out, please.

While we’re at it, I want more out of my digital organizer/phone. If my position is known through a GPS chip or my cell reception, can’t I be warned that I’m going into a sketchy neighborhood? (I once took a wrong turn into the Tenderloin in San Francisco, in heels… very scary.) A set of Bluetooth-controlled buzzers built into a watch or arm band might do the trick there. It might even guide me, safely, to where I want to go, rather than making me get my phone or map out. I hate looking like a tourist!

PS: If you want to have the Wi-Fi sixth sense yourself, Phillip Torrone will show you how to make one. Added 16 March 2007.

Sunday, 25 February 2007

In my previous posts I discussed using tactile displays (the tongue display and spatial-orientation vest) in order to sense the world in new ways. In both cases my immediate reaction was to close my eyes, which may seem foolish: I was cutting out visual information that might have been useful. However, there was method in my madness.

Vision dominates the other senses for spatial tasks, and the information provided by the eyes can be misleading. At the Royal Air Force base in Henlow, for instance, pilots take courses on why you shouldn’t trust what you see through your night vision goggles (NVGs) too much. To bring the message home, the pilots are shown a film of an accident caused by one pilot misunderstanding what his NVGs were telling him: the other plane was feet away, not hundreds of feet. This problem is anything but theoretical.

Tactile displays like the one being tested at the Operator Performance Laboratory are intended to help pilots to get around problems with information from their other senses, particularly visual. However, there can be a problem if they conflict. Where the eyes are giving little information (just blue or black sky for a pilot, say) there is no problem. But what about when the two disagree? Even if you might know that the tactile information is more likely to be correct, will visual dominance override your reason?

This is a particularly difficult problem because, with some kinds of information (particularly the tongue display attached to a camera), one could argue that the tactile information is visual. The fact that it’s coming in through the tongue is irrelevant. In this case, does the sense through which the information comes in matter more than its accuracy or relevance?

These are hard questions. Right now we don’t even know for sure where the brain decodes these new kinds of information from the tongue (or skin, for that matter), although studies in this area are planned. There is lots of speculation, however. Some believe that it may depend on the type of information coming in through the display: information with visual characteristics (i.e. in the form of images rather than some other sort of signal) might somehow be routed through the visual cortex. That could be why the information ‘feels’ so visual.

Another possibility is that the visual cortex is not involved: that the part of the brain that usually processes signals from the tongue adapts to do the job. In this case the experience may feel visual because the bit of the brain that deals with visual imagination is harnessed as a way of ‘displaying’ the information acquired through the tongue: it ends up being incorporated into our world view as if it came through the eyes.

Either of these speculations (there may be many more) implies that the information is visual, which brings us back to the original question, but in a slightly different way. Now we’re not worried about whether the visual will dominate the tactile, but how easy the brain will find it to pay attention to two completely different visual signals… More on this in the next post.

Sunday, 15 October 2006

Of all the gizmos I've tried in my recent research, the tactile suit was the most exciting. Pilots can become disoriented because of poor or confusing visual feedback conflicting with other cues, a phenomenon that accounts for a significant proportion (up to 10%) of air accidents. The tactile suit is intended to help by letting the pilot feel their orientation through their body rather than trusting their vestibular (inner ear) or visual systems.

The version I used consisted of a set of three vibrating buzzers (tactors) on each arm and four each on the chest and back. (Additional leg tactors—shown on the screen in the picture—had been disabled in the version I used.) It works as follows. As the plane tilts to the right, the right wrist starts to vibrate; as it tilts more, the elbow comes in too, then the shoulder. As the plane tilts forward, the top two chest tactors come in, then the lower ones as the dive becomes more extreme. The back tactors become active when the plane starts to climb.
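
Here's a minimal sketch of that logic as I understood it from the demo. The thresholds are my assumptions, not the OPL's actual parameters, and I've simplified the chest and back to two tactors each.

```python
def active_tactors(roll_deg: float, pitch_deg: float) -> set[str]:
    """Return which tactors buzz for a given aircraft roll and pitch.

    Positive roll = right wing down, positive pitch = nose up. Tactors are
    recruited outward along the arm (wrist, then elbow, then shoulder) as
    the bank steepens, and on the chest for a dive or the back for a climb.
    """
    buzzing = set()
    side = "right" if roll_deg >= 0 else "left"
    for threshold_deg, place in [(5, "wrist"), (15, "elbow"), (30, "shoulder")]:
        if abs(roll_deg) > threshold_deg:
            buzzing.add(f"{side}_{place}")
    if pitch_deg < -5:                      # diving: chest tactors
        buzzing.add("chest_upper")
        if pitch_deg < -15:
            buzzing.add("chest_lower")
    elif pitch_deg > 5:                     # climbing: back tactors
        buzzing.add("back_upper")
        if pitch_deg > 15:
            buzzing.add("back_lower")
    return buzzing

print(active_tactors(20.0, -10.0))  # right wrist + elbow, upper chest
```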

Apart from commercial flights, I have had quite limited experience of flying as a passenger and almost no experience of being a pilot (in a simulator or otherwise). I was not, therefore, in any way surprised that I was, well, pretty crap at trying to use some of the Operator Performance Lab flight simulators. However, this was not true while using the tactile suit. When I could feel my orientation, I was much better able to steer. So much so that the researchers were able to put me into an awkward and dangerous orientation (one that most pilots have great difficulty finding their way out of) and I was able to find my way to level flying easily. With my eyes shut.

What was particularly impressive about this is that it's not what this particular suit was designed for: it was meant to warn the pilot, not to offer feedback for control. However, Captain Angus Rupert, who invented the original idea almost ten years ago, has now developed the technology to the point where pilots really can use it to tell which way is up... Field trials show pilot improvement whether visual conditions are good or poor.

My personal experience was that the device allows you to 'feel' space in a way that is otherwise impossible, but that nevertheless feels both incredibly natural and physical. This physicality is, I think, the key to the success of the technology.

Photos

Top: Seeing the activation of tactors on screen as I feel them through the suit.

Bottom: Flying blind, using the tactors as the only cue to feel my way to level flying.