Movea’s Data Fusion Transforms Sensors Into Indoor Navigation

Today’s smartphones and other mobile devices are jam-packed with sensors. Micromechanical gyroscopes track the phone’s tilt. Magnetometers figure out which way the phone is pointing. Accelerometers detect motion. GPS receivers track location. Pressure sensors help determine altitude. Add in ambient light detectors, air sensors, temperature sensors, microphones, and cameras, and it all means that app developers can make mobile devices do amazing things.

But tapping into the power of all these sensors isn’t easy. There are no industry-wide standards today for how sensors operate and communicate. Mobile device operating systems access sensor data in a variety of ways. And even the way sensors are built into phones differs: some architectures use sensor hubs, with a dedicated processor controlling the various sensors; others rely on the mobile device’s main processor for that control.

Movea, a 2007 Grenoble-based spin-off of the research institute Leti, thinks app developers could do a lot more interesting things if they had a helper to round up the data from all those sensors and make it easy to work with, a sort of sensor wrangler.

So Movea has introduced MotionCore, an application programming interface (API) for Android and Windows 8 devices. MotionCore, built into a phone either as software on the main processor or firmware on a sensor hub, is sensor agnostic. That is, it will allow apps to connect with sensors from any manufacturer. That means mobile-device manufacturers will be able to mix sensors from different sensor manufacturers more easily, and older apps will be able to migrate to newer phones without being rewritten to talk to the new mix of sensors.
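The details of MotionCore are proprietary, but the benefit of a sensor-agnostic layer is easy to sketch: apps code against a vendor-neutral interface, and a thin adapter hides each manufacturer’s raw data format. A minimal illustration in Python (every class and driver name here is hypothetical, not part of MotionCore):

```python
from abc import ABC, abstractmethod

class Accelerometer(ABC):
    """Vendor-neutral accelerometer interface (a hypothetical sketch,
    not Movea's actual API)."""

    @abstractmethod
    def read_g(self) -> tuple[float, float, float]:
        """Return acceleration in g along the device's x, y, z axes."""

class VendorXAccel(Accelerometer):
    """Adapter for an imagined vendor driver that reports raw counts
    at 4096 counts per g."""

    def __init__(self, driver):
        self._driver = driver

    def read_g(self):
        x, y, z = self._driver.read_raw()   # vendor-specific call
        return (x / 4096.0, y / 4096.0, z / 4096.0)

def tilt_magnitude(accel: Accelerometer) -> float:
    """App code depends only on the interface, so it survives a change
    of sensor vendor unmodified."""
    x, y, z = accel.read_g()
    return (x * x + y * y + z * z) ** 0.5
```

Swapping sensor manufacturers then means writing one new adapter, not rewriting every app.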

Movea doesn’t plan to produce those apps itself, preferring to leave that in the hands of the creative app development community. But the company did build a few demonstration apps to show off the power of its software, including a simple “air signature” to unlock a phone and an indoor navigation tool that can identify the floor as well as the location inside a hotel by sensing altitude changes. And, to show just how powerful its API can be, Movea engineers took 15 sensor pods and, instead of putting them in a phone, attached them to a dancer, processing their outputs in real time to generate hypnotic graphics.

IEEE Spectrum met with Dave Rothenberg, director of marketing at Movea, in January at the International Consumer Electronics Show, in Las Vegas, to see just how these demos worked. Rothenberg had a little trouble demonstrating the air password, though the fault lay not so much with Movea’s software as with his memory; he’s just not used to an air password yet.

The navigation demo was set up to “know” that it was inside the LVH Hotel; in a commercial application, an app would get this initial geographic information from the phone’s GPS receiver. But once that general location was set, the phone used no outside signals (GPS, Wi-Fi, or cellphone locators) to find its way around. Instead, it relied on its sensors to spot pressure and direction changes and to count steps. The software checked the sensor data against a map of the hotel to self-correct, for example, by understanding that a user is more likely to walk into an elevator than through a wall.

The body area network on the dancer, Movea says, cost under US $10 000 in sensors and computer hardware to pull together, and is as powerful as professional motion-capture systems that today run at more than $50 000.
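The dead-reckoning approach described here, counting steps along a compass heading and watching the barometer for floor changes, can be sketched roughly as follows. The constants are generic physics approximations (about 0.12 hPa of pressure per metre near sea level, storeys of roughly 3.3 m), not Movea’s actual engine:

```python
import math

def dead_reckon(position, heading_deg, step_length_m, floor, dP_hPa):
    """One dead-reckoning update: advance one detected step along the
    current compass heading, and infer floor changes from the drop in
    barometric pressure (one storey of ~3.3 m is roughly 0.4 hPa).
    Illustrative constants only."""
    x, y = position
    h = math.radians(heading_deg)
    x += step_length_m * math.sin(h)   # east component of the step
    y += step_length_m * math.cos(h)   # north component of the step
    floor += round(-dP_hPa / 0.4)      # pressure falls as you go up
    return (x, y), floor
```

A real engine would also match each update against the building map, rejecting trajectories that pass through walls, as the hotel demo did.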

Tekla S. Perry: Smartphones are filled with an amazing array of sensors that measure movement, altitude, pressure, and much more. But for that sensor data to be useful, it must be processed. Software company Movea creates APIs [application programming interfaces] that allow developers to turn that data into an amazing variety of useful things, like indoor navigation apps and smart sports equipment.

Dave Rothenberg: So we can look at a very basic orientation. And if you take the data coming from these three sensors and you fuse it together, the resulting estimate of orientation is very robust.

These are all MEMS [microelectromechanical systems]-based sensors now: an accelerometer, a gyroscope, and a magnetometer. And with the S3 there’s also a pressure sensor, or barometer, in the phone. And we’re going to be using all four of those sensors for our indoor navigation demo.
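A standard, simple way to fuse a gyroscope (accurate over short intervals but drifting over time) with an accelerometer-derived tilt angle (noisy instant to instant but stable in the long run) is a complementary filter. This one-liner is a textbook illustration of sensor fusion, not Movea’s actual algorithm:

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the gyro-integrated angle (weighted by alpha) with the
    accelerometer's absolute tilt estimate (weighted by 1 - alpha).
    Angles in degrees, gyro_rate in degrees per second, dt in seconds."""
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Run at a fixed sample rate, the gyro term tracks fast motion while the small accelerometer term steadily pulls the estimate back toward the true tilt, cancelling drift.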

We ask the user to enter a little bit of information. The real key piece of information though is the height because there is a correlation between height and step length. And step length is a key input to our indoor navigation engine. So what you see is a map of the first floor of the lobby of the LVH hotel and at this point we’re going to start walking and let the app guide us to our meeting suite. So it’s detected that we’re by the elevator and now it’s asking us to go up to the sixth floor.
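The height-to-step-length correlation Rothenberg mentions is commonly approximated in pedestrian dead reckoning as step length of roughly 0.41 to 0.42 times body height. A sketch under that assumption (the coefficient is a generic heuristic, not Movea’s model):

```python
def step_length_m(height_m, k=0.415):
    """Estimate walking step length from body height using the common
    pedestrian-dead-reckoning heuristic k ~ 0.415 (an assumption)."""
    return k * height_m

def distance_walked_m(steps, height_m):
    """Distance covered by a given number of detected steps."""
    return steps * step_length_m(height_m)
```

This is why the demo app asks for the user’s height before navigating: a 1.8 m user covers roughly 0.75 m per step, so a miscalibrated height skews every position update.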

The motion and awareness come from sensors. They sense what we’re doing. They sense the environment.

Tekla S. Perry: Altitude sensors can even track you in an elevator. The app won’t cut out, because all the information is stored internally. And all the processing is done in the background by your phone.
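Converting barometric pressure into an altitude, and an altitude into a floor number, can be sketched with the standard international barometric formula. The sea-level reference pressure and the 3.3 m floor height below are illustrative assumptions, not values from Movea:

```python
def altitude_m(pressure_hPa, p0_hPa=1013.25):
    """International barometric formula: altitude above the reference
    pressure p0 (here the ISA sea-level default). In practice p0 would
    be calibrated at a known point, since weather shifts it by several
    hPa, which is many floors' worth of error if uncorrected."""
    return 44330.0 * (1.0 - (pressure_hPa / p0_hPa) ** (1.0 / 5.255))

def floor_from_altitude(alt_m, floor_height_m=3.3):
    """Map an altitude (metres above the calibrated ground floor) to a
    floor number, assuming uniform storey height."""
    return round(alt_m / floor_height_m)
```

Because one storey corresponds to only about 0.4 hPa, the phone tracks relative pressure changes rather than absolute altitude, which is how the demo could follow the elevator ride to the sixth floor.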

Dave Rothenberg: We bring all this data together through a technique we call data fusion, and that data fusion is what delivers intelligence to devices, apps, and services.

Tekla S. Perry: Movea demonstrated the power of MEMS sensors with this live motion capture software.

Dave Rothenberg: So the full-body motion capture demo is being driven by a body area network of 15 sensors that’s being worn by the dancer. Now what makes this MEMS-based motion capture system so interesting is that it’s of roughly equivalent performance to a much more expensive image- or video-based motion capture system, which could go for as much as $50 000. Whereas this MEMS-based motion capture system can go for under $10 000. And all the information you see about the body movement on the screen is being processed on the host and displayed on the monitor.

And so Movea creates the software that sits on top of these sensors. We read the data from those sensors. We process that data. We’ve done a motion-enabled or a sensor-enabled tennis racket with Babolat.

Many of these sports applications require significantly fewer sensors, but the principle is the same: using sensors to analyze human performance with an eye toward improving that performance through coaching and guidance.

Tekla S. Perry: For IEEE Spectrum, I’m Tekla Perry.

Dave Rothenberg: But if you use air signature authentication—oh wait. Sorry, I did it the wrong way and it didn’t recognize the gesture. So that’s good. You don’t want a false gesture to be recognized. Let’s try that one more time.

Phone: Welcome, Mr. John Stark.

Dave Rothenberg: Okay, there you go. And you can see it was actually hard to do. And that’s good. You don’t want anybody being able to replicate your air signature.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s video programming is the video.