Sensor fusion and MEMS for 10-DoF solutions

Until recently, discussions about smartphones and tablets usually focused on the latest generation of application processors, display quality, the number of megapixels in cameras, or the newest version of the operating system. Now sensors, and particularly MEMS (micro-electro-mechanical systems) sensors, are becoming part of that discussion thanks to their proliferation in both smartphones and tablets. As an illustration, Table 1 shows the number of sensors incorporated into some of the newest smartphones.

Table 1 – Number of sensors in some recently launched smartphones

In high-end smartphones, the number of sensors has already reached half a dozen and is rapidly marching toward a dozen per device. Sensors are essential to bringing new smart features not only to smartphones and tablets but also to ultrabooks, laptops, and PCs.

Moreover, sensors are becoming ubiquitous and are found in many applications besides mobile devices: industrial control, automotive, smart highway infrastructure, smart grid infrastructure, smart homes, health care, oil exploration and the petroleum industry, and climate monitoring. This proliferation of sensors into all spheres of our lives is mainly due to MEMS technology, which is finally reaching maturity and becoming mainstream. MEMS maturity has brought new products to life at unit costs under a dollar, which in turn is fueling further penetration of sensors and new applications. One can say that MEMS sensors are everywhere. The Petrov Group estimates that the sensor market for smartphones and tablets alone will pass 15 billion units by 2015.

Sensor fusion

One of the hottest developments in sensor applications is multidimensional sensing. Taking a closer look at the sensors used in mobile devices, it is easy to see that the 3D-accelerometer, 3D-gyroscope, and 3D-magnetometer are becoming standard features. Why the need for multidimensional sensing? The short answer: enhanced user experience. The interesting part about these sensors is that each performs some basic sensing: the accelerometer provides x, y, and z linear motion sensing; the gyroscope provides pitch, roll, and yaw rotational sensing; and the magnetometer provides x, y, and z axis magnetic field sensing. While all of these are powerful capabilities, each of these sensors also has limitations that impact accuracy in applications. For example, accelerometers are sensitive to vibration and can generate a signal even when a smartphone or tablet is at rest; gyroscopes suffer from zero-bias drift; and magnetometers are sensitive to magnetic interference, which can also create an undesired signal.
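To make the gyroscope's zero-bias drift concrete, the short simulation below (a hypothetical illustration, not from the article; the 0.5 deg/s bias and 100 Hz sample rate are assumed values) integrates a biased gyro signal while the device sits perfectly still:

```python
import numpy as np

# Illustrative simulation: integrating a gyroscope signal that carries a
# small constant zero bias. Even with the device at rest, the integrated
# angle drifts without bound over time.
rate_hz = 100.0                              # assumed sample rate
t = np.arange(0.0, 60.0, 1.0 / rate_hz)      # one minute of samples
true_rate = np.zeros_like(t)                 # device is actually at rest
bias = 0.5                                   # assumed zero-rate bias, deg/s
noise = np.random.normal(0.0, 0.05, t.size)  # sensor noise, deg/s

measured_rate = true_rate + bias + noise
angle = np.cumsum(measured_rate) / rate_hz   # naive integration, degrees

print(f"Apparent rotation after 60 s at rest: {angle[-1]:.1f} deg")
# With a 0.5 deg/s bias the estimate drifts by roughly 30 degrees per
# minute. This is exactly the error that sensor fusion corrects, using
# the accelerometer and magnetometer as long-term references.
```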

Can the shortcomings of the individual sensors be compensated for? This is where sensor fusion comes into play. Sensor fusion is intelligent, simultaneous processing of data from multiple sensors, whereby the output is greater than the sum of the individual parts. In other words, if the signals from an accelerometer, gyro, and magnetometer are taken at the same time and processed intelligently, the deficiencies of the separate devices can be eliminated and a synthesized, smarter output can be obtained: the gyro tracks fast rotations that would confuse the other two, while the accelerometer (sensing gravity) and magnetometer (sensing north) provide drift-free long-term references that cancel the gyro's bias drift. Typically, clever algorithms and special filtering techniques, such as quaternion-based extended Kalman filtering, are used to produce more sophisticated and precise results. It should be noted that several companies are already dedicated specifically to creating proprietary algorithms and firmware/software solutions for sensor fusion, such as Sensor Platforms, Hillcrest Labs, and Movea. Also, some sensor manufacturers are offering full solutions, including STM, Freescale, InvenSense, and Kionix, to mention a few.
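A full quaternion-based extended Kalman filter is beyond the scope of this article, but the single-axis complementary filter below (a minimal sketch; the function name and the 0.98 blend factor are illustrative choices, not any vendor's algorithm) captures the core fusion idea: trust the gyro for short-term changes and the accelerometer for the long-term average.

```python
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One-axis sketch of the fusion idea (a far simpler relative of the
    quaternion-based extended Kalman filter mentioned above).

    The gyro is trusted for short-term changes (smooth but drifting);
    the accelerometer is trusted for the long-term average (drift-free
    but noisy and vibration-sensitive). alpha sets the blend.
    """
    # Short-term path: integrate the gyro rate (rad/s) over dt seconds.
    gyro_angle = angle + gyro_rate * dt
    # Long-term path: tilt angle implied by gravity in the accel reading.
    accel_angle = math.atan2(accel_x, accel_z)
    # Blend: mostly gyro, slowly pulled back toward the accelerometer.
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

# At rest and level: gravity sits on the z axis, so even with a small
# gyro bias the fused estimate stays bounded instead of drifting away.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=0.002,  # bias, rad/s
                                 accel_x=0.0, accel_z=9.81, dt=0.01)
print(f"angle after 1 s: {angle:.4f} rad")
```

A production 9-DoF solution extends the same idea to full 3D orientation using quaternions, adds the magnetometer to constrain heading, and lets the Kalman filter derive the blending weights from measured noise statistics rather than a fixed alpha.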

Microsoft considers sensor fusion so critical that it made sensor support mandatory for Windows 8. To achieve that, Microsoft created a sensor class driver and also worked with industry partners to define a standard for sensors, which led to the introduction of sensor usages in the Human Interface Device (HID) specification in 2011. Microsoft also optimized its sensor fusion solution by architecting an interface that enables sensor processing at the hardware level. In addition, it implemented a filtering mechanism that sends sensor data up the software stack only at the rate the data is needed, and no faster. All of this is integrated into a programming model called the Windows Runtime. Microsoft certainly did not want to repeat with Windows 8 the omission Google had made with Android: simply creating a placeholder for sensors and leaving it up to the sensor companies to plug in their proprietary solutions.

A typical sensor fusion solution that combines a 3D-accelerometer + 3D-gyro + 3D-magnetometer is called a 9-DoF (nine degrees of freedom) or 9-SFA (nine sensor fusion axes) solution. The best way to understand how such a system works is to look at the inputs and outputs, as shown in Figure 1. The 9-DoF solution allows for two sets of data: one is the pass-through data path, which sends raw data directly to an application; the other is the sensor fusion data path, in which the initial raw sensor data is processed and synthesized into a more intelligent output. An example of pass-through sensor data is a pedometer application (counting someone's steps as they walk), while examples of sensor fusion data include compass applications, enhanced navigation, and 3D games.
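As a sketch of the pass-through path, the toy pedometer below (hypothetical; the 11 m/s^2 threshold, 50 Hz rate, and 0.3 s debounce are assumed values) counts steps directly from raw accelerometer samples, with no fusion involved:

```python
import math

def count_steps(accel_samples, threshold=11.0, min_gap=0.3, rate_hz=50.0):
    """Toy pass-through example: count steps from raw 3D accelerometer
    samples. accel_samples is an iterable of (x, y, z) in m/s^2.

    A step is registered when the acceleration magnitude rises through
    `threshold`, with at least `min_gap` seconds between steps.
    """
    steps = 0
    last_step_t = -min_gap
    above = False
    for i, (x, y, z) in enumerate(accel_samples):
        t = i / rate_hz
        mag = math.sqrt(x * x + y * y + z * z)  # gravity + motion
        if mag > threshold and not above and (t - last_step_t) >= min_gap:
            steps += 1
            last_step_t = t
        above = mag > threshold
    return steps
```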

Sensor fusion is not limited to a 9-DoF solution. For example, if we include one additional sensed quantity, it becomes a 10-DoF (or 10-SFA) solution. A good example would be adding indoor location sensing to the 9-DoF solution, which can be done by adding barometric sensing for altitude. A barometer enables altitude detection between floors, since pressure changes with altitude at a rate of roughly 10 Pa/m (and there are, on average, about 3.5 meters between floors). So the 10-DoF solution includes a 3D-accelerometer, 3D-gyro, 3D-magnetometer, and barometer.
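A minimal sketch of the floor-detection arithmetic, using the standard-atmosphere barometric formula (the function names are illustrative; a real implementation would also low-pass filter the pressure and track weather-driven baseline drift):

```python
def pressure_to_altitude(p_pa, p0_pa=101325.0):
    """Standard-atmosphere barometric formula: altitude in meters from
    pressure in pascals, relative to reference pressure p0 (sea level
    by default; in practice p0 is calibrated at a known floor)."""
    return 44330.0 * (1.0 - (p_pa / p0_pa) ** (1.0 / 5.255))

def floor_change(p_now_pa, p_ref_pa, floor_height_m=3.5):
    """Estimate floors climbed (negative means descended) since the
    reference reading. Near sea level the gradient is about 12 Pa/m,
    consistent with the article's rule of thumb of roughly 10 Pa/m."""
    dh = pressure_to_altitude(p_now_pa) - pressure_to_altitude(p_ref_pa)
    return round(dh / floor_height_m)

# Example: a ~42 Pa pressure drop corresponds to roughly 3.5 m of climb,
# i.e. one floor up.
print(floor_change(101283.0, 101325.0))   # -> 1
```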

Why stop there? Even more sensed quantities can be added, in which case the sensor fusion solution becomes an m-DoF solution, where 'm' stands for 'multiple' and can be greater than 10. Why not have your own private lab at your fingertips and check your blood sugar or cholesterol level whenever you need to? It is no longer far-fetched to expect new smartphones, tablets, ultrabooks, and PCs with universal sensor hubs that can accommodate many applications. Freescale has already demonstrated a 12-DoF solution that includes a 3D-accelerometer, 3D-gyro, 3D-magnetometer, barometer, thermometer, and ambient light sensor. m-DoF solutions will be the way of the future.

It is evident that sensor fusion requires substantial MCU power. There is currently a healthy debate about the most efficient way to do sensor data computing. Many industry experts think the way to go is a dedicated sensor processor (co-processor), while alternatives such as doing the sensor computing on the application processor are also being considered. Interestingly, firmware/software companies are hedging their risk by providing solutions compatible with both embedded processors and application processors. For example, Sensor Platforms has announced its FreeMotion Library of software algorithms, which supports both 32-bit embedded processors and 64-bit application processors, on both the ARM and x86 architectures. The FreeMotion solution also supports accelerometers, gyroscopes, magnetometers, and barometers independent of vendor. This independence from the processor instruction set and from specific sensors gives mobile device manufacturers freedom in choosing suppliers and in optimizing performance and cost.
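The sketch below illustrates the vendor-independence idea in general terms; it is not Sensor Platforms' actual API, and all class and method names are hypothetical. The fusion algorithms are written against a small abstract interface, so swapping sensor suppliers means swapping only a thin driver:

```python
from abc import ABC, abstractmethod
from typing import Tuple

class ThreeAxisSensor(ABC):
    """Hypothetical hardware-abstraction interface for any 3-axis sensor."""
    @abstractmethod
    def read(self) -> Tuple[float, float, float]:
        """Return one calibrated (x, y, z) sample in SI units."""

class VendorXAccelerometer(ThreeAxisSensor):
    """Example driver: converts one vendor's raw counts to m/s^2.
    The 4096 counts-per-g scale factor is an assumed value."""
    SCALE = 9.81 / 4096.0

    def read(self):
        raw = self._read_registers()   # vendor-specific bus access
        return tuple(v * self.SCALE for v in raw)

    def _read_registers(self):
        return (0, 0, 4096)            # stub: 1 g on the z axis

def fuse(accel: ThreeAxisSensor):
    # Fusion code sees only the interface, never the vendor part number,
    # so changing suppliers means changing only the driver class.
    return accel.read()

print(fuse(VendorXAccelerometer()))    # -> (0.0, 0.0, 9.81)
```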

Hello Mr. Ristic,
I would like to express my appreciation for the succinct description of sensor fusion provided in this article. However, while working on sensor fusion for the estimation of the velocity of a body frame, I came across some real-world challenges, for which I seek advice.
*When the mobile device is mounted on a moving body frame, there are three coordinate systems: the geomagnetic Earth frame, the frame of the device mount, and the frame of motion of the body itself. In this case, how can we project the acceleration vector onto the body frame only?
*Furthermore, from inertial navigation, we can use gyroscopes to lock in inertial vectors in the reference frame. But how do we account for centripetal forces in this case?
It would be really helpful if the doubts enumerated above could be clarified.
Thank you,
Hrishik Mishra

The author is to be congratulated for an excellent article on sensor fusion and its future directions.
I would also like to call his attention (since he is local to Silicon Valley) to an event on a closely related topic being held this month, on Saturday the 29th:
Next Generation Circuit & Systems, Communication and Sensor Technologies in Mobile Devices
http://ewh.ieee.org/r6/scv/comsoc/
MP Divakar