AI Can Sense Humans Through Walls Using Radio Signals

Researchers develop a neural network that can identify people on the other side of the wall based on their gait.

Using radio signals, the AI accurately predicts people’s postures and movements without any cameras.

By 2030, artificial intelligence will contribute more than $15 trillion to the world economy, according to a PwC report. Over the last few years, we have seen huge growth in this field. Google alone has invested nearly $4 billion in AI research and development.

Recently, a team of engineers at MIT developed a tool (named RF-Pose) that can see people through solid objects and even identify them based on their gait. Let’s find out how the system works and how it was built.

Neural Network For Analyzing Radio Signals

For more than a century, we have used X-rays in medical radiography, security scanners, industrial CT scanning, and much more, to see through objects. But the technique comes with a downside: it sprays the target with radiation.

That’s why the tool uses radio signals to detect people and their movements on the other side of a wall. It relies on the same principle as WiFi: the signal travels through the wall, bounces off a human body, and comes back through the wall to a detector.
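The bounce-back principle is what makes ranging possible: the round-trip time of a reflected pulse tells you how far away the reflector is. Here is a minimal sketch of that idea; the 20-nanosecond delay is an invented value for illustration, not a figure from the MIT system.

```python
# Illustrative only: estimate distance to a reflecting body from the
# round-trip time of a radio pulse (radar-style time-of-flight).

SPEED_OF_LIGHT = 3.0e8  # meters per second

def distance_from_round_trip(delay_seconds: float) -> float:
    """Distance to a reflector, given the pulse's round-trip delay.

    The signal travels to the body and back, so the one-way
    distance is half of (speed * total delay).
    """
    return SPEED_OF_LIGHT * delay_seconds / 2.0

# A pulse that returns after 20 nanoseconds bounced off something 3 m away.
print(distance_from_round_trip(20e-9))  # 3.0
```

Real systems refine this with many antennas and continuous measurements, but the core geometry is this simple division by two.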

Why does it bounce off humans, you ask? An average adult human body is about 60% water, which radio signals cannot easily penetrate. And these are radio signals roughly a thousand times weaker than those of a typical home WiFi router.

According to the authors, the phenomenon is simple but hard to exploit. The radio signal coming back to the detector was very messy: it contained reflections from every object in the scene.

Image credit: Jason Dorfman / MIT CSAIL

That’s why they used a neural network to analyze the noisy radio signals. However, most neural networks are trained on hand-labeled data. A network trained to identify dogs, for instance, requires a large dataset of pictures, each labeled by a person as “dog” or “not dog”. Labeling radio signals by hand the same way would require a tremendous amount of human effort.

To deal with this issue, they set up a camera alongside the wireless device to record people’s movements concurrently. They collected a large number of images of people talking, sitting, walking, and opening doors, and from these images they extracted the key points of the body (what they call stick figures).

Then they paired the stick figures extracted from the camera footage with the corresponding radio signals and trained the neural network on these pairs. This enabled the network to learn the connection between radio signals and stick figures.
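The cross-modal training step above can be sketched as follows. Everything here is a toy stand-in: the 64-dimensional "RF frames", the 28-number flattened stick figures, and the linear model trained by gradient descent are assumptions made to keep the sketch short; the actual system uses a deep network on real radar data.

```python
import numpy as np

# Toy cross-modal supervision: camera-derived stick figures act as labels
# for the paired radio signals, so no human labeling of RF data is needed.
rng = np.random.default_rng(0)
n_frames, rf_dim, pose_dim = 200, 64, 28

true_map = rng.normal(size=(rf_dim, pose_dim))    # hidden RF -> pose relation
rf_frames = rng.normal(size=(n_frames, rf_dim))   # stand-in radio signals
stick_figures = rf_frames @ true_map              # "labels" from the camera

# Train a linear predictor on (radio signal, stick figure) pairs by
# gradient descent on mean squared error.
weights = np.zeros((rf_dim, pose_dim))
lr = 0.01
for _ in range(300):
    pred = rf_frames @ weights
    grad = rf_frames.T @ (pred - stick_figures) / n_frames
    weights -= lr * grad

final_loss = np.mean((rf_frames @ weights - stick_figures) ** 2)
# After training, the model predicts poses from the RF input alone.
```

The key design point is that the camera is needed only at training time; once the mapping is learned, the radio signal by itself is enough to produce a stick figure.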

After training, the system could accurately predict people’s postures and movements without any cameras. The developers reported that the AI correctly identified a specific person from a group of 100 about 83% of the time, using only a two-second clip of the stick figure.
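Identifying a person from a short stick-figure clip can be sketched as a nearest-neighbor comparison against stored clips. The clip length, keypoint count, and distance-based matching below are illustrative assumptions, not the actual RF-Pose identification model.

```python
import numpy as np

# Toy gait identification: match a query clip of keypoint frames against
# one stored reference clip per person, using mean squared distance.
rng = np.random.default_rng(1)
n_people, frames_per_clip, pose_dim = 5, 60, 28  # ~2 s at 30 fps (assumed)

# One reference clip per person, standing in for their characteristic gait.
references = rng.normal(size=(n_people, frames_per_clip, pose_dim))

def identify(clip: np.ndarray) -> int:
    """Return the index of the person whose reference clip is closest."""
    dists = [np.mean((clip - ref) ** 2) for ref in references]
    return int(np.argmin(dists))

# A noisy observation of person 3 should still match person 3.
query = references[3] + 0.1 * rng.normal(size=(frames_per_clip, pose_dim))
print(identify(query))  # 3
```

A real identifier would learn discriminative gait features rather than compare raw coordinates, but the principle of matching a short movement clip against known individuals is the same.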

Image credit: MIT CSAIL

Applications

At present, the AI produces only two-dimensional stick figures, but the researchers are working on three-dimensional representations capable of reflecting smaller details. They also plan to explore more sophisticated models and to identify people in the wild performing activities other than walking.

For instance, the system might be able to observe a hand tremor in older people and give them an early warning sign so they can speak with their doctors before things get serious.

Granted, such conditions can also be detected by wearable sensors, but RF-Pose doesn’t require patients to wear or charge any device. Beyond healthcare, it could be used to locate survivors in rescue missions, and even in video games where players move around the house.