
Software detects motion that the human eye can’t see

Video technique could lead to remote diagnostic methods

July 24, 2012

An example of using the Eulerian Video Magnification framework for visualizing the human pulse. (a) Four frames from the original video sequence. (b) The same four frames with the subject’s pulse signal amplified. (c) A vertical scan line from the input (top) and output (bottom) videos plotted over time shows how our method amplifies the periodic color variation. In the input sequence the signal is imperceptible, but in the magnified sequence the variation is clear. (Credit: MIT)

A new set of software algorithms can amplify aspects of a video that are normally undetectable to the human eye, making it possible, for example, to measure someone’s pulse from a video by capturing the way blood flow subtly changes the color of their face, Technology Review reports.

The software process, called “Eulerian video magnification” by the MIT computer scientists who developed it, decomposes every frame of a video into its constituent spatial elements, amplifies the subtle temporal variations within them, and then reconstructs the frames, revealing changes invisible to the naked eye.

These variations include the subtle changes in the redness of a person’s face caused by their pulse.
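The core idea can be sketched on a single pixel’s intensity trace: band-pass the signal in time around the expected pulse rate, scale the result, and add it back to the original. The following is a minimal Python sketch of that idea; the authors’ implementation was in MATLAB and C++, and the function name, toy frequencies, and amplification factor here are illustrative assumptions, not the published code.

```python
import numpy as np

def eulerian_magnify_signal(signal, fps, low_hz, high_hz, alpha):
    """Amplify subtle periodic variation in a per-pixel intensity trace.

    A 1-D sketch of the Eulerian idea: isolate a temporal frequency
    band (e.g. 0.8-3 Hz for a resting human pulse), scale it by alpha,
    and add it back to the original signal.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    spectrum = np.fft.rfft(signal)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum_band = np.where(band, spectrum, 0)   # keep only the pulse band
    variation = np.fft.irfft(spectrum_band, n)    # subtle temporal variation
    return signal + alpha * variation             # magnified output

# Toy trace: constant skin tone plus a tiny 1.2 Hz "pulse" ripple.
fps = 30
t = np.arange(0, 10, 1.0 / fps)
trace = 0.6 + 0.001 * np.sin(2 * np.pi * 1.2 * t)
out = eulerian_magnify_signal(trace, fps, 0.8, 3.0, alpha=50)
# The peak-to-peak variation grows roughly (1 + alpha)-fold.
```

In the full method this band-pass-and-amplify step is applied to each level of a spatial pyramid of the video rather than to raw pixels, which is what makes the amplification robust to noise.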


The results were generated using non-optimized MATLAB code on a machine with a six-core processor and 32 GB RAM. The computation time per video was on the order of a few minutes. We used a separable binomial filter of size five to construct the video pyramids. We also built a prototype application that allows users to reveal subtle changes in real time from live video feeds, essentially serving as a microscope for temporal variations. It is implemented in C++, is entirely CPU-based, and processes 640×480 videos at 45 frames per second on a standard laptop. It can be sped up further by utilizing GPUs.
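The “separable binomial filter of size five” mentioned above is the standard [1 4 6 4 1]/16 kernel commonly used to build Gaussian pyramids: blur each frame with it along rows and columns, then decimate by two. A rough pure-NumPy sketch of that pyramid construction follows; the function names and zero-padded edge handling are assumptions for illustration, not the paper’s code.

```python
import numpy as np

# 1-D binomial filter of size five (a row of Pascal's triangle, normalized).
BINOMIAL5 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def pyr_down(image):
    """One pyramid level: separable binomial blur, then 2x decimation."""
    # Apply the 1-D kernel along rows, then columns ("separable").
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, BINOMIAL5, mode="same"), 1, image)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, BINOMIAL5, mode="same"), 0, blurred)
    return blurred[::2, ::2]  # keep every other row and column

def gaussian_pyramid(image, levels):
    """Return a list of progressively smaller, smoother copies of `image`."""
    pyramid = [image]
    for _ in range(levels):
        pyramid.append(pyr_down(pyramid[-1]))
    return pyramid

frame = np.random.rand(64, 64)
pyr = gaussian_pyramid(frame, levels=3)
# Level shapes: (64, 64), (32, 32), (16, 16), (8, 8)
```

Because the kernel is separable, each level costs two 1-D convolutions per pixel instead of one 2-D convolution, which is part of why the prototype can run in real time on a CPU.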