Game Processors Could Master Intel Video Overload

Aug. 5, 2012 - 02:28PM

If there is one mantra among the intelligence community’s full-motion video (FMV) enthusiasts, it is this: While the aircraft and cameras that gather FMV and lower-rate motion imagery are extremely sophisticated, the process of cataloging and analyzing the resulting data is anything but. Analysts stare at images and tap notes into computers: Man enters courtyard. Man stands in courtyard. Man smokes cigarette.

“That’s probably not the most effective use of the high-caliber, talented individuals we put in Distributed Common Ground System sites,” U.S. Air Force Brig. Gen. Scott Bethel said in an April speech to industry experts shortly before his retirement.

Some Air Force intelligence architects would prefer to use computer software to handle the tedious job of cataloging the contents of full-motion video streams. Software might also enhance video or automatically alert analysts or operators to suspicious activity. That would free analysts to take on more important tasks, such as identifying patterns and figuring out their significance.

That goal could be within technical reach, industry experts said. The intel community is beginning to apply the video game industry’s graphics processing units (GPUs) to the problem of managing the torrent of imagery. Contractors are installing GPUs in computers on the ground or embedded in aircraft, but wider adoption of automation will mean winning over skeptics in the military who have heard these promises before. On top of that, the need for automation is premised largely on predictions of rising demand for FMV and motion imagery even as the U.S. shifts to a supporting role in Afghanistan after 2014.

Part of the solution could spring from the realization that rendering complex graphics in response to a gamer’s decisions is a lot like analyzing, enhancing or retrieving a clip of a specific vehicle or person spotted in Afghanistan, for example. In the game market, Advanced Micro Devices and Nvidia have been competing to produce ever more sophisticated GPUs to drive ever more realistic games. When GPUs are paired with the more common CPUs, they can accelerate applications dramatically.

“Instead of being able to process a couple of megapixels in real time, we can process billions of pixels per second,” said Scott Thieret, technical director for Mercury Computer Systems, a Chelmsford, Mass., company that develops digital image and signal processing subsystems for aircraft and ground vehicles.
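The speedup Thieret describes comes from data parallelism: in many imaging operations, each output pixel depends only on the corresponding input pixel, so the work can be split across the thousands of cores on a GPU. A minimal sketch in plain Python (our illustration, with hypothetical names; no GPU required) shows the kind of per-pixel operation that maps naturally onto that hardware:

```python
# A per-pixel brightness adjustment. Because each output pixel is computed
# independently of every other pixel, the loop below could be divided among
# any number of parallel processors -- the pattern GPUs accelerate.

def adjust_brightness(frame, gain):
    """frame is a list of rows; each row is a list of 0-255 pixel values."""
    return [[min(255, int(p * gain)) for p in row] for row in frame]

frame = [
    [10, 20, 30],
    [40, 50, 60],
]
brighter = adjust_brightness(frame, 1.5)
# Every pixel was scaled without reference to its neighbors,
# so a GPU can process millions of them at once.
```

On a CPU this runs pixel by pixel; the same arithmetic, expressed as a GPU kernel, runs across the whole frame in parallel, which is where the billions-of-pixels-per-second figures come from.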

FMV drawn from UAVs consumes the lion’s share of work at Air Force Distributed Common Ground System sites, “and new wide-area motion imagery sensors now being deployed have the potential to vastly increase the amount of raw data collected,” according to a March report by the Rand Corp. “The information explosion resulting from these vast amounts of motion imagery threatens to leave Air Force intelligence analysts drowning in data.”

Commercial firms are banking that UAVs, like the Reaper aircraft equipped with Gorgon Stare wide-area surveillance pods, will continue to flood analysts with imagery. Those firms are scrambling to harness the power of GPUs in hardware and software for military and intelligence agencies. MotionDSP of Burlingame, Calif., for example, is working to improve the quality of images captured by UAVs and surveillance cameras. The company’s Ikena software uses GPUs to perform the computation-intensive process of reconstructing images frame by frame to remove noise and compression artifacts.

“We reconstruct the video so analysts don’t have to squint their eyes trying to see it,” said Sean Varah, MotionDSP’s CEO.

That type of technology is particularly important to customers gathering full-motion video in places where fog, sandstorms and darkness often obscure the contents of imagery. Even under clear skies, cameras flying on distant aircraft can produce images that are hard to decipher. Image quality is reduced further when U.S. agencies compress the videos to send them to U.S. bases for processing, Varah said.

MotionDSP’s software reconstructs each frame in a video with data drawn from other frames in the series. That job requires so much computing power, however, that when company officials first ran the original algorithm in 2006, it took more than a day to reconstruct a single frame of standard-definition video. GPUs have helped to speed up that process significantly. Customers can now use the Ikena software to process the video feed from a Predator UAV as the data is received, Varah said. That capability is far from common.
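The intuition behind that frame-by-frame reconstruction can be sketched simply. Random sensor noise differs from one frame to the next, while the underlying scene changes slowly, so combining a pixel with the same pixel in neighboring frames cancels much of the noise. The toy example below (our illustration, not MotionDSP’s algorithm, and ignoring motion between frames) averages each frame with its immediate neighbors:

```python
# Toy temporal denoising: reconstruct frame i by averaging it with the
# frames immediately before and after it. Real systems must also align
# frames to compensate for camera and scene motion; this sketch skips that.

def denoise(frames, i):
    """Each frame is a list of rows of pixel values."""
    lo, hi = max(0, i - 1), min(len(frames) - 1, i + 1)
    window = frames[lo:hi + 1]
    rows, cols = len(frames[i]), len(frames[i][0])
    return [[sum(f[r][c] for f in window) / len(window)
             for c in range(cols)]
            for r in range(rows)]

# Three noisy observations of the same 1x2 scene (true values 100 and 200):
frames = [[[103, 198]], [[97, 204]], [[100, 198]]]
clean = denoise(frames, 1)
# The noise largely cancels; the averaged pixels land near 100 and 200.
```

Doing this for every pixel of every frame, with motion compensation added, is what made the original 2006 algorithm take more than a day per frame on a CPU, and it is exactly the kind of independent per-pixel arithmetic a GPU can spread across thousands of cores.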

“Many companies understand the promise of using graphics cards for image reconstruction, but we are shipping products today,” Varah said. Those products have been sold to the U.S. Air Force and Navy, U.S. intelligence agencies, the U.S. Secret Service, the Naval Criminal Investigative Service and the London Metropolitan Police, Varah said.

Mercury Computer Systems engineers also have developed products that employ high-performance GPUs produced for the consumer market. For Mercury, the trick to using these chips in military systems is packaging them to be rugged enough to withstand the intense heat, cold and vibration of flight. In the past, only about one-tenth of the video imagery gathered by aircraft cameras could be processed onboard due to stringent size, weight and power constraints, Thieret said. GPUs are helping Mercury push the number higher.

“There’s a particular algorithm we use for image reconstruction that we have implemented on a high-performance Intel processor and on GPUs that we have embedded and deployed in theater today,” Thieret said. “Using the same, exact algorithm running on the same, exact data, the GPU is 200 times faster.”

What’s more, GPUs are extremely energy-efficient, providing a huge improvement in processing per watt. That increase in capability without additional size, weight or power has enabled Mercury to place additional processing power right next to onboard sensors for various systems, including the Gorgon Stare pods.

GPUs also are one ingredient in the latest video search tools being developed by Cognika Intelligence & Defense Solutions. Cognika uses a proprietary algorithm to index and classify information embedded in each frame of a video.

“It’s like Google for video,” said Shashi Kant, chief scientific officer for the Cambridge, Mass.-based company.

The software automatically detects and tags objects, events and activities as the full-motion video is collected and stored. Searches can be performed by image, text or video clip, Kant said. The software can, for example, highlight scenes of people digging near roads or trucks making U-turns. It can also notify analysts whenever normal patterns change. If four people cross a specific road on most days, the software can alert analysts whenever more than 20 people cross that road, Cognika President Christian Connors said.
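The pattern-of-life alert Connors describes can be reduced to a baseline-and-threshold check. The sketch below is our own simplified illustration of that idea (the function name and threshold rule are hypothetical, not Cognika’s code):

```python
# Pattern-of-life alerting, reduced to its simplest form: keep a baseline
# count for a recurring activity and flag any observation that exceeds it
# by a large factor.

def crossing_alert(observed, baseline, factor=5):
    """Return True when today's count is far above the normal baseline."""
    return observed > baseline * factor

baseline = 4               # about four people cross the road on a normal day
crossing_alert(4, baseline)    # a normal day: no alert
crossing_alert(21, baseline)   # more than 20 crossings: alert the analyst
```

A production system would learn the baseline from the indexed video itself and account for time of day and day of week, but the underlying comparison is the same.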

Much of Cognika’s software will look familiar to anyone accustomed to commercial computer applications. Search results are presented in a point-and-click format that resembles Google’s. Like Amazon, the software also suggests topics that might interest an analyst. The goal is to assist analysts who stare at multiple screens for hours on end. Instead of analysts spending 70 percent of their time watching screens and 30 percent determining the significance of what they see, Cognika is trying to flip that ratio, Connors said.