Data fusion is new darling in smart-technology circles

Researchers from more than 20 countries will gather in Sunnyvale, Calif., next month to tell how the emerging technology of data fusion is fanning out into fields as diverse as predicting traffic jams and earthquakes, and building robots with hand-eye coordination. Some partisans see information fusion one day supplanting artificial intelligence and fuzzy logic in the breadth of real-world problems it addresses, but others caution that the technology, which had its origins in military targeting systems, is still wet behind the ears.

Just as the human brain puts together sight, smell, hearing, touch and taste into the savory tang of a red, ripe strawberry, so data fusion in computers aims to unify multiple data streams. Automatic target recognition, for example (the seminal application), melds input from several sources into perceptions of "friend" or "foe."

Adding generalized object recognition equips data fusion for many more object types, from faulty solder joints to the next likely circuit board to fail, making it the hottest new smart technology around, according to the chairman of the 2nd International Conference on Information Fusion (more informally, Fusion '99), to be held July 6-8.

"We are finding that data fusion techniques can solve a lot of problems that are difficult to crack without [them]," said Dongping (Daniel) Zhu, general chairman of Fusion '99 and the founder of Zaptron Systems Inc. (Mountain View, Calif.).

Indeed, the conference will show a technology just beginning to flex its muscles in every direction. Masatoshi Ishikawa, a University of Tokyo professor, will report on progress in fusing video and tactile sensor data to provide eye-hand coordination for robots. Texas A&M University professor Edmond Chin-Ping Chang, who is also a researcher at the Oak Ridge National Lab, will describe combining neural learning and sensor fusion to predict traffic congestion.

Information-fusion algorithms help an electronic eye guide the movements of a robotic arm.

NTT Data Corp. researcher Weijie Liu will tell of fusing diverse biometric sensor data to authenticate security personnel, while professor Ke Zhang of Tsinghua University (Beijing) will disclose how data fusion can help predict earthquakes. More than 200 papers will be presented in 30 sessions over three days.

Many of the same researchers who pioneered neural-network algorithms have turned to information fusion as a more practical alternative. "Even the theoretical developments are more powerful than current neural-network theory, because fusion researchers focus on information processing rather than on exactly how real neurons work in the brain," said George Chapline, a physicist at Lawrence Livermore National Laboratory and chairman of a session on biological and linguistic models.

According to Chapline, data fusion focuses on practical results, but in so doing is uncovering principles that govern information processing regardless of whether done by a computer or a brain. "Information fusion may succeed where neural-network research has failed, that is, by revealing the underlying mathematical principles of how the brain functions," he said.

Chapline said neural researchers zero in on anatomical considerations, trying to unravel just how real neurons work. Fusion researchers look at the underlying mathematical principles of information processing in the brain, without worrying how actual neurons do it.

Proponents of fusion are understandably gung ho about the many possible applications the technology could serve. However, those with experience in artificial intelligence and neural networks are raising a red flag.

"Like any field, information fusion can be oversold, just as happened with artificial intelligence and neural networks. When they didn't live up to the overinflated expectations, people didn't want to use them at all," said Belur Dasarathy, a researcher at Dynetics Inc. (Huntsville, Ala.), who has been invited to edit the first international journal on fusion (Information Fusion, published by Elsevier). Dasarathy is preparing an overview of the fledgling field for the journal, and has been doing some soul searching over how best to foster growth.

"My goal is to find the circumstances under which fusion is beneficial, so that whenever I point out the pluses, I can also point out the minuses," he said. "For any application, I always ask three questions: when to fuse, what to fuse and how to fuse."

To jump-start developers, Dasarathy is creating software development kits. His Gifts package helps in choosing an architecture for fusing information in a particular application. His latest software, Fuse, helps designers evaluate implementations of information fusion to improve performance and bring down costs.

"Suppose you need a $100 sensor," said Dasarathy. "But if you fuse the information from two 99-cent sensors you get superior performance; that's the kind of result Fuse can help you realize."
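The economics Dasarathy describes rest on a basic statistical fact: averaging independent noisy sensors reduces the error variance. A minimal sketch of the idea (the sensor noise levels and the simple averaging rule are illustrative assumptions, not a description of the Fuse package):

```python
import random

random.seed(42)

TRUE_VALUE = 20.0  # the quantity being measured, e.g. temperature in deg C


def read_sensor(noise_sd):
    """Simulate one noisy sensor reading with Gaussian noise."""
    return random.gauss(TRUE_VALUE, noise_sd)


def fused_reading(noise_sd, n_sensors=2):
    """Fuse independent sensors by averaging: error variance drops by 1/n."""
    return sum(read_sensor(noise_sd) for _ in range(n_sensors)) / n_sensors


# Compare mean absolute error of one sensor vs. two fused sensors
trials = 10_000
err_single = sum(abs(read_sensor(1.0) - TRUE_VALUE) for _ in range(trials)) / trials
err_fused = sum(abs(fused_reading(1.0) - TRUE_VALUE) for _ in range(trials)) / trials
print(f"single sensor MAE: {err_single:.3f}")
print(f"two fused sensors MAE: {err_fused:.3f}")
```

Averaging two sensors of equal quality cuts the error by roughly a factor of the square root of two, which is why two cheap sensors can beat one expensive one.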

Virtual triangulation

Data fusion got its start when Air Force fighter planes began sharing sensor data in real time. By pooling information, flight computers could triangulate the location of an enemy.

Unfortunately, the technique didn't work if only one or two sensor data streams were available. The breakthrough that spawned the new smart technology as a separate field was the integration of the data from two sensors, over time, to "virtually" triangulate a location. By forming multiple hypotheses regarding the location of an enemy plane, and verifying or rejecting those hypotheses over a period of time, data fusion algorithms were able to triangulate a location even without three independent sensor data streams.

Usually two hypotheses regarding possible plane locations can be resolved by calculating velocities and checking them against aircraft specifications. For instance, given two possible locations for an enemy plane, or "bogie," the flight computer can use the distance between the hypothesized positions and the time elapsed to calculate the speed at which the bogie must have moved to get from one point to the next. If the calculated speed is faster than any plane can travel, as would be the case for an incorrect or "phantom" target, then the computer can deduce that the other hypothesis must be the right one. This simple example of fusing two separate but similar sensor data streams has since given way to attempts to merge multiple data streams from dissimilar sensors, a taxing job for data fusion algorithms.
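The hypothesis-pruning step described above can be sketched in a few lines. This is a toy illustration of the speed check, not an actual targeting algorithm; the positions, time step and speed bound are invented for the example:

```python
import math

MAX_AIRCRAFT_SPEED = 900.0  # m/s; a generous upper bound, illustrative only


def implied_speed(p1, p2, dt):
    """Speed an aircraft would need to travel between two hypothesized
    (x, y) positions in metres over dt seconds."""
    distance = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return distance / dt


def prune_hypotheses(prev_pos, candidates, dt):
    """Keep only candidate positions physically reachable from prev_pos."""
    return [c for c in candidates
            if implied_speed(prev_pos, c, dt) <= MAX_AIRCRAFT_SPEED]


# Two hypothesized locations one second after a confirmed fix at (0, 0).
# The second would require 5,000 m/s, so it is rejected as a phantom.
survivors = prune_hypotheses((0.0, 0.0), [(300.0, 400.0), (3000.0, 4000.0)], dt=1.0)
print(survivors)  # [(300.0, 400.0)]
```

Repeating this check over successive time steps is what lets two sensors "virtually" triangulate a target that three independent sensors would otherwise be needed to fix.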

In combat situations, each sensor receives many signals from multiple bogies simultaneously, multiplying the number of hypotheses that must be verified. The situation is dicier when hypotheses made from incomplete data from one type of sensor must be verified with data from dissimilar sensors. For instance, it might be necessary to digest radar, sonar, microwave, radio beacon, infrared, laser range, interferometer, ultrasound, visual or other sensor types. Algorithms for such applications must say yea or nay to hypotheses regardless of which sensor type was used to formulate them. Cracking this tough nut has opened the door to myriad commercial applications.

A page from 007

One of the first applications of multisensor fusion from dissimilar types of sensors was the "moving maps" foreshadowed by James Bond in Goldfinger, where a blinking light on a dashboard-mounted map told where the good guys and the bad guys were lurking. Today, a number of vendors use data fusion to realize commercial versions of this erstwhile fantasy.

Multiple data streams for the moving map must be fused from GPS (Global Positioning System) receivers, accelerometers, speedometers, odometers and the internal map database. Just putting the blinking light on the map involves fusing GPS data with the map database, because a direct mapping would usually mislocate the "you-are-here" blinking light in the middle of a building instead of on a street. That's because GPS data is more accurate than land maps, requiring fusion algorithms to nudge the light to the center of the closest street instead of letting it blindly blink on the 11th floor of some skyscraper.
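The nudging step is a map-matching problem: project the raw fix onto the nearest street segment. A minimal sketch, with streets modeled as 2-D line segments in arbitrary map units (the street layout and coordinates are illustrative assumptions):

```python
def snap_to_street(fix, streets):
    """Snap a raw GPS fix (x, y) to the nearest point on any street segment.
    Each street is a pair of endpoints ((x1, y1), (x2, y2))."""
    best, best_d2 = None, float("inf")
    for (ax, ay), (bx, by) in streets:
        # Project the fix onto the segment, clamping to its endpoints
        dx, dy = bx - ax, by - ay
        seg_len2 = dx * dx + dy * dy or 1e-12
        t = max(0.0, min(1.0, ((fix[0] - ax) * dx + (fix[1] - ay) * dy) / seg_len2))
        px, py = ax + t * dx, ay + t * dy
        d2 = (fix[0] - px) ** 2 + (fix[1] - py) ** 2
        if d2 < best_d2:
            best, best_d2 = (px, py), d2
    return best


# A fix that lands "inside a building" gets nudged onto the closest street
streets = [((0.0, 0.0), (100.0, 0.0)),   # east-west street along y = 0
           ((0.0, 0.0), (0.0, 100.0))]   # north-south street along x = 0
print(snap_to_street((30.0, 4.0), streets))  # (30.0, 0.0)
```

Production systems add heuristics on top of this, such as preferring the street consistent with the car's recent heading, but the nearest-segment projection is the core of the fusion.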

Multiple data streams also need to be fused when the car drives into a tunnel: whenever GPS data is unavailable, the system falls back on speedometer and odometer readings and the map database to keep the blinking light moving.
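That fallback is classic dead reckoning: when the primary sensor drops out, advance the last known position using the distance travelled and the heading. A rough sketch of the switch-over logic (the flat x/y coordinates and heading convention are simplifying assumptions; real systems work on geodetic coordinates and blend the estimates with a filter):

```python
import math


def estimate_position(gps_fix, last_pos, heading_deg, odometer_delta):
    """Return the best position estimate: trust the GPS fix when available,
    otherwise dead-reckon from the last known (x, y) position using the
    compass heading (0 = north) and the distance travelled since."""
    if gps_fix is not None:
        return gps_fix
    theta = math.radians(heading_deg)
    return (last_pos[0] + odometer_delta * math.sin(theta),
            last_pos[1] + odometer_delta * math.cos(theta))


# In a tunnel (no GPS), after 50 m travelled due north from (100, 200):
print(estimate_position(None, (100.0, 200.0), heading_deg=0.0, odometer_delta=50.0))
# (100.0, 250.0)
```

When the car emerges from the tunnel, the next GPS fix corrects whatever error the dead reckoning has accumulated.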

With the success of such 007 applications, commercial companies have branched out, seeking many other uses for data fusion technology. For instance, fault-diagnosis systems utilize data fusion to detect and locate parts with defects, and also to forecast and prevent failures before they happen. Fuzzy logic simplifies unifying data from dissimilar sensor types when diagnosing faults in assembly lines. Adding neural learning to the brew enables diagnostics to go beyond pinpointing faults to sniffing out the precursors to faults, with an eye to getting them fixed before they bring down the assembly line.
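The fuzzy approach to unifying dissimilar sensor readings can be sketched roughly as follows. The sensors, thresholds and the max-combination rule here are illustrative assumptions, not a system described at the conference:

```python
def fuzzy_membership(value, low, high):
    """Degree (0..1) to which a reading counts as 'abnormal':
    ramps linearly from 0 at `low` to 1 at `high`."""
    return max(0.0, min(1.0, (value - low) / (high - low)))


def fuse_fault_evidence(readings):
    """Fuse dissimilar sensor evidence of a fault with a fuzzy OR (max):
    the joint fault score is driven by the strongest single indicator."""
    scores = {
        "temperature": fuzzy_membership(readings["temp_c"], 60.0, 90.0),
        "vibration":   fuzzy_membership(readings["vib_mm_s"], 2.0, 8.0),
        "current":     fuzzy_membership(readings["amps"], 10.0, 15.0),
    }
    return max(scores.values()), scores


score, detail = fuse_fault_evidence({"temp_c": 75.0, "vib_mm_s": 3.5, "amps": 9.0})
print(f"fault score: {score:.2f}")  # 0.50, driven by the temperature reading
```

The membership functions put temperature in degrees, vibration in mm/s and current in amps onto one common 0-to-1 scale, which is what lets the dissimilar sensor types be combined at all; neural learning can then be trained on trends in these scores to flag the precursors to a fault.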

At the conference, papers will demonstrate how fuzzy logic can meld dissimilar sensor data, and how neural learning can detect the warning signs of failure and thereby foretell the future. Papers will show how fuzzy logic and neural nets can help data fusion to successfully create real-time collision-avoidance systems that give drivers advance warning of a possible crash.

Of the many medical applications detailed at the conference, one paper will show how data fusion can infer the 3-D shapes of internal organs from multiple 2-D images. Also, applications will be described that fuse different types of sensor data, such as X-ray and magnetic resonance, and correlate it with knowledge databases to automatically match symptoms with maladies.

Other papers will show that data fusion is inching toward emulating the abilities of humans, such as the ability to sense similarities in the content of image libraries. Using data fusion techniques, conference papers will show how automated systems can retrieve all "still lifes" or "landscapes" without key words or tags.

A few researchers even want to uncover high-level types of data fusion in the human mind, in the hope of someday writing algorithms to emulate them. For instance, Gordon Shaw, professor emeritus at the University of California, Irvine, will delve into why listening to Mozart increases scores on spatial reasoning tests. Shaw will claim in his plenary speech at Fusion '99 that the answer lies in the mechanism of data fusion in the human brain.