A Faculty Early Career Development (CAREER) Award from the National Science Foundation in 2003 helped Radke to advance his work. The CAREER award is the most prestigious honor presented to junior faculty by the NSF, and it aims to jumpstart the careers of promising young researchers.

Radke and his students have worked to develop algorithms that would allow networks of many wireless cameras randomly distributed over a wide area to figure out where they were and in which direction they were pointed. Careful programming and skillful engineering allowed the creation of a “chain of conversations” between the cameras, comparing landmarks and locations to enable each camera to “find itself” in a 3-D map of the world only by exchanging information with its neighbors. The cameras subsequently would be able to collaborate on a higher-level task, like tracking a vehicle moving through the environment, without requiring a central computer to direct the process.
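The flavor of that neighbor-to-neighbor "chain of conversations" can be sketched with a toy consensus-style localization loop. This is an illustrative assumption, not Radke's actual algorithm: four cameras, each able to measure only its offset to adjacent cameras (say, via shared landmarks), repeatedly average their neighbors' position estimates until the whole network agrees on a consistent map, with no central computer involved.

```python
import random

random.seed(7)

# Hypothetical 4-camera network; positions and links are illustrative.
true_pos = {0: (0.0, 0.0), 1: (10.0, 0.0), 2: (10.0, 10.0), 3: (0.0, 10.0)}
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

def offset(a, b):
    """Relative displacement camera a can measure toward camera b
    (assumed exact here; real measurements would be noisy)."""
    ax, ay = true_pos[a]
    bx, by = true_pos[b]
    return (bx - ax, by - ay)

# Every camera starts with a random guess of where it is; camera 0
# anchors the shared coordinate frame.
est = {i: (random.uniform(-5, 5), random.uniform(-5, 5)) for i in true_pos}
est[0] = (0.0, 0.0)

for _ in range(200):                  # repeated local "conversations"
    new_est = {0: (0.0, 0.0)}         # the anchor camera stays fixed
    for i in [1, 2, 3]:
        xs, ys = [], []
        for j in neighbors[i]:
            dx, dy = offset(j, i)     # where i should sit, seen from j
            xs.append(est[j][0] + dx)
            ys.append(est[j][1] + dy)
        new_est[i] = (sum(xs) / len(xs), sum(ys) / len(ys))
    est = new_est
```

Because each update only averages information from direct neighbors, the estimates contract toward the true layout after enough rounds of message exchange, which is the essential property that lets such a network "find itself" without central coordination.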

A related project, conducted with a student who now works at Oak Ridge National Laboratory, involves understanding video of heavily populated scenes, such as commuters in train stations or automobiles in rush-hour traffic. The goal is to get an accurate count of the number of moving objects as well as to determine the dominant patterns of motion in the scene. Knowing these patterns can help determine when a motion is out of the ordinary and should be investigated.
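One simple way to see what "counting moving objects" can mean computationally is a toy sketch (not Radke's actual method): mark the pixels that changed between two video frames in a binary motion mask, then count the connected blobs of changed pixels as distinct moving objects.

```python
from collections import deque

# Toy binary motion mask: 1 marks pixels that changed between two frames.
# The grid and its contents are an illustrative assumption.
motion_mask = [
    [0, 1, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 1, 1],
    [1, 1, 0, 0, 0, 0, 0],
]

def count_objects(mask):
    """Count 4-connected components of changed pixels via BFS."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # found a new blob
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

print(count_objects(motion_mask))  # 3: three separate blobs of motion
```

Real crowded scenes are far harder, since pedestrians and vehicles overlap and occlude one another, which is part of what makes the research problem interesting.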

Radke and his team also are augmenting a distributed camera network with data collected from high-tech laser scanners to create 3-D images of an environment, including Rensselaer’s Troy campus.

Several years ago, Radke and colleague Chuck Stewart, professor of computer science, started to experiment with an advanced 3-D scanning technology called Light Detection and Ranging, or LiDAR. The premise is similar to an over-the-counter “laser measuring tape” available from most hardware stores, which emits a laser beam and calculates the distance to a target surface by measuring how long it takes the beam to reflect off the surface and bounce back to the device.
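The time-of-flight arithmetic behind both the hardware-store tape measure and a LiDAR scanner fits in a few lines: the beam makes a round trip, so the one-way distance is half the round-trip time multiplied by the speed of light.

```python
# Time-of-flight ranging: the principle shared by a laser tape measure
# and a LiDAR scanner. The beam travels out and back, so the one-way
# distance is half of (round-trip time * speed of light).

SPEED_OF_LIGHT = 299_792_458.0  # meters per second, in vacuum

def distance_from_round_trip(round_trip_seconds):
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 100 nanoseconds traveled to a surface
# just under 15 meters away.
print(distance_from_round_trip(100e-9))
```

The tiny times involved (tens of nanoseconds for room-scale distances) are why precise electronics, rather than clever software, dominate this part of the problem.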

“It’s easy for a human to peek into a room and see that there are hundreds of books crammed into a bookshelf, or identify a person poring over a computer typing on their keyboard,” Radke says. “People can pick that stuff up in a fraction of a second, but it’s very difficult to program a computer to do that same kind of automatic understanding of a scene. That’s basically what every computer vision researcher is trying to do: to produce algorithms that will be able to perceive images of the natural world in the same way that people so easily do.”

The LiDAR scanner used by Radke’s students today, however, is more robust and significantly more sophisticated. It sends out several laser beams per second in all different directions and over a large enough area to scan an entire building, which is exactly what Radke’s team is doing across the Rensselaer campus.

Though the result of a LiDAR scan looks like a solid picture, every image is actually composed of hundreds of thousands of single points, each measured by an individual laser beam. One building scan takes roughly an hour to complete, and from the Jonsson Engineering Center to the Folsom Library to EMPAC, Radke’s team is slowly working its way around campus. He and his team are working to design algorithms that can quickly and accurately stitch together data from several different LiDAR scans, as well as color images from digital cameras, to get the most complete picture possible.
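One building block of scan stitching can be sketched concretely. Assuming matching points between two scans have already been found (finding them is the hard part, and this is not necessarily the team's actual pipeline), the rigid rotation and translation that best aligns one point cloud onto the other has a closed-form solution, often called the Kabsch or Procrustes method:

```python
import numpy as np

def align_scans(pts_a, pts_b):
    """Return R, t such that R @ b + t ≈ a for corresponding points
    (least-squares rigid alignment via SVD / Kabsch)."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)   # centroids
    H = (pts_b - cb).T @ (pts_a - ca)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                          # rule out reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ca - R @ cb
    return R, t

# Toy check with made-up data: rotate and shift a small cloud,
# then recover the transform.
rng = np.random.default_rng(0)
b = rng.standard_normal((50, 3))                      # "scan B" points
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, 2.0, 0.5])
a = b @ R_true.T + t_true                             # "scan A" points

R, t = align_scans(a, b)
print(np.allclose(b @ R.T + t, a))                    # True
```

Iterating this alignment while re-guessing correspondences is the idea behind the classic iterative closest point (ICP) family of registration algorithms.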

“We’re in the process of building a complete 3-D digital model of the Rensselaer campus,” Radke says. “The project’s going to take a while, and there are several unsolved problems to address, but so far it’s looking great.”

Laser beams from the LiDAR scanner are unable to penetrate solid objects, such as trees, so a single scan is often riddled with shadowed-out areas. Radke says it takes many LiDAR scans, from all different angles, to collect sufficient information for building a complete, accurate 3-D model of a building. So far, his group has written algorithms that automatically stitched together 20 scans of the Voorhees Computing Center into a complete model, without human intervention.