Turning the Super Bowl Into a Game of Pixels

By LISA GUERNSEY

Published: January 25, 2001

Instant replays have always had a fatal flaw: They rely on what a handful of photographers can capture on video during that instant. The wide receiver's toe may have been out of bounds when he caught the football, but unless someone shot the video at an angle that makes the misstep clear, fans and referees may never know what really happened.

Fortunately for truth-seeking sports fans, that may change on Sunday with new instant-replay technology that CBS is introducing during the Super Bowl. Called Eye Vision, the technology uses more than 30 pivoting robotic cameras, spaced apart 80 feet above the field, to capture what amounts to a 270-degree view of the action.

Takeo Kanade, the director of the Robotics Institute at Carnegie Mellon University, led the system's development. It works this way: A technician will control one of the cameras — following the action on the field and zooming in as necessary. The other cameras, without human handlers, will tilt and zoom after receiving computerized instructions that track the movements of the technician's camera. Each camera will capture the action from its own vantage point. "As a whole," Dr. Kanade said, "they capture the action from totally surrounding views."
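The steering that Dr. Kanade describes is, at bottom, a geometry problem: once the system knows what point on the field the technician's camera is looking at, every other camera can compute the pan and tilt that aim it at the same point. The sketch below illustrates that calculation; the camera ring layout, coordinates, and function names are illustrative assumptions, not details of the CBS installation.

```python
import math

def aim_at(target, cam_pos):
    """Return (pan, tilt) in degrees for a camera at cam_pos to look at target.

    Pan is the bearing around the vertical axis; tilt is the angle above or
    below the horizontal plane (negative means looking down at the field).
    """
    dx = target[0] - cam_pos[0]
    dy = target[1] - cam_pos[1]
    dz = target[2] - cam_pos[2]
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return pan, tilt

# Hypothetical layout: 30 cameras on a 270-degree arc 80 feet above the
# center of a 300-by-160-foot field (dimensions assumed for illustration).
field_center = (150.0, 80.0, 0.0)
radius = 250.0
cameras = []
for i in range(30):
    angle = math.radians(-135 + i * (270 / 29))
    cameras.append((field_center[0] + radius * math.cos(angle),
                    field_center[1] + radius * math.sin(angle),
                    80.0))

# The technician's camera locks onto a spot on the field; every other
# camera is told to point at that same spot.
target = (120.0, 40.0, 0.0)
aims = [aim_at(target, cam) for cam in cameras]
```

Because every camera hangs above the field, each computed tilt comes out negative: all 30 look down at the same play from their own side of the arc.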

When it is time for an Eye Vision instant replay, the audience and referees will be fed the images from all of those cameras in rapid sequence, seeing the same play as if frozen in time from many perspectives. "You will feel as if you are flying around that action," Dr. Kanade said.
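The replay itself amounts to picking one frame per camera at the same instant and playing them in spatial order around the arc. A minimal sketch of that sequencing, with toy string labels standing in for synchronized video frames:

```python
def frozen_moment(frames, t):
    """One instant seen from every camera in turn.

    frames[c][t] is the frame captured by camera c at time step t; returning
    the t-th frame from each camera in order produces the fly-around,
    frozen-in-time effect of an Eye Vision replay.
    """
    return [cam_frames[t] for cam_frames in frames]

# Toy example: 4 cameras, 3 synchronized time steps.
frames = [[f"cam{c}@t{t}" for t in range(3)] for c in range(4)]
replay = frozen_moment(frames, 1)
```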

The system presents only a fraction, however, of what Dr. Kanade and his students have been working on. When Eye Vision is used during the Super Bowl, for example, the instant replays will provide only the images captured by the 30-odd cameras; shots from angles that are not covered by those cameras will inevitably be missed. But Dr. Kanade has developed technology, called Virtualized Reality, that can fill in many of those gaps without the use of more cameras. Computers do the work instead by analyzing the shots from existing cameras and creating images of what probably happened in between (view examples at www.cs.cmu.edu/virtualized-reality/main.html). That way, a person could view a video of the action from, say, the center of the field — even though no camera was actually shooting from that vantage point.
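Virtualized Reality synthesizes the in-between views by reconstructing the 3D scene from the real camera images and re-rendering it from the new vantage point. As a loose illustration of the underlying idea — producing an image for a viewpoint no camera occupied — the naive sketch below simply cross-fades between two adjacent camera views; the function name and toy image format are assumptions, and the real system works with recovered scene geometry, not pixel blending.

```python
def interpolate_view(view_a, view_b, t):
    """Naively blend two same-sized grayscale images (lists of rows of
    floats): t=0 returns view_a, t=1 returns view_b, and values in between
    approximate an intermediate viewpoint. Dr. Kanade's actual approach
    reconstructs 3D geometry from the camera shots and renders it from the
    requested angle, which handles occlusion and parallax that a simple
    blend cannot.
    """
    return [[(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(view_a, view_b)]

# Two tiny 2x2 "camera views" and the synthesized view halfway between them.
view_a = [[0.0, 0.0], [0.0, 0.0]]
view_b = [[1.0, 1.0], [1.0, 1.0]]
mid = interpolate_view(view_a, view_b, 0.5)
```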

"I hope that Eye Vision is a beachhead for this whole vision of virtualized reality," he said.