IC placement benchmarks needed, researchers say

MONTEREY, Calif. -- Following up on a controversial study that claimed IC placement algorithms are severely deficient, researchers at the International Symposium on Physical Design (ISPD) here Tuesday (April 8) struggled to find a benchmarking methodology for IC physical design. A diverse, publicly available benchmark suite is needed before solid conclusions can be drawn, one presenter said.

In a paper presented at the ASP-DAC conference in Japan early this year, UCLA researchers led by Jason Cong, co-director of the VLSI CAD lab at UCLA, evaluated several academic placers along with Cadence Design Systems' QPlace. They concluded that IC placement algorithms leave so much excess wire length that chip designs are essentially several technology generations behind where they should be.

What was perhaps most controversial about the paper was its use of synthetic benchmarks in which optimal wire lengths are known. EDA vendors noted that the benchmarks may not be representative of real-world circuits, and they said that criteria such as routability, timing and signal integrity are more important than wire length.

At ISPD, Cong co-authored a follow-up paper that evaluated both partitioning and placement algorithms. Presented by UCLA student Min Xie, it looked only at the academic placers Dragon, Capo and mPL. Xie said that the previous paper's inclusion of QPlace "raised a lot of eyebrows," and that the researchers felt they'd be better off with academic placers for which they had source code.

The paper concluded that partitioning algorithms perform very well on a bi-partitioning example with known upper bounds (BEKU) suite, but not so well on a multi-way partitioning example with known upper bounds (MEKU) suite. The placement algorithms were evaluated using UCLA's placement examples with known upper bounds (PEKO) suite. Xie acknowledged a serious flaw in the PEKO suite: all nets are local, and there are no global wires, as there would be in real circuits.

Xie said researchers found that the academic placement algorithms, which underlie many commercial tools, diverge from optimal wire-length solutions by 1.46 to 2.38 times. The researchers then looked at benchmark suites that included global nets. Results were not good there either, and they differed from the results on the PEKO suite. For example, mPL gave the best results without global nets, but its quality ratio declined by 80 percent in the presence of a few non-local nets.
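The quality-ratio metric behind these figures is straightforward: on a benchmark whose optimal wire length is known by construction, a placer's achieved wire length is divided by that optimum, so a ratio of 1.0 is optimal and 2.38 means more than twice the necessary wire. The sketch below illustrates the calculation; the placer names and wire-length numbers are hypothetical, not results from the paper.

```python
# Illustration of the "quality ratio" used with known-optimal placement
# benchmarks such as PEKO: achieved wire length divided by the known
# optimal wire length. All placer names and numbers here are made up
# for demonstration; they are not data from the UCLA study.

def quality_ratio(achieved_wirelength: float, optimal_wirelength: float) -> float:
    """Return the ratio of achieved to optimal wire length (>= 1.0 in practice).

    A ratio of 1.0 means the placer matched the benchmark's known optimum.
    """
    return achieved_wirelength / optimal_wirelength

# Hypothetical results on one benchmark with a known optimal
# wire length of 1.0e6 units.
optimal = 1.0e6
results = {"placer_a": 1.46e6, "placer_b": 1.80e6, "placer_c": 2.38e6}

for name, wirelength in results.items():
    print(f"{name}: quality ratio = {quality_ratio(wirelength, optimal):.2f}")
```

With synthetic suites like PEKO, this ratio is computable exactly because the optimum is planted into the benchmark; on real circuits the optimum is unknown, which is part of why the methodology drew criticism.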

"None of these placement algorithms perform universally well under different scenarios," said Xie. "We think it's another validation that much needs to be done for the reduction of wire length in placement."

Another presenter was more hesitant to make sweeping conclusions about placement algorithms. Igor Markov, assistant professor at the University of Michigan, said that a truly representative, and public, benchmark suite is what's needed now.

Markov discussed the many requirements faced by placement tools, including the need to manage large amounts of empty "white" space on fixed-die ICs. "With so many requirements and constraints, we have to worry about benchmarking," he said. "We need to understand how algorithms work over a wide set of examples. It's important for the industry to embrace open benchmarking."

Markov reviewed a number of open benchmark suites and discussed their limitations, such as PEKO's avoidance of global wires. He showed that a placement algorithm that scores well on one benchmark suite might fare very poorly on another. For example, Dragon does well on the IBM benchmarks that come with it, but not on others.

"We need real benchmarks not provided by the authors of any of these papers, including myself," Markov said. Markov is the co-author of the Capo placer.

Further, there's more to consider than wire length. Markov noted that there's no common metric for routability, and no good public benchmarks for routing. But the toughest part of benchmarking, he said, is timing-driven placement, for which the "quality of results" metrics are not clearly defined.

A significant point about Markov's paper is the collaboration behind it. In addition to the University of Michigan, it has co-authors from Binghamton University, IBM and Monterey Design Systems. It also acknowledges assistance from other researchers, including Jason Cong.