Viewpoint: Mass GPUs, not CPUs for EDA simulations

Simulation has always been about speed. A program that forecasts tomorrow's weather is useless if it takes 26 hours to run, and invaluable if it takes 26 minutes. It's the same with EDA. If you can get simulation results faster than spinning a board or a chip, you add value. If you don't, you don't.

There are basically three ways to make the simulation go faster: better algorithms, faster processor clocks and parallelism. As David A. Patterson, professor of computer science at the University of California at Berkeley, says: "No one knows how to design a 15-GHz processor, so the other option is to retrain all the software developers" to program parallel machines.

We agree. Processor speeds have topped out. Clock them much faster, and you wouldn't be able to get the heat out fast enough to keep the chip from burning up.

As for algorithms, improvement tends to be predictable and incremental, with the occasional breakthrough. But you can't write a business plan around that kind of breakthrough.

We are also seeing a trend toward leveraging graphics processing units (GPUs). These chips originated in the video game industry for high-performance graphics calculations. They have hundreds of cores. And it turns out they can handle tasks well beyond their original target market of rendering a moving 3-D scene as a stream of 2-D screen images.
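To make "general-purpose GPU computing" concrete, here is a minimal sketch of what such code looks like. The kernel name, array sizes and values are purely illustrative and not from the article; the point is the data-parallel pattern, where thousands of lightweight threads each handle one element, much as a simulator might evaluate many independent devices or nodes at once.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative kernel: each thread updates a single array element.
// This is the same data-parallel shape a circuit simulator could use
// to evaluate many independent elements in one launch.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                    // one million elements (arbitrary)
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float)); // unified memory, visible to CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);              // expect 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Nothing in the kernel is graphics-specific; that is the whole point of the GPGPU trend.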

If you've been in the industry awhile, you may be getting a feeling of déjà vu these days. This configuration is a bit like the vector supercomputers of decades ago, and you may be wondering, "Well, if supercomputers didn't go mainstream, then why GPUs now?"

It's different this time around because vector machines started at the top of the price/performance curve with Pentagon funding and never migrated down. GPUs, on the other hand, have a different economic model. They started at the bottom, were sold by the millions to gamers, and now have a terrific price/performance ratio.

Why are GPUs different this time around? Moore's Law states that the number of transistors on a chip doubles every two years. (For decades, smaller transistors meant not only more transistors per chip but faster ones as well, so we got faster CPUs at the same time we got more sophisticated CPUs.)

The article states: " ... (w)ith this upgrade, computer applications, including EDA tools, get access to a processor that can compute at 1 teraflop. The leading multicore CPU can only deliver 100 gigaflops. This gives the GPU a 10x performance advantage over the CPU." The "flops" in teraflops/gigaflops stands for floating-point operations per second. Do you have evidence that this is an incorrect statement?
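For reference, the arithmetic in the quote is just 1 teraflop = 1,000 gigaflops, and 1,000 / 100 = 10. Whether a particular card actually sustains anything near its rated figure is easy to check empirically: count the floating-point operations a kernel performs and divide by the elapsed time. The sketch below assumes a CUDA-capable GPU; the kernel, block counts and iteration count are made up for illustration, and the result is a rough sustained rate, not the vendor's peak number.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread performs ITERS fused multiply-adds (2 flops each),
// so the kernel is compute-bound rather than memory-bound.
#define ITERS 10000

__global__ void fma_burn(float *out, float a, float b)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float v = a;
    for (int k = 0; k < ITERS; ++k)
        v = v * b + a;            // one multiply + one add per iteration
    out[i] = v;                   // store the result so the loop isn't optimized away
}

int main()
{
    const int threads = 256, blocks = 4096;
    const long long n = (long long)threads * blocks;
    float *out;
    cudaMalloc(&out, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    fma_burn<<<blocks, threads>>>(out, 1.000001f, 0.999999f);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    double flops = 2.0 * ITERS * (double)n;   // 2 flops per iteration per thread
    printf("single-precision throughput: %.1f Gflops\n",
           flops / (ms * 1.0e-3) / 1.0e9);

    cudaFree(out);
    return 0;
}
```

Compile with nvcc and run; the printed figure is what the card sustains on this particular kernel, which is the number that matters for a simulator, not the marketing peak.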

Wouldn't it be cheaper to design a PCB full of FPGAs? At least you wouldn't depend on any business plan or roadmap from the GPU companies. Since I'd guess that GPUs have no floating-point arithmetic built in, the number of applications that can be ported to a GPU is limited.