The big chill

By Chris Edwards

Published Friday, June 27, 2008

Chip designers need to watch their watts. But they are stymied by today's design techniques, says E&T.

Professor Wolfgang Nebel of Oldenburg University in Germany is under no illusions about one of the biggest problems facing chip designers today: how much energy the system-on-chip (SoC) products they produce consume. The moderator of one of several panels on low-power design at the Design, Automation and Test in Europe (DATE) conference, Nebel explains: "In many systems, if the power consumption is over 20 per cent above the target, they might as well not work."

Limor Fix, general chair for the upcoming Design Automation Conference (DAC) and associate director of Intel Research in Pittsburgh, tells E&T: "It is not just about mobile. It is also about all those data-centres that are consuming tonnes of energy. It will be a focus for many years."

It is not just overall power consumption that presents a problem: spikes in demand can cause the chip to exceed the thermal rating of its package and suddenly fail, even though the design may, on average, draw power well within its specification.

Late delivery

The problem today is that detailed information on power consumption only appears when the chip design is very close to tape-out. Only when elements such as the power-supply network, the clock tree and the actual logic blocks are in place does anyone have a good idea of whether the design's power consumption is on track. Colin Holehouse, chief engineer at IP supplier ARC International, says: "We are looking for a lot more help to be able to express that information at a higher level to allow the system designer to make the right trade-offs."

Today, most power optimisation takes place at register-transfer level (RTL), where elements such as gated clocks are usually inserted. Companies such as Calypto Design Systems and Synopsys have developed tools that analyse the logic and work out where to insert extra gates that shut off clock signals to parts of a circuit when they are not needed.

These techniques work well in cutting active power consumption - the result of capacitances charging and discharging in the wiring when transistors in the circuit switch states. But, since the 0.25µm process generation, leakage power has grown. As transistors get smaller, they demand lower threshold voltages to switch quickly, and the lower threshold means that the transistor never really switches off. If current is supplied to a circuit, it will leak. The only way to prevent this is to power the circuit down completely: a technique called power gating.

The use of power gating means that power consumption can alter radically based on what the chip is doing at the time. "The subsystems we put on a chip have complex modalities," says Ted Vucurevich, chief technology officer at Cadence Design Systems. "You get into usage patterns, so you want a robust model that will get valid answers under all the possible modalities."

There are more subtle problems. Researchers have known for years that architecture plays a major role in determining power consumption.
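The split between active and leakage power can be put in back-of-envelope terms. A rough sketch - every figure below is invented for illustration, not taken from any real process:

```python
# Back-of-envelope CMOS power model. Dynamic power comes from
# switching capacitance; leakage power is drawn for as long as the
# block is supplied, whether or not it is clocked.

def dynamic_power(alpha, c_eff, vdd, freq):
    """Active power: P_dyn = alpha * C_eff * Vdd^2 * f."""
    return alpha * c_eff * vdd ** 2 * freq

def leakage_power(i_leak, vdd):
    """Static power: P_leak = I_leak * Vdd."""
    return i_leak * vdd

# Hypothetical block: 20 per cent switching activity, 1nF effective
# capacitance, 1.0V supply, 500MHz clock, 10mA leakage current.
p_dyn = dynamic_power(0.2, 1e-9, 1.0, 500e6)   # 0.1W
p_leak = leakage_power(10e-3, 1.0)             # 0.01W

# Clock gating removes p_dyn while a block idles, but p_leak remains;
# only power gating - cutting the supply - removes the leakage term.
print(f"dynamic: {p_dyn:.3f}W, leakage: {p_leak:.3f}W")
```

The point the sketch makes is structural, not numerical: gating the clock zeroes only the first term, which is why leakage forced the industry towards power gating from 0.25µm onwards.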
It is something that makers of signal processors for tiny battery-powered devices such as hearing aids have understood for years. They run their designs at very low voltages, too low to support high clock speeds. So they try to put as much as possible into highly parallelised hardware that is clocked very slowly but which can compete in terms of performance with a high clock-speed digital signal processor.
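The hearing-aid trade-off can be sketched numerically. Switched energy per operation scales with the square of the supply voltage, so a four-way parallel design clocked four times slower can run at a lower supply for the same throughput - at the price of four times the leaking gates. All voltages, capacitances and leakage figures below are assumptions chosen for illustration:

```python
# Energy trade-off between a fast serial design and a slow, parallel
# one at reduced supply voltage. All figures are illustrative.

def energy_per_op(c_eff, vdd):
    """Switched energy per operation: E = C_eff * Vdd^2."""
    return c_eff * vdd ** 2

C = 1e-12        # switched capacitance per operation (assumed)
OPS_PER_S = 1e8  # required throughput

# Serial: one unit, high voltage needed to meet the fast clock.
e_serial = energy_per_op(C, 1.2)

# Four-way parallel: each unit clocked 4x slower, so the supply can
# drop (0.7V is an assumed value); switched energy per op falls.
e_parallel = energy_per_op(C, 0.7)

# But leakage scales with gate count: without power gating, four
# times the leaking area eats into the saving.
LEAK_PER_UNIT_W = 2e-5                                  # assumed
serial_leak_per_op = LEAK_PER_UNIT_W / OPS_PER_S        # J per op
parallel_leak_per_op = 4 * LEAK_PER_UNIT_W / OPS_PER_S  # J per op

total_serial = e_serial + serial_leak_per_op
total_parallel = e_parallel + parallel_leak_per_op
print(total_parallel < total_serial)
```

With these numbers the parallel design still wins, but the margin shrinks as leakage grows; push LEAK_PER_UNIT_W a little higher and the extra gates cost more than the voltage drop saves - which is exactly the "how much parallelism is enough?" question below.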

Go slow

Engineers building chips aimed at sub-90nm processes have found the same thing: fast clocks burn too much power. So they have turned to parallelism. But how much is enough? If they cannot power-gate the blocks effectively, they can find that their highly parallel implementation - which uses many more gates than a simpler design - burns just as much juice in actual applications, and changes to the RTL will have comparatively little effect on the power. "We are not only affecting RTL, we are affecting the instruction sets we are using and the overall operating systems. A much more holistic approach is needed to deal with the modes of power shut-off and voltage scaling that we use," explains Holehouse.

There remains a massive gap between what is needed and what is in common use today. In contrast to the advanced modelling techniques used to produce virtual prototypes and analyse circuit behaviour, today's tool of choice for system-level power analysis comes as part of Microsoft Office. Vincent Perrier, co-founder and director of marketing at CoFluent Design, claims: "A lot of people are just using spreadsheets. But Excel is not sufficient."

Richard Scales, manager of power architecture and estimation at Texas Instruments, agrees: "We spend a lot of time dealing with spreadsheets and it is definitely not the way to go. We really need a way to be able, ahead of hardware specification, to model the hardware power and provide early power estimations. We need early simulations and not just spreadsheets to model use-cases."

Scales' ideal is to use system-level analysis to help provide specifications for the teams implementing the resulting logic. "They have lots of knobs they can use to meet different constraints," says Scales. "For example, they can synthesise slower logic to meet the power requirements."
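The spreadsheet calculation Scales describes amounts to weighting per-mode power figures by how long a use-case spends in each mode. A minimal sketch of that sum as a script rather than a spreadsheet - all mode powers and durations here are hypothetical placeholders, not TI data:

```python
# Use-case average power from per-mode budgets: the calculation
# usually kept in a spreadsheet, written as a small script.

MODE_POWER_MW = {       # assumed per-mode power figures
    "active": 250.0,
    "idle":   40.0,
    "sleep":  1.5,      # mostly power-gated; retention logic only
}

def average_power_mw(use_case):
    """use_case: list of (mode, duration_in_seconds) pairs."""
    total_s = sum(t for _, t in use_case)
    energy_mj = sum(MODE_POWER_MW[m] * t for m, t in use_case)
    return energy_mj / total_s

# One hour of a hypothetical handset scenario: 5 minutes active,
# 10 minutes idle, 45 minutes asleep.
scenario = [("active", 300), ("idle", 600), ("sleep", 2700)]
print(f"average power: {average_power_mw(scenario):.3f} mW")  # 28.625 mW
```

The limitation the critics point to is visible here: the per-mode figures are static guesses, so the model cannot capture transitions, spikes, or how software behaviour moves the chip between modes - which is what early simulation is meant to add.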

Model gap

Roberto Zafalon, R&D programme manager and head of STMicroelectronics' competence centre for low-power system design, says the need for better power modelling is urgent: "We recently engaged with a customer in the US. They have a wireless device that goes into a stereo headset. They don't have a verification problem: they know how to do this. And they don't have a software-development problem. They have a power problem. For the next generation they are running at twice their power budget. And they will tape out in April, regardless."

The option favoured by many working in the field is to use functional models to provide the necessary power information. Frank Berntsen, chief scientist at Nordic Semiconductor, says: "With embedded system-level design, it is models, models, models. And it gets worse when you get into power. It should be possible to create usable models but, for power, there is still some way to go."

Who will take the lead in driving better system-level power estimation? Some thought the large electronic design automation (EDA) tool suppliers would come up with an answer. "Don't be so sure," remarks Zafalon acidly. "In terms of EDA, it will be the smaller vendors who do that," Scales claims. Zafalon adds that there is a role for the IP providers, as they supply much of the logic in today's SoC designs: "Some of them are supportive; some of them are less cooperative. A lot depends on how much pull the customer has on the IP providers."

Despite the concerns over power, Frans Theeuwen, department head of system design methods at NXP Semiconductors, warns that it is important not to lose sight of the role digital logic plays in the full system. "What about the RF modules in a handset? Most power goes into the backlight and RF modules," he claims. In the future, the power needs of all these systems will go into one simulation.
For Scales, the SystemC used to write many digital models may end up being the language that pulls together many different representations. "We look at SystemC as the engine to do the scheduling: wrapping other languages for use in those models. For IP that is very control-dominated, I might not want to use SystemC. And for a use-case that I want to document, I would want to use UML." Other languages and tools, such as The MathWorks' Simulink, Saber from Synopsys and the upcoming mixed-signal extension SystemC-AMS, may play equally important roles in defining the power behaviour of a design as the industry tries to improve accuracy at the system level.