Raising The IP Abstraction Level

By Ed Sperling
An increasing reliance on commercial and re-used IP, along with a growing emphasis on software development, is adding pressure on semiconductor design teams to figure out the benefits and limitations of myriad possible choices earlier in the design process.

Design teams are already under pressure to meet ever-tighter market deadlines, and that pressure is stressing every part of the design-through-manufacturing flow. Pre-developed IP, whether commercially developed or internally sourced, is supposed to ease this problem. The reality, though, is that the sheer number of choices is compounding the data overload.

At least part of the problem is the demand for plug-and-play IP. Because there are so many possible configurations, IP developers are characterizing their IP for every possible scenario they can think of, from proximity effects to manufacturing design rules from multiple foundries. And software developers are writing code to manage increasingly complex interactions between all of this IP, as well as power management and performance optimization. As a result, the amount of data has become so voluminous that just reading through it all and making simple choices has become a nightmare.

The obvious solution is to raise the level of abstraction, but that creates its own issues. First, it takes time to build up enough confidence that abstractions are accurate enough to be used routinely. That level of confidence often requires one or more process nodes’ worth of production chips and a deep understanding of what works and what doesn’t. Second, as with any new approach, people have to be trained to use the tools and methodologies. And third, even when everything is in place, there are still gotchas that no one expected in complex SoCs.

“If you’re developing IP, then developing it at a higher level of abstraction is great,” said Bill Neifert, CTO of Carbon Design Systems. “But for people in the middle of this, trying to raise the level of abstraction has been a big trap. You understand it and you understand the design guide, but if it’s someone else’s IP it’s not the same as if you’ve developed it yourself.”

Neifert said that even with commonly used IP, corner cases still show up where the specs can be interpreted in a couple different ways. “We had a message from one customer saying the model was not right. It turned out it was correct, but we had to go through all the corner cases to prove that it does behave right. Still, there are always ambiguities in the spec, which is what caused the problem in the first place.”

Even with high-level models, there are simply too many of them to keep track of easily. Lots of iterations are necessary, and each iteration requires communication back and forth, from design to integration to software development.

“There are so many different software stacks that you have to deal with and all these different pieces that are specific to IP,” said Tom De Schutter, senior product marketing manager for Virtualizer Solutions at Synopsys. “The software can really mess up the hardware.”

Raising the abstraction level, though, allows engineering teams to make some important choices, such as which software will run on which core of a heterogeneous processor, how applications will be affected by dynamic voltage and frequency scaling, and how the I/O needs to be optimized for certain applications and use models. Trying to accomplish this without raising the abstraction level is almost impossible—at least within an acceptable time window.

“You can abstract away a lot of the IP to look at the software point of view,” said De Schutter. “Software is more and more important for determining the functionality of a device, and it doesn’t need the timing accuracy of the IP.”
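The abstraction De Schutter describes can be pictured as a functional register model: software reads and writes a peripheral's registers and sees correct behavior, but no clock-cycle timing is simulated. The sketch below is a minimal, hypothetical example in plain C++ (not tied to any commercial virtual-prototyping tool); the peripheral, register names, and offsets are all invented for illustration.

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Minimal functional (untimed) model of a hypothetical timer peripheral.
// Software interacts with it through register reads/writes and observes
// correct behavior, but no clock edges or cycle timing are modeled --
// the style of abstraction a virtual prototype presents to software teams.
class TimerModel {
public:
    static constexpr uint32_t CTRL = 0x0;   // bit 0: enable
    static constexpr uint32_t COUNT = 0x4;  // free-running counter

    void write(uint32_t addr, uint32_t value) { regs_[addr] = value; }

    uint32_t read(uint32_t addr) {
        if (addr == COUNT && (regs_[CTRL] & 1u)) {
            // Functional behavior only: the counter advances per read,
            // standing in for elapsed time, rather than per clock edge.
            return ++regs_[COUNT];
        }
        return regs_[addr];  // std::map default-constructs missing regs to 0
    }

private:
    std::map<uint32_t, uint32_t> regs_;
};
```

Driver code written against a model like this can exercise boot sequences and power-management logic long before RTL exists, which is the point of the quote above: the software's functionality rarely depends on the IP's timing accuracy.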

High-level synthesis
One obvious approach is higher-level tools. High-level synthesis has been around for more than a decade, but it has failed to gain much traction until recently. That’s changing, in part because of the complexity of new IP, such as H.265 for high-efficiency video coding.

“Where HLS is really being used is on cutting-edge stuff, rather than off-the-shelf IP,” said Bryan Bowyer, senior design specialist at Calypto. “Even with H.265, that’s being done within companies rather than in commercial IP. With HLS, the C and C++ are iterated between the architecture team and the algorithm team, which allows you to explore before synthesis. If you can get to the source you can abstract the IP, whereas if you’re given a piece of RTL you don’t know where to start. It looks like a plate of spaghetti.”

Mark Warren, group director for system level design at Cadence, said the big drivers for HLS are a fuzzy spec or “ambiguity creep,” and the best way to deal with that is to move up a level of abstraction.

“The main difference with HLS and RTL is that HLS separates functionality from the system details,” Warren said. “RTL will focus on what resources are being shared and clock cycles and those kinds of details. HLS will focus only on what directly affects the things you care about, so there is less code to write, it’s easy to write and read, and you can simulate it faster and debug it.”
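To make Warren's distinction concrete, here is a hedged sketch of the kind of untimed, algorithm-level C++ an HLS flow consumes: a 4-tap FIR filter with no clocks, pipeline registers, or resource-sharing decisions in the source. The function name, tap count, and coefficients are assumptions for illustration, not taken from any particular tool or IP.

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kTaps = 4;

// Untimed, functionality-only description of a 4-tap FIR filter.
// In hand-written RTL, the loop unrolling, multiplier sharing, and
// cycle count below would all be explicit; with HLS they are derived
// by the tool from constraints, so the code stays short and readable.
int fir(const std::array<int, kTaps>& coeffs,
        std::array<int, kTaps>& delay_line, int sample) {
    // Shift the delay line and insert the new sample.
    for (std::size_t i = kTaps - 1; i > 0; --i) {
        delay_line[i] = delay_line[i - 1];
    }
    delay_line[0] = sample;

    // Multiply-accumulate across all taps.
    int acc = 0;
    for (std::size_t i = 0; i < kTaps; ++i) {
        acc += coeffs[i] * delay_line[i];
    }
    return acc;
}
```

Because the source says nothing about cycles or shared hardware, the same function can be re-synthesized under different area and throughput constraints—the design-space exploration Bowyer and Warren describe.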

So why hasn’t HLS caught on as quickly as its promoters say it should? There are a couple of reasons. First, it’s not easy to use. And second, early versions were not accurate enough to give users confidence in the technology. But work has been underway for more than a decade to improve that accuracy, and proponents say it’s now a big benefit in complex designs—provided users are well trained on the tools.

“It lets designers explore the state space and do tradeoffs,” said Warren. “But that abstraction level also allows you to shoot yourself in the foot. It still takes a trained, seasoned and clever engineer to use it effectively. Previous generations of HLS showed great benefit, too, but they also tainted the market because HLS never gave them great results.”

Made in Japan
In fact, HLS probably would have fallen off the map had it not been for the persistence of Japanese semiconductor companies. They were the ones who saw the value first, and despite the problems they stuck with it.

“Japanese systems and semiconductor companies wanted to get out in front of the curve and SystemC was a great way to do it,” said Brett Cline, vice president of marketing at Forte Design Systems. “The promise of better results in less time not only provided them a time-to-market advantage on current projects, it gave them the ability to not have to predict the future as far out by targeting chips for two holiday cycles ahead instead of three. Of course, the algorithmic data-path oriented nature of many of the consumer devices made SystemC (a C++ class library) and high-level synthesis a natural fit.”

But that only addressed part of the design problem. The HLS industry experimented with a number of different design styles at Japanese and Korean systems and semiconductor companies, said Cline. And while that helped round out the offerings and uses, it wasn’t until complexity, time-to-market pressures and a push to integrate more IP converged that HLS really began gaining traction.

Nearly all of the top 20 systems and semiconductor companies have serious investments in high-level synthesis today, although many still use it sparingly. They also use it only for digital designs—there is no mixed-signal version available today—but Accellera’s SystemC committee does have a dedicated group for mixed signal. Most HLS providers say privately that will be a future focus for development.