Guesswork, And Other Design Paradigms

PPA for soft IP seems like an oxymoron. How do you determine the implementation characteristics (PPA — Power, Performance and Area) for something that has not yet been implemented? Flying blind until implementation would be a rookie move. More likely you are going to estimate based on a prior implementation. Not a bad approach if the IP hasn’t changed significantly and the target library is the same, or if estimates can be simply scaled. And those are only some of the caveats.

But let’s face it: A vanishingly small percentage of IP remains unchanged from one implementation to the next. Much IP is at least configurable and likely to be used this time in a slightly different configuration. A lot of IP reuse is more of the ‘copy and adapt’ variety than the style imagined by earlier reuse evangelists. Throw in a process change and a different Vt mix, and you are in terra incognita, where at some unknown point your guess no longer degrades linearly but instead rolls off into significant uncertainty.

Let’s try another line of argument—we’ll guess, and then check our guesses when we get to integration trials. Great, if you guessed right or if you have time for rework. But if your competitors are better guessers, your product may be obsolete before it tapes out.

Or maybe you could put a little engineering behind those guesses, something you would feel more comfortable calling an estimate, before you get into implementation. How? By running real (synthesis-based) performance, power and area trials on the IP. Ah, you say, there’s the fatal flaw: I don’t know how it will be implemented in detail, therefore those trials will be meaningless. But wait a minute, you should be comparing the uncertainty in your gamble with the uncertainty in those trials. After all, if the trial shows negative slack at 500MHz, it’s a pretty good bet it won’t be able to run at 600MHz in the final implementation, no matter how gifted the integration team may be. Ditto for area and power. On the other hand, your guess may have an unknown downside (see above).
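The slack argument above is simple arithmetic: raising the clock frequency shrinks the period, so a path that already misses timing at the lower frequency misses by more at the higher one. A minimal sketch, using a hypothetical critical-path delay purely for illustration:

```python
def slack_ns(clock_mhz: float, critical_path_ns: float) -> float:
    """Timing slack: clock period minus critical-path delay.
    Negative slack means the path cannot meet the clock."""
    period_ns = 1000.0 / clock_mhz
    return period_ns - critical_path_ns

# Hypothetical trial result: a 2.1 ns critical path.
print(round(slack_ns(500, 2.1), 2))  # -0.1 ns: already failing at 500 MHz
print(round(slack_ns(600, 2.1), 2))  # -0.43 ns: worse still at 600 MHz
```

The point is not the exact numbers but the direction: a negative-slack trial result bounds what any implementation team can achieve.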

Then there’s a logistics barrier. You buy the general argument, but you don’t have the time, resources, or expertise to run trials on your IP. That’s certainly a problem if you want to run your trials through the production flow. But if you can emulate the production flow starting from RTL, a technology library and one or more simulation databases, without a need for implementation skills, you could set up that analysis quite easily.

‘Time out,’ I hear you say. Perhaps the previous argument makes sense if I run through the production flow, but if I’m running through some other tools, all bets are off. Well, wait again. If the correlation error of the other tools is similar to, or within, the inherent error of standalone trials, then two comparable independent errors combine in quadrature, and the cumulative error will be ≤1.4x (√2 times) the worse of the two. You still have a better estimate than an unquantified guess, and for a greatly simplified setup.
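The ≤1.4x bound falls out of root-sum-square combination of independent errors: √(e₁² + e₂²) ≤ √2 · max(e₁, e₂), and √2 ≈ 1.41. A quick check, with illustrative 10% error figures (assumed, not from any specific tool):

```python
import math

def combined_error(e1: float, e2: float) -> float:
    """Root-sum-square of two independent error sources."""
    return math.sqrt(e1**2 + e2**2)

# Suppose both the tool-correlation error and the inherent trial
# error are around 10%.
worst = max(0.10, 0.10)
total = combined_error(0.10, 0.10)
print(round(total / worst, 2))  # 1.41, i.e. the ~1.4x bound
```

When one error is much smaller than the other, the ratio approaches 1, so 1.4x is the worst case for the equal-error scenario.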

Comparisons between guesses and experiments can be deceptive. Experiments don’t need to be perfect. They just need to be better quantified than a guess, and sufficiently low cost that they add negligibly to resource demands and schedule.