Languages Are Not Technology

There was an interesting article on verification methodology in Chip Design Magazine recently. The author, Carl Ruggiero, works at an IP supplier and so doesn’t have any particular agenda to push with respect to verification methodology (unlike the authors of most of the articles in this magazine).

He makes the following points:

Quality of verification is not correlated with quantity of verification.

Directed testing and constraint-based random testing can both be equally successful or unsuccessful.

Quality of verification is not correlated with the language chosen.

Good verification planning, execution, and tracking are the keys to producing a high-quality (low-bug) design.

The interesting thing about this is that Ruggiero states that these things were surprising to him. If you understand the concepts of Bugs are Easy, none of these things should be surprising. Let’s try to put these statements in the context of the three laws of verification and the orthogonality concept.

We know that putting effort into multiple, orthogonal methods is better than putting all the effort into a single method. This alone can explain why high verification effort can fail to produce a higher-quality design than low verification effort.

We know that directed and random testing are orthogonal methods and that both are capable of finding a majority of bugs. We also know that either can appear efficient or inefficient depending on how it is deployed. Thus, it is not surprising that Ruggiero sees different groups having different levels of success with random and directed testing.
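To make the orthogonality concrete, here is a toy sketch in Python (not an HVL; the DUT, its injected corner-case bug, and the constraint weights are all invented for illustration) in which a directed test and a constrained-random test find the same bug by different routes:

```python
import random

# Toy "DUT": an 8-bit adder with an injected bug when both operands
# have their high bit set (a hypothetical corner case).
def dut_add(a, b):
    if a >= 128 and b >= 128:
        return (a + b + 1) & 0xFF  # injected off-by-one bug
    return (a + b) & 0xFF

# Reference model the testbench checks against.
def reference_add(a, b):
    return (a + b) & 0xFF

# Directed testing: hand-written cases aimed at suspected corners.
def directed_test():
    cases = [(0, 0), (1, 255), (128, 128), (255, 255)]
    return [(a, b) for a, b in cases if dut_add(a, b) != reference_add(a, b)]

# Constrained-random testing: random operands weighted toward
# boundary values, checked against the same reference model.
def random_test(n=1000, seed=0):
    rng = random.Random(seed)
    corners = [0, 1, 127, 128, 255]
    failures = []
    for _ in range(n):
        a = rng.choice(corners + [rng.randrange(256)])
        b = rng.choice(corners + [rng.randrange(256)])
        if dut_add(a, b) != reference_add(a, b):
            failures.append((a, b))
    return failures

# Both methods catch the bug; they just get there differently.
print(len(directed_test()) > 0, len(random_test()) > 0)
```

Either method exposes the bug here; which one looks “efficient” depends entirely on how well the directed cases, or the constraint weights, happen to match where the bugs actually are.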

I’ll come back to languages in a minute.

His conclusion that good planning, execution, and tracking are the key to producing a high-quality design, however, runs counter to the principles of Bugs are Easy because it is essentially a statement that there is an absolute best methodology. First, I think Ruggiero would take issue with calling good planning a methodology. After all, isn’t good planning essential to any successful endeavor? How could this be a methodology if all methodologies require good planning? Well, it turns out that back in 1996, two researchers from DEC proposed a verification methodology whose central tenet was to not do any planning (Noack and Kantrowitz, DAC, 1996) (sorry, can’t find an online version to link to). Their reasoning, which should sound familiar, was that no matter what you did at the beginning of verification, you would find bugs, so why bother spending a lot of time planning up front? We used this methodology successfully on the MCU chip that I worked on at HAL.

So, planning is a methodology and not planning is a methodology, and both are therefore subject to scrutiny using the laws of verification. The fact that planning was successful does not mean it is the best methodology. Not planning has also proven successful.

Now let’s return to the issue of languages. Ruggiero states that he has seen simple Verilog-based environments produce high-quality designs and complex HVL-based environments produce low-quality designs. He goes on to say

…a commitment to execute it turned out to be far more important than the tools chosen to implement it…

where, in this context, “tools” refers to languages. This conflation of tools with languages is made more explicit in his concluding paragraph:

…methodology matters far more than tools in delivering working hardware designs. While certain EDA languages can make engineers more productive…

There is a clear assumption in his mind as he equates tools and languages: languages are technology. That is, there is something about advanced languages that makes testbenches written using these languages more likely to find bugs or get higher coverage or whatever. While advanced languages certainly are useful and enhance productivity by including features that you probably would have to create manually, nothing about them is inherently smarter or better with respect to finding bugs. It’s like saying it’s better to design in English than in Chinese. Or that if you have power steering in your car, you are less likely to get lost than if you have manual steering. The languages are technology argument makes no sense, but as Ruggiero’s article shows, this mindset is pervasive.

Ruggiero correctly concludes that languages are not important to final quality, but he misses the more fundamental conclusion: languages are not technology.


3 Comments

“That is, there is something about advanced languages that makes testbenches written using these languages more likely to find bugs or get higher coverage or whatever. While advanced languages certainly are useful and enhance productivity by including features that you probably would have to create manually, nothing about them is inherently smarter or better with respect to finding bugs. It’s like saying it’s better to design in English than in Chinese.”

Although you seem to be talking about HVLs here when you write “language”, you also bring up design later on. I guess I am wondering if your comments are meant to apply only to the testbench language, or to both the testbench language and the design language? Actually, I just don’t think that I understand what you are getting at in this entry.

In the case of design languages, I would strongly disagree with a statement that there is nothing inherently smarter or better about one language over another when it comes to finding bugs. Perhaps, as I said, I’m just misinterpreting your argument. Indeed, the very nature of what it means to be a bug is tied to the semantic model of the language being used for design. E.g., you couldn’t have a null pointer exception in a language without pointers. In hardware land, the semantic model of Verilog is very different from that of something like Bluespec. Here, for example, module interface bugs look completely different and, I would argue, are significantly mitigated with Bluespec.

I think that HVLs can also be considered “inherently better” at finding bugs given a fixed design language. Your argument, taken to the extreme, seems to me to be “well, Turing-complete is Turing-complete”. I suppose that’s true, but where does that leave “technology”? If you argue that, e.g., temporal assertions are just syntactic sugar, why not also any other program (technology)?
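The “syntactic sugar” position can be made concrete: here is what rolling your own simple temporal assertion (a req must be acked within max_latency cycles) might look like, sketched in Python rather than an HVL. The trace format and the function name are invented for illustration.

```python
def check_req_ack(trace, max_latency=3):
    """trace: list of (req, ack) booleans, one pair per cycle.
    Returns the cycles whose req was not acked within max_latency."""
    violations = []
    for cycle, (req, _) in enumerate(trace):
        if req:
            # Look ahead up to max_latency cycles for an ack.
            window = trace[cycle + 1 : cycle + 1 + max_latency]
            if not any(ack for (_, ack) in window):
                violations.append(cycle)
    return violations

# A passing trace (ack one cycle after req) and a failing one (no ack).
good = [(True, False), (False, True), (False, False), (False, False)]
bad  = [(True, False), (False, False), (False, False), (False, False)]
print(check_req_ack(good), check_req_ack(bad))  # prints: [] [0]
```

In an HVL with temporal assertions this collapses to a one-line property; the question under debate is whether that convenience is “technology” or sugar.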

Yes, I think you are right that I was not being clear in exactly what languages I was talking about. The original article was specifically referring to the HVLs, e and Vera, which is what I was referring to. However, having said that, I don’t believe any type of language is technology in the sense that the language itself has any special properties that make it better than other languages.

The only thing that matters is the level of abstraction that the language uses. Languages that use higher levels of abstraction are more productive to use precisely because they use a higher level of abstraction. It is more productive to write programs in C than in assembly language because C presents a higher level of abstraction to the user. If you took all languages that used the same level of abstraction as C, the only difference being syntax, they would be equally productive. I think if you examined any set of languages in which one was claimed to be better in some aspect, it is because it is using a higher level of abstraction for that aspect.
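A toy illustration of the abstraction point, using Python at two “levels” in place of assembly vs. C (the example itself is mine, not from the article):

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Lower level of abstraction: explicit index management and manual
# accumulation -- the programmer spells out every step, as in assembly.
total_low = 0
i = 0
while i < len(data):
    total_low = total_low + data[i]
    i = i + 1

# Higher level of abstraction: the whole accumulation loop is one
# built-in -- the same semantics with far less bookkeeping.
total_high = sum(data)

print(total_low == total_high)  # prints: True
```

Both versions compute the same result; the only difference is how much of the bookkeeping the language expresses for you, which is the productivity gap described above.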

Now, if you want to argue that different levels of abstraction represent different levels of technology, you can do that, but I would disagree. Abstraction is a mathematical concept. I would argue that saying that one level of abstraction represents a higher level technology than another is like saying the number 5 represents a higher level of technology than the number 3.

Where technology is important is in mapping higher levels of abstraction to lower level ones. Mapping (compilation or synthesis) is technology. The higher the level of abstraction in a language, the more difficult it is to map to a lower level (usually for performance reasons). But, languages themselves? Not technology.

Note: I have written a number of other posts on this topic, which you may want to read.

Ok, so a very belated response :).
First, I don’t think that it’s productive to focus on the semantics of “technology”. What I guess I object to is the dismissive attitude toward languages that I get from your post.

You say that:

“I don’t believe any type of language is technology in the sense that the
language itself has any special properties that make it better than other
languages.”

But then go on to write:

“Languages that use higher levels of abstraction are more productive to use
precisely because they use a higher level of abstraction.”

These seem contradictory to me: isn’t “more productive” also “better”? In any case, I think we can agree that some languages are more productive than others, e.g., C vs. assembly.
I would agree that a language isn’t very useful without associated programs to process it; if it were, then a language with the syntactic construct “do-what-I-want” would be the end of the story. However, I do think that language design, e.g. coming up with the right abstractions, can be enormously powerful. Many people find static scoping in languages like Haskell to be better than dynamic scoping; it’s subjective, but a “special” property that can be debated and isn’t really about a “higher level of abstraction”.

If we agree that one language can be more productive than another, then it seems the relevant question to ask is: what is the relative benefit I get from a more productive verification engineer vs. a more productive automated algorithm? I agree that both are important, but I don’t think that one is universally more important than the other. If a new test-generation algorithm covers a few percent more coverage goals, but a new language allows the verification engineer to develop the original constrained-random testbench 25% faster, I’d take the latter.