Formal Verification – An Overview

Formal verification is a technique used at several stages of the ASIC project life cycle: front-end verification, logic synthesis, post-route checks, and ECOs. But when you look deeper, the formal verification used for verifying RTL is quite different from the others. The main formal techniques are as follows:

Formal Equivalence Checking

Formal Property Checking

Formal Equivalence Checking

Formal equivalence checking is a method for proving that one design is functionally equivalent to a golden reference design. These are the areas where equivalence checking is commonly used:

RTL vs Pre-Routed Netlist

Pre-Routed Netlist vs Post-Routed Netlist

Netlist vs ECO Netlist

Another point to note is that equivalence checking always takes two designs as input and produces its result by comparing their functionality. Combinational and sequential equivalence checking are the two methods in use today. Combinational equivalence checking relies on a one-to-one mapping of flops between the golden design and the revised design. Sequential equivalence checkers, in contrast, can verify structurally different implementations that have no one-to-one flop mapping.
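The core idea of combinational equivalence checking can be sketched with a toy miter: XOR the outputs of the golden and revised logic cones and show the XOR can never be 1. Real tools do this symbolically with BDDs or SAT solvers rather than the exhaustive enumeration below; the two example functions are hypothetical stand-ins for mapped logic cones.

```python
from itertools import product

# Hypothetical "golden" logic cone: carry-out of a full adder.
def golden(a, b, cin):
    return (a & b) | (cin & (a ^ b))

# Hypothetical "revised" cone after synthesis restructuring (majority form).
def revised(a, b, cin):
    return (a & b) | (a & cin) | (b & cin)

def miter_equivalent(f, g, n_inputs):
    """Exhaustively check that f XOR g is 0 for every input vector.
    Real equivalence checkers prove this symbolically (BDD/SAT)."""
    for bits in product((0, 1), repeat=n_inputs):
        if f(*bits) ^ g(*bits):
            return False, bits  # counterexample input vector
    return True, None

ok, cex = miter_equivalent(golden, revised, 3)
print("equivalent" if ok else f"mismatch at {cex}")  # → equivalent
```

Both cones compute the majority function, so the miter output is constant 0 and the designs are reported equivalent; any structural difference that changed the function would surface as a concrete counterexample vector.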

Major EDA providers in this area are Cadence (Conformal LEC) and Synopsys (Formality).

Formal Property Checking
Formal property checking is a method to prove the correctness of a design, or to show the root cause of an error, by rigorous mathematical procedures. It requires no test benches or stimuli, and its turnaround time is short.

Property checking can be carried out using either property languages (e.g. ITL, the Interval Language) or assertion languages (SVA, PSL). SVA is the assertion subset of the SystemVerilog language. Assertions or properties are primarily used to specify the intended behaviour of a design; a property checker can evaluate them statically and prove whether or not the design meets its specification.

A holding SVA/ITL/PSL property means that the assertion has been formally and exhaustively checked and holds on all possible traces of the design. A failing property means that a counterexample was found which demonstrates a violation of the intended design behaviour.
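What "holds on all possible traces" means can be illustrated with a toy reachability proof. The sketch below exhaustively explores the state space of a hypothetical 2-bit counter and proves the safety property "the counter never reaches 3"; a real property checker performs the same reasoning symbolically on the RTL, and produces the counterexample trace when the property fails.

```python
from collections import deque

# Hypothetical design: a counter that wraps from 2 back to 0.
def next_states(s):
    return [0 if s == 2 else s + 1]

def prove_safety(init, prop):
    """Breadth-first reachability: prove prop(s) on every reachable state,
    or return a counterexample trace leading to a violation."""
    frontier = deque([(init, [init])])
    seen = {init}
    while frontier:
        s, trace = frontier.popleft()
        if not prop(s):
            return False, trace  # counterexample: a concrete violating trace
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append((t, trace + [t]))
    return True, None  # property holds on all reachable states

holds, cex = prove_safety(0, lambda s: s != 3)
print("property holds" if holds else f"counterexample trace: {cex}")
```

Because the exploration covers every reachable state, a "holds" verdict is exhaustive, unlike simulation, which only ever observes the traces it happens to drive.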

Formal Verification compared with Simulation
Even though modern test-bench concepts allow flexible, efficient modeling and sophisticated coverage analysis, functional verification by simulation is still incomplete, demands high effort in test-bench design, and consumes a great deal of simulator run-time.

Formal methods overcome the shortcomings of simulation by proving a design’s behaviour and correct functionality instead of observing selected traces and hunting bugs. They are static and give 100% coverage of the proven properties. In addition, experience has shown that formal techniques not only improve verification quality but also reduce verification effort and time, enabling quick and thorough module verification.

Limitations of formal property checking

Formal methods are limited in the design complexity, run time, and gate count they can handle.

There are ways to cope with such issues, but they make verification cumbersome and lead to a loss of efficiency. And lowering the level of abstraction too far always carries the risk of merely rewriting the RTL as properties.

Verifying complete transactions and transfers

For formal property checking, such behaviours span a sequential depth that is too large to fit into a single proof window.
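The sequential-depth limitation is easiest to see with bounded model checking: unrolling the design for k cycles only exposes behaviour reachable within k steps. In this hypothetical sketch, a bug that takes 40 cycles to manifest is invisible inside a 10-cycle proof window.

```python
# Hypothetical design: a free-running counter; the "bug" fires at count 40.
def step(s):
    return s + 1

def bmc(init, bad, bound):
    """Bounded model check: unroll the design for `bound` cycles and look
    for a bad state. A bug deeper than the bound is simply never seen."""
    s = init
    for k in range(bound + 1):
        if bad(s):
            return k  # cycle at which the violation appears
        s = step(s)
    return None  # no violation within the proof window

print(bmc(0, lambda s: s == 40, 10))   # → None: bug beyond the 10-cycle window
print(bmc(0, lambda s: s == 40, 100))  # → 40: found once the bound is deep enough
```

In practice the cost of each additional unrolled cycle grows quickly, which is why end-to-end transaction properties often cannot be closed in one proof and must be split into shallower sub-properties.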

Verifying Algorithms

Algorithms introduce their own sources of complexity, e.g. multiplications. Moreover, an algorithm is usually not verifiable without breaking it down into individual operational parts.
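Why arithmetic blocks such as multipliers strain formal tools, and why decomposition helps, can be hinted at with a toy check: rather than proving a whole datapath at once, a shift-and-add multiplier is checked bit-slice by bit-slice. The 4-bit width and helper name below are hypothetical; the exhaustive check is feasible only because the width is tiny.

```python
# Hypothetical 4-bit shift-and-add multiplier.
def shift_add_mul(a, b, width=4):
    acc = 0
    for i in range(width):
        if (b >> i) & 1:
            acc += a << i  # partial product contributed by bit i of b
    return acc & ((1 << (2 * width)) - 1)

# Exhaustive proof over all 256 input pairs. The input space doubles with
# every extra operand bit -- exactly the complexity wall formal tools hit,
# which is why real multiplier proofs decompose into partial-product steps.
assert all(shift_add_mul(a, b) == a * b for a in range(16) for b in range(16))
print("4-bit multiplier matches a*b on all 256 input pairs")
```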

Conclusion
Formal property checking can be used from the RTL development stage, through IP verification, and on to SoC integration. It is quite easy for designers to use while developing RTL, as it requires no separate testbench environment, and the same assertions can be reused by verification engineers at later stages. For IP verification it can find corner-case bugs that simulation cannot catch. At the SoC level it is used mainly for connectivity verification, pad multiplexing, and similar checks.

Major EDA players in this area are OneSpin Solutions (OneSpin), Cadence (Incisive Formal Verifier) and Jasper. Formal technology is now used extensively in the industry, and experience from different projects has shown that it helps you get bug-free silicon.

8 Comments

The concept of verification is tied to a development process that complies with a V-model: the architecture is structured into levels and blocks, there are inputs that can be represented as specifications for each stage of the development process, and outputs that are going to be integrated into a final product.

The task of verification relates to design as every engineer knows it, but it differs in its inputs and the result produced. Verification is in fact the opposite of designing: not reverse engineering, but rather checking whether the final result (here a netlist that connects library elements from a foundry) matches the intended result. One talks about “formal verification” when the development process must comply with standards, which is nowadays usually the case.

The task of verification, from my own experience, is somewhat complex compared to the design itself, and involves techniques that can seem weird relative to common design methodology. In fact, what matters, as in any engineering job, is the result, and here the result is a proof that the design complies with the requirements.

In the context of this article, there is one more thing to know about verification in the semiconductor industry. Once the design is at a foundry, the cleanroom uses masks, and at that stage one speaks of production runs; a single production run is so expensive that verification activity is required to avoid repeating the whole process. One talks about RTL because the design starts in a high-level language and ends in a description in terms of elementary blocks, which are merely transistors or cells containing an elementary circuit. This is where assertions come into play, because one uses a simulation environment that supports assertions (stopping the simulation when an error is detected). Since the simulation takes as input not only the useful cases but also any other combination that may bring the system into an unused state, the amount of data such a simulation produces is huge, and if a mistake appears at that level it will be hard to find by a manual process; so one uses assertions to make sure detection is still possible, even when the simulation environment did not expect the error to occur in a particular test.

Understanding these kinds of concepts can be unusual for an engineer who only takes charge of the design; hopefully this explanation helps somewhat…

I am working on the C-to-RTL sequential equivalence checking problem. I know Hector and Jasper are two tools that do this kind of work. Can you please name some other tools used for C-to-RTL sequential equivalence checking or C-to-C sequential equivalence checking?

Also, how do you classify different sequential equivalence checking problems? The description above says, and I quote, “Sequential equivalence checkers can verify structurally different implementations which do not have one-to-one flop mapping”. My question is: what are the various sequential optimizations you can perform on the implementation to obtain code sufficiently transformed from the golden reference so as to make the sequential equivalence checking problem more challenging?

Good Morning!
I would like to request that you suggest a good book on SoC power verification, as I currently have a job opportunity in this field and would like to learn more about power-verification methodologies.