
Back to reality: here’s a real example relevant to this audience, the rules for an AHB master. An APB version also exists, and AXI and OCP versions are in development. (30 secs)

The customer’s design was a multi-channel bus bridge based on ARM’s AXI architecture. The design receives instructions via a proprietary interface through multiple channels, and performs arbitration, splitting, and aggregation as required, finally generating standard bus transactions through its AXI interface. The customer’s existing functional verification environment used VCS to simulate a constrained random testbench written in SystemVerilog. The testbench itself was pretty straightforward, generating random values for 20 variables, with variable interdependencies described in several algebraic constraints (in SystemVerilog). But you’ll see on the next slide that it wasn’t as simple as it seemed.
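To make the setup concrete, here is a minimal sketch of what constrained-random generation does, written in Python for illustration (the actual testbench was SystemVerilog, using the simulator's constraint solver rather than rejection sampling). The variable names, ranges, and the boundary constraint here are all hypothetical stand-ins for the customer's 20 variables and their algebraic constraints:

```python
import random

# Illustrative sketch only. Two hypothetical variables ('bytes_' and 'addr')
# are drawn by rejection sampling against a made-up algebraic constraint,
# standing in for the 20 interdependent SystemVerilog rand variables.
def gen_stimulus():
    while True:
        bytes_ = random.randint(1, 776)   # hypothetical legal range
        addr = random.randint(0, 255)     # hypothetical legal range
        # hypothetical interdependency: transfer must fit below a 1 KB boundary
        if addr + bytes_ <= 1024:
            return bytes_, addr

b, a = gen_stimulus()
print(b, a)
```

The point of the sketch: every generated pattern is legal, but nothing steers successive patterns toward unhit coverage bins, which is the weakness the following slides quantify.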

It turns out that even after constraining the variables to their legal interdependencies, the total functional space is much too large to cover. The 20 variable labels are listed in the grey column of this table, and their legal ranges in the white column. Trying to cross them all would take forever. The customer realized that not all combinations were important anyway, so the variable ranges were reduced to include only the values considered important for verification purposes. Notice the green column, where several variable ranges were considerably reduced. However, even with these reduced ranges, the total coverage domain was still too large to cross completely with constrained random testing, so the customer further reduced the verification goals to a practical level. The next slide describes each verification goal in more detail.

In verification goal number 1, a constrained random simulation will be run until each value of each variable is achieved. No crossing will be attempted yet. The goal is merely to simulate a testcase that covers each of the 1360 values - the additive total across all cover-points. And notice that the largest cover-point contains 776 bins. That will become important later. In verification goal number 2, a constrained random simulation will be run until a cross of the bytes and addr cover-points is achieved. The other cover-points will not be crossed or measured; only the bytes and addr variables will be measured. When crossing the 776 bins of the bytes cover-point with the 255 bins of the addr cover-point, minus a few undesired testcase combinations, a total of 196,608 cross cover-point bins will need to be hit. Let’s look at the next slide to see the results.
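The cross-coverage arithmetic can be checked directly. The split between raw combinations and excluded ones is inferred from the figures quoted on the slide, not stated in the source:

```python
# Back-of-the-envelope check of the cross-coverage figures quoted above.
bytes_bins = 776
addr_bins = 255
full_cross = bytes_bins * addr_bins   # 197,880 raw combinations
target_bins = 196_608                 # cross-coverage goal quoted in the text
excluded = full_cross - target_bins   # combinations dropped as undesired
print(full_cross, excluded)           # 197880 1272
```

So "minus a few undesired testcase combinations" amounts to 1,272 excluded bins, if the quoted totals are exact.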

For Verification Goal #1, coverage of the 1360 coverpoint bins was achieved after randomly generating 475,500 stimulus patterns. Achieving coverage of the ‘bytes’ coverpoint took the longest time, both because it contained the most bins and because it described corner-case coverage goals that were difficult to hit with randomly generated stimulus. Only 79% of Verification Goal #2 was achieved, even after randomly generating 26 million stimulus patterns.

inFact was added to the verification environment with the goal of accelerating achievement of Verification Goal #1 and achieving Verification Goal #2. Very few changes were required to the environment. Specifically, no changes were needed to the design, and the testbench architecture and language did not change. An inFact graph was added to the testbench to generate stimulus, reusing the existing Verification IP. Bottom line: very few changes to the existing environment.

The process of creating an inFact graph and adding inFact to the verification environment was simple. First, the graph was created. This was done by describing each variable and its domain using inFact rules, then describing constraints and variable relationships. The coverage goals for Verification Goals #1 and #2 were annotated on the graph (shown by the shaded region). Adding the inFact graph to the testbench was simple: existing calls to the SystemVerilog randomize function were replaced with a call to the inFact graph’s ‘fill’ function. The process took a total of one day, requiring around 100 lines of rule code and less than 10 lines of SystemVerilog code.

Let’s look at the results of simulating with inFact. On Verification Goal #1, inFact achieved coverage of the 1360 coverpoint bins in 776 stimulus patterns. Notice that 776 is the number of bins in the largest coverpoint. inFact was able to hit all 1360 coverpoint bins in 776 stimulus patterns because it could target multiple coverage goals with each stimulus item. On Verification Goal #1, inFact achieved coverage 612x faster than constrained random stimulus generation. On Verification Goal #2, inFact achieved coverage of the 196,608 cross-coverpoint bins in exactly 196,608 stimulus items. Since constrained-random stimulus generation did not achieve coverage closure, it’s a bit more difficult to find an appropriate basis on which to compare results. One way to look at the results is that inFact achieved 100% coverage 170x faster than constrained random achieved 79% coverage. Another way to look at the results is that by the time inFact has achieved 100% coverage closure, constrained-random generation has only achieved 1.15% coverage.
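The 612x figure for Goal #1 can be reproduced from the pattern counts quoted on the previous slides:

```python
# Reproducing the Goal #1 comparison arithmetic from the quoted results.
random_patterns_goal1 = 475_500   # constrained random: patterns to 100% coverage
infact_patterns_goal1 = 776       # inFact: patterns to 100% coverage
speedup_goal1 = random_patterns_goal1 // infact_patterns_goal1
print(speedup_goal1)              # 612, matching the 612x quoted above
```

The Goal #2 figures (170x, 1.15%) depend on relative simulation run lengths not fully stated here, so they are not reproduced.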

Key Concepts
- The Rules are compiled into an NDFSM (Non-Deterministic Finite State Machine) representation.
- Action Functions are written in Verilog, SystemVerilog, VHDL, or C++.
- The Rule Graph is then traversed during simulation, and the Action functions are called to produce stimuli.
- Without coverage goals, the traversal will be random.
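As a conceptual illustration only (not inFact's actual engine, and in Python rather than the HDL action functions named above): with no coverage goals, traversing a rule graph amounts to a random walk, calling an action at each step. The toy graph and node names are hypothetical:

```python
import random

# Conceptual sketch: a rule graph as an adjacency map. Each visited node
# would invoke an action function; without coverage goals the walk is random.
graph = {
    "start": ["small_burst", "large_burst"],  # hypothetical rule alternatives
    "small_burst": ["done"],
    "large_burst": ["done"],
    "done": [],
}

def traverse(node="start", rng=random.Random(0)):
    path = [node]
    while graph[node]:
        node = rng.choice(graph[node])  # random successor when no goals remain
        path.append(node)               # real tool: call the node's action here
    return path

print(traverse())
```

Coverage-driven traversal, covered next, replaces the random choice with one that steers toward unvisited paths.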

Using Coverage to Drive Stimulus Generation
Path Coverage is used to define the coverage goals. A single Path Coverage object can cover all legal paths in a graph, or multiple PC objects can be used to cover specific goals and cross products.
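A toy sketch of what "a single Path Coverage object covering all legal paths" means: enumerate every distinct route through the rule graph, then walk each route once. The graph, node names, and helper are hypothetical, in Python for illustration:

```python
# Hypothetical rule graph: two alternative burst rules between start and done.
graph = {
    "start": ["small_burst", "large_burst"],
    "small_burst": ["done"],
    "large_burst": ["done"],
    "done": [],
}

def all_paths(node="start"):
    """Enumerate every legal path from 'node' to a terminal node."""
    if not graph[node]:
        return [[node]]
    return [[node] + rest for nxt in graph[node] for rest in all_paths(nxt)]

paths = all_paths()
print(len(paths))  # 2 legal paths in this toy graph
```

Multiple PC objects would each enumerate paths through a sub-region of the graph, which is how specific goals and cross products get targeted separately.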