SystemVerilog reference verification methodology: RTL

The VMM for SystemVerilog supports both top-down and bottom-up approaches to building a verification environment within the layered approach. In a bottom-up approach, designers may develop simple testbenches that operate primarily at the signal layer, with tests that are little more than sequences of binary values. As individual blocks are combined into subsystems, complete chips, and perhaps multi-chip systems, the verification team adds the higher-level testbench components to complete the overall verification environment.

In a top-down approach, the verification team may build the entire design using transaction-level models written in SystemVerilog or SystemC and run tests on these models, as shown in Figure 2. The top-down approach allows the verification team to build a complete verification environment early in the development process, even before any RTL code has been developed. This environment then becomes the “golden reference” for verifying additional verification components and the RTL design.
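As a minimal sketch of what such a "golden reference" might look like, consider a hypothetical transaction-level model of a simple ALU block, written in SystemVerilog. The class names and the ALU itself are illustrative assumptions, not taken from the VMM text; the point is that the model operates purely on transactions, with no clocks or pin-level activity.

```systemverilog
// Hypothetical transaction-level "golden" model of a simple ALU,
// usable in the verification environment before any RTL exists.
class alu_txn;
  rand bit [7:0] a, b;
  rand bit       op;      // 0 = add, 1 = subtract
  bit      [8:0] result;
endclass

class alu_tl_model;
  // Consumes a transaction and computes its result directly --
  // no timing detail, only the intended function of the block.
  task process(alu_txn t);
    t.result = t.op ? (t.a - t.b) : (t.a + t.b);
  endtask
endclass
```

When the RTL implementation of the same block arrives, it replaces this model behind the same transaction interface, so the rest of the environment is unchanged.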

As the designers complete the RTL implementation, it can be brought into the verification environment to replace the transaction-level models, thereby verifying that the design is functionally equivalent to its transaction-level counterpart.

This approach establishes a process of successive refinement, in which the transaction-level models can be replaced by synthesizable RTL and finally (if desired) by gate-level netlists. This process leverages and reuses the system-level environment to verify the design itself. It also provides a way to address a key verification challenge: checking that transaction-level models behave the same as the RTL.

The layered approach facilitates reuse in several additional ways. Lower layers can be removed and replaced with transaction-level models for architectural or system performance analysis. Layers can be reused between different projects, since the interaction between each layer is clearly defined. Most fundamentally, the entire testbench is reusable across tests since only the test layer needs to be modified in order to generate new tests.

Results checking

Although constrained-random stimulus generation produces many tests very quickly, results checking is needed to ensure that the design executes each test properly. Results checking can be subdivided into data checking and protocol checking. Data checking relies on the ability of the testbench to account for variability in the delay and/or order in which results come out of the design being tested. Allowing this variability is critical for covering all possible scenarios.
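One common way to tolerate out-of-order completion is a scoreboard keyed by a transaction tag. The sketch below assumes each request carries a unique tag that reappears with its response; the class and field names are illustrative, not part of the VMM itself.

```systemverilog
// Sketch of a data checker that accepts responses in any order,
// assuming each transaction carries a unique 8-bit tag.
class scoreboard;
  bit [31:0] expected [bit [7:0]];   // expected data, keyed by tag

  function void add_expected(bit [7:0] tag, bit [31:0] data);
    expected[tag] = data;
  endfunction

  function void check_actual(bit [7:0] tag, bit [31:0] data);
    if (!expected.exists(tag))
      $error("Unexpected response, tag %0h", tag);
    else begin
      if (expected[tag] !== data)
        $error("Data mismatch on tag %0h: expected %0h, got %0h",
               tag, expected[tag], data);
      expected.delete(tag);          // each tag may complete only once
    end
  endfunction
endclass
```

Because the associative array imposes no ordering, the checker works whether responses arrive in issue order or not.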

Building results-checking functionality into the testbench is often a substantial part of the difficulty in creating the testbench in the first place. SystemVerilog was designed with language constructs and primitives that help implement the communication between the stimulus-generation and response-checking portions of the testbench, and help manage the expected results in a way that accounts for the variability of the possible output.
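A SystemVerilog mailbox is one such primitive: the stimulus side deposits each expected result as it drives the design, and the checker retrieves them as actual results arrive. The snippet below is a sketch with invented names; the 9-bit payload is arbitrary.

```systemverilog
// Illustrative link between stimulus and response checking via a
// mailbox of expected values (names and widths are assumptions).
mailbox #(bit [8:0]) exp_mbx = new();

task send_stimulus(bit [8:0] expected_value);
  exp_mbx.put(expected_value);   // record what the design should produce
  // ... drive the corresponding stimulus onto the interface ...
endtask

task check_response(bit [8:0] actual);
  bit [8:0] exp;
  exp_mbx.get(exp);              // blocks until an expected item exists
  if (exp !== actual)
    $error("Mismatch: expected %0h, got %0h", exp, actual);
endtask
```

The blocking `get` naturally absorbs delay variability: the checker simply waits until the stimulus side has recorded something to compare against.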

Including data coverage recording in the response checkers ensures that, for all of the data combinations in the input, the appropriate output combinations were received. It also allows the verification engineer to analyze the coverage data and evaluate whether the right input stimulus combinations were generated to verify all possible output conditions.

Protocol checking typically requires monitoring behavior over time, establishing temporal relationships. Some high-level protocols are most naturally specified with SystemVerilog verification constructs and monitored within the testbench. Other implementation-specific protocol checking of design assumptions is most naturally performed by SystemVerilog assertions within the design and on its interfaces.
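As an illustration of such a temporal check, consider a simple request/acknowledge handshake written as a SystemVerilog assertion. The signal names and the 8-cycle bound are assumptions for the example, not rules from the VMM.

```systemverilog
// Interface-level protocol check: once req rises, it must stay
// asserted until ack arrives, and ack must arrive within 8 cycles.
property req_until_ack;
  @(posedge clk) disable iff (!rst_n)
    $rose(req) |-> req throughout (##[1:8] ack);
endproperty

assert property (req_until_ack)
  else $error("req dropped or ack missing within 8 cycles");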

Most verification engineers would argue that it is not appropriate to report that a directed or constrained-random test has passed when an assertion has been violated. Accordingly, the VMM for SystemVerilog discusses methods for accessing assertion results within the results checking components of the verification environment. This approach provides an important link between assertions and the overall verification environment.
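One minimal way to establish that link is for the assertion's action block to report into a counter that the environment consults before declaring a pass. This is a sketch under assumed names; the VMM describes more structured mechanisms, but the principle is the same: an assertion failure must be visible to the overall pass/fail decision.

```systemverilog
// Hypothetical linkage: assertion failures feed a shared error count
// that gates the final test verdict.
int assertion_errors = 0;

assert property (@(posedge clk) disable iff (!rst_n)
                 req |-> ##[1:4] ack)
  else begin
    assertion_errors++;
    $error("handshake assertion failed");
  end

final begin
  if (assertion_errors == 0) $display("TEST PASSED");
  else $display("TEST FAILED: %0d assertion error(s)", assertion_errors);
end
```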

Implementing coverage-driven verification

Every verification methodology ever implemented has been, at least to some extent, driven by coverage. There is always some goal that must be met, and if a particular test does not reach that goal, either the test is modified or a new test is created. Even a simple manual test in which the results are verified by looking at a waveform is driven by coverage, although the coverage recording and analysis of such a test are implicit and not very reliable.

The VMM for SystemVerilog allows users to build a significantly more effective verification environment that relies heavily on testbench automation, assertions and coverage metrics to improve productivity. Productivity entails both the ability to create new tests more quickly and the ability to avoid redundant test runs.

If a new test is created that exercises the same functionality (and therefore has similar functional coverage) as a previous test, then it is not worth adding the new test to the verification suite. Functional coverage also allows the user to know what features have not yet been tested and to tune future tests to concentrate on those areas. The VMM for SystemVerilog describes how to use functional coverage in this way to achieve the verification goals more quickly.

SystemVerilog is especially well suited for functional coverage. Temporal cover properties, with capabilities similar to SystemVerilog assertions, can be used to capture important corner-case conditions deep within the design.
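A temporal cover property can be as short as one line. The conditions below are invented examples of the kind of corner case worth recording, such as back-to-back retries or a write attempted while a FIFO is full.

```systemverilog
// Record each occurrence of two consecutive retry cycles
// (signal names are illustrative).
cover property (@(posedge clk) retry ##1 retry);

// Record a write attempted in the same cycle the FIFO reports full.
cover property (@(posedge clk) fifo_full && wr_en);
```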

Higher-level coverage points can be specified using cover groups, which can track values as well as ranges and combinations of values. This capability is especially useful for monitoring memory address ranges, contents of data packets, and other multi-bit signals in the design or testbench.

The VMM for SystemVerilog describes the use of temporal cover properties, cover groups, and cross-coverage to specify coverage points and to track these points along with traditional code coverage metrics. SystemVerilog enables such a unified approach, since code coverage, functional coverage points, and assertions are all defined by the same language.
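The following covergroup sketch brings these pieces together: coverpoints with value-range bins plus a cross between them. The signals, bin names, and opcode encodings are assumptions made for illustration.

```systemverilog
// Covergroup tracking an address range and an opcode, and crossing
// them so every opcode is observed in every address region.
logic        clk;
logic [31:0] addr;
logic [1:0]  opcode;

covergroup bus_cg @(posedge clk);
  addr_cp : coverpoint addr {
    bins low  = {[32'h0000_0000 : 32'h0000_0FFF]};
    bins mid  = {[32'h0000_1000 : 32'hFFFF_0FFF]};
    bins high = {[32'hFFFF_1000 : 32'hFFFF_FFFF]};
  }
  op_cp : coverpoint opcode {
    bins rd  = {2'b00};
    bins wr  = {2'b01};
    bins rmw = {2'b10};
  }
  addr_x_op : cross addr_cp, op_cp;   // cross-coverage of the two
endgroup

bus_cg cg = new();
```

Each bin, and each cell of the cross, becomes a coverage point the tools can report alongside code coverage.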