In many verification environments, you reuse the same configuration cycles across different testcases. These cycles might involve writing to and reading from different configuration and status registers, loading program memories, and other similar tasks to set up a DUT for the targeted stimulus. In many of these environments, the time taken by these configuration cycles is very long. There is also a lot of redundancy, as verification engineers have to run the same set of verified configuration cycles for different testcases, leading to a loss in productivity. This is especially true for complex verification environments with multiple interfaces which require different components to be configured.

The Verilog language provides an option of saving the state of the design and the testbench at a particular point in time. You can restore the simulation to the same state and continue from there. This can be done by adding appropriate built-in system calls in the Verilog code. VCS provides the same options from the Unified Command-Line Interface (UCLI).

However, it is not enough for you to restore simulation from the saved state. For different simulations, you may want to apply different random stimulus to the DUT. In the context of UVM, you would want to run different sequences from a saved state, as shown below.

In the above example, apart from the last step, which varies to a large extent, the rest of the steps need no iteration once established.

Here we explain how to achieve the above strategy with the simple existing UBUS example available in the standard UVM installation. Simple changes are made in the environment to show what needs to be done to bring in this additional capability. Within the existing set of tests, two tests, namely "test_read_modify_write" and "test_r8_w8_r4_w4", differ only with respect to the master sequence being executed, i.e. "read_modify_write_seq" and "r8_w8_r4_w4_seq" respectively.

Letâs say that we have a scenario where we would want to save a simulation once the reset_phase is done and then start executing different sequences post the reset_phase the restored simulations. To demonstrate a similar scenario through the UBUS tests, we introduced a delay in the reset_phase of the base test (in a real test, this may correspond to the PLL lock, DDR Initialization, Basic DUT Configuration).

The following snippet shows how the existing tests are modified to bring in the capability of running different tests in different "restored" simulations.

As evident in the code, we made two major modifications:

Shifted the setting of the phase default_sequence from the build phase to the start of the main phase.

Got the name of the sequence as an argument from the command line and processed the string appropriately in the code to execute the sequence on the relevant sequencer.
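Since the original snippet is not reproduced here, a minimal sketch of the two modifications might look like the following. This is a hypothetical reconstruction: the `+SEQ=` plusarg, the fallback sequence name, and the testbench path `ubus_example_tb0.ubus0.masters[0].sequencer` are assumptions in the spirit of the UBUS example, not the paper's exact code.

```systemverilog
// Hypothetical sketch only -- names and hierarchy paths are assumptions.
// Select and run the sequence at the start of the main phase (instead of
// setting default_sequence in build_phase), based on a command-line plusarg.
virtual task main_phase(uvm_phase phase);
  string            seq_name;
  uvm_sequence_base seq;
  if (!$value$plusargs("SEQ=%s", seq_name))
    seq_name = "read_modify_write_seq";   // assumed default
  // Create the sequence by type name through the UVM factory
  if (!$cast(seq,
        factory.create_object_by_name(seq_name, get_full_name(), seq_name)))
    `uvm_fatal("SEQ", {"Could not create sequence: ", seq_name})
  phase.raise_objection(this);
  seq.start(ubus_example_tb0.ubus0.masters[0].sequencer);
  phase.drop_objection(this);
endtask
```

Because the sequence choice is deferred until the main phase and read from the command line, a simulation restored after the reset_phase can still pick a different sequence on each restore.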

As you can see, the changes are kept to a minimum. With this, the above generic framework is ready to be simulated. In VCS, the save/restore flow can be enabled as follows.
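As a rough illustration (the exact command names and switches vary by VCS version, so treat this as a sketch and consult the VCS documentation for your release), a save/restore session driven from UCLI could look like this:

```
# Run the simulation up to the point of interest and save a checkpoint:
% ./simv -ucli
ucli% run 1000ns            # advance past the reset_phase delay
ucli% save reset_done       # write the checkpoint image
ucli% quit

# Restore from the checkpoint, passing a different sequence each time:
% ./simv -ucli +SEQ=r8_w8_r4_w4_seq
ucli% restore reset_done
ucli% run                   # continue from the saved state with the new sequence
```

The key point is that the restored run picks up fresh plusargs, so the same saved image can seed many different restored simulations.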

Thus the above strategy helps in optimal utilization of compute resources with simple changes to your verification flow. We hope this was useful and that you can easily make the changes in your verification environment to adopt this flow and avoid redundant simulation cycles.

In my final installment of this series of blogs summing up the various SNUG verification papers of 2012, I cover the user papers on Design IP/Verification IP and SystemC and SystemVerilog co-simulation. Please find my earlier blogs on the other domains here: SystemVerilog Language, Methodologies & VCS technologies.

The DesignWare core USB3.0 Controller (DWC_usb3) can be configured as a USB3.0 Device Controller. When verifying a system that comprises a DWC_usb3 Device Controller, the verification environment is responsible for bringing up the DWC_usb3 Device Controller to its proper operation mode to communicate with the USB3.0 Host. The paper "Integrating DesignWare USB3.0 Device Controller In a UVM-based Testbench" from Ning Guo of Paradigm Works describes the process of configuring and driving the DWC_usb3 Device Controller in a UVM-based verification environment using the Discovery USB 3.0 Verification IP. This paper describes how the verification environment needs to be created so that it's highly configurable and reusable.

The AMBA 4 ACE specification enables system-level cache coherency across clusters of multicore processors, such as the ARM Cortex-A15 and Cortex-A7 MPCore™ processors. This ensures optimum performance and power efficiency of complex SoC designs. However, the design complexity associated with these capabilities is also higher, and it throws up new verification challenges. In the paper "Creating AMBA4 ACE Test Environment With Discovery VIP", Whitney Huang and Sean Chou of MediaTek Inc. demonstrate how to tackle complex verification challenges and increase verification productivity by using Synopsys Discovery AMBA ACE VIP.

The paper, âVerification Methodology of Dual NIC SOC Using VIPsâby A.V. Anil Kumar, Mrinal Sarmah, Sunita Jain of Xilinx India Technology Services Pvt. Ltd, talks about how various features of Synopsys PCIe and Ethernet Verification IPs can be exploited to help in the efficient verification of the DUT across various traffic configurations. The paper explores how the VIP Application Programming Interface (API)s can leveraged in the Â tests cases to reach high Â functional coverage numbers Â in a very short duration. They also show how a dual NIC verification environment can effectively use Ethernet VIP APIs to test various Media Access Control (MAC) features. Finally conclude how of the implementation can be used across future revisions of their design.

The ability to analyze the performance of the SoC at an early stage of the design can make a significant difference to the end product, leading to a more accurate and earlier estimate of the expected performance. Dayananda Yaraganalu Sadashivappa, Igal Mariasin, and Jayaprakash Naradasi of SanDisk India Device Design Centre Pvt. Ltd., in the paper "Generic MLM environment for SoC Performance Enhancement", outline the solution that was found by using the Synopsys VIP models. The VIPs were used in conjunction with the interconnect, which in this case is a Multi-Layer-Matrix (MLM). The environment was built leveraging the VMM base classes. The VMM multiple-stream scenario (vmm_ms_scenario) base class was used to create the traffic across the matrix, and the performance meters were constructed using the base classes. The callbacks were leveraged appropriately to help in collating the statistics. Multiple knobs were used to make the environment generic and configurable. The approach helped in finding multiple performance bugs which could not have been easily found using conventional verification.

In the paper, âUser Experience Verifying Ethernet IP Coreâ, Puneet Rattia of Altera Corporation, presents his experience with verifying the AlteraÂ® 40-100Gbps Ethernet IP core utilizing VMM environment while integrating the Ethernet VIP from Synopsys. He explains how he created a full suite of system and blocks level regression tests and then goes on to show how he Â utilizes the coverage mapping capabilities of VCS to merge the results across these various testbenches and produce meaningful reports. Besides showing how to reuse the verification infrastructure at the SoC level, the paper also demonstrates how they went in for horizontal reuse by integrating the reference SystemC based models developed and prototyped in the early phase of the project.

UVM 1.x includes support for the communication interfaces defined by the SystemC TLM-2.0 standard. This enables integration of SystemC TLM-2.0 IP into a SystemVerilog UVM verification environment. Dr. David Long, John Aynsley, and Doug Smith of Doulos, in the paper "A Beginner's Guide to Using SystemC TLM-2.0 IP with UVM", describe how this is best done. They note that the connection between SystemC and SystemVerilog currently requires a tool-specific interface such as the Synopsys Transaction Level Interface (TLI). The paper begins with a brief overview of TLM-2.0 aimed at novice users. It then discusses the steps required to add a SystemC TLM-2.0 model into a SystemVerilog UVM environment and simulate it with VCS. At each step, issues that users will face are explored and suggestions made for practical fixes, showing the relevant pieces of code. Finally, the paper gives a summary of areas where the UVM implementation of TLM-2.0 differs from the SystemC standard and proposes workarounds to ensure correct communication between the SystemVerilog and SystemC domains.

There is an inherent need to enable the horizontal reuse of components created during the architecture and exploration stage. Subhra S Bandyopadhyay and Pavan N M of Intel Technology India Pvt. Ltd., in "Integrating SystemC OSCI TLM 2.0 Models to OVM based System Verilog Verification Environments", talk about how their architecture team creates SystemC models for early performance analysis and accelerated software development. In the OVM-based verification environment, the objective was to reuse this model as a reference model, thus reducing the overall environment bring-up time. The challenge was not only to integrate the SystemC model in the OVM-based verification environment but also to be able to efficiently send transactions from SV to SystemC and vice versa. This paper explores the successful integration of SystemC TLM2 components in OVM-based verification environments and also highlights how the VCS TLI (Transaction Level Interface) adapters help TLM2.0 sockets in SystemC to communicate with those in SV and vice versa.

Truly, I feel overwhelmed by the number of papers and the interesting use of technology across the variety of domains on which users share their experiences at the various SNUG conferences. As we speak, the SNUG events for 2013 have started, and the stage is all set for a new set of very informative and interesting sessions. I am sure most of you will be attending the SNUG conference in your area. You can find the detailed schedule here.

Functional coverage has been the most widely accepted way by which we track the completeness of any constrained-random testbench. However, does achieving 100% functional coverage mean that the DUV is bug free? Certainly not, but it boosts the confidence of the verification engineer and the management team.

Based on my experience of defining functional covergroups for different projects, I realized that coverage constructs and options in the SystemVerilog language have their own nuances for which one needs to keep an eye out. These "gotchas" have to be understood so that coverage can be used optimally and the results stay correctly aligned with the intent. Let me talk about some of these finer aspects of coverage so that you can use the constructs more productively.

Usage of ignore_bins

The âignore_binsâ construct is meant to exclude a collection of bins from coverage. Â While using this particular construct, you might end up with multiple âshapesâ issues (By âshapesâ I mean âGuard_OFFâ and âGuard_ONâ, which appears in the report whenever âignore_binsâ is used). Lets look at a simple usage of ignore_bins is as shown in figure 1.

Looking at the code in figure 1, we would assume that since we have set "cfg.disable = 1", the bin with value 1 would be ignored in the generated coverage report. Here we use the "iff" condition to try to match our intent of not creating a bin for the variable under the said condition. However, in simulations where the sample_event is not triggered, we see that we end up having an instance of our covergroup which still expects both bins to be hit (see the generated report in figure 1). Why does this happen? If you dig deep into the semantics, you will understand that the "iff" condition comes into action only when the event sample_event is triggered. So if we are writing "ignore_bins" for a covergroup which may or may not be sampled on each run, then we need to look for an alternative. Indeed, there is a way to address this requirement, and that is through the usage of the ternary operator. Look at the code in figure 2 to see how the ternary operator is used to model the same intent.
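Since the original figures are not reproduced here, a minimal sketch of the ternary-based intent, with illustrative names and an assumed two-bit variable, could look like:

```systemverilog
bit [1:0] mode;          // legal values are 0 and 1
// assume cfg.disable is set before the covergroup is constructed

covergroup mode_cg @(sample_event);
  coverpoint mode {
    bins mode0 = {0};
    bins mode1 = {1};
    // The ternary is evaluated when the covergroup is constructed, not when
    // it is sampled: if cfg.disable is set, the bin for value 1 is ignored;
    // otherwise we ignore 2'b11, a value 'mode' never legally takes, so no
    // valid bin is lost.
    ignore_bins disabled = {cfg.disable ? 1 : 2'b11};
  }
endgroup
```

Because the exclusion no longer depends on the sampling event firing, the unwanted bin disappears from the report even in runs where the covergroup is never sampled.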

Now the report is as you expect!!!

Using the above-mentioned coding style, we make sure that the bin which is not desired under specific conditions is ignored irrespective of whether or not the covergroup is being sampled. Also, we use the value 2'b11 to make sure that we don't end up ignoring a valid value for the variable concerned.

Using "detect_overlap"

The coverage option "detect_overlap" helps in issuing a warning if there is an overlap between the range lists (or transition lists) of two bins of a coverpoint. Whenever we have plenty of ranges to be covered and there is a possibility of overlap, it is important to use this option.

Why is it important, and how can you be impacted if you don't use it? You might actually end up with incorrect and unwanted coverage results!

Letâs look at an example. In the above scenario, if a value of 25 is generated, the coverage scores reported would be 50% when the desired outcome would ideally have been 25%. This is because the value â25â contributes to two bins out of four bins when that was probably not wanted. The usage of âdetect_overlapâ would have warned you about this and you could have fixed the bins to make sure that such a scenarioÂ doesn’tÂ occur.

Coverage coding for crosses and assigning weight

What does the LRM (Table 19-1: Instance-specific coverage options) say about the "weight" attribute?

"If set at the covergroup syntactic level, it specifies the weight of this covergroup instance for computing the overall instance coverage of the simulation. If set at the coverpoint (or cross) syntactic level, it specifies the weight of a coverpoint (or cross) for computing the instance coverage of the enclosing covergroup. The specified weight shall be a non-negative integral value."

What kinds of surprises can a combination of cross and option.weight create?

The SystemVerilog LRM shows a very simple way of writing a cross. Let's look at the code below.

The expectation here is that for a single simulation (expecting one of the bins to be hit), we will end up with 25% coverage, as we have set the weight of the individual coverpoints to zero. However, what essentially happens is the following: two internal coverpoints for check_4_a and check_4_b are generated, which are used to compute the coverage score of the "crossed" coverpoint. So you end up having a total of four coverpoints, two of which have option.weight set to 0 (i.e. CHECK_A and CHECK_B) and two of which are coverpoints with option.weight of 1 (i.e. check_4_a and check_4_b). Thus, for a single simulation, you will not get the 25% coverage desired.

Now with this report we see the following issues:

=> We see four coverpoints while the expectation was only two.

=> The weights of the individual coverpoints are not zero as expected, even though option.weight is set to "0".

=> The overall coverage numbers are not what was desired.

In order to avoid the above results, we need to take care of the following aspects:

=> Use type_option.weight = 0 instead of option.weight = 0.

=> Use the coverpoint labels instead of coverpoint names to specify the cross.
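Putting both fixes together, a hedged sketch (the variable and label names are illustrative) might look like:

```systemverilog
covergroup cross_cg @(posedge clk);
  CHECK_A : coverpoint check_4_a {
    type_option.weight = 0;  // type_option, not option: the coverpoint no
  }                          // longer contributes to the coverage score
  CHECK_B : coverpoint check_4_b {
    type_option.weight = 0;
  }
  // Cross the labels, not the variable names, so that no additional
  // implicit coverpoints are generated for check_4_a and check_4_b.
  A_X_B : cross CHECK_A, CHECK_B;
endgroup
```

With the labels crossed and type_option.weight used, only the intended coverpoints appear in the report and the cross alone drives the score.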

I hope my findings will be useful for you and that you will use these options/attributes appropriately to get the best value out of your coverage metrics (without losing any sleep or debug cycles figuring out why they didn't behave as you expected)!

Continuing from my earlier blog posts about SNUG papers on the SystemVerilog language and verification methodologies, I will now go through some of the interesting papers that highlight core technologies in VCS which users can deploy to improve their productivity. We will walk through various stages of the verification cycle, including simulation bring-up, RTL simulation, gate-level simulation, regression, and simulation debug, each of which benefits from different features and technologies in VCS.

Beating the SoC challenges

One of the biggest challenges that today's complex SoC architectures pose is the rigorous verification of SoC designs. Functional verification of the full system represented by these mammoth-scale (> 1 billion transistors per chip) designs calls for the verification environment to employ advanced methodologies, powerful tools, and techniques. Constrained-random stimulus generation, coverage-driven completion criteria, assertion-based checking, faster triage and debug turnaround, C/C++/SystemC co-simulation, and gate-level verification are just some of the methods and techniques which aid in tackling the challenge. Patrick Hamilton, Richard Yin, Bobjee Nibhanupudi, and Amol Bhinge of Freescale, in their paper "SoC Simulation Performance: Bottlenecks and Remedies", discuss the several simulation and debug bottlenecks experienced during the verification of a complex next-generation SoC; they discuss how they gained knowledge of these bottlenecks and overcame them using VCS diagnostic capabilities, profile reports, VCS arguments, testbench modifications, smarter utilities, fine-tuning of computing resources, etc.

The challenge of a simulation environment is the sheer number of tests that are being written and need to be managed. As more tests are added to the regressions, there is a quantifiable impact on several aspects of the project. These include a dramatic and unsustainable increase in the overall regression time. As the regression time increases, the intermediate interval for collecting and analyzing results between successive regression runs shrinks. Overlaps arising from having multiple regressions in flight can cause failure to track design bugs for several snapshots, which can also result in the inability to ensure coverage tracking by design management tools. Given constantly shortening project timelines, this affects the time-to-market of core designs and their dependent SoC products. "Simulation-based Productivity Enhancements using VCS Save/Restore" by Scot Hildebrandt and Lloyd Cha of AMD, Inc. looks at using VCS's Save/Restore feature to develop steps involving binary image capture of sections of simulation. These "sections" consist of aspects replicated in all tests, like the reset sequence, or allow the skipping of specific phases of a failing test which are "clean". They further provide statistics on the reduction in regression time and the memory footprint that the saved image would typically enable. They also talk about how the dynamic re-seeding of the testcase with the stored images enabled them to leverage the full strength and capabilities of CRV methodologies.

The paper âSoC Compilation and Runtime Optimization using VCSâ by Santhosh K.R., Sreenath Mandagani of Mindspeed Technologies(India) talks about the Partition Compile flow and associated methodology to improve TAT(turnaround time) for SOC compilations. The flow leverages v2k configurations, parallel compilation and various performance optimization switches of VCS-MX. They further explain how a SoC can be partitioned into multiple functional blocks or clusters and each block can be selectively replaced with empty shells if that particular functionality is not exercised in the desired tests. Also the paper demonstrates how new tests can be added and run without requiring to recompile the whole SoC. Thus using Partition Compile flow, only a subset of SoC or test bench blocks would be recompiled based on the dependencies across clusters. They share the productivity gains in compile TAT as well, overall runtime gains for the current SoC and the savings in overall disk space requirement. This is then shown to correlate with the reduction in the license usage time and disk space which leads to savings desired.

By the way, there have been further developments in the latest VCS release to help ensure isolation of switches between partitions in the SoC. This additional functionality helps reduce memory, decrease runtime, and reduce initial scratch compile time even further while maintaining the advantages of partition compile.

Addressing the X-optimism challenges: X-prop Technology

Gate simulations are onerous, and many of the risks normally mitigated by gate simulations can now be addressed by RTL lint tools, static timing analysis tools, and logic equivalence checking. However, one risk that persisted, until now, is the potential for optimism in the X semantics of RTL simulation. The semantics of the Verilog language can create mismatches between RTL and gate-level simulation due to X-optimism. Also, the semantics of X's in gate simulations are pessimistic, resulting in simulation failures that don't represent real bugs.
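As a simple illustration of the optimism in question (the signal names are made up for the example):

```systemverilog
// Classic X-optimism: when 'sel' is 1'bx, a plain 'if' treats the unknown
// condition as false and deterministically takes the else branch,
// silently hiding the X instead of propagating it.
always_comb begin
  if (sel)
    y = a;
  else
    y = b;   // also taken for sel == 1'bx; gates may resolve differently
end
```

In silicon, an unknown sel could pick either input, so RTL simulation can pass while the gate-level netlist (or the chip) misbehaves; this is the gap the X-prop semantics described below aim to close.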

âImproved X-Propagation using the xProp technologyâ by Rafi Spigelman of Intel Corporation presents the motivation for having the relevant semantics for X-propagation. The process of how such semantics was validated and deployed on a major CPU design at Intel is also described. He delves upon its merits and limitations, and comments on the effort required in enabling such semantics in the RTL regressions.

Robert Booth of Freescale Inc., in the paper "X-Optimism Elimination during RTL Verification", explains how chips suffer from X-optimism issues that often conceal design bugs. The deployment of low-power techniques such as power shutdown in today's designs exacerbates these X-optimism issues. To address these problems, they show how they leverage the new simulation semantics in VCS that more accurately model non-deterministic values in logic simulation. The paper describes how X-optimism can be eliminated during RTL verification.

In the paper âX-Propagation: An Alternative to Gate Level Simulationâ,Adrian Evans, Julius Yam, Craig Forward @cisco.com explores X-Propagation technology which attempts to model X behavior more accurately at the RTL level. In this paper, they review the sources of Xâs in simulation and their handling in the Verilog language. They further describe their experience using this feature on design blocks from Cisco ASICs including several simulation failures that did not represent RTL bugs. They conclude by suggesting how X-Propagation can be used to reduce and potentially eliminate gate-level simulations.

InÂ the paper âImproved x-propagation semantics: CPU server learningâ, Peeyush Purohit, Ashish Alexander, Anees Sutarwala of Intel stresses on the need to model and simulate silicon like behavior in RTL simulations. They bring out the fact that traditionally Gate-Level Simulations have been used to fill that void but come at the cost of time and resources. Then they go on to explain the limitations with the regular 4-value Verilog/System Verilog based RTL simulation and also cover the specifications for enhanced simulator semantics to overcome those limitations. They explain how design issues that were found on their next-generation CPU server project used the enhanced semantics; the potential flow implications and a sample methodology implementing the new semantics are provided.

Power-on-Reset (POR) is a key functional sequence for all SoC designs, and any bug not detected in this logic can lead to dead silicon. Complexities in reset logic pose increasing challenges for verification engineers to catch any such design issue(s) during RTL/GL simulations. POR sequence simulations are often accompanied by "X" propagation due to non-resettable flops and uninitialized logic. Generally, uninitialized and non-resettable logic is initialized to 0s, 1s, or some random values using forces or deposits to bypass unwanted X propagation. Ideally, one would like to have stimulus that tries all possible combinations of initial values for such logic, but this is practically impossible due to short design cycles and limited resources. This practical limitation can leave space for critical design bugs that may remain undetected during the design verification cycle. Deepak Jindal of Freescale, India, in the paper "Gaps and Challenges with Reset Logic Verification", discusses these reset logic simulation challenges in detail and shares the experience of evaluating the new semantics in VCS technology, which can help to catch most of the POR bugs/issues during the RTL stage itself.

SNUG allows users to discuss their current challenges and the emerging solutions they are using to address them. You can find all SNUG papers online via SolvNet (of course, a login is required!).

In my previous post, we discussed papers that leveraged SystemVerilog language and constructs, as well as those that covered broad methodology topics. In this post, I will summarize papers that are focused on the industry-standard methodologies: Universal Verification Methodology (UVM) and Verification Methodology Manual (VMM).

Papers on Universal Verification Methodology (UVM)

Some users prefer not to use the base classes of a methodology directly. Adding a custom layer enables them to add capabilities specific to their requirements. This layer consists of a set of generic classes that extend the classes of the original methodology. These classes provide a convenient location to develop and share the processes that are relevant to an organization for reuse across different projects. Pierre Girodias of IDT (Canada), in the paper "Developing a re-use base layer with UVM", focuses on the recommendations that adopters of these "methodologies" should follow while developing the desired "base" layer. Typical problems and possible solutions encountered while developing this layer are also identified, including dealing with the lack of multiple inheritance and foraging through class templates.

UVM provides many features but fails to define a reset methodology, forcing users to develop their own methodology within the UVM framework to test the "reset" of their DUT. Timothy Kramer of The MITRE Corporation, in the paper "Implementing Reset Testing", outlines several different reset strategies and enumerates the merits and disadvantages of each. As is the case for all engineering challenges, there are several competing factors to consider, and in this paper the different strategies are compared on flexibility, scalability, code complexity, efficiency, and how easily they can be integrated into existing testbenches. The paper concludes by presenting the reset strategy which proved to be the most optimal for their application.

The âFactoryâ concept in advanced OOP based verification methodologies like UVM is something that has baffled most verification engineers. But is it all that complicated? Not necessarilyÂ and this is what is Â explained by Clifford E. Cummings of Sunburst Design, Inc. in his paperâ âThe OVM/UVM Factory & Factory Overrides – How They Works – Why They Are Importantâ . This paper explains the fundamental details related to the OVM/UVM factory and explain how it works and how overrides facilitate simple modification to the testbench component and transaction structures on a test by test basis. This paper not only explains why the factory should be used but also demonstrates how users can create configurable UVM/OVM based environments without it.

The Register Abstraction Layer has always been an integral component of most of the HVL methodologies defined so far. Doug Smith of Doulos, in his paper "Easier RAL: All You Need to Know to Use the UVM Register Abstraction Layer", presents a simple introduction to RAL. He distills the adoption of UVM RAL into a few easy and salient steps, which are adequate for most cases. The paper describes the industry-standard automation tools for the generation of the register model. Additionally, the integration of the generated model along with the front-door and back-door access mechanisms is explained in a lucid manner.

The combination of SystemVerilog language features coupled with the DPI and VPI language extensions can enable the testbench to generically react to value changes on arbitrary DUT signals (which might or might not be part of a standard interface protocol). Jonathan Bromley of Verilab, in "I Spy with My VPI: Monitoring signals by name, for the UVM register package and more", presents a package which supports both value probing and value-change detection for signals identified at runtime by their hierarchical name, represented as a string. This provides a useful enhancement to the UVM register package, allowing the same string to be used for backdoor register access.

Proper testing of most digital designs requires that error conditions be stimulated to verify that the design either handles them in the expected fashion or ignores them, but in all cases recovers gracefully. How to do this efficiently and effectively is presented in "UVM Sequence Item Based Error Injection" by Jeffrey Montesano and Mark Litterick of Verilab. A self-checking constrained-random environment can be put to the test when injecting errors, because unlike the device under test (DUT), which can potentially ignore an error, the testbench is required to recognize it, potentially classify it, and determine an appropriate response from the design. This paper presents an error injection strategy using UVM that meets all of these requirements. The strategy encompasses both active and reactive components, with code examples provided to illustrate the implementation details.

The Universal Verification Methodology is a huge win for the hardware verification community, but does it have anything to offer Electronic System Level design? David C Black of Doulos Inc. explores UVM on the ESL front in the paper "Does UVM make sense for ESL?" The paper considers UVM and SystemVerilog enhancements that could make the methodology even more enticing.

Papers on Verification Methodology Manual (VMM)

Joseph Manzella of LSI Corp, in "Snooping to Enhance Verification in a VMM Environment", discusses situations in which a verification environment may have to peek at internal RTL states and signals to enhance results, and provides guidelines on what is acceptable practice. This paper explains how the combination of vmm_log (the logger class of VMM) and +vmm_opts (a command-line utility to change different configurable values) helps in creating a configurable message wrapper for internal grey-box testing. The techniques show how different assertion failures can be re-routed through the VMM messaging interface. An effective and reusable snooping technique for robust checking is also covered.

Andrew Elms of Huawei, in "Verification of a Custom RISC Processor", presents the successful application of VMM to the verification of a custom RISC processor. The challenges in verifying a programmable design and the solutions to address them are presented. Three topics are explored in detail: use of the verification planner, constrained-random generation of instructions, and coverage closure. The importance of the verification plan as the foundation for the verification effort is explored. Enhancements to the VMM generators are also explored. By default, VMM data generation is independent of the current design state, such as register values and outstanding requests; RAL and generator callbacks are used to address this. Finally, experiences with coverage closure are presented.

Keeping you covered on varied verification topics in the upcoming blogs ahead!!! Enjoy reading!!!

As in the previous couple of years, last year's SNUG (Synopsys Users Group) showcased an amazing number of useful user papers leveraging the capabilities of the SystemVerilog language and the verification methodologies centered on it.

I am always excited when I see this plethora of useful papers, and I try to set aside some time to go through all these user experiences. Now, as we wait for SNUG Silicon Valley to kick-start this year's SNUG events, I would like to look back at some of the very interesting and useful papers from the different SNUGs of 2012. Let me start with a few papers in the area of the SystemVerilog language and SV methodologies.

Papers leveraging the SystemVerilog language and constructs

Hillel Miller of Freescale in the paper "Using covergroups and covergroup filters for effective functional coverage" uncovers the mechanisms available for carving out coverage goals. The P1800-2012 revision of the SystemVerilog LRM provides new constructs just for doing this; the construct focused on here is "with". It provides the ability to carve a sub-range of goals out of a multidimensional range of possibilities. This is very relevant in a "working" or under-development setup that requires frequent reprioritization to meet tape-out goals.

The paper âTaming Testbench Timing: Timeâs Up for Clocking Block Confusionsâ by Jonathan Bromley, Kevin Johnston of Verilab, reviews the key features and purpose of clocking blocks and then examines why they continue to be a source of confusion and unexpected behavior for many verification engineers. Drawing from the authorsâ project and mentoring experience, it highlights typical usage errors and how to avoid them. They clarify the internal behavior of clocking blocks to help engineers understand the reasons behind common problems, and show techniques that allow clocking blocks to be used productively and with confidence. Finally, they consider some areas that may cause portability problems across simulators and indicate how to avoid them.

Inference of latches and flops based on coding styles has always been a topic that invites multiple viewpoints. There are other such scenarios of synthesis and simulation mismatches that one typically comes across. To address all such ambiguity, language developers have provided different constructs that allow an explicit resolution based on the intent. To help us gain a deeper understanding of the topic, Don Mills of Microchip Technology Inc. presented the related concepts in the paper "Yet Another Latch and Gotchas Paper" at SNUG Silicon Valley. This paper discusses and provides solutions to issues that designers using SystemVerilog come across, such as: case expression issues for casez and casex, latches generated when using unique case or priority case, SRFF coding style problems with synthesis, and the SystemVerilog 2009 definition of logic.

Gabi Glasser from Intel presented the paper "Utilizing SystemVerilog for Mixed-Signal Validation" at SNUG Israel, where he proposed a mechanism for simplifying analysis and increasing coverage for mixed-signal simulations. The method proposed takes advantage of SystemVerilog's ability to define a hash (associative) array of unlimited size. During the simulation, vectors are created for the required analog signals, allowing them to be analyzed within the testbench during or at the end of the simulation, without saving these signals to a file. The flow change makes it possible to launch a large-scale mixed-signal regression while allowing easier analysis of coverage data.

A design pattern is a general reusable solution to a commonly recurring problem within a given context. The benefit of using design patterns is clear: they give designers a common language when approaching a problem, and a widely used set of tools to solve issues as they come up. The paper "Design Patterns In Verification" by Guy Levenbroun of Qualcomm explores several common problems that might arise during the development of a testbench, and how design patterns can be used to solve them. The patterns covered in the paper fall into three major categories: creational (e.g. factory), structural (e.g. composite) and behavioral (e.g. template).

Arik Shmayovitsh, Avishay Tvila and Guy Lidor of Sigma Designs, in their paper "Truly reusable Testbench-to-RTL connection for SystemVerilog", present a novel approach to connecting the DUT and testbench using consistent semantics while reusing the testbench. This is achieved by abstracting the connection layer of each testbench using the SystemVerilog "bind" construct. As a result, the only thing required to reuse the testbench for a new DUT is to identify the instance of the corresponding DUT.

In the paper, âA Mechanism for Hierarchical Reuse of Interface Bindingsâ, Thomas Zboril of Qualcomm (Canada) explores another method to instantiate SV interfaces, connect them to the DUT and wrap the virtual interfaces for use in the test environment. This method allows the reuse of all the code when the original block level DUT becomes a lower level instance Â in a larger subsystem or chip. The method involves three key mechanisms: Hierarchical virtual interface wrappers, Novel approach of using hierarchical instantiation of SV interfaces, Another novel approach of automatic management of hierarchical references via SV macros (new)

Thinh Ngo and Sakar Jain of Freescale Semiconductor, in their paper "100% Functional Coverage-Driven Verification Flow", propose a coverage-driven verification flow that can efficiently achieve 100% functional coverage during simulation. The flow targets varied functionality, focuses at the transaction level, measures coverage during simulation, and fails a test if 100% of the expected coverage is not achieved. This flow maps stimulus coverage to functional coverage, with every stimulus transaction being associated with an event in the coverage model and vice versa. This association is derived from the DUT specification and/or the DUT model. Expected events generated along with stimulus transactions are compared against actual events triggered in the DUT, and the comparison results are used to pass or fail the test. 100% functional coverage is achieved via 100% stimulus coverage, so every passing test meets 100% coverage of its targeted functionality.

Papers on Verification Methodology

In the paper, âTop-down vs. bottom-up verification methodology for complex ASICsâ , Paul Lungu & Zygmunt Pasturczyk of Ciena at Canada covers the simulation methodology used for two large ASICs requiring block level simulations. A top-down verification methodology was used for one of the ASICs while a larger version needed an expanded bottom-up approach using extended simulation capabilities. Some techniques and verification methods such as chaining of sub environments from block to top-level are highlighted Â along with challenges and solutions found by the verification team. The paper presents a useful technique of Â of passing a RAL (Register Abstraction Layer) mirror to the C models which are used as scoreboards in the environment. The paper also presents a method of generating stable clocks inside the “program” block.

In the paper, âIntegration of Legacy Verilog BFMs and VMM VIP in UVM using Abstract Classesâ by Santosh Sarma of Wipro Technologies(India) presents an alternative approach where Legacy BFMs written in Verilog and not implemented using âClassesâ are hooked up to higher level class based components to create a standard UVM VIP structure. The paper also discusses an approach where existing VMM Transactors that are tied to such Legacy BFMs can be reused inside the UVM VIP with the help of the VCS provided UVM-VMM Interoperability Library. The implementation makes use of abstract classes to define functions that invoke the BFM APIs. The abstract class is then concretized using derived classes which give the actual implementation of the functions in the abstract class. The concrete class is then bound to the Verilog instance of the BFM using the SystemVerilog “bind” concept. The concrete class handle is then used by the UVM VIP and the VMM Transactor to interact with the underlying Verilog BFM. Using this approach the UVM VIP can be made truly reusable by using run time binding of the Verilog BFM instance to the VIP instead of using hardcoded macro names or procedural calls.

âA Unified Self-Check Infrastructure - A Standardized Approach for Creating the Self-Check Block of Any Verification Environmentâ byJohn Sotiropoulos,Matt Muresa , Massi Corba of Draper Laboratories Cambridge, MA, USApresents a structured approach for developing a centralized âSelf-Checkâ block for a verification environment. The approach is flexible enough to work with various testbench architectures and is portable across different verification methodologies. Here, all of the designâs responses are encapsulated under a common base class, providing a single âSelf-Checkâ interface for any checking that needs to be performed. This abstraction, combined with a single centralized scoreboard and a standardized set of components, provides the consistency needed for faster development and easier code maintenance. It expands the concept of âself-checkâ to incorporate the white-box monitors (tracking internal DUT state changes etc.) and Temporal Models (reacting to wire changes) along-with traditional methodologies for enabling self-checking.

For VMM users looking at migrating to UVM, another paper, "Transitioning to UVM from VMM" by Courtney Schmitt of Analog Devices, Inc., discusses the process of transitioning from a VMM-based environment to UVM. Differences and parallels between the two verification methodologies are presented to show that updating to UVM is mostly a matter of getting acquainted with a new set of base classes. Topics include UVM phases, agents, TLM ports, configuration, sequences, and register models. Best practices and reference resources are highlighted to make the transition from VMM to UVM as painless as possible.

When I look around, the most common electronic devices I see are smartphones, tablets, laptops and televisions. A common user need that drives competition among these devices is to conserve battery life while saving and restoring the application state. From the semiconductor perspective, this requirement translates to: designs should use advanced low-power techniques to draw minimum power while saving and restoring critical design states.

For present and future chip designs, it is almost a necessity that the power intent is clear from the very beginning. This is the main reason that the power intent description has become an integral part of RTL development. Are you concerned about power consumption during RTL development?

Using UPF 2.0, abstract power supply networks, also called "supply sets", are defined. There is no ambiguity in defining the supply network, as it is part of planning the power architecture. But the step that needs the most careful planning is defining the valid combinations of power states. Let me explain this in more depth: once supplies are defined, explicitly using "create_supply_port" or implicitly using "create_supply_set", the valid supply states are defined next using "add_port_state" and "add_power_state". This means defining when a state will be "off", "high_voltage", "low_voltage", etc. Many permutations and combinations of these supply states are possible, but not all are expected or needed for design functionality. So the valid combinations (legal states) of these supplies are defined in a "power state table" (PST). How complex are the PSTs in your current designs?

The PST holds the key to which power-domain crossovers need protection policies such as isolation. Implementation tools use the PST as a reference to insert protection devices on power-domain crossovers; similarly, static verification tools use the PST as a reference to validate the correctness of the power intent. But what if the PST has a bug, for example an invalid state that the user provided while writing the UPF? In such cases, relying on static and implementation tools is not enough, and this is one of the many reasons dynamic verification is needed. Based on the design (power) specification, the verification team creates a testplan whose test vectors include putting the design into all specified valid states; once power-aware simulation corrupts crossovers during shutdown, the failing simulation will uncover the PST bug. Did you ever catch a PST bug in simulation? What if that bug was not caught and slipped through to implementation?

Consider the following example, where there are two domains, PD1 and PD2, with VDD1 the primary supply of PD1 and VDD2 the primary supply of PD2. As per the specification, below are the valid states.

Now suppose that, while writing the UPF, the user forgets to define power state "State3". A static verifier such as MVRC may not be able to spot this problem, as the PST is its reference. But a verification engineer developing tests will create a test to cover state "State3", and VCS-NLP will spot the problem in simulation.

In the above example, consider instead a situation where "State3" is not a valid state and is not defined. In that case, if an isolation policy is defined for domain PD1 that applies isolation on all of its outputs, a static verifier such as MVRC will immediately pinpoint that the paths from VDD1 to VDD2 do not need isolation, as the two supplies turn on and off simultaneously.

As SoC sizes grow and IP usage increases, there is a strong need for a hierarchical methodology, and this applies to low power as well. In a hierarchical low-power methodology, block-level power intent is captured in UPF, and once the blocks are verified, all the block UPF files are loaded at the top level using "load_upf -scope". This creates some interesting situations: there can be many supplies and power states at the block level but far fewer supplies at the top level, which means many block supplies are connected to the same top-level supply, and so are their power states. This is the basic need for PST merging, a concept whereby different power state tables are merged in a smart way to create the top-level PST.

Consider the following example:

The PSTs defined in the block-level UPF of block_b and block_c are the following:

The merged PST is built to match the top-level connections; for example, VDD_B2 and VDD_C1 are connected to the same top-level supply, VDD2. Below is the merged PST for this example:

Once dynamic verification/simulation is done, it is very important to review low-power-specific coverage to ensure all valid states mentioned in the PST are properly verified. How does your current low-power verification flow ensure that a proper coverage mechanism is in place to cover the power intent? Does it cover PST states/transitions, power switch states/transitions, and supply net/set power state/simstate transitions?

Reviewing coverage results is the same as for non-LP coverage; from the PST verification perspective, it is important to review any illegal states reached during simulation. How do you ensure all legal state combinations are covered during simulation?
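To make that review concrete, here is a minimal hand-written sketch of the kind of coverage goal involved: a covergroup crossing the two supply states of the earlier VDD1/VDD2 example, so that every legal PST row must be hit. This is purely illustrative; the type, class and state names are my own assumptions, and LP tools such as VCS-NLP can generate equivalent coverage automatically.

```systemverilog
// Illustrative supply states; real designs may have more voltage levels.
typedef enum {OFF, LOW_V, HIGH_V} supply_state_e;

class pst_coverage;
  supply_state_e vdd1_state, vdd2_state;

  covergroup pst_cg;
    vdd1_cp : coverpoint vdd1_state;
    vdd2_cp : coverpoint vdd2_state;
    // The cross mirrors the PST: every legal row should be covered,
    // and hits outside the legal rows flag a potential PST bug.
    pst_x   : cross vdd1_cp, vdd2_cp;
  endgroup

  function new();
    pst_cg = new();
  endfunction

  // Sample whenever the supply network changes state.
  function void sample(supply_state_e s1, supply_state_e s2);
    vdd1_state = s1;
    vdd2_state = s2;
    pst_cg.sample();
  endfunction
endclass
```

Reviewing the cross bins of such a group against the PST rows is one way to answer the "are all legal combinations covered" question above.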

To summarize the best practices for implementing low-power techniques on a design: the power network should be defined in UPF, along with careful addition of the power states and the PST. From this point onwards, either define the protection policies manually and quickly verify them using a static checker like MVRC, or iterate with MVRC to complete the protection policies. The verification team then creates a test plan based on the design specification, including tests that bring the design into all legal PST states. After performing power-aware simulation using a tool like VCS-NLP, the usual simulation debug is done for failing tests, and it is important to ensure that the LP-specific coverage is met. At this point, the design is ready for implementation using a tool like Design Compiler. Are you using these best practices in your low-power verification flow? Share your experience or best practices.

In this blog, I will be sharing the steps one has to take while writing a sequence to make sure it is reusable. Having developed sequences and tests in UVM while using verification IPs, I think writing sequences is the most challenging part of verifying any IP. Careful planning is required to write sequences; without it, we end up writing one sequence for every scenario from scratch, which makes sequences hard to maintain and debug.

As we know, sequences are made up of several data items, which together form an interesting scenario. Sequences can be hierarchical, thereby creating more complex scenarios. In its simplest form, a sequence is a derivative of the uvm_sequence base class that specifies the request and response item type parameters and implements the body() task with the specific scenario you want to execute:

class usb_simple_sequence extends uvm_sequence #(usb_transfer);

  rand int unsigned sequence_length;
  constraint reasonable_seq_len { sequence_length < 10; }

  //Constructor
  function new(string name = "usb_simple_sequence");
    super.new(name);
  endfunction

  //Register with factory
  `uvm_object_utils(usb_simple_sequence)

  //the body() task is the actual logic of the sequence
  virtual task body();
    repeat (sequence_length)
      `uvm_do_with(req, {
        //Setting the device_id to 2
        req.device_id == 8'd2;
        //Setting transfer type to BULK
        req.type == usb_transfer::BULK_TRANSFER;
      })
  endtask : body

endclass

In the above sequence we are trying to send a USB bulk transfer to the device whose id is 2. Test writers can invoke this by simply assigning this sequence to the default sequence of the sequencer in the top-level test.

So far, things look simple and straightforward. To make sure the sequence is reusable for more complex scenarios, we have to follow a few more guidelines.

First off, it is important to manage the end of test by raising and dropping objections in the pre_start and post_start tasks of the sequence class. This way, we raise and drop objections only in the topmost sequence instead of doing it in all the sub-sequences.

task pre_start();
  if (starting_phase != null)
    starting_phase.raise_objection(this);
endtask : pre_start

task post_start();
  if (starting_phase != null)
    starting_phase.drop_objection(this);
endtask : post_start

Note that starting_phase is set automatically only for a sequence that is started as the default sequence for a particular phase. If you start a sequence explicitly by calling its start() method, then it is the user's responsibility to set starting_phase:

class usb_simple_bulk_test extends uvm_test;

  usb_simple_sequence seq;
  ...

  virtual task main_phase(uvm_phase phase);
    ...
    //User needs to set starting_phase, since the sequence's
    //start() method is called explicitly to invoke the sequence
    seq.starting_phase = phase;
    seq.start(seqr);  // seqr: handle to the target sequencer
    ...
  endtask : main_phase

endclass

Use UVM configurations to get values from the top-level test. In the above example, no controllability is given to test writers, as the sequence does not use configurations to take values from the top-level test or sequence (which may use this sequence to build a complex scenario). The sequence can be modified to give more control to the top-level test, or to a sequence which uses this simple sequence.
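As a sketch of what such a modification might look like, the simple sequence can pull overrides from uvm_config_db at the start of its body(). The member and config_db key names below are illustrative assumptions, not the exact code from the original post:

```systemverilog
class usb_simple_sequence extends uvm_sequence #(usb_transfer);

  rand int unsigned sequence_length;
  constraint reasonable_seq_len { sequence_length < 10; }

  bit [7:0] cfg_device_id = 8'd2;  // default, overridable from above

  `uvm_object_utils(usb_simple_sequence)

  function new(string name = "usb_simple_sequence");
    super.new(name);
  endfunction

  virtual task body();
    // Pick up overrides set by the test or a parent sequence, if any;
    // the defaults above are kept when no override was set.
    void'(uvm_config_db#(int unsigned)::get(null, get_full_name(),
                                            "sequence_length",
                                            sequence_length));
    void'(uvm_config_db#(bit [7:0])::get(null, get_full_name(),
                                         "device_id", cfg_device_id));
    repeat (sequence_length)
      `uvm_do_with(req, { req.device_id == cfg_device_id; })
  endtask : body

endclass
```

A test could then steer the sequence with something like `uvm_config_db#(bit [7:0])::set(null, "*.usb_simple_sequence", "device_id", 8'd5);` before starting it (the wildcard path is likewise illustrative).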

With the above modifications, we have given the top-level test or sequence control over device_id, sequence_length and type. A few things to note here: the parameter type and string (third argument) used in uvm_config_db#()::set should match the type being used in uvm_config_db#()::get. Make sure to "set" and "get" with the exact same datatype; otherwise the value will not get set properly, and debugging will become a nightmare.

One problem with the above sequence: if there are any constraints on device_id or type in the usb_transfer class, then the top-level test or sequence is restricted to values that satisfy those constraints.

For example, if there is a constraint on device_id in the usb_transfer class constraining it to be below 10, then the top-level test or sequence should also constrain it within this range. If the top-level test or sequence sets it to a value like 15 (which is over 10), then you will see a constraint failure at runtime.
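For illustration, assuming a hypothetical valid_device_id constraint inside usb_transfer, the conflict would look like this:

```systemverilog
class usb_transfer extends uvm_sequence_item;
  rand bit [7:0] device_id;

  // Hypothetical hard constraint inside the data item:
  constraint valid_device_id { device_id < 10; }

  `uvm_object_utils(usb_transfer)

  function new(string name = "usb_transfer");
    super.new(name);
  endfunction
endclass

// In a higher-level sequence body, this inline constraint contradicts
// valid_device_id, so the underlying randomize() fails at run time:
//
//   `uvm_do_with(req, { req.device_id == 8'd15; })
```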

Sometimes the top-level test or sequence may need to take full control and may not want to enable the constraints defined inside the lower-level sequences or data items. One example where this is required is negative testing: the host wants to make sure devices do not respond to a transfer with a device_id greater than 10, and so wants to send a transfer with device_id 15. To give full control to the top-level test or sequence, we can modify the body task as shown below.

It is always good to be cautious while using `uvm_do_with, as it adds constraints on top of any existing constraints in a lower-level sequence or sequence item.

Also note that if you have more variables to set and get, then I recommend you create an object, set the values in that object, and then set the object using uvm_config_db from the top-level test/sequence (instead of setting each and every variable inside it explicitly). This improves runtime performance: instead of searching for each variable (each uvm_config_db::get call involves a search), we get all the variables in one shot using the object.

  //If the status of uvm_config_db::get is true, then use
  //the values set in the object we received.
  if (status)
  begin
    `uvm_create(req)
    this.sequence_length = local_obj.sequence_length;
    //Copy the req object received from uvm_config_db
    //into the local req.
    req.copy(local_obj.req);
  end
  else
  begin
    //If we did not get the object from the top-level sequence/test,
    //then create one and randomize it.
    `uvm_create(req)
    if (!req.randomize())
      `uvm_error(get_type_name(), "req randomization failed")
  end

  repeat (sequence_length)
    `uvm_send(req)

endtask : body

Always try to reuse simple sequences by creating a top-level sequence for complex scenarios. For example, in the sequence below I am trying to send a bulk transfer followed by an interrupt transfer to two different devices. For this scenario I will be reusing our usb_simple_sequence.
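A sketch of how such a top-level sequence might look is shown here. The class name, device ids and config_db keys are illustrative assumptions (and only device_id is steered, for brevity); the point is the get-from-above, set-for-below pattern:

```systemverilog
class usb_bulk_then_interrupt_seq extends uvm_sequence #(usb_transfer);
  `uvm_object_utils(usb_bulk_then_interrupt_seq)

  usb_simple_sequence bulk_seq, int_seq;
  bit [7:0] bulk_dev_id = 8'd2;  // defaults; illustrative values
  bit [7:0] int_dev_id  = 8'd3;

  function new(string name = "usb_bulk_then_interrupt_seq");
    super.new(name);
  endfunction

  virtual task body();
    // Pick up overrides from the top-level test, if any...
    void'(uvm_config_db#(bit [7:0])::get(null, get_full_name(),
                                         "bulk_dev_id", bulk_dev_id));
    void'(uvm_config_db#(bit [7:0])::get(null, get_full_name(),
                                         "int_dev_id", int_dev_id));
    // ...and pass them down to the sub-sequences with set(), so they
    // act as configuration values rather than extra inline constraints.
    uvm_config_db#(bit [7:0])::set(null, {get_full_name(), ".bulk_seq"},
                                   "device_id", bulk_dev_id);
    uvm_config_db#(bit [7:0])::set(null, {get_full_name(), ".int_seq"},
                                   "device_id", int_dev_id);
    `uvm_do(bulk_seq)
    `uvm_do(int_seq)
  endtask : body
endclass
```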

Note that in the above sequence we get the values from the top-level test or sequence using uvm_config_db::get, and then set them on the lower-level sequence, again using uvm_config_db::set. This is important: if we instead used `uvm_do_with and passed the values inside the constraint block, they would be applied as additional constraints rather than simply setting the values.

I came across these guidelines and learned them, at times the hard way, so I am sharing them here. I hope they will come in handy when you use sequences that come pre-packaged with VIPs to build more complex scenarios, and also when you write your own sequences from scratch. If you come across more such guidelines or rules of thumb for writing reusable, maintainable and debuggable sequences, please share them with me.

Over the past two years, several design and verification teams have begun using SystemVerilog testbenches with UVM. They are moving to SystemVerilog because coverage, assertions and object-oriented programming concepts like inheritance and polymorphism allow them to reuse code much more efficiently. This helps them in not only finding the bugs they expect, but also corner-case issues. Building testing frameworks that randomly exercise the stimulus state space of a design-under-test and analyzing completion through coverage metrics seems to be the most effective way to validate a large chip. UVM offers a standard method for abstraction, automation, encapsulation, and coding practice, allowing teams to build effective, reusable testbenches quickly that can be leveraged throughout their organizations.

However, for all of its value, UVM deployment has unique challenges, particularly in the realm of debugging.

Debugging even simple issues can be an arduous task without UVM-aware tools. Here is a public webinar that reviews how to utilize VCS and DVE to most effectively deploy, debug and optimize UVM testbenches.

When I was preparing for a customer presentation on UVM RAL, I could not understand what the UVM base class library was saying about updating the desired value and the mirrored value of registers. I also felt that the terms used do not exactly reflect the intent. After spending some time on it, I came up with a table which helps to explain the behavior when the register model APIs are called.

One of the challenges faced in SoC verification is validating designs with mixed languages and mixed abstraction levels. SystemC is a widely used language for defining the system model at a higher level of abstraction. SystemC is an IEEE-standard language for system-level modeling, rich with constructs for describing models at various levels of abstraction: untimed, timed, transaction level, cycle accurate, and RTL. A transaction-level model simulates much faster than an RTL model; moreover, OSCI defined the TLM 2.0 interface standard for SystemC, which enables SystemC model interoperability and reuse at the transaction level.

On the other side, SystemVerilog is a unified language for design and verification. It is effective for designing advanced testbenches for both RTL and transaction-level models, since it has features like constrained randomization for stimulus generation, functional coverage, assertions, and object-oriented constructs (classes, inheritance, etc.). The early availability of standard methodologies (providing frameworks and testbench coding guidelines for reuse) like VMM, OVM and UVM enabled wide adoption of SystemVerilog in the industry. The UVM 1.0 base class library, released in February 2011, includes the OSCI TLM 2.0 socket interface to enable interoperability between UVM and SystemC. Essentially, it allows a UVM testbench to include SystemC TLM 2.0 reference models; the UVM testbench can pass transactions to (or receive them from) SystemC models. The transactions passed between SystemVerilog and SystemC can be TLM 2.0 generic payloads or uvm_sequence_item objects. The implementation of UVM-to-SC TLM 2.0 communication is vendor dependent.

Starting with the 2011.03 release, VCS provides a new TLI adaptor which enables UVM TLM 2.0 sockets to communicate with a SystemC TLM 2.0 based environment, passing transactions across the language domains. You can also check out a couple of earlier posts from John Aynsley (VMM-to-SystemC Communication Using the TLI, and Blocking and Non-blocking Communication Using the TLI) on SV-SystemC communication using the TLI. In this blog, I am going to describe the VCS TLI connectivity mechanism between UVM and SystemC. There are other advanced TLI features in VCS (like direct access of data, and invoking tasks/functions across the SV and SC languages), as well as message unification across UVM-SC, transaction debug techniques, and extending the TLI adaptor for user-defined interfaces other than VMM/UVM/TLM2.0, which can be written about later.

With support for TLM2.0 interfaces in both UVM and VMM, the importance of OSCI TLM2.0 across both SystemC and SystemVerilog is now apparent. UVM provides the following TLM2.0 socket interfaces (for both blocking and non-blocking communication):

uvm_tlm_b_initiator_socket

uvm_tlm_b_target_socket

uvm_tlm_nb_initiator_socket

uvm_tlm_nb_target_socket

uvm_analysis_port

uvm_subscriber

SystemC TLM2.0 consists of the following TLM 2.0 interfaces:

tlm_initiator_socket

tlm_target_socket

tlm_analysis_port

The built-in TLI adaptor for VCS is a general-purpose solution that simplifies transaction passing between UVM and SystemC, as shown below. The transactions can be TLM 2.0 generic payloads or uvm_sequence_item objects; UVM 1.0 includes the TLM 2.0 generic payload class as well.

The built-in TLI adaptor is available as a pre-compiled library with VCS. The user needs to follow two simple steps to include the TLI adaptor in his/her verification environment.

Include a header file in the SystemVerilog and SystemC code. The SystemVerilog header file provides a package which implements the bind function, parameterized on the uvm_sequence_item object.

Invoke the bind function on the SystemVerilog and SystemC sides to connect each socket across the languages. The bind function has a string argument which must be unique for each socket connection across SystemVerilog and SystemC.

The TLI adaptor bind function uses the unique string "str_udf_pkt" to identify the socket connectivity across the SystemVerilog and SystemC domains. For multiple sockets, the user needs to invoke the TLI bind function once per socket. The TLI adaptor supports both blocking and non-blocking transport interfaces for sockets communicating across SystemVerilog and SystemC.
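The TLI bind API itself is VCS-specific, so it is not sketched here; the UVM side of such a connection, however, uses only standard UVM 1.0 classes. A minimal illustrative initiator sending a generic payload over a blocking socket might look like this (component and variable names, as well as the address and data values, are my own assumptions):

```systemverilog
class gp_initiator extends uvm_component;
  `uvm_component_utils(gp_initiator)

  // Blocking TLM-2.0 initiator socket carrying the generic payload.
  uvm_tlm_b_initiator_socket #(uvm_tlm_generic_payload) sock;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    sock = new("sock", this);
  endfunction

  task run_phase(uvm_phase phase);
    uvm_tlm_generic_payload gp = new("gp");
    uvm_tlm_time delay = new("delay");
    byte unsigned data[] = '{8'hAB};

    phase.raise_objection(this);
    gp.set_address('h100);
    gp.set_command(UVM_TLM_WRITE_COMMAND);
    gp.set_data(data);
    gp.set_data_length(1);
    gp.set_streaming_width(1);
    // With the TLI adaptor, the target end of this socket can live in
    // SystemC; the call blocks until the target completes the payload.
    sock.b_transport(gp, delay);
    phase.drop_objection(this);
  endtask
endclass
```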

Thus, the built-in UVM-SC TLI adaptor capability of VCS ensures that SystemC can be connected seamlessly into a UVM-based verification environment.

On August 8, Synopsys announced the launch of VIP-Central.org, a technical community site focused on system-on-chip (SoC) verification engineers and users of verification IP (VIP). This new resource is meant to provide relevant forums and blogs focused on the verification of industry-standard protocols and bus interfaces. Through the resources and blogs on this community-driven portal, I expect that engineers can accelerate their understanding of the intricacies and foibles of each protocol, and also effectively leverage industry-available verification IP to verify these protocols while employing methodologies such as UVM, OVM and VMM.

With the increasing number of complex protocols used in SoCs, verification engineers face a tough challenge: quickly acquiring the protocol expertise needed to verify a SoC along with all of its on-chip fabrics and off-chip interfaces. The challenge is made tougher by the frequent release of new and more sophisticated generations of protocols that improve performance, power and quality of service. Verification engineers must complete many tasks requiring both protocol and methodology expertise, including developing environments, integrating VIP, using and modifying test sequences, debugging complex protocol results and analyzing coverage data. The site will aggregate information from industry experts across the verification community, providing best practices and ideas for better verification performance, protocol debug, methodology, verification planning, coverage management and ease of use. I am sure most of the discussions and blogs on VIP-Central.org will be relevant to the folks who follow the Verification Martial Arts blog, and I would encourage everyone to register and start contributing at: http://www.vip-central.org

In this blog I want to introduce how non-blocking communication works in TLM-2.0. This is the same in UVM, VMM and SystemC; only the names of the functions and arguments may differ.

As the name suggests, a non-blocking call returns immediately. Because of this, the target may or may not process the request at the moment the nb_transport_fw function is called. For this very reason we have the backward path, through which the target can inform the initiator, via the nb_transport_bw method, whenever the request has been processed. We can look at this as two function calls: one from initiator to target for sending the request, and a second from target to initiator for sending the response once the request is processed.

If it is as simple as that, why do we need an additional argument in the form of a phase, plus a return type? Let’s see what these additional arguments convey. I have used the term “partner” (as in link partner) in this blog to refer to either the target or the initiator.

The phase argument in the nb_transport function calls communicates the phase of the transaction. The phase can be one of the following:

BEGIN_REQ: Sent by the initiator along with the transaction to tell the target that this is a new request.

END_REQ: Sent by the target to tell the initiator that the request has been accepted but not yet processed.

BEGIN_RESP: Sent by the target to tell the initiator that the request has been processed and this is the response for the corresponding request.

END_RESP: Sent by the initiator to tell the target that the response has been accepted. With this, the transaction is complete.

The return type of the nb_transport function calls tells the partner (initiator/target, whoever initiated the call) whether the transaction has been processed at this stage or not. The return value can be one of the following:

TLM_ACCEPTED: Tells the partner (initiator/target, whoever initiated the call) that the transaction is accepted but processing has not yet started. Whenever this is returned, we can simply ignore the arguments, as they have not been changed.

Example:

The target returns TLM_ACCEPTED when it has accepted the request sent by the initiator but has not yet processed it. The initiator need not inspect the function arguments, as they have not been changed by the target.

TLM_UPDATED: Tells the partner (initiator/target, whoever initiated the call) that the transaction is accepted and the function arguments have been modified.

Example:

The target returns TLM_UPDATED when it has accepted the request sent by the initiator and modified the function arguments. The initiator needs to take action depending on the phase argument.

TLM_COMPLETED: Tells the partner (initiator/target, whoever initiated the call) that the transaction is completed and the function arguments have been modified.

Example:

The target returns TLM_COMPLETED when it has accepted the request sent by the initiator and completed processing it. The initiator needs to take action depending on the phase argument.

In summary, the non-blocking interface has a protocol associated with it which ensures that each transaction is properly processed and its response communicated back. With the blocking interface this is not required, as the initiator waits for the target to complete the transaction. With non-blocking calls, the initiator would have no way of knowing whether the target processed the transaction without this protocol.
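The handshake above can be sketched in UVM terms as follows. This is a minimal illustration under stated assumptions, not a complete component: the component name, the queue-based processing and the constructor details are illustrative; consult the UVM class reference for the exact socket prototypes.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Hedged sketch of a target implementing the non-blocking handshake.
class my_target extends uvm_component;
  `uvm_component_utils(my_target)

  // Default payload (uvm_tlm_generic_payload) and phase (uvm_tlm_phase_e).
  uvm_tlm_nb_target_socket #(my_target) sock;
  uvm_tlm_generic_payload req_q[$];

  function new(string name, uvm_component parent);
    super.new(name, parent);
    sock = new("sock", this, this);
  endfunction

  // Forward path: the initiator calls this with BEGIN_REQ and the transaction.
  function uvm_tlm_sync_e nb_transport_fw(uvm_tlm_generic_payload t,
                                          ref uvm_tlm_phase_e phase,
                                          input uvm_tlm_time delay);
    if (phase == BEGIN_REQ) begin
      req_q.push_back(t);   // accept the request now, process it later
      phase = END_REQ;      // we changed an argument, so return TLM_UPDATED
      return UVM_TLM_UPDATED;
    end
    return UVM_TLM_ACCEPTED; // e.g. END_RESP: nothing left to do
  endfunction

  // Later, once the request is processed, the target initiates the backward
  // path with BEGIN_RESP to deliver the response to the initiator.
  function void send_response(uvm_tlm_generic_payload rsp);
    uvm_tlm_phase_e phase = BEGIN_RESP;
    uvm_tlm_time    delay = new("delay");
    void'(sock.nb_transport_bw(rsp, phase, delay));
  endfunction
endclass
```

Note how the phase argument carries the protocol state in both directions, while the return value lets either side short-circuit the exchange (for example, a target that processes the request immediately could return UVM_TLM_COMPLETED from nb_transport_fw and skip the backward call).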

The non-blocking coding style is generally used when coding accurate models in virtual prototypes. If you plan to reuse TLM models from a virtual platform to speed up your simulations, or for any other reason, you will end up connecting non-blocking TLM models to your verification environment, and the non-blocking interface will be useful there. I welcome you to share other use cases of the non-blocking interface.

One of the critical problems developers cope with during the SoC development process (architecture planning, RTL design, verification, software development) is the constant need to synchronize different forms of the same data structure: the SoC registers database. This database exists with the SoC architecture team (who write the SoC registers description document), the design engineers (who implement the register structure in RTL code), the verification engineers (who write the verification infrastructure, such as RAL code, and verification tests, such as exhaustive read/write tests of all registers), and the software engineers (who use the register information to write embedded software).

Since the same information is used in different forms, it is only natural to have a single, central database holding all SoC register data. Ideally, you would generate all the required output files (documentation, UVM or VMM Register Abstraction Model, RTL, C headers, etc.) from this central database. Different vendors and CAD teams provide different automation solutions for doing this.

The RALF specification contains all of the necessary information to generate RTL and testbench code that implements the specified registers. There are many ways to code and implement RTL, so it is not possible to provide a general-purpose solution. As far as the testbench abstraction model is concerned, there are multiple ways of customizing your model post-generation in both UVM and VMM. Callbacks, factories and configurable RAL model attributes are some of the ways through which the desired customization can be brought in. “The ‘user’ in RALF: get ralgen to generate ‘your’ code” highlights a very convenient way of bringing in SystemVerilog-compatible code which will be copied as-is into the RAL model, in the desired scope. When it comes to generating the RTL and the C headers, however, we cannot leave the customization to such a late stage. Also, different organizations and project groups have their own RTL and C coding styles, which means a generated output of a very generic nature might not be very helpful. For RTL generation, engineers want the generated code to be power- and gate-count-efficient. Similarly, generated C register headers often need to follow coding styles and match the CPU firmware API. How do we bring all these customizations to the end user?

Using the RALF C++ API, you have full access to the parsed RALF data (through C++ routines), which you can use to implement a customized RTL code generator, or any other feature that needs RALF data. You can use it to generate your C header files or HTML documentation, to translate the input RALF files into another register description format, or to create custom covergroups and coverage convergence sequences (DAC 2012 User Track poster 6U.8, “Register Verification on a Fast Lane: Using Automation to Converge on UVM REG Coverage Models”).

I have seen two instances of the need to generate a different register specification in the recent past, and that is one of the reasons I decided to put this down in a blog. Let me talk about the first instance.

One project group was in the process of migrating from their own SystemVerilog base classes to UVM. They had their own register description format from which they generated their register abstraction model. This was a proven flow.

So, when they migrated to UVM, they wanted a flow which would validate the changes they were making.

Given that they were moving to RALF and ralgen, they didn’t want to create register specifications in the legacy format anymore, so they wanted some automation for generating specifications in the earlier format. How did they go about doing this? They took the RALF C++ APIs and used them to create the automation needed to generate the legacy format from RALF in no time (from what I remember, it was half a day’s work). Everyone was involved in doing what they were best at, and that helped in the overall scheme of things.

The other customer had their own format from which they were generating RTL, firmware code and HTML. They had the necessary automation to create RALF for generating the UVM register model, and a mechanism in place to generate IP-XACT from this format as well as vice versa. So, to complete the traceability matrix, they wanted a RALF-to-IP-XACT conversion. Again, the most logical approach was to take the RALF C++ APIs, iterate through the parsed RALF data and generate IP-XACT. Though this effort is not complete, it took only a day or so to generate a valid IP-XACT 1.5 schema, and all that is required now is some additional work to smooth out the corners.

How do you start using these APIs and build your own code/HTML generators? You need to include “ralf.hpp” (which is in $VCS_HOME/include) in your generator code, and then, to compile the code, you need to link against the shared library libralf.so from the VCS installation.

Essentially, you have a handle to the parsed database, and with the available APIs you can do whatever you want with it. The documentation of the APIs is shipped with the VCS installation. Also, if you are like me and would rather hack away at existing code than start from scratch, you can ask Synopsys support for existing templates that dump out code in a specific format, and start modifying those for your requirements.

Sometimes driving to work can be a little bit boring, so a few days ago I decided to take advantage of this time slot to introduce myself and tell you a little bit about the behind-the-scenes of my video blog. Hope you’ll like it!

Are you afraid of breakpoints? Don’t worry, many of us have been. After all, breakpoints are for software folks, not for us chip heads, right?
Well, not really… In many ways, chip verification is pretty much a software project.
Still, most of the people I know fall into one of these two categories – the $display person, or the breakpoints person.

The former doesn’t like breakpoints. He or she would rather fill up their code with $display’s and UVM_INFO’s and recompile their code every time around.
That’s cool.
The latter likes and appreciates breakpoints and uses them whenever possible instead of traditional $display commands.

So if you are the second type – here’s some great news from DVE that will help you be even more efficient.
And $display folks – stay tuned, as it may be time for you to finally convert.

In this short video I show how to use Object IDs – a new infrastructure in DVE – to break in a specific class instance!

As described in the video, vkits are our convenient method of lumping together reusable UVM packages with the interface(s) that they operate on. Because code within packages can only peek or poke wires that are contained by a virtual interface, it is often useful to wrap these together somehow, and vkits are our technique at Cavium for doing that.

What goes in a vkit? Anything that is reusable. From simple agents and the interfaces they work on to complete UVM environments that connect these agents together, including scoreboards, sequence libraries, types, and utility functions.

What does not go in a vkit are items that are bound to a specific testbench, including the tests themselves.

The video describes the wildcard import syntax as an “egregiously bad idea.” First and foremost, doing so can lead to namespace pollution, which comes about when one engineer independently adds types or classes to their package and only later finds out that they conflict with those of another package. Secondly, wildcard imports prevent our shorter naming conventions of having an agent_c, drv_c, env_c, etc., within each package.

Not described in the video are CSR packages that are auto-generated by RAL, IP-XACT, or your script of choice. These packages should be independent of your vkits, such that your vkits refer to them with their explicit scopes (i.e., chx_csr_pkg::PLUCKING_CFG_C).

Future posts will go into more detail about how we architect UVM testbenches and some of our other conventions that work within this framework. Until then, I’ve got a lot of pies to eat.

PS. I’ll be at DAC this year! Come see me on Tuesday, June 5, during the “Industry Leaders Verify with Synopsys” lunch. Hopefully they’ll be serving some of my favorite foods!

When the industry decided to standardize on UVM, we had high hopes that some day we will be able to use the standard to raise the level of abstraction and solve many open issues. Take the case of AMS verification of High-speed IOs, which has largely been the turf of hand-crafted custom designs verified mostly with directed tests.

Warren Anderson and his team at AMD here describe a simple, innovative approach they used with UVM and Verilog-AMS running VCS and CustomSim/XA… yes, UVM really works wonders when used prudently with the right flow.

Implementing the response-checking mechanism in a self-checking environment remains the most time-consuming task. The VMM Data Stream Scoreboard package facilitates verifying the correct transformation, destination and ordering of ordered data streams. This package is intuitively applicable to packet-oriented designs, such as modems, routers and protocol interfaces. It can also be used to verify any design transforming and moving sequences of data items, such as DSP data paths and floating-point units. Out of the box, the VMM data stream scoreboard can be used to verify single-stream designs that do not modify the data flowing through them. For example, it can be used to verify FIFOs, Ethernet media access controllers (MACs) and bridges.

The VMM data scoreboard can also be used to verify multi-stream designs with user-defined data transformation and input-to-output stream routing. The transformation from input data items into expected data items is not limited to one-to-one: an input data item may be transformed into multiple expected data items (e.g. segmenters) or none (e.g. reassemblers). Compared to this, the functionality available through the UVM in-order comparator or the algorithmic comparator is significantly less, so users might want access to the functionality provided by the VMM DS Scoreboard in a UVM environment. Using the UBUS example available in $VCS_HOME/doc/examples/uvm/integrated/ubus as a demo vehicle, this article shows how simple adapters are used to integrate the VMM DS scoreboard in a UVM environment and thus get access to more advanced scoreboarding functionality within the UVM environment.

The UBUS example uses an example scoreboard to verify that the slave agent operates as a simple memory. It extends the uvm_scoreboard class and implements a memory_verify() function to make the appropriate calls and comparisons needed to verify a memory operation. A uvm_analysis_export is explicitly created and an implementation for “write” defined. In the top-level environment, the analysis export is connected to the analysis port of the slave monitor.

The simple scoreboard, with its explicit implementation of the comparison routines, suffices for verifying the basic operations, but it would need significant enhancement to provide the more detailed information a user might want. For example, take the “test_2m_4s” test, where the environment is configured to have 2 masters and 4 slaves. Depending on how the slave memory map is configured, different slaves respond to different transfers on the bus. Now, if we want information on how many transfers went into the scoreboard for a specific combination (e.g. Master 1 to Slave 3), how many were verified to be processed correctly, and so on, it is fair to conclude that the existing scoreboarding scheme will not suffice.

Hence, it was felt that the Data Stream Scoreboard, with its advanced functionality and support for data transformation, data reordering, data loss and multi-stream data routing, should be available for verification environments not necessarily based on VMM. From VCS 2011.12-1, this integration has been made very simple. The VMM DS scoreboard implements a generic data stream scoreboard that accepts parameters for the input and output packet types. A single instance of this class is used to check the proper transformation, multiplexing and ordering of multiple data streams. The scoreboard class now leverages a policy-based design and parameterized specializations to accept any “packet” class, be it VMM, UVM or OVM.

The central element in policy-based design is a class template (called the host class, which in this case is the VMM DS Scoreboard) taking several type parameters as input. These are specialized with types selected by the user (called policy classes), each implementing a particular implicit method (called a policy) and encapsulating some orthogonal (or mostly orthogonal) aspect of the behavior of the instantiated host class. In this case, the “policies” implemented by the policy classes are the “compare” and “display” routines.

By supplying a host class combined with a set of different canned implementations of each policy, the VMM DS scoreboard can support all the different behavior combinations, resolved at compile time and selected by mixing and matching the supplied policy classes in the instantiation of the host class template. Additionally, by writing a custom implementation of a given policy, a policy-based library can be used in situations requiring behaviors unforeseen by the library implementer.

So, let’s go through a set of simple steps to see how you can use the VMM DS scoreboard in the UVM environment.

Step 1: Create the policy class for UVM and define its “policies”.
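A minimal policy class might look like the sketch below. The class name, the method signatures and the exact “policy” interface expected by vmm_sb_ds_typed are assumptions based on the text; check the VMM DS Scoreboard documentation shipped with VCS for the real prototypes.

```systemverilog
import uvm_pkg::*;

// Hedged sketch of a UVM policy class: the "policies" are the compare and
// display routines that the scoreboard host class will call on its packets.
class uvm_object_policy;
  static function bit compare(uvm_object actual, uvm_object expected,
                              output string diff);
    compare = actual.compare(expected);   // delegate to uvm_object::compare()
    if (!compare) diff = "uvm_object::compare() mismatch";
  endfunction

  static function string display(uvm_object obj, string prefix = "");
    return {prefix, obj.sprint()};        // delegate to uvm_object::sprint()
  endfunction
endclass
```

Because the policies simply delegate to uvm_object’s field automation, any UVM transaction with registered fields works with the scoreboard unchanged.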

Step 2: Replace the UVM scoreboard with a VMM one extended from “vmm_sb_ds_typed”, and specialize it with the ubus_transfer type and the previously created uvm_object_policy.

Given that for any configuration one master and one slave would be active, define the appropriate streams in the constructor. (Though this is not required when there is only a single stream, we define it explicitly so that the scheme scales up to multiple input and expect streams for different tests.)

Since we are verifying the operation of the slave as a simple memory, we just add the appropriate logic: insert a packet into the scoreboard on a WRITE, and expect/check on a READ from an address that has already been written to.

Step 2.b: Implement the stream_id() method

You can use this method to determine which stream a specific transfer belongs to, based on the packet’s content, such as a source or destination address. In this case, the bus monitor updates the “slave” property of the collected transfer according to where the address falls in the slave memory map.
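Putting steps 2 and 2.b together, the scoreboard might be sketched as below. The vmm_sb_ds_typed parameter order, the define_stream()/insert()/expect_in_order() calls and the stream_id() signature are assumptions based on the text; consult the VMM DS Scoreboard documentation for the exact prototypes.

```systemverilog
// Hedged sketch: a DS scoreboard specialized for ubus_transfer with the
// UVM policy class from step 1.
class ubus_vmm_sb extends vmm_sb_ds_typed #(ubus_transfer, ubus_transfer,
                                            uvm_object_policy);
  function new(string name = "ubus_vmm_sb");
    super.new(name);
    // One input and one expect stream per master/slave pair; this scales
    // up for multi-master/multi-slave tests such as test_2m_4s.
    this.define_stream(0, "Master 0 -> Slave 0", INPUT);
    this.define_stream(0, "Master 0 -> Slave 0", EXPECT);
  endfunction

  // Step 2.b: route each transfer to a stream based on its content.
  // The bus monitor has already set "slave" from the slave memory map.
  virtual function int stream_id(ubus_transfer tr);
    return tr.slave;
  endfunction

  // Step 2.a: WRITE inserts the data; READ checks against what was written.
  function void check_transfer(ubus_transfer tr);
    if (tr.read_write == WRITE)
      this.insert(tr);            // record expected memory contents
    else if (tr.read_write == READ)
      this.expect_in_order(tr);   // compare via the policy's compare()
  endfunction
endclass
```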

Step 3: Create the UVM Analysis to VMM Analysis Adapter

The uvm_analysis_to_vmm_analysis adapter is used to connect any UVM component with an analysis port to any VMM component via an analysis export. The adapter converts each incoming UVM transaction to a VMM transaction and drives the converted transaction to the VMM component through the analysis port-export. If you are using the VMM-UVM interoperability library, you do not have to create the adapter, as it is available in the library.

Create the âwriteâ implementation for the analysis export in the adapter

The write method, called via the analysis export, simply posts the received UBUS transfer from the UVM analysis port to the VMM analysis port.
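A sketch of such an adapter follows. The parameter names, the vmm_tlm_analysis_port usage and the convert() helper are assumptions; the interoperability library ships a ready-made version that handles the UVM/VMM cross-domain details properly.

```systemverilog
import uvm_pkg::*;

// Hedged sketch of a UVM-analysis-to-VMM-analysis adapter.
class uvm_analysis_to_vmm_analysis #(type UVM_T = uvm_sequence_item,
                                     type VMM_T = vmm_data)
      extends uvm_component;
  typedef uvm_analysis_to_vmm_analysis #(UVM_T, VMM_T) this_t;

  uvm_analysis_imp #(UVM_T, this_t)      analysis_export; // UVM side in
  vmm_tlm_analysis_port #(this_t, VMM_T) analysis_port;   // VMM side out

  function new(string name, uvm_component parent);
    super.new(name, parent);
    analysis_export = new("analysis_export", this);
    analysis_port   = new(this, "analysis_port");
  endfunction

  // "write" implementation: convert the incoming UVM transaction and
  // forward it to the VMM component through the analysis port.
  function void write(UVM_T t);
    VMM_T v = convert(t);     // user-supplied conversion routine (assumption)
    analysis_port.write(v);
  endfunction
endclass
```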

Step 4: Make the TLM connections

In the original example, the item_collected_port of the slave monitor was connected to the analysis export of the example scoreboard. Here, the DataStream scoreboard expects a VMM transaction on its analysis export. Hence, we need the adapter created above to mediate between the analysis port of the UVM bus monitor and the analysis export of the VMM DS scoreboard.

This step is not required for a single master/slave configuration. However, for tests like “test_2m_4s”, you would need to create additional streams so that you can verify correctness across all the different master/slave permutations.

In this case, the following is added in test_2m_2s in the connect_phase():
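The hookup itself might look like the following sketch (the instance names, and the tlm_bind() call on the VMM side, are assumptions based on the UBUS environment and the VMM 1.2 TLM documentation):

```systemverilog
// Hedged sketch: UVM slave monitor -> adapter -> VMM DS scoreboard.
function void connect_phase(uvm_phase phase);
  super.connect_phase(phase);
  // UVM side: the monitor's analysis port feeds the adapter's analysis imp.
  ubus0.bus_monitor.item_collected_port.connect(adapter.analysis_export);
  // VMM side: the adapter's analysis port binds to the scoreboard's export.
  adapter.analysis_port.tlm_bind(sb.analysis_export);
endfunction
```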

And that is all; you are ready to go and validate your DUT with a more advanced scoreboard with loads of built-in functionality. This is what you will get when you execute the “test_2m_4s” test:

Thus, not only do you have stream-specific information now, but you also have access to much more functionality, as mentioned earlier. For example, you can model transformations, check for out-of-order matches, allow for dropped packets, and iterate over the different streams to get access to specific transfers. Again, depending on your requirements, you can use the simple UVM comparator for your basic checks and switch over to the DS scoreboard for the more complex scenarios with the flip of a switch in the same setup. This is what we did for a UVM PCIe VIP we developed earlier (From the Magician’s Hat: Developing a Multi-methodology PCIe Gen2 VIP) so that users have access to all the information they require. Hopefully, this will keep you going until we have a more powerful UVM scoreboard in some subsequent UVM version.

Closing the coverage gap has been a long-standing challenge in simulation-based verification, resulting in unpredictable delays while achieving functional closure. Formal analysis is a big help here. However, most of the verification metrics that give confidence to a design team are still governed by directed and constrained random simulation. This article describes a methodology that embraces formal analysis along with dynamic verification approaches to automate functional convergence: http://soccentral.com/results.asp?CatID=488&EntryID=37389

Recently, one of the engineers I work with in the networking industry was describing the challenges of debugging the UVM timeout error message. I was curious and looked into his testbench. After an hour or so, I was able to point out the master/slave driver issue where an objection was not dropped and the simulation thread hung waiting for the objections to drop. Then I started thinking: why not use the runtime option to track the status of the objections, +UVM_OBJECTION_TRACE? Well, this printed detailed messages about the objections, a lot more than what I was looking for! The problem now was to decipher the overwhelming messages emitted by the objection trace option. In a hierarchical testbench there can be hundreds of components, and you might be debugging some SoC-level testbench which you didn’t write or aren’t familiar with. Here is an excerpt of the message log using the built-in objection trace:

As a verification engineer, you want to begin debugging the component or part of the testbench code which did not lower the objection as soon as possible. You want to minimize digging into unfamiliar testbench code, or stepping through it with a debugger, as much as possible.

The best way is to call display_objections() just before the timeout is reached. As there is no callback available in the timeout procedure, I wrote the following few lines of code, which can be forked off in any task-based phase. I would recommend doing this in your base test, which can then be extended to create feature-specific tests. You can save some CPU cycles by guarding this with a runtime option:
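A sketch of the idea is shown below, assuming a UVM 1.1-style phase.phase_done objection (in UVM 1.2 you would use phase.get_objection() instead). The timeout and margin values are assumptions; use your environment’s actual phase timeout.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Dump the outstanding objections just before the phase timeout fires,
// so the last table in the log points straight at the guilty components.
// Fork this from any task-based phase of the base test:
//   watch_objections(phase, this.timeout);
task watch_objections(uvm_phase phase, time timeout, time margin = 100ns);
  fork
    begin
      #(timeout - margin);  // wake up just before the UVM timeout triggers
      `uvm_info("OBJ_WATCH", "Objections still raised at near-timeout:",
                UVM_LOW)
      phase.phase_done.display_objections(null, 1);
    end
  join_none
endtask
```

Guarding the fork with a plusarg check (e.g. via $test$plusargs) keeps the watchdog out of passing regressions entirely.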

From the above table, it is clear that the master and slave drivers did not drop their objections. Now you can look into the master and slave driver components and further debug why they did not drop their objections. There are many different ways to achieve the same result; I welcome you to share your thoughts and ideas on this.

Quoting from one of Janickâs earlier blog on the VMM Performance Analyzer Analyzing results of the Performance Analyzer with Excel,
âThe VMM Performance Analyzer stores its performance-related data in a SQL database.SQL was chosen because it is an IEEEANSI/ISO standard with a multiplicity of implementation, from powerful enterprise systems like Oracle, to open source versions like MySQL to simple file-based like SQLite. SQL offers the power of a rich query and analysis language to generate the reports that are relevant to yourapplication.â

And given that everyone doesnât understand SQL, he goes on to show how one can get VMM Performance Analyzer data from a SQLite database into an Excel spreadsheet and then subsequently analyze the data by doing any additional computation and creating the appropriate graphs. This involves a set of steps leveraging the SQlite ODBC (Open Database Conduit) and thus requires the installation of the same.

This article presents a mechanism whereby TCL scripts bring in the next level of automation, so that users can retrieve the required data from the SQL database and even automate the results analysis by auto-generating the relevant performance charts for statistical analysis. Also, as users migrate to DVE as a single platform for design debug, coverage analysis and verification planning, it is shown how these scripts can be integrated into DVE, so that the generation process is a matter of using the relevant menus and clicking the appropriate buttons.

Generating the SQL databases with the VMM Performance Analyzer requires a SQLite installation, which can be obtained from www.sqlite.org. Once you have installed it, set the SQLITE3_HOME environment variable to the installation path. Once that is done, follow these steps to generate the appropriate graphs from the data produced by your batch regression runs.

Once it is extracted, you can try it on the tl_bus example that ships with the utility. You would need to go to the directory vmm_perf_utility/tl_bus.

Use make to run the tl_bus, which will generate sql_data.db and sql_data.sql. Now, go to the “chart_utility” directory:

(cd vmm_perf_utility/chart_utility/)

The TCL scripts involved in the automation of the performance charts are in this directory.

The script vmm_perf_utility/chart_utility/chart.tcl can then be executed from inside DVE as shown below.

Once that is done, it will add a button “Generate Chart” in the View menu. By the way, adding a button is fairly simple, e.g.:

gui_set_hotkey -menu "View->Generate Chart" -hot_key "G"

is how the button gets added.

Now, click on “Generate Chart” to select the SQL database.

This will bring up the dialog box to select the SQL database.

Once the appropriate database is selected, you can select which table to work with and then generate the appropriate charts. The options presented are based on the data dumped into the SQL database. From the combinations of charts shown, select the graph you want to generate, and the required graphs will be generated for you. This is what you see when you use the SQL database generated for the tl_bus example.

Once you have made the selections, you will see the following chart generated.

Now, obviously, you as a user would not just want the graphs to be generated; you would also want the underlying values to be available to you.

Thus, once you use this chart generation mechanism, the relevant .csv files corresponding to the graphs you have generated are dumped as well.

These are written to a perfReports directory, which is created alongside, so you can do any additional custom computation in Excel or by running your own scripts. To generate the graphs for any other example, just pick up the appropriate SQL database generated from your simulation runs and then generate the reports and graphs of interest.

In the previous article, “Automatic generation of Register Model for VMM using IDesignSpec™”, we discussed the advantages of using a register model generator such as IDesignSpec to automate the process of RALF model generation. Taking it forward, in this article we will discuss how to close the loop on register verification.

Various forms of coverage are used to ensure that registers are functioning properly. There are three coverage models in VMM:

1. reg_bits coverage: this model is used to make sure that all the bits in the register are covered. It works by writing and reading both 1 and 0 on every register bit, hence the name. It is specified using “cover +b” in the RALF model.

2. field_vals coverage: the field value coverage model is implemented at the register level and supports value coverage of all fields, cross coverage between fields, and other cross coverage points within the same register. It is specified using “cover +f” in the RALF model. The user can specify the cross coverage depending on the functionality.

3. address map coverage: this model is implemented at the block level and ensures that all registers and memories in the block have been read from and written to. It is specified using “cover +a” in the RALF model.
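For reference, a RALF fragment enabling these three models might look like the following. This is a hedged sketch: the block, register and field names are invented, and the exact RALF syntax should be checked against the ralgen documentation.

```
# Hypothetical RALF fragment (names are illustrative)
block stopwatch {
  bytes 4;
  cover +a              # address map coverage for the block
  register CTRL {
    bytes 4;
    cover +b +f         # reg_bits and field_vals coverage
    field MODE {
      bits 2;
      access rw;
    }
  }
}
```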

We will discuss how coverage can be switched on/off, and how the type of coverage can be controlled for each field, directly from the register specification.

Once the RALF model is generated, the next step in verification is to generate the RTL and the SystemVerilog RAL model using ralgen. The generated RAL model, along with the RTL, can be compiled and simulated in the VMM environment to generate the coverage database, which is used for report generation and analysis.

Reports can be generated using IDesignSpec (IDS). IDS-generated reports have the advantage over other reports of presenting all the coverage in a much more concise way, at a single glance.

Turning Coverage ON or OFF

IDesignSpec enables users to turn all three types of coverage ON or OFF from within the MS Word specification itself.

Coverage can be specified and controlled using the “coverage” property in IDesignSpec, which has the following possible values:

The hierarchical âcoverageâ property enables users to control the coverage of the whole block or at the chip level.

Here is a sample of how coverage can be specified in IDesignSpec:

This would be the corresponding RALF file:

The coverage bins for each CoverPoint along with the cross for the various CoverPoints can also be defined in the specification as shown below:

This would translate to the following RALF:

Now, the next step after RALF generation is to generate the RAL model from the IDS-generated RALF.

RAL MODEL AND RTL GENERATION FROM RALF:

The IDS-generated RALF can be used with the Synopsys ralgen to generate the RAL (VMM or UVM) model as well as the RTL.

The RAL model can be generated by using the following command:

If you specify âuvm above in the fisrt ralgen invocation above, a UVM Register Model would be generated.

COMPILATION AND REPORT GENERATION:

Once the RTL and the RAL model are generated using ralgen, the complete model can be compiled and simulated in the VMM environment using VCS.

The compilation and simulation generate the simulation database, which is used for the generation of the coverage reports.

Coverage reports can be generated in various forms, but the most concise is graphics showing all the coverage at a glance. For this, a TCL script, “ivs_simif.tcl”, takes the simulation database and generates a text-based report on execution of the following command:

% ivs_simif.tcl -in simv.vdb -svg

For running the above command, set the environment variable “IDS_SIM_DIR”; the text reports are generated at this location. This also tells IDS where to look for the simulation data file.

A detailed graphical view of the report can be generated by IDS with the help of this text report. To generate the graphical report in the form of scalable vector graphics (SVG), select the “SVG” output in the IDS configuration and regenerate.

Another way of generating the SVG is to use the IDS-XML or the Doc/Docx specification of the model as the input to IDS in batch mode, generating the graphical report of the simulation with the following command:

Coverage Reports

The field_vals report gives a graphical view of the field_vals coverage and the address coverage of the various registers and their respective fields.

The amount of coverage for a field (CoverPoint) is depicted by the level of green color in the field, while that for the complete register (CoverGroup) is shown by the color of the register's name.

The address coverage for an individual register (CoverPoint) is shown by the color of the register's address (green if addressed; black if not), while that of the entire block (CoverGroup) is shown by the color of the block's name.

The coloring scheme for all the CoverGroups, i.e. the register name in the case of field_vals coverage and the block name in the case of address coverage, is:

1. If the overall coverage is greater than or equal to 80%, the name appears in GREEN

2. If the coverage is greater than 70% but less than 80%, it appears in YELLOW

3. For coverage less than 70%, the name appears in RED

Figure 1 shows the field_vals and address coverage.

Figure: Closed-loop register verification using RALF and IDS

The above sample gives the following coverage information:

a. 2 registers, T and resetvalue, are not addressed out of a total of 9 registers. Thus the overall coverage of the block falls in the range of greater than 70% but less than 80%, which is depicted by the color of Stopwatch (the name of the block).

b. All the fields of the registers are filled with some amount of green color, which shows the amount of coverage. For example, field T1 of register arr is covered 100% and is thus completely filled, while FLD4 of register X is covered only about 10%. The exact coverage value can be obtained by hovering over a field; a tooltip shows the exact value.

c. The color of the register name shows the overall coverage of the whole register; for example, the name X appears in red because its coverage is less than 70%.

Address coverage for reg_bits is shown in the same way as the address coverage in field_vals. Reg_bits coverage has 4 components, that is:

1. Written as 1

2. Read as 1

3. Written as 0

4. Read as 0

Each of the 4 components is allocated a specific region inside a bit. If that component of the coverage is hit, the corresponding region is shown in green; otherwise it is blank. The overall coverage of the entire register is shown by the color of the register's name, as in the case of field_vals.

The above sample report shows that there is no issue with "Read as 1" for the resetvalue register, while the other types of reads/writes have not been hit completely.

Thus, in this article we described the various coverage models for registers and how to generate the RALF coverage model of the registers automatically with minimum effort. An intuitive visualization of the register coverage data eases the effort involved in deciphering coverage reports from lengthy simulation log files. This type of closed-loop register verification ensures better coverage and high-quality results in less time. Hope you found this useful. Do share your feedback, and also let me know if you want any additional details to get the maximum benefit from this flow.

As data rate increases to 100 Gbps and beyond, optical links suffer severely from various impairments in the optical channel, such as chromatic dispersion, polarization-mode dispersion etc. Traditional optical compensation techniques are expensive and complex. ViaSat has developed DSP IP cores for coherent, differential, burst and continuous, high data rate networks. These cores can be customized according to system requirements.

One of the inherent problems with verifying communication applications is that there is a large amount of information arranged over space and time, generally dealt with using Fourier transforms, equalization and other DSP techniques. Thus, we needed to come up with interesting stimulus matching these complex equations to exercise the full design. With horizontal and vertical polarization (four I and Q streams running at 128 samples per cycle), there was a high level of parallelism to deal with. To address these challenges, we decided to go with a constrained-random, self-checking testbench environment using SystemVerilog and VMM. We extensively used reusability, the direct programming interface and scalability features, along with various interesting coverage techniques, to minimize our efforts and meet the aggressive deadlines of the project. Our system model was bit- and cycle-accurate, developed in the C language. Class configurations were used to allow different behaviors, such as sampling output at every cycle vs. valid cycles only. Parameterized VMM data classes were used for control signals and the feedback path, which required parameterized generators, drivers, monitors and scoreboards so that they could be scaled as required to match different filter designs and specifications.
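As a hypothetical sketch (the class, field and constraint names below are ours for illustration, not from the actual project), a parameterized VMM data class for one polarization stream might look like this, with a WIDTH parameter that lets the same transaction scale to different filter data-path widths:

```systemverilog
// Hypothetical parameterized VMM transaction; assumes the VMM library's
// vmm.sv is on the include path.
`include "vmm.sv"

class stream_txn #(int WIDTH = 128) extends vmm_data;
  rand bit [WIDTH-1:0] i_samples;  // in-phase samples for this cycle
  rand bit [WIDTH-1:0] q_samples;  // quadrature samples for this cycle
  rand bit             valid;      // data-valid qualifier

  // Bias stimulus toward valid cycles (illustrative distribution)
  constraint c_valid { valid dist {1 := 9, 0 := 1}; }

  function new();
    super.new();
  endfunction
endclass
```

A generator or driver parameterized the same way could then be instantiated as, say, `stream_txn #(256)` components to match a wider filter specification.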

Code and functional coverage were used as the benchmark to gauge the completeness of verification. We used a lot of useful SystemVerilog constructs in the functional coverage model, like "ignore_bins" to remove any unwanted sets (helping us avoid overhead effort), "illegal_bins" to catch error conditions, the "intersect" keyword, etc. Here is an example:

data_valid_H_trans: coverpoint ifc_data_valid.data_valid_H {
  bins valid_1_285 = (0 => 1[*1:285] => 0);
  illegal_bins valid_286 = (0 => 1[*286]);
  bins one_invalid = (1 => 0 => 1);
  illegal_bins two_invalid = (1 => 0[*2:5] => 1);
}

Covergroup âdata_valid_H_transâ covers a signal âdata_valid_Hâ which should never have consecutive 286 or more asserted cycles. Also, data_valid_H signal should never be low for two consecutive data cycle. These are interesting scenarioâs and can be found in many designs under test where two blocks have dependency between each other and there is data input/output rate that needs to be met for maintaining the data integrity between blocks else the data might overflow/underflow or can induce other possible errors. In such situations, an illegal bin can be effectively used to continuously check this condition through out the simulation. An easy usage of an illegal bin, keeps an eye on this condition and if this condition ever occurs, VCS flags a runtime error

Another interesting feature we found was the capability to merge different coverage reports using flexible merging. As we move along a project, for various reasons, such as system specification changes or signal name changes, we might have to modify our covergroups. If we have a saved database of vdb files from previous simulations and we run the urg command to create a coverage report directory, we will find multiple covergroups with the same name in the new coverage report, which can make it very confusing to identify which covergroups are of interest. This corrupts our previous efforts and the coverage report, leading to more engineering effort and resource usage. To counter this problem, flexible merging can be used.

Note: URG assumes the first specified coverage database as a reference for flexible merging.

This feature is available only for covergroup coverage and is very useful when the coverage model is still evolving and minor changes to the coverage model between test runs might be required. To merge two coverpoints, they need to be merge equivalent. The requirements for merge equivalence are as follows:

1. For user-defined coverpoints:

Coverpoint C1 is merge equivalent to coverpoint C2 only if the coverpoint names and widths are the same.

2. For autobin coverpoints:

Coverpoint C1 is merge equivalent to coverpoint C2 only if the name, auto_bin_max and width are the same.

3. For cross coverpoints:

Cross C1 is merge equivalent to cross C2 only if the crosses have the same number of coverpoints.

If the coverpoints are merge equivalent, the merged coverpoint will contain a union of all the coverpoint bins from the different tests. If the coverpoints are not merge equivalent, the merged coverpoint will contain only the coverpoint bins from the most recent test run, and older test run data is not considered.
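As an illustration (the database and directory names are hypothetical, and the exact flexible-merge option should be checked in the URG documentation for your VCS version), merging the databases of two runs with URG looks like:

% urg -dir test1.vdb -dir test2.vdb -dbname merged -report merged_report

Per the note above, test1.vdb, as the first specified database, serves as the reference for flexible merging.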

SystemVerilog and VMM methodology features were very helpful in achieving our verification goals, giving us a robust verification environment that was productive and reusable over the course of the project. Moreover, it also gave us a head start on our next project's verification effort. For more details, please refer to the paper I presented at SNUG, San Jose, 2011, "Functional Coverage Driven VMM Verification scalable to 40G/100G Technology".