Let's Get System-Level Functional Verification Under Control

A senior engineer with Vayavya outlines the challenges for system-level verification of assemblages of IP cores in leading-edge SoCs and provides a view on how to address scalability.

We have to get the functional verification challenges at the system level under control.

Why? Finer geometry process nodes open up the possibilities of higher levels of IP integration, which gives SoC companies the opportunity to innovate and differentiate their market offerings. However, the shift to more advanced process nodes brings increased design complexity and more demanding manufacturing processes. At 32 nm and below, the stakes are high: releasing a faulty or sub-standard IC can be fatal for an SoC company. The verification process in the design cycle therefore becomes mission critical.

Here's the problem. Every additional IP core integrated into the SoC increases the state space and adds to the verification complexity of the system. This is why system-level verification teams are hard pressed to tackle the increased complexity while still shrinking time-to-market. The strategy so far has been to increase verification team size, develop more tests, and use larger compute farms to run those tests in a more or less fixed amount of time.

The result: verification budgets are rising at 30 percent per year and stressing profit margins. The challenge, then, is to do more with the resources at hand. So we have to ask: what is required in methodology or automation tools to better manage verification at the system level without compromising on verification goals?

Previously, EDA companies addressed the needs of block-level verification through methods and tooling. Methods such as UVM -- the Universal Verification Methodology standardized by the Accellera industry body -- were successful and adopted widely by IP core vendors. Testbench creation and integration became structured and seamless. These methods are based on constrained random techniques and rely on improving the quality of random stimulus through better solvers and manual/automatic tweaking of constraints to achieve faster coverage. So I believe the pain points of block-level functional verification have been addressed well by EDA.

However, system-level verification is much more complicated. Beyond the testing of individual blocks, there is the testing of IP integration, IP-to-IP interaction, power, the functionality and performance of the entire system, and shared resources such as memory blocks.

Many verification test scenarios are drawn from the real-world applications to which the SoC can be subjected. For example, Ethernet and USB device IPs -- commonly found in SoCs -- each support a myriad of operating modes. Many use and test cases need to be developed and verified, spanning from simplistic data-transaction tests to complex ones.

Since much of the SoC's functionality is controlled by an embedded CPU -- single-core or, increasingly, multi-core -- verification scenarios for programmable IP require the development of test software.

At present, verification engineers translate scenarios into test software manually, which limits the number of scenarios that can be covered. What's more, combining test software with the intent of generating new scenarios or increasing concurrency in the system becomes effort-intensive and error-prone, further limiting scenario exploration. This approach is insufficient and leads to under-verification: engineers leave many system-level issues and bugs unexplored.
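To make the cost of that manual translation concrete, here is a minimal sketch of a hand-written directed test for the kind of simple Ethernet data-transaction scenario mentioned above. The register names, addresses, and bit fields are hypothetical placeholders rather than those of any real IP, and a production test would also need interrupt handling, error paths, and data checking.

    #include <stdint.h>

    /* Hypothetical memory-mapped registers of an Ethernet IP -- placeholders only */
    #define ETH_BASE        0x40010000u
    #define ETH_CTRL        (*(volatile uint32_t *)(ETH_BASE + 0x00u))
    #define ETH_TX_DESC     (*(volatile uint32_t *)(ETH_BASE + 0x04u))
    #define ETH_STATUS      (*(volatile uint32_t *)(ETH_BASE + 0x08u))

    #define ETH_CTRL_ENABLE 0x1u
    #define ETH_STAT_TXDONE 0x1u

    /* One hand-coded scenario: enable the controller, push a single frame
     * through it, and poll for completion. */
    int eth_single_frame_test(uint32_t frame_addr)
    {
        ETH_CTRL    = ETH_CTRL_ENABLE;   /* bring the IP into a known mode */
        ETH_TX_DESC = frame_addr;        /* point it at the test frame     */

        while ((ETH_STATUS & ETH_STAT_TXDONE) == 0)
            ;                            /* busy-wait for transmit done    */

        return 0;                        /* a real test would also check data */
    }

Every additional mode, IP, or interleaving of such routines multiplies the amount of hand-written code of this kind, which is precisely the scaling problem described above.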

Simply put, system-level functional verification through manual test-software development cannot be a scalable or effective way to deliver on verification quality and time-to-market. With innovation dependent on IP integration, current approaches will certainly limit what can be designed and delivered to the market -- a case of verification productivity stunting design innovation.

The way to solve this impending verification crisis is with automation tools targeted at system-level verification that can provide comprehensive coverage in a scalable and cost-effective manner.

For any verification automation solution to be accepted by the EDA software user base, these tools must accept verification intent in terms of scenarios and automatically generate test software in the form of multi-threaded C-language test applications.
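As a rough illustration of what such generated output could look like, here is a sketch of a multi-threaded C test application that exercises two IPs concurrently. The per-IP test routines and the use of POSIX threads are assumptions made for illustration; generated tests for bare-metal, multi-core targets would typically sit on a lighter-weight runtime supplied by the tool.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Assumed per-IP test routines, e.g. generated from individual scenarios */
    extern int eth_single_frame_test(uint32_t frame_addr);
    extern int usb_bulk_transfer_test(uint32_t buf_addr);

    static void *eth_thread(void *arg)
    {
        (void)arg;
        return (void *)(long)eth_single_frame_test(0x80000000u);
    }

    static void *usb_thread(void *arg)
    {
        (void)arg;
        return (void *)(long)usb_bulk_transfer_test(0x80010000u);
    }

    /* A combined scenario: run the Ethernet and USB scenarios concurrently
     * to stress shared resources such as the bus fabric and memory. */
    int main(void)
    {
        pthread_t t_eth, t_usb;
        void *r_eth, *r_usb;

        pthread_create(&t_eth, NULL, eth_thread, NULL);
        pthread_create(&t_usb, NULL, usb_thread, NULL);

        pthread_join(t_eth, &r_eth);
        pthread_join(t_usb, &r_usb);

        printf("eth=%ld usb=%ld\n", (long)r_eth, (long)r_usb);
        return (r_eth != NULL || r_usb != NULL) ? 1 : 0;
    }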

The scenario specification mechanism for such a tool should be rich and intuitive enough for verification engineers to express complex scenarios and to combine them. Such an automated system-level verification tool would drastically reduce the time and effort needed to translate scenarios into test software and would empower engineers to explore far more of them. Additional support in the form of a coverage model and constraints must provide the means to visualize verification status and to focus the tool on corner cases.
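Purely as a thought experiment, and not as any existing tool's input format, the data structures below sketch how verification intent and a crude coverage model might be captured in C; the fields and enumerations are invented for illustration.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical, simplified capture of verification intent: each scenario
     * names an IP, an operating mode, and a transfer size. */
    typedef enum { IP_ETH, IP_USB, IP_COUNT } ip_id_t;

    typedef struct {
        ip_id_t  ip;          /* which IP the scenario targets           */
        uint32_t mode;        /* IP-specific operating mode              */
        uint32_t xfer_bytes;  /* transfer size, constrained per scenario */
    } scenario_t;

    #define MODE_COUNT 8u

    /* Crude coverage model: which (IP, mode) pairs have been exercised.
     * A real tool would also track cross-coverage between scenarios that
     * run concurrently. */
    static bool covered[IP_COUNT][MODE_COUNT];

    static void record_coverage(const scenario_t *s)
    {
        if (s->mode < MODE_COUNT)
            covered[s->ip][s->mode] = true;
    }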

The scenario-to-test-software approach should be agnostic to the various verification platforms -- simulation, emulation, or virtual-prototype models -- provided a CPU model exists and the tool offers the right kind of abstraction to interface with testbench components, simulation, or emulation. This is a crucial feature, since many scenarios may have to be duplicated across simulation and emulation platforms. The result would be a system-verification knowledge base and context that could be reused across projects.
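One way to picture that abstraction, sketched here under the assumption of a simple 32-bit register read/write interface, is a thin hardware-access layer: on a virtual prototype, an emulator, or silicon the generated test software runs on the CPU (model) and touches memory-mapped registers directly, while in an RTL simulation the same calls can be redirected to bus drivers in the testbench. The function names and the TARGET_TESTBENCH switch are placeholders, not part of any particular tool or standard.

    #include <stdint.h>

    /* Portable hardware-access layer: generated test software calls only
     * hw_read32()/hw_write32(), so the same C source can be retargeted. */
    #ifdef TARGET_TESTBENCH
    /* RTL-simulation build: forward accesses to testbench-side bus drivers,
     * for example implemented via DPI (names are placeholders). */
    extern uint32_t tb_bus_read32(uint32_t addr);
    extern void     tb_bus_write32(uint32_t addr, uint32_t data);

    static inline uint32_t hw_read32(uint32_t addr)              { return tb_bus_read32(addr); }
    static inline void     hw_write32(uint32_t addr, uint32_t d) { tb_bus_write32(addr, d); }
    #else
    /* Virtual-prototype / emulation / silicon build: the test runs on the
     * embedded CPU (model) and accesses registers through the memory map. */
    static inline uint32_t hw_read32(uint32_t addr)              { return *(volatile uint32_t *)(uintptr_t)addr; }
    static inline void     hw_write32(uint32_t addr, uint32_t d) { *(volatile uint32_t *)(uintptr_t)addr = d; }
    #endif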

I believe that as fabrication costs rise for process nodes at the leading edge, the market success of future SoCs will be defined by the quality of verification and the cost and time taken to achieve it.

So my call to action for the EDA industry is: address the looming functional verification challenges at the system level before they ambush SoC design.

1. Typically, SoC platforms from companies have derivative platforms with some features added or removed. There seems to be redundancy and duplication of effort in functional verification in these kinds of scenarios, which is a potential area for savings.

2. Placing a higher priority on the virtual-prototype (VP) platform for system-level verification makes sense. Getting software to run on the VP without issues can contribute a lot to business success, the key reason being time to market. If the software stack is critically tested on the VP, then running it on hardware should expose minimal software issues.

3. If the software use/test case can capture details of the components, registers, memory sections, and so on from the entire state space that are activated, one can gain a higher level of confidence in verification.

Hi Janine, I am assuming that you are referring to different simulation platforms (such as virtual prototypes, RTL simulators, or emulators like Palladium) when you use the word "environment". These platforms provide different levels of observability and simulation-performance throughput. For example, RTL simulators give better observability but low simulation performance, while emulators give marginal observability but deliver on simulation performance. Platforms are chosen depending on the verification stage: initially an RTL simulator is used, since the scenarios one would run are simple and the expectation is that more bugs will be flushed out at this stage. Once confidence in the RTL is built, teams move to emulation or simulation accelerators for stress testing and benchmarking. Presently, emulation vendors are improving observability and investing in transactional VIPs (partly synthesizable) to reuse the testbench. My take: emulation may be the way to go, but bottlenecks like compilation time and cost will have to be addressed before it is adopted widely.