Reduce tester-based silicon debug time (Part 1)

Rob Rutenbar, professor of electrical and computer engineering at Carnegie Mellon University, considers post-silicon debug a "dirty little secret" that can cost an embedded system-on-chip design project $15 million to $20 million and take six months to complete, and even then there is a possibility the chips will not work perfectly. No designer can be absolutely sure that every part of the design will work seamlessly the first time the customer uses it, and will keep working under all operating conditions. A chip may fail at particular frequencies and operating conditions; design functionality may be perfect stand-alone, but things can start failing the moment the chip is plugged into a bigger and more complex cluster of chips at the customer's end; a customer may configure the chip in an unforeseen scenario, causing failures long after the chip has reached the market.

Tester activities consume up to 40% of the total time and cost of a modern chip and are among the least predictable parts of the development process.

As feature sizes shrink, operating speeds rise and design complexity grows. A customer may also request a new feature, or a change to an existing one, requiring new patterns and techniques to test the new but critical behaviour. As a result, testing methods become more challenging, and the time and effort expended on testing increase.

Unfortunately, this rise in testing complexity has not been matched by investment in dedicated test development on a scale comparable to the rest of the commercial chip business. Depending on hit-and-miss methods to make up the difference, just enough to convince customers of the device's functionality, can prove to be a major oversight in the long run. What is needed instead is a robust, rigorous, and, as much as possible, infallible test program.

To maintain high quality and customer confidence, current-generation sub-micron integrated circuits need to pass a rigorous and fairly exhaustive testing program before being shipped to customers or released into the market. Complicating the problem, each part of the testing process has distinctive characteristics due to differences in frequency, IO timing, voltage levels, and other features. In spite of these complexities, it is crucial to develop a set of general practices and techniques that can address the generic issues that plague the current generation of sub-micron-geometry devices.

The number of possible test scenarios for a chip can be quite high, so, as a first step, there need to be ongoing discussions among the verification engineer, design team, tester engineer, production team, and the customers themselves to arrive at a concise and feasible list of patterns capable of testing all critical functional scenarios with minimum time and effort.
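One common way to reduce a large space of test scenarios to a concise pattern list is greedy set cover: repeatedly pick the candidate pattern that covers the most still-uncovered critical scenarios. The sketch below illustrates the idea; the pattern names and coverage tags are invented for illustration and do not come from the article.

```python
def select_patterns(candidates, required):
    """Greedily pick patterns until every required scenario is covered.

    candidates: dict mapping pattern name -> set of scenarios it exercises
    required:   set of scenarios the final suite must cover
    """
    selected = []
    uncovered = set(required)
    while uncovered:
        # Choose the pattern that covers the most still-uncovered scenarios.
        best = max(candidates, key=lambda p: len(candidates[p] & uncovered))
        gained = candidates[best] & uncovered
        if not gained:
            raise ValueError(f"No pattern covers: {sorted(uncovered)}")
        selected.append(best)
        uncovered -= gained
    return selected

# Hypothetical pattern library and coverage goals.
candidates = {
    "pll_lock":   {"clocking", "reset"},
    "mem_bist":   {"memory", "at-speed"},
    "scan_chain": {"logic", "reset"},
    "func_boot":  {"reset", "clocking", "memory"},
}
required = {"clocking", "reset", "memory", "at-speed", "logic"}
print(select_patterns(candidates, required))
# -> ['func_boot', 'mem_bist', 'scan_chain']
```

Greedy selection does not guarantee the minimum suite, but it is a simple, predictable starting point for the kind of pattern-list negotiation the paragraph above describes.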

Pre-silicon verification for testing

Pre-silicon verification engineers must deal with tester pattern generation and simulation (VFT) to deliver high-quality, fool-proof pattern suites to be tried on the testers.

Pre-silicon VFT is a superset of functional verification activities. It involves developing a testbench environment capable of simulating tester conditions and silicon behaviour as closely as possible before silicon is available, complete with device-specific start-up routines, and generating functional tester patterns.
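Generating functional tester patterns typically means translating cycle-by-cycle simulation activity into per-pin vectors: input pins are driven, output pins are strobed against expected values. The sketch below shows the idea in miniature; the pin list, the `0/1/L/H/X` encoding, and the vector format are assumptions for illustration, whereas real flows target STIL, WGL, or ATE-specific formats.

```python
# Hypothetical four-pin device: three inputs, one output.
PINS = ["clk", "rst_n", "din", "dout"]
INPUTS = {"clk", "rst_n", "din"}

def to_vector(cycle):
    """Map one simulation cycle (dict of pin -> value) to a vector string.

    Inputs are driven as '0'/'1'; expected outputs are strobed as 'L'/'H';
    'X' marks a don't-care (no drive, no strobe).
    """
    chars = []
    for pin in PINS:
        v = cycle.get(pin)
        if v is None:
            chars.append("X")                 # don't care / don't strobe
        elif pin in INPUTS:
            chars.append("1" if v else "0")   # drive the input pin
        else:
            chars.append("H" if v else "L")   # strobe the expected output
    return "".join(chars)

# Values captured from a hypothetical simulation run.
sim_trace = [
    {"clk": 0, "rst_n": 0, "din": 0, "dout": 0},
    {"clk": 1, "rst_n": 1, "din": 1, "dout": None},  # output not yet valid
    {"clk": 0, "rst_n": 1, "din": 1, "dout": 1},
]
for cycle in sim_trace:
    print(to_vector(cycle))   # prints 000L, 111X, 011H
```

The value of doing this pre-silicon is that the expected strobe values come straight from simulation, so a pattern that passes in the VFT environment carries a known-good reference when it is later run on the tester.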

Design, verification, and testing can no longer remain exclusive domains. They must converge and complement each other to detect potential issues at an early stage.

Once the tester pattern suite is finalized, each pattern needs to be planned in as much detail as possible, because it will be the foundation upon which the targeted code, with incremental and iterative modifications, will run. Dealing with potential issues on the tester requires intelligent, deliberate analysis of the design behaviour and waveforms.

Some features of verification provide better debug capability compared to the later testing phase:

- Since the environment is simulated, there is flexibility in setting up test cases at the block and gate level.
- Verification allows full visibility into design waveforms and internal signals.
- Inputs can be injected, and outputs probed and logged, from virtually anywhere in the design.
- Individual blocks can be verified before they are integrated into the SoC model. In silicon, the blocks cannot be physically separated, so the entire SoC must be dealt with as one unit.
- Faults and errors can be injected, something that is possible only in verification.
- In simulation, designers have long enjoyed advanced capabilities such as transaction-level modelling and assertion-based verification.
- Verification allows much more detailed analysis, including the ability to find the point of failure, if an issue can be reproduced in the VFT environment.
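Fault injection, one of the capabilities the list above notes is available only in verification, works by forcing an internal net to a faulty value and checking whether any pattern makes the fault visible at the outputs. The toy two-gate "design" and the stuck-at fault below are invented for illustration:

```python
def circuit(a, b, c, fault=None):
    """Behavioural model of y = (a AND b) OR c, with an optional
    stuck-at-0 fault on the internal net 'ab'. A simulated testbench
    can force this net directly; a tester only sees the pins."""
    ab = a & b
    if fault == "ab_stuck_at_0":
        ab = 0                      # force the internal net, as simulation allows
    return ab | c

def detects(pattern, fault):
    """A pattern detects a fault if good and faulty outputs differ."""
    return circuit(*pattern) != circuit(*pattern, fault=fault)

print(detects((1, 1, 0), "ab_stuck_at_0"))  # True: good output 1, faulty 0
print(detects((0, 0, 1), "ab_stuck_at_0"))  # False: c=1 masks the fault
```

Running candidate patterns against injected faults like this, before silicon exists, is one way the VFT environment grades how much of the design a pattern suite actually exercises.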