Adaptive Test (AT) has been defined
as a "broad term used to describe methods that change test conditions,
test flow, test content and test limits (potentially at the die/unit or sub-die
level) based on manufacturing test data and statistical data analysis." The
International Technology Roadmap for Semiconductors (ITRS) concludes that AT
methods will deliver benefits including "lower test costs, higher yields,
better quality & reliability and an improved data collection environment
for more effective and rapid yield learning."

Some integrated device manufacturers (IDMs) have implemented various
AT methods, but many fabless semiconductor companies, and others with multiple
products and test flows, have struggled with implementation and with isolating
the cost savings needed to verify the AT business payback. Without common
definitions, interfaces and protocols, without clear expectations from customers
in the value chain (including OSATs and system OEMs), and without acknowledged
best practices, many believe industry-wide collaboration, guidelines and
standards are needed to enable cost-effective implementation of AT methods.

AT typically applies feed-forward data derived from automated test equipment
(ATE) and handlers, including die ID, parametric data, timing, IDDQ, device
failures and other data, for real-time analysis and optimization. Statistical
analysis of this real-time data can then be used to dynamically change test
flows, limits and parameters in the ATE, potentially improving yield, quality
and reliability while reducing cost. Post-test analysis and dispositioning data
can also be fed back and forward into the test flow to adjust statistical bin
limits to more meaningful levels. In practice, both real-time and post-test
data can deliver significant insights and improvements in yield, test time and
failure rates.
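As a concrete illustration of the "statistical bin limits" idea, the sketch below derives dynamic pass/fail limits from a recent sample of parametric readings using robust statistics (median plus a multiple of an IQR-based sigma estimate), in the spirit of part-average testing. The function names, the multiplier, and the data are hypothetical, not any company's actual AT rule:

```python
import statistics

def dynamic_limits(samples, k=6.0):
    """Derive dynamic test limits from recent parametric data.

    Illustrative only: median +/- k * robust sigma, where sigma is
    estimated from the interquartile range. Real AT rules are
    tester- and product-specific.
    """
    ordered = sorted(samples)
    n = len(ordered)
    med = statistics.median(ordered)
    q1 = ordered[n // 4]
    q3 = ordered[(3 * n) // 4]
    robust_sigma = (q3 - q1) / 1.349  # IQR of a normal dist spans ~1.349 sigma
    return (med - k * robust_sigma, med + k * robust_sigma)

def disposition(value, limits):
    """Bin a reading against the dynamically derived limits."""
    lo, hi = limits
    return "pass" if lo <= value <= hi else "outlier"

# Example: IDDQ-like readings; a 5.0 reading is well outside the population.
readings = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.00, 1.01, 0.99]
lim = dynamic_limits(readings)
print(disposition(1.00, lim), disposition(5.0, lim))  # prints: pass outlier
```

The point of the robust estimators is that the limits themselves are not skewed by the very outliers they are meant to catch.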

The workshop began with a panel of speakers currently deploying AT in their
test operations: Phil Nigh of IBM, chairman of the ITRS subgroup on Test and
Testability; Glenn Plowman of Qualcomm; Jeffrey Roehr of Texas Instruments; and
Matthias Kamm, providing the customer perspective from Cisco.

Phil Nigh has worked with a cross-section of companies on the ITRS, and he
described various implementations of AT, including real-time analysis and
sampling for test pattern reduction to cut test times. Nigh explained that IBM
has used two types of "test pattern sampling": at wafer probe, to select the
optimal test pattern set for a specific wafer based on sample probing results,
and at final test, where historical data and off-line analysis drive real-time
sampling and dynamic analysis. IBM also applies data feed-forward using chip
ID, which was originally introduced to carry RAM repair data forward and was
later deployed for many other purposes, such as passing parametric data from
one test step into a later step's search/limit optimization.
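The search/limit optimization use of feed-forward data can be sketched roughly as follows: a per-chip record from wafer probe narrows the search window at final test instead of sweeping the full range. The store, field names, and frequency values are all hypothetical illustrations, not IBM's actual mechanism:

```python
# Hypothetical feed-forward store: wafer-probe results keyed by chip ID,
# consulted at final test to narrow an Fmax search window.
probe_data = {
    "CHIP-001": {"fmax_ghz": 2.10},
    "CHIP-002": {"fmax_ghz": 1.85},
}

def final_test_search_window(chip_id, guardband=0.10, default=(1.0, 3.0)):
    """Start the final-test frequency search near the probe result.

    Illustrative numbers only. With no feed-forward record, fall back
    to sweeping the full default range.
    """
    rec = probe_data.get(chip_id)
    if rec is None:
        return default
    f = rec["fmax_ghz"]
    return (round(f - guardband, 2), round(f + guardband, 2))

print(final_test_search_window("CHIP-001"))  # prints: (2.0, 2.2)
print(final_test_search_window("UNKNOWN"))   # prints: (1.0, 3.0)
```

Even this toy version shows the payoff: a die with probe data searches a 0.2 GHz window instead of a 2 GHz one.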

Plowman provided an overview of how Qualcomm deploys AT at its test
subcontractors, applying real-time and near-real-time statistical process
control to data from in-line test. This data is used to adjust test limits and
content "on the fly," thereby improving yield and reducing test time and cost.
He offered insights and recommendations on rule building, simulation, tester
integration, and data logging architecture.
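A minimal sketch of the kind of real-time SPC rule such a system might evaluate is shown below: a monitor flags an immediate control-limit violation and, separately, a Western Electric-style run rule (several consecutive points on one side of the centerline) that signals sustained drift. The class, thresholds, and run length are hypothetical, not Qualcomm's actual rules:

```python
from collections import deque

class SpcMonitor:
    """Toy real-time SPC monitor for one parametric test.

    Hypothetical sketch: a point beyond +/-3 sigma triggers immediate
    action; a run of N in-control points all on one side of the
    centerline signals drift, on which a real system might tighten
    limits or add test content on the fly.
    """
    def __init__(self, center, sigma, run_length=8):
        self.center = center
        self.ucl = center + 3 * sigma  # upper control limit
        self.lcl = center - 3 * sigma  # lower control limit
        self.recent = deque(maxlen=run_length)  # True = above centerline

    def observe(self, value):
        if value > self.ucl or value < self.lcl:
            return "out_of_control"
        self.recent.append(value > self.center)
        if len(self.recent) == self.recent.maxlen and (
                all(self.recent) or not any(self.recent)):
            return "drift"
        return "in_control"

mon = SpcMonitor(center=1.0, sigma=0.05)
print(mon.observe(1.2))  # beyond UCL of 1.15; prints: out_of_control
```

Rule building in practice is largely about choosing which of many such rules to enable and simulating them against historical lots before deployment.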

Roehr provided practical advice on developing and deploying AT in the real
world, including the typical need to give test engineers basic statistical
training and to collect sample test run data for statistical analysis. From
there, it is a matter of determining how to write AT tests, involving design
teams in Adaptive Test solutions, and integrating production planners so that
they understand how to react to AT yield-related loss. For AT to succeed,
statistical AT elements must be part of daily practice, 100 percent of the
time, throughout the entire design-to-production test chain. In Roehr's words,
"Adaptive Test must become as universal as traffic lights."

Roehr described how successful AT implementation requires "data, lots of
data." Every wafer and IC chip needs cradle-to-grave traceability, including
wafer inspection and parametric data, probe data per die, and assembly
information. Test and burn-in data from multiple passes are required to fully
implement AT. To achieve the ITRS vision, board-level test data and customer
use (failure) data are also required.

Adaptive Test at the Card/System Level

Matthias Kamm of Cisco explained how his firm applies AT, using data fed
forward or backward to enhance the tests applied to the card/system, both
inside and outside the plant. From Cisco's perspective, data must be carefully
structured to make it available throughout the manufacturing process. Many
data attributes are evaluated, including die identification, location on
wafer, latency, persistence, data size, security, and other variables. The
database architecture consists of a local DB for real-time ATE/test cell
operation (latency of 1-2 seconds; retention of hours), non-real-time DBs for
lot-size and dispositioning decisions (latency in minutes; retention for
days), and company-wide DBs for long-term storage.
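The tiered architecture described above can be sketched as a routing decision over latency and retention budgets. The tier names, numeric budgets, and selection function below are illustrative assumptions, not Cisco's actual database design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tier:
    name: str
    latency_s: float    # typical query latency
    retention_s: float  # how long records are kept

# Illustrative numbers: a local DB answers in seconds and keeps data for
# hours; near-line DBs answer in minutes and keep data for days; the
# company-wide DB is long-term storage.
TIERS = [
    Tier("local",        2.0,      6 * 3600),
    Tier("near_line",    5 * 60.0, 7 * 24 * 3600),
    Tier("company_wide", 3600.0,   10 * 365 * 24 * 3600),
]

def pick_tier(record_age_s, latency_budget_s):
    """Fastest tier that still retains data of the given age and can
    answer within the caller's latency budget (None if impossible)."""
    for t in TIERS:
        if t.retention_s >= record_age_s and t.latency_s <= latency_budget_s:
            return t.name
    return None

# A test-cell rule needs a 10-minute-old reading within 2 seconds:
print(pick_tier(600, 2.0))           # prints: local
# A dispositioning job can wait minutes for yesterday's lot data:
print(pick_tier(24 * 3600, 600.0))   # prints: near_line
```

The design point this captures is that real-time AT decisions can only consume data whose tier meets the test cell's latency budget; everything else flows to slower, longer-lived stores.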

For card-level AT, the challenges include dealing with more functional and
fewer parametric tests (IDDQ, Fmax, etc.), a potentially high mix of component
types, product-specific tests and diagnosis, variability of traffic tests for
each product, and the lack of standard third-party offerings. Traceability is
a key requirement: each component is traced by lot code, date code, unique
board serial number and ECID (100 percent on ASICs). The benefits include a
tracked ASIC repair flow at the repair station and an optimized test flow,
with less rework, lower cost, and avoidance of mass recalls.
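The recall-avoidance benefit follows directly from the traceability record: if every placement links a board serial to a component's lot code, date code, and ECID, a suspect lot can be narrowed to the exact boards that carry it. The record layout and data below are a hypothetical sketch, not Cisco's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Placement:
    board_serial: str
    ecid: str       # electronic chip ID (100 percent on ASICs)
    lot_code: str
    date_code: str

# Toy placement log; a real system would hold millions of these records.
PLACEMENTS = [
    Placement("BRD-0001", "ECID-A1", "LOT-42", "2350"),
    Placement("BRD-0002", "ECID-B7", "LOT-42", "2350"),
    Placement("BRD-0003", "ECID-C3", "LOT-43", "2351"),
]

def boards_with_lot(lot_code):
    """Recall scope: boards carrying any component from a suspect lot."""
    return sorted({p.board_serial for p in PLACEMENTS
                   if p.lot_code == lot_code})

print(boards_with_lot("LOT-42"))  # prints: ['BRD-0001', 'BRD-0002']
```

With this query, a quality event scoped to one lot pulls back two boards rather than triggering a recall across the product line.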

After the opening presentations, the core of the workshop was an interactive
discussion of key challenges and opportunities for industry-wide AT adoption;
in addition to chipmakers, it included technical experts from ATE and Adaptive
Test software suppliers. The wide-ranging discussion covered the barriers to
industry-wide die-level identification, the absence of industry-standard
algorithms, the lack of statistical knowledge in the test flow, and the
difficulties in securing OSAT/subcon support. The industry still lacks
standard definitions and terminology, and ATE interfaces and data formats vary
widely.

Among the action items was a decision by ATE suppliers to explore and discuss
standard data interfaces. An industry-standard data interface to test
equipment, widely seen as offering no useful means of differentiating ATE
products, would significantly reduce AT implementation time and cost for ATE
suppliers, chip companies and test houses. It would also facilitate more
widespread use of common rules, algorithms and process control/statistics,
along with the knowledge necessary to implement comprehensive AT strategies.

CAST
and the ITRS subgroup on Test and Testability will continue to explore
opportunities for industry-wide collaboration and education to realize and
benefit from industry-wide adaptive test implementation. For more information
on the ITRS efforts in Adaptive Test, visit http://icdt.ece.pdx.edu/~icdt/cgi-bin/adaptive.cgi/AdaptiveTest