CMOS statistical variability: The skeleton in the closet

For many years, great swathes of the semiconductor industry buried their heads in the sand and ignored the messages coming from research establishments about the importance of CMOS statistical variability introduced by the discreteness of charge and matter.

First they ignored the problem completely; then they tried to hide it. Now that statistical variability is finally entering the public domain, it’s set to hit fabless and chipless design companies like a steam hammer.

Thankfully, several events coincided in 2008 to challenge the status quo. The big CMOS and electronic-device conferences, such as the VLSI Technology Symposium and IEDM, were flooded with papers focused solely on statistical variability in 45-nm and 32-nm technology devices. Statistical variability lay at the heart of special sessions on the interaction between technology and design. TSMC replaced the traditional ‘total corners’ with ‘global corners’ and began advising its customers to superimpose statistical Monte Carlo simulations on top of the global corners to capture the effect of statistical variability in design.
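To illustrate the idea only (this is not TSMC’s actual flow, models or data), here is a minimal Python sketch: local threshold-voltage mismatch is sampled around a fixed global-corner value and pushed through a toy alpha-power-law delay model, exposing the statistical spread that a corner-only analysis would miss. The parameter values, the Gaussian mismatch assumption and the delay formula are all hypothetical.

```python
# Toy Monte Carlo: local (random) Vt mismatch superimposed on a global corner.
# All numbers and the delay model below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)

VDD = 1.0          # supply voltage (V), assumed
VT_CORNER = 0.35   # threshold voltage at the chosen global corner (V), assumed
SIGMA_VT = 0.030   # 1-sigma local Vt variation (V), assumed
ALPHA = 1.3        # alpha-power-law exponent, assumed

def stage_delay(vt, vdd=VDD, alpha=ALPHA):
    """Toy alpha-power-law gate-delay model (arbitrary units)."""
    return vdd / (vdd - vt) ** alpha

n_samples = 100_000
vt_samples = VT_CORNER + rng.normal(0.0, SIGMA_VT, n_samples)  # corner + mismatch
delays = stage_delay(vt_samples)

nominal = stage_delay(VT_CORNER)
p99 = np.percentile(delays, 99)
print(f"nominal corner delay: {nominal:.3f}")
print(f"99th-percentile delay with local variation: {p99:.3f} "
      f"({100 * (p99 / nominal - 1):.1f}% slower)")
```

In a real design flow the samples would come from foundry-supplied statistical device models and drive SPICE-level simulation rather than a one-line delay formula; the sketch only shows why the local spread has to be layered on top of the global corner rather than folded into it.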

The 2008 ‘update’ of the International Technology Roadmap for Semiconductors (ITRS) also introduced drastic changes compared with the 2007 edition, some of the most important of them motivated by the specter of statistical variability in CMOS. The 2007 edition had a disparity between the number of nanometers identifying the technology generation (a number now divorced from the definition of the half-pitch and reduced to a purely commercial pointer) and the physical gate length; in the 2008 update that disparity practically disappears at the 22-nm technology generation. This is motivated to a great extent by the fact that statistical variability almost ‘explodes’ under the previous prescription for substantial over-scaling of physical device dimensions.
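A back-of-the-envelope way to see why over-scaling hurts is the familiar Pelgrom-style mismatch relation, quoted here purely for illustration, where $A_{VT}$ is a technology-dependent matching constant and $W$ and $L$ are the device width and length:

$$\sigma(\Delta V_T) \approx \frac{A_{VT}}{\sqrt{W L}}$$

Shrinking the gate area $W \cdot L$ by a factor of four roughly doubles the random threshold-voltage spread, before any other source of variability is counted.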
In addition, research into new gate-stack materials and new device architectures used to be motivated mainly by the drive to improve device performance. Not any more: one of the main driving forces behind the introduction of metal-gate technology, fully depleted silicon-on-insulator and FinFET devices has been the promise of a reduction in statistical variability.

On top of statistical variability, problems relating to the statistical aspects of reliability are looming; in the near future they will reduce the life-span of contemporary circuits from tens of years to one or two years, or even less. In combination with random discrete dopants, which are the dominant source of statistical variability, the statistical nature of the discrete defect charges associated with hot-electron degradation and negative bias temperature instability (NBTI) results in relatively rare but anomalously large transistor parameter changes, leading to loss of performance or circuit failure. This is already a fundamental problem in flash and SRAM memories and is starting to reduce dramatically the lifetime of digital chips. The irony is that some of the technology innovations that help reduce statistical variability, such as the introduction of high-k/metal gate stacks in the 45-nm technology generation, may themselves become a reliability time bomb.

First of all, the high-k dielectric is of lower quality and has a higher density of fixed and trapped charges. The p-channel high-k transistors are more susceptible to NBTI, which can cause statistical variability to increase with aging. The problem is exacerbated by creeping positive bias temperature instability (PBTI) in n-channel high-k transistors, which was insignificant in their silicon-dioxide gate-stack counterparts.

The realization that there is no escape from statistical variability and reliability problems forces designers to think outside the box and find innovative solutions. Such solutions have to cope not only with the fact that, at the moment of fabrication, transistors will have a broad statistical spread in their parameters, but also with the fact that aging during the useful lifetime of the chip will cause variability to increase and time to failure to become shorter and shorter unless design countermeasures are implemented.

The urgent need to find design-level solutions to the variability and reliability problems was highlighted in the first call for proposals of the European Nanoelectronics Initiative Advisory Council (ENIAC) Joint Undertaking (JU), issued in April 2008. As a result, a project called MODERN was funded by the European Commission in 2009.

In addition, the National Microelectronics Institute (NMI), the trade association representing the semiconductor industry in the U.K. and Ireland, in collaboration with the U.K.’s nanoCMOS Consortium, is to host its second international conference on CMOS variability on May 12 and 13, 2009, at the IET, Savoy Place, London.

Aimed at chip designers, technology developers, wafer foundries and EDA tool vendors, ICCV 2009: “Living with Variability” is set to explore the impact of CMOS variability and how it can be managed at 45 nm and below. Sessions will introduce the issues, discuss the options and share techniques for meeting the challenges of CMOS variability head-on.

The issue of statistical variability in CMOS is being pulled kicking and screaming out of the closet – for all our sakes it’s not before time!

Asen Asenov is a Professor at the University of Glasgow and academic director of its process and device simulation program. He has worked on the simulation of statistical variability in nanoscale CMOS devices.

This story appeared in the April 2009 print edition of EE Times Europe.
