Presentations

Lithographic process variations, such as changes in focus, exposure, and resist thickness, introduce distortions to line shapes on a wafer. Large distortions may lead to line-open and bridge faults. The locations of such defects vary with the lithographic process corner. Lithographic simulation readily verifies that, for a given layout, changing one or more process parameters shifts the defect locations. Thus, if the lithographic process corner of a die is known, test patterns can be better targeted at both hard and parametric defects. In this talk, we will present the design of control structures whose preliminary testing can uniquely identify the manufacturing process corner. Once the process corner is known, we can attain the highest possible fault coverage for lithography-related defects during manufacturing test. Parametric defects such as delay defects are notoriously difficult to test because they may affect paths that are subcritical under nominal conditions and therefore not ordinarily targeted for test. The proposed approach can easily flag such paths for delay tests.
Dr. Sandip Kundu, University of Massachusetts, Amherst

Technology has come to one of many cliffs. Too often, massively complex constructions are brought down by an event that is lost in the minutiae. Case in point: the initial failure of the Hubble Space Telescope, in which a series of apparently defensive steps inadvertently produced a complete failure. What the last 30 years have clearly taught is that it is not enough to perform your contribution perfectly; you must also defend your contribution from potential attack. You must choose to climb the cliff that appears at the beginning of the project to higher ground, not be swept to your demise over the cliff at the end of a path not yet seen in the rush to complete. But which of the cliffs you see ahead, or to the side, do you choose to climb, and how? This talk will point to how to choose where to put extra effort; it is left to you to determine how best to make the ascent.
Dr. Kevin Thompson, Synopsys

Analysis of the transistor evolution happening in the industry over the last several technology nodes reveals that, despite on-schedule chip-area scaling, there has been a crisis in transistor scaling: in leading-edge technology, the critical transistor size did not change all the way from the 90nm to the 20nm node. The underlying physical mechanisms explain why this is happening and point to upcoming changes in transistor architecture that will enable transistor shrinking to resume. The scaling crisis is responsible for the slower-than-anticipated increase in variability and for stress engineering taking over the driver's seat in the performance race. A change in transistor architecture at the 15nm node will alter the balance of the major variability mechanisms, along with some lithography requirements and design rules. It will also open the door to the introduction of non-silicon transistors built on top of a silicon wafer. A comparative analysis of a planar bulk MOSFET, an FDSOI MOSFET, and a FinFET shows their pros and cons and suggests their likely roles in the future.
Dr. Victor Moroz, Synopsys

Over the last 25 years, there have been two major revolutions in how we do digital design: the move to language/synthesis based design (starting in 1986) and design reuse (starting around 1996). We are well overdue for a third revolution. Current design methods are not meeting the needs dictated by the complexity and size of today’s SoC designs, much less the designs of the future.
This talk will describe the current candidates for the next revolution in digital design: high-level synthesis, chip generators, and radical extensions to the synthesizable subset of current RTL languages. It will also describe how the economics of SoC design and manufacturing, as well as the economics of EDA, will affect and possibly derail the third revolution.
Mike Keating, Synopsys Inc.

Software and functional verification are the two largest and fastest-growing components of chip design cost. They are also the aspects of chip design that involve large – or even huge – amounts of code. The languages used in chip design – SystemVerilog and C++ – allow the creation of very complex functionality, and semiconductor technology and EDA tools allow us to implement these complex designs. But our approach to code-based design has not enabled us to manage this complexity effectively. In particular, newly graduated students do not have the tools or the theoretical grounding to develop code-based designs that will meet the needs of the chips of the next decade – when we may well see chips with 1 trillion transistors. This talk will outline the nature of the challenges of code-based design and suggest a path for managing the functional complexity demanded – and enabled – by tomorrow’s SoC designs.
Mike Keating, Synopsys Inc.

Scaling of technology over the last few decades has produced exponential growth in the computing power of integrated circuits and an unprecedented number of transistors integrated into a single chip. However, scaling faces several problems – severe short-channel effects, exponential increases in leakage current, increased process parameter variations, and new reliability concerns. We believe that device-aware circuit and architecture design, along with statistical design techniques, can provide large improvements in power dissipation (Vdd scaling) while providing the required reliability and yield. In this talk, design techniques to address power and reliability problems in scaled technologies, for both logic and memories, will be presented.
Kaushik Roy, Purdue University

In this tutorial, we focus on circuit- and architecture-level design techniques for low power under parameter variations. We consider both logic and memory design, encompassing modeling, analysis, and design methodology to simultaneously achieve low power and variation tolerance. Design techniques to minimize power under a parametric yield constraint will be presented, along with major process-adaptation techniques using voltage scaling, adaptive body biasing, or logic restructuring. Techniques to deal with within-die parameter variations in logic and memory circuits, primarily caused by random dopant fluctuations, will be discussed. Finally, we will discuss temperature-aware design, dynamic adaptation to temperature, and ongoing research activities on low-power, variation-tolerant multi-core processor design.
Swarup Bhunia,
Case Western Reserve University

In the first part of the talk, we will present process-variation-induced failures (read, write, access, hold) in 6T cells and introduce various self-tuning and self-healing schemes to improve memory yield in scaled technologies. Other bitcell configurations for improved memory stability over a wide dynamic range of supply voltage will also be presented. In the second part of the talk, we will consider double-gate technologies such as FinFETs and technology/circuit co-design for SRAMs.
Kaushik Roy,
Purdue University

Many approaches have been introduced to address the concerns regarding both active and standby power. Yet none of these provides a persistent answer that extends into the foreseeable future. Going to the next step will require us to venture in some new directions, some of which may be quite unorthodox. In this presentation, we will browse some of the opportunities that may arise through ultra-low-power design and outline some potential solutions.
Dr. Jan Rabaey
Donald O. Pederson Distinguished Professor
University of California Berkeley

In this presentation, Dr. Richard Newton introduces us to Bio Design Automation (BDA), or Synthetic Biology: the practice of assembling new living systems on a biological substrate to perform a specific function. It is a new field with the potential to revolutionize our world just as microelectronics has done in our lifetime.
Dr. A. Richard Newton,
Late Dean and Roy W. Carlson Professor of Engineering
University of California, Berkeley

There are two great truths in design: if it's not tested, it's broken; and if it's not simple, it's broken. This talk focuses on aspects of both issues as they apply to software code development.
Mike Keating, Synopsys Fellow

In this presentation, Dr. Richard Newton presents two major advanced research activities underway at Berkeley and at other major research centers that promise to close that gap: the first at the materials level and the second at the system level.
Dr. A. Richard Newton,
Late Dean and the Roy W. Carlson Professor of Engineering
University of California, Berkeley