Memory design challenges require giga-scale SPICE simulation

Embedded memory now consumes most of the area on an SoC. Complexity and circuit size continue to grow, bringing smaller process geometries, larger designs, and tighter design margins, which in turn demand larger and more accurate post-layout simulation. As memory designers contend with these growing circuit sizes, "giga-scale SPICE simulation" has emerged as the term for this trend in large-scale circuit design. The trend is driven in large measure by FastSPICE's inability to meet the simulation accuracy challenge, while traditional SPICE simulators are limited by their capacity.

The semiconductor industry is calling for high-capacity, high-performance parallel SPICE simulators to support giga-scale circuit simulation, primarily for memory applications and leading-edge technologies such as 16/14nm FinFET. Newly emerging giga-scale SPICE simulators meet this need: they can simulate complex designs with a pure SPICE engine, including memory blocks or full-chip circuits of more than one billion elements, with no loss of accuracy, while delivering simulation performance competitive with a FastSPICE simulator.

For large-scale memory simulation and verification, memory designers have typically had to trade accuracy against performance and capacity: FastSPICE simulators offer speed and capacity at the expense of accuracy. New giga-scale SPICE simulators combine pure SPICE accuracy with FastSPICE-class capacity and performance, readily handling today's largest design sizes.

Let's look at several successes. A giga-scale SPICE simulator was recently used for a 495-million-element post-layout SRAM simulation to verify full-chip leakage power. The entire simulation finished in 11.37 hours on a 24-core compute server, consuming 69.1 GB of memory. By comparison, an aggressive FastSPICE run took 15.3 hours on the same simulation and produced inaccurate results, while a conservative FastSPICE run took 35.5 hours and 173 GB of memory to achieve reasonable verification results. The giga-scale SPICE simulator thus beats FastSPICE on accuracy and delivers reliable results without the need to experiment with complicated partitioning and other FastSPICE options, while also providing competitive or better performance and lower memory consumption. It is a natural choice for memory designers when accuracy is a concern in memory characterization and verification.
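The comparison above can be made concrete with a little arithmetic on the figures quoted in the article. The sketch below simply computes the speedup and memory ratios; the numbers are taken directly from the cited 495-million-element SRAM benchmark.

```python
# Benchmark figures quoted above for the 495M-element post-layout
# SRAM leakage run ("giga" = giga-scale SPICE simulator).
giga_hours, giga_mem_gb = 11.37, 69.1
fast_aggressive_hours = 15.3                  # produced inaccurate results
fast_conservative_hours, fast_conservative_mem_gb = 35.5, 173.0

# Ratios relative to the giga-scale SPICE run
speedup_vs_aggressive = fast_aggressive_hours / giga_hours
speedup_vs_conservative = fast_conservative_hours / giga_hours
memory_ratio = fast_conservative_mem_gb / giga_mem_gb

print(f"{speedup_vs_aggressive:.2f}x faster than aggressive FastSPICE")
print(f"{speedup_vs_conservative:.2f}x faster than conservative FastSPICE")
print(f"{memory_ratio:.2f}x less memory than conservative FastSPICE")
```

In other words, the giga-scale run was roughly 1.35x faster than the aggressive FastSPICE run and roughly 3.1x faster, with about 2.5x less memory, than the conservative run that actually produced usable results.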

In another example of the capabilities of a giga-scale SPICE simulator, it simulated a full-chip SRAM circuit with 109 million elements in 6.54 hours on a 32-core compute server, consuming 44.6 GB of memory. One more example: the same simulator finished simulating a 576-million-element DRAM within eight hours, consuming only 14.8 GB of memory.
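The three giga-scale runs cited above can also be compared on throughput. This is illustrative arithmetic only, using the element counts and wall-clock times from the article; actual throughput depends on circuit type, analysis, and hardware, so the figures are not directly comparable across runs.

```python
# Throughput of the three giga-scale SPICE runs cited in the article:
# (element count, wall-clock hours)
runs = {
    "495M-element SRAM (24 cores)": (495e6, 11.37),
    "109M-element SRAM (32 cores)": (109e6, 6.54),
    "576M-element DRAM": (576e6, 8.0),
}
for name, (elements, hours) in runs.items():
    print(f"{name}: {elements / hours / 1e6:.1f} M elements/hour")
```

The spread (roughly 17 to 72 million elements per hour) underlines that raw element count alone does not determine runtime.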

Other commercially available parallel SPICE simulators were unable to handle any of these sample cases.

Now, readers may be wondering which new giga-scale SPICE simulator can accomplish this rate of success. I'm pleased to report that it's ProPlus' newly announced NanoSpice Giga, which offers accuracy, performance and capacity. As a pure SPICE simulator, it solves the full matrix and evaluates analytical device models without approximation, and supports the full set of SPICE analysis features along with industry-standard inputs and outputs. With a newly designed database for efficient memory handling and innovative matrix solver technology, NanoSpice Giga is the only SPICE simulator that can handle memory circuits of one billion or more elements with superior performance.

NanoSpice Giga was created primarily for large-scale memory circuit simulation, including the characterization of large embedded SRAM blocks and the simulation and verification of memory integrated circuits (ICs) such as SRAM, DRAM and flash memory, tasks that have typically been the domain of FastSPICE simulators. With shrinking technology nodes, lower supply voltages and the growing impact of process variation, memory circuit simulation demands better accuracy, which is often a limitation of FastSPICE simulators.

ProPlus will exhibit at the 51st Design Automation Conference (DAC) in Booth #905, demonstrating its latest DFY solutions for giga-scale SPICE simulation and nano-scale SPICE modeling. Interested readers may reserve a meeting time by emailing dac@proplussolution.com. DAC will be held Monday, June 2, through Wednesday, June 4, from 9 a.m. until 6 p.m. at the Moscone Center in San Francisco. Information about DAC can be found at www.dac.com.
