
Semi Design Technology & System Drivers Roadmap: Part 6 – DFM

 
December 12th, 2013 by Graham Bell

Andrew B. Kahng, Professor of CSE and ECE at the University of California, San Diego, presented the paper "The ITRS Design Technology and System Drivers Roadmap: Process and Status" at the 50th Design Automation Conference in Austin, TX. This sixth installment of the blog series continues his review of the technology challenges facing the EDA industry and their current status.

6. DFM, VARIABILITY, RESILIENCE

Increasing process variability, mask cost, data size and lithography hardware limitations pose significant design challenges across different abstraction levels. The ITRS Design Chapter first introduced the design for manufacturing (DFM) section in 2005 to discuss DFM requirements and the corresponding solutions. DFM requirements can be broadly classified as (1) fundamental economic limitations, and (2) variability and lithography limitations. Requirements due to economic limitations focus on mask cost, which is a key limiter for SOC innovations coming from small companies and emerging-market entities. Requirements due to variability and lithography limitations include quantified bounds on the variability of supply voltage, threshold voltage, critical dimension, circuit performance and circuit power consumption.

Since variability can cause circuits to exhibit faulty behavior, the DFM section of the 2009 Design Chapter added projections for the circuit-level impacts of variability, focusing on three canonical CMOS logic circuits that are key components of a digital CMOS design: (i) the SRAM bitcell for storage (see footnote 1); (ii) the latch for circuit synchronization (see footnote 2); and (iii) the inverter for logic functions. Failure probabilities for the three canonical circuits in future high-performance technology nodes are obtained by simulating their behavior under the influence of manufacturing process variability. The simulations use the Predictive Technology Model (PTM) [23] with variability estimates down to the 12nm node.
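
As a rough illustration of how such failure probabilities can be estimated, the Monte Carlo sketch below samples an inverter's threshold voltage from a Gaussian, evaluates its delay with an alpha-power-law model, and counts samples whose delay exceeds 10 times the nominal value (the failure criterion of footnote 2). The delay model, the voltages and the deliberately exaggerated sigma are illustrative assumptions, not PTM or ITRS data.

    # Monte Carlo estimate of inverter failure probability under Vth variation.
    # Failure criterion follows footnote 2: delay more than 10x its nominal value.
    # The alpha-power delay model and every numerical value below are illustrative
    # assumptions, not ITRS or PTM data.
    import numpy as np

    rng = np.random.default_rng(0)

    VDD = 0.9          # supply voltage (V), assumed
    VTH_NOM = 0.30     # nominal threshold voltage (V), assumed
    SIGMA_VTH = 0.15   # Vth sigma (V); exaggerated so failures appear in a small sample
    ALPHA = 1.3        # alpha-power-law velocity-saturation exponent, assumed
    N = 1_000_000      # number of Monte Carlo samples

    def delay(vdd, vth, alpha=ALPHA):
        """Alpha-power-law gate delay (arbitrary units): t ~ Vdd / (Vdd - Vth)^alpha."""
        overdrive = np.maximum(vdd - vth, 1e-9)   # clamp to avoid division by zero
        return vdd / overdrive**alpha

    t_nominal = delay(VDD, VTH_NOM)
    vth_samples = rng.normal(VTH_NOM, SIGMA_VTH, N)
    failures = delay(VDD, vth_samples) > 10.0 * t_nominal

    print(f"estimated failure probability: {failures.mean():.2e}")

In practice, realistic failure rates for these circuits are far too small to resolve with plain Monte Carlo of this kind, so rare-event techniques such as importance sampling are typically used instead.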

The revised DFM discussion in the 2011 ITRS observes that the SRAM failure rate has already become a significant problem at the current technology node. Furthermore, although the latch has a lower failure rate than the SRAM, this circuit, too, is predicted to become problematic at the 20nm foundry node. The 2011 analysis also shows that enlarging circuits (i.e., reverse scaling) can be moderately effective in controlling the impact of variability. Other analyses show that the failure rate can be reduced by more than an order of magnitude when the supply voltage is increased from 90% to 120% of its nominal value, i.e., there is a clear engineering tradeoff between power and robustness.
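
To put a number on the cost side of that tradeoff, the short sketch below applies the standard dynamic-power approximation P ≈ C·Vdd²·f at fixed frequency: raising the supply from 90% to 120% of nominal multiplies dynamic power by roughly (1.2/0.9)² ≈ 1.78, in exchange for the more-than-10x reduction in failure rate cited above. Ignoring leakage and any frequency change is a simplification made only for this estimate.

    # Back-of-the-envelope cost of the voltage-vs-robustness tradeoff described above.
    # Uses the standard dynamic-power approximation P ~ C * Vdd^2 * f at fixed
    # frequency and ignores leakage; both simplifications are assumptions here.
    v_low, v_high = 0.90, 1.20           # supply as a fraction of nominal (from the text)

    power_ratio = (v_high / v_low) ** 2  # dynamic power scales with Vdd^2
    print(f"dynamic power increase: {power_ratio:.2f}x")   # ~1.78x
    print("failure-rate reduction reported by the 2011 analysis: >10x")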

Over the eight-year history of the Design Chapter’s DFM section, potential DFM solutions have been divided into three categories: (i) solutions that address fundamental economic limitations; (ii) solutions that address the impact of variability; and (iii) solutions that address the impact of lithography limitations. Among these, early solutions that directly handle variability (e.g., in timing analysis) have emerged as predicted. The embedding of statistical methods throughout the design flow has been slower than initially forecast, but is still viewed as inevitable. DFM techniques that directly model and simulate lithographic non-idealities are becoming more popular, but will take longer to become qualified in production flows as a consequence of their tighter link to manufacturing models.

Footnotes

  1. An SRAM bitcell is considered to be faulty when the SRAM is unable to store the correct logic value during a write operation, or when it fails to preserve the stored logic value during a read operation.
  2. A latch or an inverter is considered to be faulty when its signal delay (e.g., clock-to-output delay for a latch) exceeds 10 times the nominal value.

References

[23] Predictive Technology Model. http://ptm.asu.edu

Copyright Notice

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. DAC’13, May 29 – June 07 2013, Austin, TX, USA. Copyright 2013 ACM 978-1-4503-2071-9/13/05 …$15.00.
