Thursday, February 13th, 2014
At last year's Design and Verification Conference (DVCon) in San Jose, Real Intent sponsored a panel on “Where Does Design End and Verification Begin?”
The panel was moderated by Brian Hunter of Cavium, Inc. The panelists were:
Pranav Ashar – Real Intent, Inc.
John Goodenough – ARM, Inc.
Harry Foster – Mentor Graphics Corp.
Oren Katzir – Intel Corp.
Gary Smith – Gary Smith EDA
Brian opened the panel with the following remarks and then asked a series of questions. Below are links into the video recording where each question is asked, along with the immediate replies and comments from the panelists. If you don’t have time to listen to the full 20 minutes, jump down to Questions 6, 7 and 8 to see the highlights of part 1. Next week we present part 2.
(Brian Hunter begins) Our topic today is the blurring lines between design and verification. Most people know that verification schedules are the long pole in the tent, that they are getting too long, and that we need designers to take on a larger role in the verification process. (more…)
Thursday, February 6th, 2014
Ed Sperling, Editor-in-Chief of SemiEngineering.com spoke with Dr. Roger B. Hughes, Director of Strategic Accounts at Real Intent, about what’s changing in verification as design complexity increases and where engineers typically make mistakes.
Thursday, January 30th, 2014
I originally wrote and posted this blog here on SOCcentral.com. It is reproduced below.
January 22, 2014 — As SOC design crosses the billion-gate threshold, the cost of errors grows dramatically. The demand that engineers ensure their work is as correct as possible — and as soon as possible — in the design process has become more insistent. Letting errors slip forward one stage closer to implementation means their impact will grow while their causes become obscured and success is delayed. The design sign-off process itself has grown more complex, and the register-transfer level (RTL) is now where sign-off begins.
A starting point for the sign-off regimen is verification of the timing behavior of the heterogeneous IP used in an SOC and how the IP interfaces with the host design, including how clocks and signals cross any interfaces. Clocking schemes must be defined so that static analysis can begin before the design reaches the simulation stage. However, before timing analysis and simulation begin, designs must be cleaned using Lint tools.
Modern Lint tools have evolved to the point where they can handle full-chip designs and yet still offer concise hierarchical reporting. The availability of low-noise reporting means less time waiving violations and more time cleaning up easy-to-fix issues. Because of the lower noise, designers can use the tool earlier and more often. However, an RTL Lint tool requires only rule setup and, therefore, cannot provide a deep analysis.
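To make the kinds of issues concrete, here is a hypothetical Verilog fragment of the sort a Lint tool flags early in the flow; the module and signal names are purely illustrative and not drawn from any particular rule deck:

```verilog
// lint_example.sv -- illustrative only; names are hypothetical.
module lint_example (
  input  logic        sel,
  input  logic [7:0]  a,
  input  logic [3:0]  b,
  output logic [7:0]  y
);
  // Width mismatch: b is 4 bits wide but participates in an 8-bit
  // addition; most lint rule sets flag the implicit zero-extension.
  //
  // Inferred latch: y is not assigned on the implicit else branch
  // of this combinational block, so synthesis would infer a latch.
  always_comb begin
    if (sel)
      y = a + b;
  end
endmodule
```

Both findings are structural, which is why a Lint tool can report them from rule setup alone, without testbenches or formal properties.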
Thursday, January 30th, 2014
Qun Li, Senior Staff Engineer at Real Intent with Michiel Ligthart, President and COO of Verific
Just this week, Michiel Ligthart from Verific dropped by for a visit. Real Intent has enjoyed a long-term winning collaboration with their team and has used Verific’s leading HDL front-end for many years. They continue to keep us up to date with the latest developments in the SystemVerilog language.
Michiel brought a gift for one of our software developers, Qun Li, who has been supporting the integration of Verific software into our verification suite. He surprised her with the Verific mascot: a huge stuffed giraffe!
A special thanks to Michiel for honoring Qun and we look forward to more success in the years ahead.
Thursday, January 23rd, 2014
In Parts One and Two, we discussed the use of structural and formal checks when there is a fast-to-slow transition in a clock domain crossing. In this blog, we will present the third and final step using a design’s testbench.
The next step in the verification process of fast-to-slow clock domain crossings is to do metastability-aware simulation on the whole design. When running a regular simulation test bench, there is no concept of what could happen to the design if there was metastability present in the data or control paths within the design. One of the key reasons for doing CDC checks is to ensure that metastability does not affect a design. After structural analysis ensures that all crossings do contain synchronizers, and formal analysis ensures that the pulse width and data are stable, a whole-chip metastability-aware simulation is needed to see if the design is still sensitive to metastability. Functional monitors and metastability checkers are shown in Figure 7. No changes are made to the design, and the necessary monitors and checkers are written in an auxiliary Verilog simulation test bench file. This auxiliary file is simply referred to by the original simulation test bench file to invoke the metastability checking. As a prerequisite, this step requires that the design have a detailed simulation test bench.
Figure 7 – Metastability-aware simulation checks the tolerance of downstream logic to the presence of jitter in the data path through the use of functional monitors and CDC checkers.
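As a sketch of how such an auxiliary file might look, the monitors can randomly hold a synchronizer's first flop at its old value for one extra destination-clock cycle whenever its input toggles, then assert that downstream behavior still holds. All hierarchical paths, signal names, and the handshake property below are illustrative assumptions, not details from the post:

```verilog
// metastab_aux.sv -- hypothetical auxiliary testbench file; the
// tb.dut.* hierarchical names are placeholders for a real design.
module metastab_aux;
  // Metastability injector: when the synchronizer input toggles,
  // randomly capture the old value for one extra cycle, modeling
  // the delayed settling that metastability can introduce.
  logic d_prev;
  initial forever begin
    @(posedge tb.dut.clk_dst);
    if (tb.dut.sync_in !== d_prev && $urandom_range(0, 1)) begin
      force tb.dut.sync_ff1 = d_prev;   // hold the old value
      @(posedge tb.dut.clk_dst);
      release tb.dut.sync_ff1;          // settle to the new value
    end
    d_prev = tb.dut.sync_in;
  end

  // Functional monitor: the downstream handshake must still
  // complete even when the control pulse arrives a cycle late.
  property req_eventually_acked;
    @(posedge tb.dut.clk_dst) tb.dut.req |-> ##[1:4] tb.dut.ack;
  endproperty
  assert property (req_eventually_acked)
    else $error("handshake failed under injected metastability");
endmodule
```

The original simulation testbench simply compiles this file alongside the design, which is why no changes to the design source are needed.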
Thursday, January 2nd, 2014
There were 5 key developments that stood out for me in 2013, and I have 4 predictions for 2014 that I think would be of interest to the EDACafe audience.
- We are now in the world of 8-core processors. Both the new Xbox One and the Sony PS4 gaming systems employ big 8-core AMD CPUs. And MediaTek has announced the MT6592, the first cell-phone chip that uses 8 ARM Cortex-A7 processors running simultaneously at 2GHz. I know, I know, you are asking yourself “do I really need 8-core functionality in my pocket?” Probably not this week, but soon you will wonder, “How did I live without it?”
- The 50th anniversary of the design automation industry was celebrated this year with a gala event in Silicon Valley to raise money for the EDA Oral History Project. The EDA Consortium’s “Back to the Future Event” had more than 250 in attendance and had a wonderful vibe. There was even a psychedelic time tunnel. Everyone I spoke to afterwards told me what a good time they had. Kudos go out to Kathryn Kranen, Chair of EDAC, and to Bob Gardner and Jennifer Cermak, the staff members who put it all together with a host of sponsors, including Jill Jacobs from MOD Marketing. The industry truly is founded on, and runs on, the genius and talent of so many people. Click on the link to find out more: http://www.edac.org/events/back_to_the_future/presentation (more…)
Thursday, December 19th, 2013
May you enjoy health, happiness, and peace in this holiday season and through the coming year!
From the staff at Real Intent
Thursday, December 19th, 2013
Andrew B. Kahng, Professor of CSE and ECE at the University of California, San Diego, presented a paper on “The ITRS Design Technology and System Drivers Roadmap: Process and Status” at the 50th Design Automation Conference in Austin, TX. This important review of the technology challenges facing the EDA industry, and of their current status, is presented below in this final post of a blog series.
7. CONCLUDING THOUGHTS
The Design Chapter in the ITRS has for well over a decade defined technology requirements and design challenges for the EDA industry and the VLSI CAD research community. Design technology roadmaps for DFM, low-power design, 3D/TSV integration, More Than Moore, etc. are continually added to maintain relevance of the roadmap. Recent Design Cost and Low-Power Design models highlight the challenges of design productivity, software design cost, and power management in future SOC and MPU designs. At the same time, the System Drivers Chapter has provided models for key market drivers as well as basic chip parameters (layout density, clock frequency, power dissipation, etc.) that bind the ITRS together via the Overall Roadmap Technology Characteristics. The MPU driver model has evolved frequency and power attributes in response to disappearing microarchitectural knobs, emergence of power limits, and challenges of device leakage; further changes (adding uncore elements, evolution of MPU-PCC for micro-server, updated die area modeling) are likely in the near future. The past decade has also seen increased reliance on “design-based equivalent scaling” (e.g., methods for activity factor reduction without compromising throughput or performance) to continue the semiconductor value proposition, and rapidly growing involvement in cross-TWG issues ranging from variability limits to device requirements.
The future of design technology roadmapping, and of the Design TWG’s work in the ITRS, will be affected by a variety of technical, business and cultural factors.
- Past foundations of the ITRS seem increasingly shaky. For example, A-factors may no longer be constant across multiple technology nodes. Mx and poly pitches (i.e., horizontal vs. vertical densities) may scale at different rates. The fundamental assumption of 2× density scaling per node may already be long past; whether the industry can flourish with, e.g., 1.4× density scaling per node is an open question.
- Tremendous uncertainty with respect to patterning technology (e.g., timing of EUV, directed self-assembly), cost models (e.g., triple- and quadruple-patterning), device and interconnect structures and properties (tunnel FETs, resistive RAMs, drive vs. leakage currents), and high-value applications all present challenges to the roadmapping of design technology requirements.
- Fewer resources are available for ITRS activity even as the scope of the roadmap widens (MEMS, More Than Moore, new storage and switch elements, 3D integration) and the difficulty of the roadmapping task increases. Greater automation is needed to check consistency and impacts of proposed roadmap changes, a la the “Living ITRS” efforts of a decade ago.
- An oligopolistic EDA industry, along with continued consolidation and disaggregation in the semiconductor industry, as well as unwillingness to share competitive (as opposed to pre-competitive) data, (see footnote 1) means that leading companies more frequently “opt out” of roadmap participation. There is a risk of a “vicious cycle” of decreased roadmap participation and decreased roadmap value.
- Communication across supplier industries, across the design manufacturing interface, and across academia-industry boundaries is increasingly needed to optimize technology investments and maximize the returns from the roadmapping process. As the industry faces an explosion of post-CMOS, post-optical technology options, it seems appropriate to at least revisit the concept of “shared red bricks”.
Against this backdrop, there is some good news: Members of the design, EDA and research communities are willing to find common cause in the design technology roadmap. At the 2009 and 2010 EDA Roadmap Workshops, representatives from leading EDA companies, semiconductor companies, and research consortia commenced a dialogue to analyze needs and status of EDA roadmapping. See footnote 2. Other discussions sought new mechanisms by which more of the community could contribute to the design technology roadmap. And the really good news for EDA and VLSI CAD: If anything remains essential to the future of Moore’s Law scaling, it will be design technology, and design-based equivalent scaling.
Dr. Juan-Antonio Carballo has co-chaired the U.S. and International Design TWGs with me for the past decade, and has been particularly influential in the conception of the System Drivers Chapter as well as iNEMI and More Than Moore interactions. Dr. Kwangok Jeong developed and maintained the MPU, power, frequency and A-factor models during the critical years of 2007-2011, which saw many Design-PIDS interactions regarding roadmap for device power vs. performance. This paper would not exist without the help of UCSD Ph.D. students Tuck-Boon Chan, Siddhartha Nath, Wei-Ting Jonas Chan, and Ilgweon Kang. Many participants in the ITRS Design and System Drivers efforts, and in the overall ITRS effort, have contributed valuable insights and perspectives over the years. I also thank Dr. Sani Nassif (who has for years driven the DFM section of the Design Chapter) for organizing the special session which led to the writing of this paper.
- It is suboptimal for students at UCSD to “predict” designs and cell libraries that industry has already developed, or for students at Purdue to develop ab initio models for device structures that again have already been developed. Yet, these are the mechanisms by which core material and data is generated in the ITRS today.
- The 2009 workshop addressed such questions as “What would make an EDA roadmap more useful?”, “Which EDA areas lack most in roadmap efforts?”, and “Which EDA areas are behind what the roadmaps say?” The 2010 workshop then identified gaps in the EDA roadmap (system-level executable specification, design-space exploration and pathfinding, EDA scaling requirements in light of evolving computing platforms, power-driven design, and design for resilience), reached agreement on the nature of EDA, and identified challenges in filling in the EDA roadmap gaps (incremental design flows, new design-for-cost methodologies, and an expanded scope of EDA moving to system-level design).
A. E. Caldwell, Y. Cao, A. B. Kahng, F. Koushanfar, H. Lu, I. L. Markov, M. R. Oliver, D. Stroobandt and D. Sylvester, “GTX: The MARCO GSRC Technology Exploration System”, Proc. DAC, 2000, pp. 693-698.
EDA Roadmap Workshop at DAC 2010. http://vlsicad.ucsd.edu/EDARoadmapWorkshop/
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. DAC’13, May 29 – June 07 2013, Austin, TX, USA. Copyright 2013 ACM 978-1-4503-2071-9/13/05 …$15.00.
Thursday, December 12th, 2013
Real Intent had an excellent Q4 and 2013! Since our September newsletter, we have announced a new release of Meridian Constraints (details below), marked another year of strong business growth, and seen many of you at trade shows in Silicon Valley, Japan, Israel, Germany and the United Kingdom. In the December newsletter, learn about what the future holds and how smart debug delivered better RTL verification.
To see the Q4 news, year-end summary and the new videos, click here.
Thursday, December 12th, 2013
Andrew B. Kahng, Professor of CSE and ECE at the University of California, San Diego, presented a paper on “The ITRS Design Technology and System Drivers Roadmap: Process and Status” at the 50th Design Automation Conference in Austin, TX. This important review of the technology challenges facing the EDA industry, and of their current status, is presented below in this sixth part of a blog series.
6. DFM, VARIABILITY, RESILIENCE
Increasing process variability, mask cost, data size and lithography hardware limitations pose significant design challenges across different abstraction levels. The ITRS Design Chapter first introduced the design for manufacturing (DFM) section in 2005 to discuss DFM requirements and the corresponding solutions. DFM requirements can be broadly classified as (1) fundamental economic limitations, and (2) variability and lithography limitations. Requirements due to economic limitations focus on mask cost, which is a key limiter for SOC innovations coming from small companies and emerging-market entities. Requirements due to variability and lithography limitations include quantified bounds on the variability of supply voltage, threshold voltage, critical dimension, circuit performance and circuit power consumption.