 Real Talk

Archive for December, 2010

Hardware Emulation for Lowering Production Testing Costs

Monday, December 20th, 2010

The sooner you catch a fault, the cheaper it will be, or so the user surveys tell us.  These surveys, conducted by various data-gathering services, aim to quantify the cost of pinpointing design faults during the creation of chips.  Each one reaches the same conclusion: costs increase by a factor of 10 at each step in the development cycle.

It’s hard to find a better example than the infamous Pentium FDIV bug dating back to 1994.  Because the design fault made its way into a manufactured product, fixing the bug that shipped inside thousands of PCs cost nearly half a billion dollars.  Talk about breaking the budget and tarnishing a stellar technical reputation!

Of course, EDA companies have long touted their design-for-testability (DFT) methodologies.  Thorough and exhaustive functional verification during the development cycle is still a good strategy and an economical way to find and remove design faults, though it’s becoming less practical.  Systems-on-chip (SoCs) are populated with arrays of cores, including CPUs and DSPs, embedded memories, IP peripheral blocks, custom logic and so on.  With all of this, functional verification becomes a major bottleneck before tapeout, reinforcing the industry-wide consensus that functional verification consumes in excess of 70 percent of the development cycle. 

And that may not be enough!  When undertaking functional verification using HDL simulators, the trade-offs between the amount of testing and the time allocated for the task often leave undetected faults inside the design.

Herein lies the conundrum.  Functional verification can detect faults early in the design cycle, reducing the cost of finding them.  And yet, a thorough job of cleaning a design would take so long that its cost would exceed any reasonable budget.

A new generation of hardware emulators is changing all of this.  Unlike traditional emulators that cost small fortunes, limiting ownership and adoption to a few units at large companies with equally large budgets, these new functional verification systems are much more cost effective.  They’re also faster. 

These emulators, implemented in small footprints, are powered by the latest FPGAs and driven by robust software.  They are accessible to SoC engineers and embedded software developers and can be used throughout the design cycle.  Designs target a variety of fast-paced markets, including networking, communications, multimedia, graphics, computing and consumer electronics.

An example is ZeBu from EVE.  It supports a comprehensive test environment to exhaustively exercise all functions of a design.  Its interactive debugging, once the sole province of the software simulator, enables a higher degree of verification and testing than is possible with traditional software tools.

Design teams have finally found a means to uncover those nasty and difficult bugs, saving the budget and making management happy.  These new functional verification tools, such as emulators, offer orders of magnitude more testing than is available with software tools, but for the same financial investment.  Check the recent user surveys and see for yourself.

What do you need to know for effective CDC Analysis?

Friday, December 3rd, 2010

The complexity of clock architectures is growing with larger designs.  Functionality that was traditionally distributed among multiple chips is now integrated into a single chip, so the number of clock domains is increasing.  Power management is a dominant factor that shapes clock architecture (gating, power domains, voltage scaling), and designing for multiple functional modes adds further complexity.  All of these issues add logic to the clock trees, making it ever harder to verify designs for glitch and metastability issues.

There are very few established standards/methodologies for managing clock architectures.  Even the few established standards such as UPF (Universal Power Format) for power management and synthesis for power don’t go far enough to be clock architecture-aware with respect to glitch, data stability and metastability issues.  For example, clock gating insertion is done without full awareness of asynchronous crossings.  In fact, there are a myriad of issues relating to asynchronous clock domains that don’t have established standards.  Some of these are:

  • Single bit synchronizers
  • Asynchronous FIFOs
  • Handshake structures
  • Clock Gating
  • Re-convergence
  • Design practices to mitigate glitches in asynchronous crossings
  • Asynchronous/Synchronous resets crossing domains
  • Reset Gating
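To make the first structure in the list above concrete, here is a small cycle-based model of the classic two-flop single-bit synchronizer.  This is an illustrative Python sketch, not code from any CDC tool: the receiving domain samples the asynchronous input through two flip-flops, so a potentially metastable value in the first stage has a full clock period to resolve before downstream logic sees it.

```python
def two_flop_synchronizer(async_samples):
    """Model of a two-flop single-bit synchronizer.

    async_samples: the asynchronous input as sampled at each
    receiving-clock edge.  Returns the value downstream logic sees
    in each cycle.
    """
    ff1 = 0  # first stage: may go metastable, never used directly
    ff2 = 0  # second stage: the only value downstream logic consumes
    outputs = []
    for sample in async_samples:
        # both flops update on the same clock edge (RHS evaluated first,
        # so ff2 captures the *old* ff1, just as in hardware)
        ff2, ff1 = ff1, sample
        outputs.append(ff2)
    return outputs

# each input sample emerges one edge later, having passed through
# both stages; the metastable-prone first stage is never exposed
print(two_flop_synchronizer([0, 1, 1, 1, 0, 0]))  # [0, 0, 1, 1, 1, 0]
```

The model shows the cost of this structure as well: a crossing pays a latency penalty of up to two receiving-clock cycles, which is one reason multi-bit buses cannot simply be synchronized bit by bit.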

In order to manage the design, implementation and verification of clocks in a design, more members of the design team need to be “clock/reset architecture” and “clock/reset implementation” aware.  This awareness is necessary for verifying correct functionality of the clocks, whether using semi-automatic CDC analysis tools or manual processes such as design reviews.

The clock architecture needs to be understood to generate requirements for the clock/reset networks.  Design standards for implementation can be generated from these requirements.  The design standards drive verification strategy: what can be automated using CDC tools and what must be relegated to other methods.  An example of what cannot be verified by CDC tools is the selection of an invalid combination of clocks in functional mode.
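A check of that last kind is easy to script once the clock architecture is documented.  The sketch below is hypothetical (the mode and clock names are invented for illustration, not taken from the article): it validates each functional mode's clock selection against a table of legal combinations derived from the specification, a review step that sits outside what CDC tools automate.

```python
# legal clock sources per functional mode -- hypothetical names,
# standing in for a table derived from the clock architecture spec
LEGAL_CLOCKS = {
    "mission":   {"pll_200mhz", "pll_100mhz"},
    "low_power": {"osc_32khz", "pll_100mhz"},
}

def check_mode(mode, selected_clocks):
    """Return the clocks selected that the given mode does not allow."""
    return set(selected_clocks) - LEGAL_CLOCKS[mode]

# a legal selection produces no violations...
assert check_mode("mission", ["pll_200mhz", "pll_100mhz"]) == set()
# ...while selecting the slow oscillator in mission mode is flagged
assert check_mode("mission", ["osc_32khz"]) == {"osc_32khz"}
```
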

The following components need to be considered with regard to how they affect clock/reset architecture:

  • Timing:  Static Timing Analysis & Clock Tree Synthesis
  • Mode Selection: Test/Functional Mode, Clock mode select (Multiple Functional Modes), Configuration registers
  • Power: Gating Control, Voltage Scaling
  • Testability: Clocks for Scan, Clocks for At-Speed, BIST, Lock-up latches
  • Quasi-static Domains

The clock/reset architecture specification needs to contain the following details in order to meet the requirements for design implementation and verification:

– CDC Implementation Style and Design Practice

  1. Single Bit Sync
  2. Common Enable Sync (Data Bus)
  3. Fast-to-Slow Crossings (FIFO; gray-code, read-before-write, write-before-read)
  4. Multi-mode crossings (multiple frequency modes;  Data stability)
  5. Data Correlation (Handshake)
  6. Synchronizer cycle jitter management
  7. Re-Convergence management of control bit crossings
  8. Clock Gating management
  9. Internally generated reset management
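Item 3 above hinges on a property worth spelling out.  As an illustrative sketch (not tied to any particular tool), the snippet below shows why asynchronous FIFO pointers cross domains in Gray code: successive values differ in exactly one bit, so a pointer sampled mid-transition in the other domain resolves to either the old or the new value, never to an unrelated third value the way a multi-bit binary counter could.

```python
def bin_to_gray(n):
    """Convert a binary count to its Gray-code equivalent."""
    return n ^ (n >> 1)

# verify the single-bit-change property over a 4-bit FIFO pointer,
# including the wrap-around from 15 back to 0
for i in range(16):
    diff = bin_to_gray(i) ^ bin_to_gray((i + 1) % 16)
    assert bin(diff).count("1") == 1  # exactly one bit flips per increment
```

Because only one bit can be in flight at any sampling instant, each pointer bit can be taken through an ordinary synchronizer and the reassembled pointer is always a value the counter actually held.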

– Clock Domain Specifications

  1. Synchronous Domains
  2. Asynchronous Domains
  3. Quasi-static Domains (very slow clocks)
  4. Exclusive Domains (clocks that are active when other related domains are static, such as configuration register writing)
  5. Resets and their Domains

– Functional Mode Configuration Specifications

  1. Mode Control Pins and logic states
  2. Configuration Registers settings
  3. For multiple functional modes, mode control settings

– Primary Input/Black Box Specifications

  1. Clock domains for the primary inputs
  2. Clock domains for black box outputs

– Design Initialization Specifications

  1. How to initialize the design (critical for CDC verification that requires formal verification)


The above specifications are critical to an accurate setup for CDC analysis, which in turn yields a complete and accurate analysis.  This will minimize the most frequent complaints about CDC analysis tools: noise (voluminous messages), false violations and incomplete analysis.  Also, by documenting the CDC specifications, all project engineers will be better equipped to review the validity of CDC analysis results.

Even with the best specifications, translating them into constraints for CDC tools requires a robust setup validation methodology to identify missing constraints.  Real Intent’s Meridian CDC tool provides such a setup validation flow, with supporting graphical debug/diagnosis that gives guidance on the completeness and accuracy of constraint specifications.  Ease of setup has been cited as a key consideration by many of our recent customers who have switched to Meridian CDC.

In summary, CDC analysis and verification are increasing in complexity.  Effective CDC analysis requires that designers have detailed knowledge of the design’s clock/reset architecture, so that complete and accurate constraints can be provided to the tools and designers can meaningfully and efficiently review the validity of the results.

A version of this article was previously published by Chip Design.
