Real Talk

Archive for March, 2013

SoC Verification Can be Cold as Ice

Thursday, March 28th, 2013

As vice president of business development for Breker Verification Systems, I meet with loads of verification engineers and development teams and always walk away with new insights. Any market analyst who wants help identifying a new or emerging trend in chip design and verification should network with business managers like me. All too often, we’re watching a chip verification shipwreck on par with the sinking of the Titanic, leaving us “Cold as Ice,” as Foreigner intoned in 1977.

You’re cold as ice

You’re willing to sacrifice your chip

This little refrain was playing in my head recently as I was driving away from a painful meeting with a development team working on the verification of a complicated system-on-chip (SoC) design. All was not going well. While the SoC design looked flawless and taped out with no problems, early samples of the chip were not working as expected in some scenarios. The verification engineers weren’t “cold” because they didn’t care; they were close to the iceberg and didn’t realize it.

You never take advice

Someday you’ll pay the price

I know

This team, like so many others, got sucked into a “stitch and ship” mentality that, like an iceberg, could sink its corporate ship. While the electronics industry has benefited from reusing blocks of intellectual property (IP), it’s not a panacea. An IP block with a well-defined function can be reused in multiple designs and shared among numerous development teams or companies. All too often, though, development teams assume that because the IP, fabric and memory subsystem have been tested individually, the entire flow will work as intended. After all, if each IP block has been tested and works, it might seem as if the software should be able to stitch them together into a production-worthy device. (more…)

Part Six: Clock and Reset Ubiquity – A CDC Perspective

Thursday, March 21st, 2013

C. The need for reset signals to be asynchronously asserted and synchronously de-asserted.

Although it appears that use of asynchronous resets is preferred due to the ability to reset a subsystem without an active clock edge, there is still a catch. Asynchronous resets are, by definition, asynchronous both during assertion and de-assertion of reset. The assertion, as discussed earlier, does not pose an issue as it is independent of the clock signal. However, the de-assertion is still subject to meeting reset recovery times. The reset recovery time is similar to a setup timing condition on a flip-flop; it defines the minimum amount of time between the de-assertion of reset and the next active clock edge.

Figure 9. Waveforms depicting reset recovery time

If the asynchronous reset is de-asserted near the active edge of the clock and violates the reset recovery time, it could cause the flip-flop to go metastable, resulting in potential loss of the reset value of the flip-flop. A non-deterministic reset value defeats the whole purpose of using a resettable flip-flop. Hence, a fully asynchronous reset is also not a viable reset solution for systems with multiple clock domains.

As described above, synchronous resets have issues during reset assertion and asynchronous resets have issues during reset de-assertion. To overcome these obstacles, an ideal solution is to combine the best of both worlds: use a scheme that involves asynchronous assertion yet synchronous de-assertion of reset. (more…)
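A minimal Verilog sketch of such a scheme, the familiar two-flop reset synchronizer (module and signal names are illustrative, not taken from the article):

module reset_sync (
  input  wire clk,
  input  wire rst_n_in,   // asynchronous, active-low reset from the system
  output wire rst_n_out   // asserts asynchronously, de-asserts synchronously
);
  reg sync_ff1, sync_ff2;

  always @(posedge clk or negedge rst_n_in)
    if (!rst_n_in) begin
      sync_ff1 <= 1'b0;      // assertion propagates immediately, no clock edge needed
      sync_ff2 <= 1'b0;
    end else begin
      sync_ff1 <= 1'b1;      // de-assertion ripples through two flops,
      sync_ff2 <= sync_ff1;  // so downstream flops meet recovery time after a clean clock edge
    end

  assign rst_n_out = sync_ff2;
endmodule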

The BIG Change in SoC Verification You Don’t Know About

Thursday, March 21st, 2013

Ed Sperling, Editor-in-Chief of System-Level Design, recently did a follow-on video interview after his Experts At The Table: Verification Strategies roundtable.  Below is Ed’s introduction to the video interview and the question he posed to Pranav Ashar, CTO at Real Intent.  To hear Pranav’s answer, click on the embedded video (which starts at 3:56).

“When you think about the most complex SoCs that are going out the door these days, at 28 and 20nm, it’s a wonder that they still work.  A good part of the reason is that they are verified very effectively.  Verification traditionally has been 50% to 70% of the NRE that goes into designing these chips and that has not changed.  But, the size of the chips and the complexity has grown significantly.  So here to discuss what is going on in verification today we have:

  • Janick Bergeron, verification fellow at Synopsys
  • Harry Foster, chief verification scientist at Mentor Graphics
  • Pranav Ashar, chief technology officer at Real Intent
  • Raik Brinkmann, president and CEO of OneSpin Solutions
  • Tom Anderson, vice president of marketing at Breker Verification Systems

So Pranav, from your perspective what is the big change or big changes that have happened in verification in the past couple of years as we have rising complexity in a chip?” (more…)

Ascent Lint Rule of the Month: COMBO_NBA

Thursday, March 14th, 2013

One of the first things you learn about when modeling logic in Verilog is to avoid race conditions.  You can do this by coding clocked registers with non-blocking assignments. So why not make life simple, and use non-blocking assignments for combinational logic too?

Let’s back up a bit and review the basics:
A problem occurs when the target of one register assignment feeds into the assignment for the next register stage. Without some kind of delay, a value could ‘race’ from one assignment right through the next register stage in the same instant of simulation time.

always @(posedge clk)
  bb = f1(aa);  // When clk rises, bb is determined by aa

always @(posedge clk)
  cc = f2(bb);  // In the same simulation instant, cc could pick up the new value of bb. This is not what we want!

(more…)
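As noted above, the conventional fix is to code the clocked registers with non-blocking assignments. A minimal sketch, reusing the signal names from the snippet above (f1 and f2 stand in for arbitrary combinational functions):

always @(posedge clk)
  bb <= f1(aa);  // non-blocking: the update to bb is scheduled, not immediate

always @(posedge clk)
  cc <= f2(bb);  // cc samples the old bb, so a value no longer races through two stages in one clock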

System-Level Design Experts At The Table: Verification Strategies – Part One

Thursday, March 14th, 2013

On February 28, 2013, Ed Sperling, Editor-in-Chief of System-Level Design, sat down to discuss verification strategies and changes with Harry Foster, chief verification scientist at Mentor Graphics; Janick Bergeron, verification fellow at Synopsys; Pranav Ashar, CTO at Real Intent; Tom Anderson, vice president of marketing at Breker Verification Systems; and Raik Brinkmann, president and CEO of OneSpin Solutions. Part One of the discussion is presented below.


Part Five: Clock and Reset Ubiquity – A CDC Perspective

Thursday, March 7th, 2013

Consider an asynchronous reset control that crossed clock domains but was not synchronously de-asserted, causing a glitch on the control lines to an FSM.

The scenario above is at the confluence of the following three design requirements, and resulted in a failure when one of them was not met:

A. The need for multiple clock domains in the design that can be independently reset.
B. The need to use flip-flops that are asynchronously reset.
C. The need for reset signals to be asynchronously asserted but synchronously de-asserted.

Let us delve deeper into each of these design requirements in order to understand the context of the failure.

A. The need for multiple clock domains in the design that can be independently reset.

In the event of failure, a hardware reset is a necessity to restore the system to a known initial state from which it can start functioning deterministically. Power-cycling a modem is a classic example of allowing enough time for a system reset to propagate to all sub-systems, some of which might be operating at different clock frequencies. From a verification standpoint, since each of these subsystems typically is designed and verified separately, the presence of a reset in each subsystem enables effective block-level verification by ensuring that the design is in a known state for simulation.

It is good design practice for every flip-flop in a design to be resettable. However, to extract higher performance in functional mode, certain parts of the design (e.g., pipeline registers) may themselves not be resettable, relying instead on their upstream registers being reset. In such cases, the design takes more clock cycles to reach a known state, because the reset values must propagate down from the upstream registers. This is often an acceptable tradeoff, but one that system designers need to be cognizant of when determining the reset strategy for the SoC.
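A small illustrative sketch of this tradeoff (signal names are hypothetical, not from the article): the upstream register is resettable, while the pipeline register behind it is not and only reaches a known value one clock after reset de-assertion.

reg stage0, stage1;

// Upstream register: asynchronously resettable
always @(posedge clk or negedge rst_n)
  if (!rst_n)
    stage0 <= 1'b0;
  else
    stage0 <= data_in;

// Pipeline register: not resettable; it holds a known value only after
// stage0's reset value has propagated through on the next clock edge
always @(posedge clk)
  stage1 <= stage0;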

Several benefits stem from the ability to independently reset subsystems, some of which are listed below; a small structural sketch follows the list:

  1. Managing functional complexity of the system
  2. Avoiding the long latency of a system-wide reset in the event of a subsystem failure
  3. Being able to run simulations on a subsystem level prior to integration
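A rough structural sketch of this idea (all module and signal names are hypothetical): each subsystem runs on its own clock with its own reset, so either can be reset, or verified at block level, without disturbing the other.

module subsys #(parameter WIDTH = 8) (
  input  wire             clk,
  input  wire             rst_n,
  output reg [WIDTH-1:0]  count
);
  // A resettable counter stands in for the real subsystem logic
  always @(posedge clk or negedge rst_n)
    if (!rst_n) count <= {WIDTH{1'b0}};
    else        count <= count + 1'b1;
endmodule

// Two instances on independent clocks and resets: either subsystem can be
// reset on its own without disturbing the other
module soc_top (
  input  wire       clk_a, rst_a_n,
  input  wire       clk_b, rst_b_n,
  output wire [7:0] count_a, count_b
);
  subsys u_a (.clk(clk_a), .rst_n(rst_a_n), .count(count_a));
  subsys u_b (.clk(clk_b), .rst_n(rst_b_n), .count(count_b));
endmodule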