 Real Talk

Posts Tagged ‘soc verification’

New CDC Verification: Less Filling, Picture Perfect, and Tastes Great!

Thursday, October 9th, 2014

Real Intent will release its greatly extended Meridian CDC clock-domain-crossing software in November, with new capabilities headlined by more hierarchical firepower and the launch of a user-configurable debugger.

The 2014.A edition, announced last week (on my wife’s birthday), will have 30% higher performance than the existing tool and a 40% smaller memory footprint. The formal analysis engine within Meridian has also been given a 10X boost in throughput.

In the YouTube video interview below, Ramesh Dewangan, vice-president of application engineering, points out that the bottom-up hierarchical flow is key to Meridian CDC’s giga-scale capacity (though the tool is equally capable of handling designs ‘flat’).

The hierarchical approach means that the complete design view of the SoC is available for CDC analysis at all times. No abstraction or approximation is used that could miss bugs; more specifically, there is neither abstract modeling nor reliance on waivers.


ARM Fueling the SoC Revolution and Changing Verification Sign-off

Thursday, October 2nd, 2014

ARM TechCon was in Santa Clara this week, and Real Intent was exhibiting at the event. TechCon was marking its 10th anniversary, and ARM was celebrating the fact that it is at the center of the System-on-Chip (SoC) revolution.

The SoC ecosystem spans the gamut of designs from high-end servers to low-power mobile consumer segments. A large and heterogeneous set of players (foundries, IP vendors, SoC integrators, etc.) has a stake in fostering the success of the ecosystem model. While the integrated device manufacturer (IDM) model has undeniable value in terms of bringing to bear large resources in tackling technology barriers, one could argue that the rapid-fire smartphone revolution we have experienced in the last five years owes in large part to the broad-based innovation enabled by the SoC ecosystem model. How are the changing dynamics of SoCs driving changes in verification requirements, tools and flows and thereby changing the timing sign-off paradigm?

Fundamentals of Clock Domain Crossing: Conclusion

Thursday, August 28th, 2014

In the last post in this series, part four, we looked at the costs associated with debugging and sign-off verification. In this final posting, we propose a practical and efficient CDC verification methodology.

Template recognition vs. report quality trade-off

The first-generation CDC tools employed structural analysis as their primary verification technology. Given the lack of precision of this technology, users are often required to specify structural templates for verification. Given the size and complexity of today’s SoCs, this template specification becomes a cumbersome process in which debugging cost is traded for setup cost. Also, the checking limitations imposed by templates may reduce the report volume, but they also increase the risk of missing errors. In general, template-based checking requires significant manual effort to use effectively.


SoCcer: Defending your Digital Design

Thursday, August 14th, 2014

Weird things can happen during a presentation to a customer!

I was visiting a customer site giving an update on the latest release of our Ascent and Meridian products. It was taking place during the middle of the day, in a large meeting room, with more than 30 people in the audience. Everything seemed to be going smoothly.

Suddenly there was an uproar, with clapping and cheers coming from an adjacent break room. Immediately, everyone in my audience opened their laptops, and grinned or groaned at the football score.

The 2014 FIFA World Cup soccer championship game was in full swing!

As Germany scored at will against Brazil, I lost count of the reactions by the end of the match! The final score was a crushing 7-1.

It disturbed my presentation alright, but it also gave me some food for thought.

If I look at SoC design as a SoCcer game, the bugs hiding in the design are like potential scores against us, the chip designers. We are defending our chip against bugs. Bugs could be related to various issues with design rules (bus contention), state machines (unreachable states, dead code), X-optimism (X propagating through X-sensitive constructs), clock domain crossing (re-convergence or glitches on asynchronous crossings), and so on.
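
To make the X-optimism case concrete, here is a minimal, hypothetical SystemVerilog fragment (the module and signal names are mine, not from any real design): if sel powers up as X, RTL simulation of the if-else treats the X as false and quietly takes the else branch, while the real silicon could resolve either way.

```systemverilog
// Hypothetical illustration of X-optimism through an X-sensitive construct.
module x_optimism_example (
  input  wire clk,
  input  wire sel,   // imagine this comes from an uninitialized register
  input  wire a, b,
  output reg  q
);
  always @(posedge clk) begin
    if (sel)
      q <= a;
    else
      q <= b;   // an X on 'sel' silently lands here in simulation
  end
endmodule
```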

Bugs can be found quickly, when the attack formation of our opponent is easy to see, or hard to find if the attack formation is very complex and well-disguised.

It is obvious that more goals will be scored against us if we are poorly prepared. The only way to avoid bugs (scores against us) is to build a good defense. What are some defenses we can deploy for successful chips?

We need design RTL that is free from design-rule issues, free of deadlocks in its state machines, free from X-optimism and X-pessimism issues, and that employs properly synchronized CDC for both data and resets, with proper timing constraints to go with it.

Can’t we simply rely on smart RTL design and verification engineers to prevent bugs? No, that’s only the first line of defense. We must have the proper tools and methodologies. It is just like soccer: having good players is not enough; you need a good defense strategy that the players will follow.

If you do not use proper tools and methodologies, you increase the risk of chip failure and a certain goal against the design team. That is like inviting a penalty kick. Would you really want to leave your defense to a poor, lone goalkeeper? Wouldn’t you rather build a methodology with multiple defensive resources in play?

So what tools and methodologies are needed to prevent bugs? Here are some of the key needs:

  • RTL analysis (Linting) – to create RTL free of structural and semantic bugs (a small example follows this list)
  • Clock domain crossing (CDC) verification – to detect and fix chip-killing CDC bugs
  • Functional intent analysis (also called auto-formal) – to detect and correct functional bugs well before the lengthy simulation cycle
  • X-propagation analysis – to reduce functional bugs due to unknowns X’s in the design and ensure correct power-on reset
  • Timing constraints verification – to reduce the implementation cycle time and prevent chip killer bugs due to bad exceptions
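
To make the first item concrete, here is a small, hypothetical SystemVerilog example of the kind of semantic issue a lint tool is meant to flag; the module and signal names are illustrative, not taken from any real design.

```systemverilog
// Hypothetical lint finding: incomplete case in a combinational block.
module lint_example (
  input  wire [1:0] mode,
  output reg        enable
);
  always @* begin
    case (mode)
      2'b00: enable = 1'b0;
      2'b01: enable = 1'b1;
      2'b10: enable = 1'b1;
      // 2'b11 is not covered and there is no default, so 'enable' holds its
      // value and synthesis infers an unintended latch -- a classic lint finding.
    endcase
  end
endmodule
```

Catching an inferred latch like this at the lint stage costs seconds; finding it after synthesis, or worse in silicon, is exactly the kind of goal against your team that the analogy warns about.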

Proven EDA tools like Ascent Lint, Ascent IIV, Ascent XV, Meridian CDC and Meridian Constraints meet these needs effectively and keep bugs from crossing the mid-field of your design success.

Next time, there will be no excuse for scores against you (i.e., bugs in the chip). You can defend, and defend well, using proper tools and methodologies.

Don’t let your chips be a defenseless victim like Brazil in that game against Germany!

Executive Insight: On the Convergence of Design and Verification

Thursday, August 7th, 2014

This article was originally published on TechDesignForums and is reproduced here by permission.

Sometimes it’s useful to take an ongoing debate and flip it on its head. Recent discussion around the future of simulation has tended to concentrate on aspects best understood – and acted upon – by a verification engineer. Similarly, the debate surrounding hardware-software flow convergence has focused on differences between the two.

Pranav Ashar, CTO of Real Intent, has a good position from which to look across these silos. His company is seen as a verification specialist, particularly in areas such as lint, X-propagation and clock domain crossing. But talk to some of its users and you find they can be either design or verification engineers.

How Real Intent addresses some of today’s challenges – and how it got there – offer useful pointers on how to improve your own flow and meet emerging or increasingly complex tasks.


Fundamentals of Clock Domain Crossing Verification: Part Four

Thursday, July 31st, 2014

Last time we discussed practical considerations for designing CDC interfaces.  In this posting, we look at the costs associated with debugging and sign-off verification.

Design setup cost

Design setup starts with importing the design. With the increasing complexity of SOCs, designs include RTL and netlist blocks in a Verilog and VHDL mixed-language environment. In addition, functional setup is required for good quality of verification. A typical SOC has multiple modes of operation characterized by clocking schemes, reset sequences and mode controls. Functional setup requires the design to be set up in functionally valid modes for verification, by proper identification of clocks, resets and mode select pins. Bad setup can lead to poor quality of verification results.

Given the complexity of managing the multitude of design tasks, it is highly desirable that there be a large overlap between the setup requirements for different flows. For example, design compilation can be accomplished by processing the existing simulation scripts. Also, there is a large overlap between the functional setup requirements for CDC and those for static timing analysis. Hence, STA setup, based upon Synopsys Design Constraints (SDCs), can be leveraged for cost-effective functional setup.

Design constraints are usually either requirements or properties of your design. You use constraints to ensure that your design meets its performance goals and pin-assignment requirements. Traditionally these are timing constraints, but they can also cover power, synthesis, and clocking.

Fundamentals of Clock Domain Crossing Verification: Part Three

Thursday, July 24th, 2014

Last time we looked at design principles and the design of CDC interfaces.  In this posting, we will look at practical considerations for designing CDC interfaces.

Verifying CDC interfaces

A typical SoC is made up of a large number of CDC interfaces. From the discussion above, CDC verification can be accomplished by executing the following steps in order (two of these checks are sketched as assertions after the list):

  • Identification of CDC signals.
  • Classification of CDC signals as control and data.
  • Hazard/glitch robustness of control signals.
  • Verification of single signal transition (gray coding) of control signals.
  • Verification of control stability (pulse-width requirement).
  • Verification of MCP operation (stability) of data signals.
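
As a concrete illustration of two of these checks (gray coding and control stability), here is a minimal SystemVerilog assertion sketch; the module name, signal names, bus width, and the two-cycle minimum pulse width are illustrative assumptions on my part, not requirements stated in the article.

```systemverilog
// Sketch of source-domain assertions for two of the CDC checks listed above.
module cdc_checks #(parameter W = 4) (
  input wire         tx_clk,   // source-domain clock
  input wire [W-1:0] tx_ptr,   // multi-bit control signal crossing domains
  input wire         tx_req    // single-bit control pulse crossing domains
);
  // Gray coding: at most one bit of the crossing bus changes per source clock.
  property p_gray_coded;
    @(posedge tx_clk) $countones(tx_ptr ^ $past(tx_ptr)) <= 1;
  endproperty
  a_gray: assert property (p_gray_coded);

  // Stability: once asserted, the control signal holds for at least two source
  // clocks so the destination domain cannot miss the pulse.
  property p_min_pulse_width;
    @(posedge tx_clk) $rose(tx_req) |-> tx_req [*2];
  endproperty
  a_pulse: assert property (p_min_pulse_width);
endmodule
```

Bound into the source-domain logic, assertions like these complement structural CDC analysis by checking the protocol obligations that structure alone cannot guarantee.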

All verification processes are iterative: they achieve design quality by identifying design errors, debugging and fixing those errors, and re-running verification until no more errors are detected.


Fundamentals of Clock Domain Crossing Verification: Part Two

Thursday, July 17th, 2014

Last time we looked at how metastability is unavoidable and the nature of the clock domain crossing (CDC) problem.   This time we will look at design principles.

CDC design principles

Because metastability is unavoidable in CDC designs, robust CDC interfaces are required to follow some strict design principles.

Metastability can be contained with “synchronizers” that prevent metastability effects from propagating into the design. Figure 9 shows the configuration of a double-flop synchronizer, which minimizes the load on the metastable flop. The single fan-out protects against loss of correlation because the metastable signal does not fan out to multiple flops. The probability that metastability will last longer than time t is governed by the following equation:
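
$$P(t_{\mathrm{MET}} > t) = e^{-t/\tau}$$

(This is the standard exponential-decay form; τ is the regeneration time constant of the flop, set by its output load and the gain through its feedback loop. The exact notation in Figure 9 may differ.)

The double-flop structure described above is simple enough to capture in a few lines of RTL. The following SystemVerilog sketch is mine, not the article’s figure; the module, clock, and reset names are illustrative, and an active-low asynchronous reset is assumed.

```systemverilog
// Minimal double-flop synchronizer sketch (illustrative names, active-low reset assumed).
module sync_2ff (
  input  wire clk_dst,   // destination-domain clock
  input  wire rst_n,     // destination-domain reset, active low
  input  wire d_async,   // signal arriving from another clock domain
  output reg  q_sync     // output safe to use in the clk_dst domain
);
  reg meta;  // first stage: may go metastable, so it drives only the second flop

  always @(posedge clk_dst or negedge rst_n) begin
    if (!rst_n) begin
      meta   <= 1'b0;
      q_sync <= 1'b0;
    end else begin
      meta   <= d_async;  // single fan-out preserves correlation
      q_sync <= meta;     // second flop gives metastability a full cycle to resolve
    end
  end
endmodule
```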



Fundamentals of Clock Domain Crossing Verification: Part One

Thursday, July 10th, 2014

The increase in SoC designs is leading to the extensive use of asynchronous clock domains. Clock-domain-crossing (CDC) interfaces are required to follow strict design principles for reliable operation. Also, verification of proper CDC design is not possible using standard simulation and static timing-analysis (STA) techniques. As a result, CDC-verification tools have become essential in design flows.

A good understanding of the CDC problem requires an understanding of metastability and the associated design challenge.


When the input signal to a data latch changes within the setup-and-hold window around the transition of the latching clock, the latch output can become metastable at an intermediate voltage between logical zero and one. Figure 1 shows a simplified latch implementation. The metastable state is a very high-energy state as shown in Figure 2. Because of noise in the chip environment, this metastable voltage gets disturbed and eventually resolves to a logical value. The resolution time is dependent upon the load on the latch output and the gain through the feedback loop. It is impossible, however, to predict this logical value. Also, there is an inherent delay in the resolution of the metastable output as shown in the timing diagram of Figure 3. This logical and timing uncertainty introduces unreliable behavior in the design and, without proper protection, can cause it to fail in unpredictable ways.
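
A commonly used way to turn these quantities into a reliability number is the mean time between failures (MTBF) of a single sampling flop; the symbols below are the conventional ones and are my assumption rather than notation taken from the article’s figures:

$$\mathrm{MTBF} = \frac{e^{t_r/\tau}}{T_W \cdot f_{\mathrm{clk}} \cdot f_{\mathrm{data}}}$$

where t_r is the resolution time allowed before the sampled value is used, τ is the regeneration time constant of the latch, T_W is the metastability window around the clock edge, and f_clk and f_data are the sampling-clock frequency and the data toggle rate. Allowing more resolution time helps exponentially, which is why the synchronizers discussed elsewhere in this series devote a full clock cycle to it.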


Figure 1. A simplified latch.

