Real Talk
Graham Bell
Graham is VP of Marketing at Real Intent. He has over 20 years of experience in the design automation industry. He has founded startups, brought Nassda to an IPO, and previously was Sales and Marketing Director at Internet Business Systems, a web portal company. Graham has a Bachelor of Computer …

Fundamentals of Clock Domain Crossing: Conclusion

 
August 28th, 2014 by Graham Bell

In our last post in this series, part 4, we looked at the costs associated with debugging and sign-off verification. In this final posting, we propose a practical and efficient CDC verification methodology.

Template recognition vs. report quality trade-off

First-generation CDC tools employed structural analysis as their primary verification technology. Given the lack of precision of this technology, users are often required to specify structural templates for verification. With the size and complexity of today’s SoCs, template specification becomes a cumbersome process in which debugging cost is traded for setup cost. Also, while the checking limitations imposed by templates may reduce report volume, they also increase the risk of missing errors. In general, template-based checking requires significant manual effort to be used effectively.

Read the rest of Fundamentals of Clock Domain Crossing: Conclusion

Video Keynote: New Methodologies Drive EDA Revenue Growth

 
August 21st, 2014 by Graham Bell

Wally Rhines from Mentor gave an excellent keynote at the 51st Design Automation Conference on how EDA grows by solving new problems.  In his short talk, he references an earlier keynote he gave back in 2004 and what has changed in the EDA industry since that time.

Here is a quick quote from his presentation: “Our capability in EDA today is largely focused on being able to verify that a chip does what it’s supposed to do. The problem of verifying that it doesn’t do anything it’s NOT supposed to do is a much more difficult one, a bigger one, but one for which governments and corporations would pay billions of dollars to even partially solve.”

Where do you think future growth will come in EDA?

Read the rest of Video Keynote: New Methodologies Drive EDA Revenue Growth

Steve McQueen’s Mustang Explains Net Neutrality — Thursday, Aug. 21

 
August 19th, 2014 by Graham Bell

I became aware of the following panel discussion taking place on Thursday, Aug. 21 in Palo Alto, CA, and thought it would be of interest to the EDACafe audience.

Net Neutrality is all but certain to influence the patterns of data communication. Irrespective of its outcome, the viability of the underlying infrastructure and economics is still an evolving discussion.

In this panel, the value chain of Net Neutrality is explored. As consumers, it is imperative that we examine whether such policies will scale to meet our future requirements. To do that, the flow of data across Content Delivery Networks, Wired and Wireless Operators, and Service Providers must be understood. Conceptualizing the value chain and its components provides context for studying the broader impact. Essentially, fairness and value are what individuals, entrepreneurs and enterprises seek in sustaining the growing demands of data usage.

The goal of this event, then, is to peel back the layers of technology, usability and regulatory standards to better understand the fundamental forces at play.

For a quick background on the topic, watch Steve McQueen’s Mustang Explains Net Neutrality.

Read the rest of Steve McQueen’s Mustang Explains Net Neutrality — Thursday, Aug. 21

SoCcer: Defending your Digital Design

 
August 14th, 2014 by Ramesh Dewangan

Weird things can happen during a presentation to a customer!

I was visiting a customer site giving an update on the latest release of our Ascent and Meridian products. It was taking place during the middle of the day, in a large meeting room, with more than 30 people in the audience. Everything seemed to be going smoothly.

Suddenly there was an uproar, with clapping and cheers coming from an adjacent break room. Immediately, everyone in my audience opened their laptops, and grinned or groaned at the football score.

The 2014 FIFA World Cup soccer championship game was in full swing!

As Germany scored at will against Brazil, I lost count of the reactions by the end of the match! The final score was a crushing 7-1.

It disrupted my presentation all right, but it also gave me some food for thought.

If I look at SoC design as a SoCcer game, the bugs hiding in the design are like potential scores against us, the chip designers. We are defending our chip against bugs. Bugs can be related to various issues with design rules (bus contention), state machines (unreachable states, dead code), X-optimism (X values propagating through X-sensitive constructs), clock domain crossing (re-convergence or glitches on asynchronous crossings), and so on.
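
As a concrete illustration of the X-optimism issue mentioned above, here is a minimal Verilog sketch; the module and signal names are hypothetical and not taken from any specific design:

    // X-optimism hazard: if 'sel' is X (for example, because its reset was
    // never applied), RTL simulation still takes a definite branch of the
    // 'if', hiding the unknown, while the synthesized gates may behave
    // differently on silicon.
    module x_optimism_example (
      input  wire       clk,
      input  wire       sel,     // may be X after power-up
      input  wire [7:0] a, b,
      output reg  [7:0] y
    );
      always @(posedge clk) begin
        if (sel)
          y <= a;
        else
          y <= b;   // simulation silently picks this branch when sel is X
      end
    endmodule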

Bugs can be found quickly when our opponent’s attack formation is easy to see, or only with difficulty when the attack formation is complex and well-disguised.

It is obvious that more goals will be scored against us if we are poorly prepared. The only way to avoid bugs (scores against us) is to build a good defense. What are some defenses we can deploy for successful chips?

We need design RTL that is free from design rule issues, free of deadlocks in its state machines, free from X-optimism and X-pessimism issues, that employs properly synchronized CDCs for both data and resets, and that has proper timing constraints to go with them.

Can’t we simply rely on smart RTL design and verification engineers to prevent bugs? No, that is only the first line of defense. We must also have the proper tools and methodologies. It is just like soccer: having good players is not enough; you need a good defense strategy that the players will follow.

If you do not use proper tools and methodologies, you increase the risk of chip failure and a certain goal against the design team. That is like inviting a penalty kick. Would you really want to leave your defense to a poor lone goalkeeper? Wouldn’t you rather build a methodology with multiple defensive resources in play?

So what tools and methodologies are needed to prevent bugs? Here are some of the key needs:

  • RTL analysis (Linting) – to create RTL free of structural and semantic bugs
  • Clock domain crossing (CDC) verification – to detect and fix chip-killing CDC bugs
  • Functional intent analysis (also called auto-formal) – to detect and correct functional bugs well before the lengthy simulation cycle
  • X-propagation analysis – to reduce functional bugs due to unknown X values in the design and ensure correct power-on reset
  • Timing constraints verification – to reduce the implementation cycle time and prevent chip-killing bugs due to bad exceptions

Proven EDA tools like Ascent Lint, Ascent IIV, Ascent XV, Meridian CDC and Meridian Constraints meet these needs effectively and keep bugs from crossing the mid-field of your design success.

Next time, you have no excuse for scores against you (i.e. bugs in the chip). You can defend and defend well using proper tools and methodologies.

Don’t let your chips be a defenseless victim like Brazil in that game against Germany!

Executive Insight: On the Convergence of Design and Verification

 
August 7th, 2014 by Dr. Pranav Ashar

This article was originally published on TechDesignForums and is reproduced here by permission.

Sometimes it’s useful to take an ongoing debate and flip it on its head. Recent discussion around the future of simulation has tended to concentrate on aspects best understood – and acted upon – by a verification engineer. Similarly, the debate surrounding hardware-software flow convergence has focused on differences between the two.

Pranav Ashar, CTO of Real Intent, has a good position from which to look across these silos. His company is seen as a verification specialist, particularly in areas such as lint, X-propagation and clock domain crossing. But talk to some of its users and you find they can be either design or verification engineers.

How Real Intent addresses some of today’s challenges – and how it got there – offer useful pointers on how to improve your own flow and meet emerging or increasingly complex tasks.

Read the rest of Executive Insight: On the Convergence of Design and Verification

Fundamentals of Clock Domain Crossing Verification: Part Four

 
July 31st, 2014 by Graham Bell

Last time we discussed practical considerations for designing CDC interfaces.  In this posting, we look at the costs associated with debugging and sign-off verification.

Design setup cost

Design setup starts with importing the design. With the increasing complexity of SOCs, designs include RTL and netlist blocks in a Verilog and VHDL mixed-language environment. In addition, functional setup is required for good quality of verification. A typical SOC has multiple modes of operation characterized by clocking schemes, reset sequences and mode controls. Functional setup requires the design to be set up in functionally valid modes for verification, by proper identification of clocks, resets and mode select pins. Bad setup can lead to poor quality of verification results.

Given the complexity of managing the multitude of design tasks, it is highly desirable that there be a large overlap between the setup requirements for different flows. For example, design compilation can be accomplished by processing the existing simulation scripts. Also, there is a large overlap between the functional setup requirements for CDC and those for static timing analysis. Hence, the STA setup, based upon Synopsys Design Constraints (SDCs), can be leveraged for cost-effective functional setup.
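
As a simple illustration of the kind of SDC setup that can be shared between STA and CDC flows, the sketch below defines two clocks and declares them asynchronous; the clock and port names are hypothetical:

    # Hypothetical SDC fragment: two unrelated clocks entering the design
    create_clock -name clk_core -period 10.0 [get_ports clk_core]
    create_clock -name clk_io   -period 8.0  [get_ports clk_io]

    # Mark the two domains as asynchronous so no timing paths are analyzed
    # between them; a CDC tool can reuse the same intent to group domains.
    set_clock_groups -asynchronous -group {clk_core} -group {clk_io}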

Design constraints are usually either requirements or properties in your design. You use constraints to ensure that your design meets its performance goals and pin assignment requirements. Traditionally these are timing constraints but can include power, synthesis, and clocking.

Read the rest of Fundamentals of Clock Domain Crossing Verification: Part Four

Fundamentals of Clock Domain Crossing Verification: Part Three

 
July 24th, 2014 by Graham Bell

Last time we looked at design principles and the design of CDC interfaces.  In this posting, we will look at practical considerations for designing CDC interfaces.

Verifying CDC interfaces

A typical SOC is made up of a large number of CDC interfaces. From the discussion above, CDC verification can be accomplished by executing the following steps in order:

  • Identification of CDC signals.
  • Classification of CDC signals as control and data.
  • Verification of hazard/glitch robustness of control signals.
  • Verification of single signal transition (gray coding) of control signals.
  • Verification of control stability (pulse-width requirement).
  • Verification of MCP operation (stability) of data signals.

All verification processes are iterative: design quality is achieved by identifying design errors, debugging and fixing them, and re-running verification until no more errors are detected.
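
To make the gray-coding step listed above concrete, here is a minimal Verilog sketch of a counter whose value is gray-coded at the source so that only one bit changes per increment; the module and signal names are hypothetical:

    // Gray-coding a multi-bit pointer before it crosses clock domains, so
    // that at most one bit toggles per clock and each bit can safely be
    // passed through its own synchronizer in the destination domain.
    module gray_ptr #(parameter W = 4) (
      input  wire         clk_src,
      input  wire         rst_n,
      input  wire         inc,
      output reg  [W-1:0] gray_out
    );
      reg  [W-1:0] bin;
      wire [W-1:0] bin_next = bin + (inc ? 1'b1 : 1'b0);

      always @(posedge clk_src or negedge rst_n) begin
        if (!rst_n) begin
          bin      <= {W{1'b0}};
          gray_out <= {W{1'b0}};
        end else begin
          bin      <= bin_next;
          gray_out <= bin_next ^ (bin_next >> 1);   // binary-to-gray
        end
      end
    endmodule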

Read the rest of Fundamentals of Clock Domain Crossing Verification: Part Three

Fundamentals of Clock Domain Crossing Verification: Part Two

 
July 17th, 2014 by Graham Bell

Last time we looked at why metastability is unavoidable and at the nature of the clock domain crossing (CDC) problem. This time we will look at design principles.

CDC design principles

Because metastability is unavoidable in CDC designs, robust CDC interfaces are required to follow some strict design principles.

Metastability can be contained with “synchronizers” that prevent metastability effects from propagating into the design. Figure 9 shows the configuration of a double-flop synchronizer which minimizes the load on the metastable flop. The single fan-out protects against loss of correlation because the metastable signal does not fan out to multiple flops. The probability that metastability will last longer than time t is governed by the following equation:

P(t) = e^(–t/τ)

where τ is the metastability resolution time constant of the flop.
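
For reference, here is a minimal Verilog sketch of the double-flop synchronizer described above; the module and signal names are illustrative and not taken from the original figure:

    // Double-flop synchronizer: the first flop is the only load on the
    // crossing signal, and the second flop gives any metastability a full
    // clock period to resolve before the value fans out into the receiving
    // clock domain.
    module sync_2ff (
      input  wire clk_dst,   // destination-domain clock
      input  wire rst_n,     // asynchronous reset, active low
      input  wire d_async,   // signal arriving from another clock domain
      output reg  q_sync     // synchronized output, safe to use in clk_dst
    );
      reg meta;  // first stage: may go metastable near a clock edge

      always @(posedge clk_dst or negedge rst_n) begin
        if (!rst_n) begin
          meta   <= 1'b0;
          q_sync <= 1'b0;
        end else begin
          meta   <= d_async;
          q_sync <= meta;    // single fan-out preserves correlation
        end
      end
    endmodule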

Read the rest of Fundamentals of Clock Domain Crossing Verification: Part Two

Fundamentals of Clock Domain Crossing Verification: Part One

 
July 10th, 2014 by Graham Bell

The increase in SOC designs is leading to the extensive use of asynchronous clock domains. The clock-domain-crossing (CDC) interfaces are required to follow strict design principles for reliable operation. Also, verification of proper CDC design is not possible using standard simulation and static timing-analysis (STA) techniques. As a result, CDC-verification tools have become essential in design flows.

A good understanding of the CDC problem requires an understanding of metastability and the associated design challenge.

Metastability

When the input signal to a data latch changes within the setup-and-hold window around the transition of the latching clock, the latch output can become metastable at an intermediate voltage between logical zero and one. Figure 1 shows a simplified latch implementation. The metastable state is a very high-energy state as shown in Figure 2. Because of noise in the chip environment, this metastable voltage gets disturbed and eventually resolves to a logical value. The resolution time is dependent upon the load on the latch output and the gain through the feedback loop. It is impossible, however, to predict this logical value. Also, there is an inherent delay in the resolution of the metastable output as shown in the timing diagram of Figure 3. This logical and timing uncertainty introduces unreliable behavior in the design and, without proper protection, can cause it to fail in unpredictable ways.

Figure 1. A simplified latch.

Read the rest of Fundamentals of Clock Domain Crossing Verification: Part One

Static Verification Leads to New Age of SoC Design

 
July 3rd, 2014 by Dr. Pranav Ashar

SoC companies are coming to rely on RTL sign-off of many verification objectives as a means to achieve a sensible division of labor between their RTL design team and their system-level verification team. Given the sign-off expectation, the verification of those objectives at the RT level must absolutely be comprehensive.

Increasingly, sign-off at the RTL level can be accomplished using static-verification technologies. Static verification stands on two pillars: Deep Semantic Analysis and Formal Methods. With the judicious synthesis of these two, the need for dynamic analysis (a euphemism for simulation) gets pushed to the margins. To be sure, dynamic analysis continues to have a role, but increasingly as a backstop rather than as the main thrust of the verification flow. Even where simulation is used, static methods play an important role in improving its efficacy.

Deep Semantic Analysis is about understanding the purpose or role of RTL structures (logic, flip-flops, state machines, etc.) in a design in the context of the verification objective being addressed. This type of intelligence is at the core of everything that Real Intent does, to the extent that it is even ingrained into the company’s name. Much of sign-off happens based just on the deep semantic intelligence in Real Intent’s tools without the invocation of classical formal analysis.

Read the rest of Static Verification Leads to New Age of SoC Design
