Real Talk

Posts Tagged ‘formal’

CDC Verification of Fast-to-Slow Clocks – Part 2: Formal Checks

Thursday, January 21st, 2016

We continue our short blog series on clock domain crossing analysis where the clocks differ in frequency. In Part 1, we ended the discussion by noting that on a fast-to-slow transition, a short-duration control pulse may be missed entirely by the receive domain, and a formal analysis is required to discover whether this is a potential problem. Here we look at how formal analysis can verify this kind of transition.

A formal check is also required on a slow-to-fast data crossing with feedback. In such a circuit, shown in Figure 4, an acknowledge signal returning from the receiving fast-clock domain to the transmitting slow-clock domain requires a formal pulse-width check. The control pulse (request) goes from slow to fast and does not need one, but the acknowledge signal (the feedback path) goes from a fast to a slow clock: for the acknowledge to be captured properly, the pulse transmitted from the receiving side must be wide enough to be sampled by the slower clock of the transmitting-side flops. Failure to check this condition is the reason many request/acknowledge circuits do not work as expected. Note that feedback paths in a fast-to-slow crossing operate in slow-to-fast mode, so the acknowledge signal in such a circuit does not need a pulse-width check. In short, all fast-to-slow control-signal transitions, whether connected in a feed-forward or a feedback manner, need to be formally pulse-width checked to ensure the integrity of the control aspect of the clock domain crossing.
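
To make the check concrete, here is a minimal sketch of the kind of minimum pulse-width property a formal tool would prove on the acknowledge path. The signal names (clk_fast, rst_n, ack) and the MIN_WIDTH bound are illustrative assumptions, not details of the circuit in Figure 4.

  // Hypothetical sketch: the acknowledge pulse, generated in the fast domain,
  // must stay asserted long enough for the slow transmitting-side clock to
  // sample it. Names and the width bound are assumptions for illustration.
  module ack_pulse_width_check #(
    parameter int MIN_WIDTH = 3   // minimum pulse width in fast-clock cycles
  ) (
    input logic clk_fast,  // clock of the fast (receiving) domain driving 'ack'
    input logic rst_n,
    input logic ack        // feedback acknowledge crossing fast -> slow
  );
    // When 'ack' rises, it must remain high for MIN_WIDTH consecutive
    // fast-clock cycles; otherwise the slow domain may miss it entirely.
    a_min_pulse_width: assert property (
      @(posedge clk_fast) disable iff (!rst_n)
        $rose(ack) |-> ack [*MIN_WIDTH]
    ) else $error("ack pulse narrower than %0d fast-clock cycles", MIN_WIDTH);
  endmodule

A formal engine can then prove such a property exhaustively, or produce a counterexample trace showing an acknowledge pulse the slow domain would drop.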

Video: “Why A New Gate-level CDC Verification Solution?”

Thursday, November 19th, 2015

Recently, I interviewed Vikas Sachdeva, Sr. Technical Marketing Manager at Real Intent, where we discussed why gate-level CDC verification is necessary, what failure modes can occur, and why Meridian Physical CDC is the right tool for gate-level sign-off. You can see the video below. You can also find more information about Meridian Physical CDC here.

Click here to watch other Real Intent videos on our YouTube Channel.

My Impressions of DVCon USA 2015: Lies; Experts; Art or Science?

Thursday, March 12th, 2015

Last week I attended the Design and Verification Conference in San Jose.  It had been six years since my last visit to the conference.  Before then, I had attended five years in a row, so it was interesting to see what had changed in the industry.  I focused on test bench topics, so this blog records my impressions in that area.

First, my favorite paper was “Lies, Damned Lies, and Coverage” by Mark Litterick of Verilab, which won an Honorable Mention in the Best Paper category. Mark explained common shortcomings of coverage models implemented as SystemVerilog covergroups. For example, a covergroup has its own sampling event, which may or may not be appropriate for the design. If you sample when a value change does not matter to the design, the covergroup counts a value as covered when in fact it really isn’t. In the slides, Mark’s descriptions of common errors were pithy and, like any good observation, obvious only in retrospect. More interestingly, he proposed correlating coverage events via the UCIS (Unified Coverage Interoperability Standard) to verify that they have the expected relationships. For example, a particular covergroup bin count might be expected to equal the pass count of some cover property (in SystemVerilog Assertions) somewhere else, or to match some block count in code coverage. It struck me that some aspects of this must be verifiable using formal analysis. You can read the entire paper here and see the presentation slides here.
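
As a minimal illustration of the sampling-event pitfall (the names clk, valid, and data are my own assumptions, not from the paper), compare a covergroup that samples on every clock with one gated by the handshake that actually qualifies the data:

  // Illustrative sketch only; not taken from the DVCon paper.
  module cov_sampling_example (
    input logic       clk,
    input logic       valid,   // handshake that qualifies when 'data' is meaningful
    input logic [3:0] data
  );
    // Samples on every clock edge, so values get counted as covered even on
    // cycles where the design ignores 'data'.
    covergroup cg_every_clock @(posedge clk);
      cp_data: coverpoint data;
    endgroup

    // Samples only when 'valid' is high, so the counts reflect values the
    // design actually consumed.
    covergroup cg_when_valid @(posedge clk iff valid);
      cp_data: coverpoint data;
    endgroup

    cg_every_clock cov_all  = new();
    cg_when_valid  cov_used = new();
  endmodule

Correlating the gated covergroup’s bin counts against a matching cover property, as Mark suggests via UCIS, would then expose any mismatch between the two views of coverage.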

I was also impressed by the use of the C language in verification — not SystemC, but old-fashioned C itself. Harry Foster of Mentor Graphics shared some results of his Verification Survey, and there were only two languages whose use had increased from year to year: SystemVerilog and C. For example, there was a Cypress paper by David Crutchfield et al. in which configuration files were processed in C. Why is this, I wondered? Perhaps because SystemVerilog makes it easy via the Direct Programming Interface (DPI): you can call SystemVerilog functions from C and vice versa. Also, a lot of people know C. I imagine if there were a Python DPI or Perl DPI, people would use those a lot as well!
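
For readers who have not used it, here is a minimal sketch of the DPI plumbing that makes this mixing easy. The function names (parse_config, set_param) and the configuration-file use case are illustrative assumptions, not details from the Cypress paper.

  // Hypothetical sketch of DPI import/export from the SystemVerilog side.
  module dpi_config_example;
    // C-side parser, callable from SystemVerilog.
    // The matching C prototype would be:  int parse_config(const char *filename);
    import "DPI-C" function int parse_config(input string filename);

    // SystemVerilog function that the C code can call back to push values
    // into the testbench. C prototype:  void set_param(int value);
    export "DPI-C" function set_param;

    int param_value;

    function void set_param(input int value);
      param_value = value;
    endfunction

    initial begin
      if (parse_config("settings.cfg") != 0)
        $error("configuration parse failed");
    end
  endmodule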

Smarter Verification: Shift Mindset to Shift Left [Video]

Thursday, March 5th, 2015

The Design and Verification Conference Silicon Valley was held this week.  During Aart de Geus’ keynote, he shared how SoC verification is “shifting left”, so that debug starts earlier and results are delivered more quickly.   He identified a number of key technologies that have made this possible:

  • Static verification, which uses a mix of specialized code analysis and formal technology that is much faster and more focused than traditional simulation
  • New third generation of analysis engines
  • Advancements in debug

Real Intent has also been talking about this new suite of technologies that improve the whole process of SoC verification.  Pranav Ashar, CTO at Real Intent, wrote about these in a blog posted on the EETimes website.  Titled “Shifting Mindsets: Static Verification Transforms SoC Design at RT Level”, it introduces the idea of objective-driven verification:

We are at the dawn of a new age of digital verification for SoCs. A fundamental change is underway. We are moving away from a tool and technology approach — “I have a hammer, where are some nails?” — and toward a verification-objective mindset for design sign-off, such as “Does my design achieve reset in two cycles?”

Objective-driven verification at the RT level now is being accomplished using static-verification technologies. Static verification comprises deep semantic analysis (DSA) and formal methods. DSA is about understanding the purpose and intent of logic, flip-flops, state machines, etc. in a design, in the context of the verification objective being addressed. When this understanding is at the core of an EDA tool set, a major part of the sign-off process happens before the use or need of formal analysis.
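
The “reset in two cycles” objective quoted above is the kind of question that can also be phrased directly as a property. A minimal sketch, assuming hypothetical signal names (clk, rst_n, state, IDLE), might look like this:

  // Illustrative sketch of the reset objective as a checkable property.
  module reset_objective_check (
    input logic       clk,
    input logic       rst_n,
    input logic [2:0] state
  );
    localparam logic [2:0] IDLE = 3'd0;  // assumed post-reset state

    // After reset deasserts, the design must settle into IDLE within two cycles.
    a_reset_in_two_cycles: assert property (
      @(posedge clk) $rose(rst_n) |-> ##[1:2] (state == IDLE)
    );
  endmodule

A static tool or formal engine can then answer the question directly from the RTL, rather than waiting on a directed simulation test.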

SoCcer: Defending your Digital Design

Thursday, August 14th, 2014

Weird things can happen during a presentation to a customer!

I was visiting a customer site giving an update on the latest release of our Ascent and Meridian products. It was taking place during the middle of the day, in a large meeting room, with more than 30 people in the audience. Everything seemed to be going smoothly.

Suddenly there was an uproar, with clapping and cheers coming from an adjacent break room. Immediately, everyone in my audience opened their laptops, and grinned or groaned at the football score.

The 2014 FIFA World Cup soccer championship game was in full swing!

As Germany scored at will against Brazil, I lost count of the reactions by the end of the match! The final score was a crushing 7-1.

It certainly disrupted my presentation, but it also gave me some food for thought.

If I look at SoC design as a SoCcer game, the bugs hiding in the design are like potential scores against us, the chip designers. We are defending our chip against bugs. Bugs can be related to various issues with design rules (bus contention), state machines (unreachable states, dead code), X-optimism (X propagating through X-sensitive constructs), clock domain crossing (re-convergence or glitches on asynchronous crossings), and so on.
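
To show what one of these opponents looks like up close, here is a minimal sketch of X-optimism through an X-sensitive construct; the module and signal names are illustrative assumptions:

  // Illustrative sketch of X-optimism in RTL simulation.
  module x_optimism_example (
    input  logic sel,
    input  logic a, b,
    output logic y
  );
    // If 'sel' is X in RTL simulation, the if condition is not true, so the
    // else branch is taken and y gets a clean value of 'b'. The unknown is
    // silently swallowed, and gate-level simulation or silicon may disagree.
    always_comb begin
      if (sel) y = a;
      else     y = b;
    end
  endmodule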

Bugs can be found quickly when the opponent’s attack formation is easy to see, or they can be hard to find when the formation is complex and well disguised.

It is obvious that more goals will be scored against us if we are poorly prepared. The only way to avoid bugs (scores against us) is to build a good defense. What are some defenses we can deploy for successful chips?

We need design RTL that is free of design rule issues, free of deadlocks in its state machines, free of X-optimism and X-pessimism issues, that employs properly synchronized CDC for both data and resets, and that has proper timing constraints to go with it.

Can’t we simply rely on smart RTL design and verification engineers to prevent bugs? No, that’s only the first line of defense. We must have the proper tools and methodologies. It is just like soccer: having good players is not enough; you need a good defensive strategy that the players will follow.

If you do not use proper tools and methodologies, you increase the risk of chip failure and a certain goal against the design team. That is like inviting a penalty kick. Would you really want to leave your defense to a poor, lone goalkeeper? Wouldn’t you rather build a methodology with multiple defensive resources in play?

So what tools and methodologies are needed to prevent bugs? Here are some of the key needs:

  • RTL analysis (Linting) – to create RTL free of structural and semantic bugs
  • Clock domain crossing (CDC) verification – to detect and fix chip-killing CDC bugs
  • Functional intent analysis (also called auto-formal) – to detect and correct functional bugs well before the lengthy simulation cycle
  • X-propagation analysis – to reduce functional bugs due to unknown X values in the design and ensure correct power-on reset
  • Timing constraints verification – to reduce the implementation cycle time and prevent chip-killing bugs due to bad exceptions

Proven EDA tools like Ascent Lint, Ascent IIV, Ascent XV, Meridian CDC and Meridian Constraints meet these needs effectively and keep bugs from crossing the mid-field of your design success.

Next time, you have no excuse for scores against you (i.e. bugs in the chip). You can defend and defend well using proper tools and methodologies.

Don’t let your chip be a defenseless victim like Brazil in that game against Germany!

