 Real Talk

Archive for March, 2010

Is Your CDC Tool of Sign-Off Quality?

Monday, March 29th, 2010

First-generation tools historically evolved as extensions of simple linters and source checkers, or through the application of basic formal engines, in an attempt to solve the CDC problem. These tools generated voluminous reports and required designers to plow painfully through tens of thousands of uninformative messages for days or weeks to overcome the “signal-to-noise” problem. Such a methodology is prohibitive for any practical chip-level analysis. Even post-analysis filtering of design structures such as FIFOs and handshakes has proven to be a daunting and time-intensive task for designers trying to find real CDC violations.

Multiple vendors offer CDC analysis tools, some developed as extensions of lint engines and newer ones, like Real Intent’s Meridian CDC, designed from the ground up with a first-principles understanding of CDC failure modes. Ultimately, any such tool must prove its value as a viable sign-off-quality tool for designers and project managers, with easy setup and report-analysis capabilities, comprehensive checking, and practical execution times.

A sign-off quality tool must meet the following criteria:

  • Easy to set up and use
  • Comprehensive analysis of all asynchronous crossing issues while being tolerant of design styles
  • Manageable analysis results and easy exception handling
  • Practical run-time performance with full SoC flow management

A sign-off-quality tool must include easy, truly automated setup for CDC and recognition of all metastability control structures, and it must not be limited to particular coding styles or structures when recognizing, for example, FIFOs. It is imperative for such recognition technology to work equally well at the netlist level in addition to the RT level. With Meridian’s automatic CDC recognition technology, for example, all asynchronous FIFO structures are recognized at both the RT and netlist levels.

A sign-off-quality tool must be comprehensive in its analysis capability. It must catch all CDC and asynchronous-crossing bugs while remaining tolerant of design practices. It must also be glitch-aware and detect all glitch sources, perform pulse-width verification, perform complete cycle-jitter analysis on clock and data paths, and support free-running-clock analysis.

A sign-off-quality tool must provide manageable analysis results and exception handling. Any limitation in the structural recognition of asynchronous controls leads to noise in the analysis reports and makes manual analysis of the report very time consuming. Full SoC analysis should be performed in hours, not days, for typical design sizes (5 million to 40 million gates). Because of transformations induced by timing-driven synthesis optimizations, and by test-driven and power-optimization-driven modifications of the clock structures, running at least structural CDC analysis at the netlist level is a must, and it is becoming ever more important even for the post-layout stages of a design. With Meridian’s technology, these capabilities are all a reality today, with low noise due to Meridian’s complete asynchronous-control-structure recognition.

Finally, execution time must be short enough to support quick feedback to the designer. Overnight runs are simply not practical for such analysis. Considering that CDC analysis is often performed at the tail end of the RTL sign-off process, when schedule pressures are greatest, using basic linters for CDC analysis generally leads to frustration. It is not uncommon for run times to prevent designers from completing full-chip analysis.

So, if your CDC tool does not pass muster on any of the issues covered in this blog, please visit our web site or contact us for a consultation on how the Meridian CDC solution can make CDC sign-off a reality.

DATE 2010 – There Was a Chill in the Air

Monday, March 22nd, 2010

There was a chill in the air; people were bundled up in layers of clothing as they walked at a fast clip to the convention center in Dresden, Germany on Tuesday, 9 March 2010 to attend DATE 2010. The coat rack was filled with all of our coats, hats, gloves and scarves…but the reception by the DATE organizers was anything but cold.

As we arrived on Monday to sign in, we were greeted with “Cheers” and asked our names. Everything was in order, in its place and waiting for us…finding where to go first was our only challenge and that did not prove to be too hard to tackle.

The EDA vendors were well cared for; every detail was attended to without a hitch. The booths were up and waiting when we arrived, and everything ran like a well-oiled machine. The booth organizer was at our disposal in case we needed anything at all; the offer of help was not just a polite gesture but a genuine one.

The talk of the show was that the technical tracks, with their intriguing subject matter, were the real draw. The tracks were well attended, in some cases standing-room only, and I noticed people briskly walking from one track to the next. There was a constant buzz of conversation that this show is truly morphing into a technical symposium.

One pleasant surprise was how heavily the show was attended by students. The students were not just looking for jobs; they came with projects for products that they believe are needed now and in the future of EDA. The projects were well thought out, well researched and very well presented. It was nice to see these students so eager to share their knowledge, and even more eager to learn from the EDA vendors.

There were fewer EDA companies and industry attendees at the show this year, and the focus seemed to be more on academia than on EDA companies. That said, the conversations with participants on the floor were purposeful, intimate and more in-depth than at most trade shows. As a result, attending had great value: you were given special attention, more time, and even a special status, so the trip was more than worthwhile. Personally, I liked that aspect of the show.

As a vendor in the industry, I hope the organizers can find a balance that allows the EDA vendors, academia, the technical symposium and the user community all to find purpose and value in joining DATE; I believe that balance is good for the industry, the vendors, the companies, and the individuals. As someone who cares about this industry, I shared my ideas with the DATE organizers, and I think they have a good plan for next year.

When the show came to a close, we packed up our things, bundled up in our winter attire and ventured out into the cold air…but inside we carry warm, fond memories of the days spent in Dresden.

Drowning in a Sea of Information

Monday, March 15th, 2010

We are a society inundated with information. At no previous time in history has so much data been available to so wide a population, with access literally at their fingertips. But information abundance has not necessarily translated into increased actionable knowledge. We seem to have reached a point where the more information we have available, the harder it becomes to make use of it.

I was recently reading an old Scientific American article by Hal Varian, then Dean of the UC Berkeley School of Information Management and Systems [1]. In this article, Dr. Varian argues that the fundamental limits of human comprehension will prevent the realization of a future “information economy”. He quotes Nobel laureate economist Dr. Herbert A. Simon as saying, “What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.” Dr. Varian then proceeds to frame his argument that information itself is meaningless without “some way to locate, filter, organize and summarize it.” [2]

In the 15 years since that article was written, the expanse of information has continued to grow and has formed the basis for the Internet search-engine market. Like many people, I have Google as my browser’s home page. This is how I deal with my desire to find relevant information among the vast quantities of data on the Internet. It not only helps when I know up front what I am looking for, but also helps me refine my understanding by highlighting the best representations of the type of information I want to find.

While Internet search engines have provided a step toward Dr. Varian’s vision of information organization, the Electronic Design Automation industry has lagged behind. Each year designs chase Moore’s Law, and the complexity of integrated circuits increases exponentially. But in many ways, EDA tools still deliver data to design and verification engineers as if we were back in the days of schematic entry. Sure, there have been some advances, such as reporting data based on design hierarchy instead of flat netlists, but the sheer size of designs produces volumes of data that cannot reasonably be parsed without hitting Dr. Simon’s “poverty of attention”.

One doesn’t need to look far to see examples of this. Most engineers (especially in EDA) have had the experience of reporting a tool problem to an EDA company, only to have customer support point out a single line in a voluminous log file that explains exactly the reported issue. Does that mean engineers are more careless than previous generations? Not at all, but our predecessors rarely had to deal with log files tens of thousands of lines long, or “summary reports” longer than the United States Constitution. It has gotten bad enough that many tools now let engineers turn off warning messages they know they don’t care about. Engineers are busy people with many conflicting demands on their attention. EDA should be working toward a Google-style model of information delivery: finding the information the engineer knows they want quickly, and guiding them toward the best representations of cases with which they are not familiar.
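The kind of waiver mechanism described above can be sketched in a few lines of Python. The message format and rule IDs here are hypothetical, not the log syntax of any particular tool:

```python
# A minimal sketch of the "waiver" mechanism many EDA tools offer:
# warnings the engineer has already reviewed are suppressed by rule ID,
# so attention goes to new or unwaived messages.

WAIVED_RULES = {"W116", "W528"}   # rule IDs the team has reviewed and accepted

def filter_warnings(log_lines, waived=WAIVED_RULES):
    """Keep only warnings whose rule ID is not on the waiver list."""
    kept = []
    for line in log_lines:
        # expect lines like "W116: signal 'x' truncated (foo.v:42)"
        rule_id = line.split(":", 1)[0].strip()
        if rule_id not in waived:
            kept.append(line)
    return kept

log = [
    "W116: signal 'x' truncated (foo.v:42)",
    "W240: input 'sel' is never used (bar.v:7)",
    "W528: variable 'tmp' set but not read (foo.v:88)",
]
print(filter_warnings(log))   # only the W240 line survives
```

This is filtering by suppression; the Google-style goal is the inverse, ranking what matters to the top rather than merely hiding what doesn’t.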

At Real Intent, we’ve made information organization a top priority in all our tools. Because our focus is on automating much of the work required to run verification analysis, we take our responsibility for organizing the analysis information seriously. Our lint tools provide more meaningful warnings with less noise, our automatic formal tools classify failures by cause and effect, and our clock domain crossing analysis aims to highlight the source of problems, not the symptoms. Have we achieved the goal of a Google-like repository of information management? Not yet, but we continue to strive towards making it faster and more efficient for our customers to consume the volumes of data and prevent “attention poverty”. We welcome your feedback on how we are doing.


1. Varian, Hal R., “The Information Economy: How much will two bits be worth in the digital marketplace?”, Scientific American, September 1995, pp. 200–201.

2. Ibid.

DVCon 2010: Awesomely on Target for Verification

Monday, March 8th, 2010

DVCon 2010 at the Doubletree Hotel in San Jose, California, was an important and successful event for all of us at Real Intent. We re-acquainted the design verification community with our company and products, and learned more about the pressing needs of the industry. The conference proved itself yet again to be the forum for exchanging ideas and methodologies for increasing design-verification productivity.

Attendance for the four-day conference, sponsored by Accellera, an industry consortium dedicated to the development and standardization of design and verification languages, was 625. It proved yet again to be the best conference for interacting with the EDA industry’s functional design and verification community.

At DVCon, our product demonstrations included the latest versions of the following software products:

  • Ascent for automatic formal, early functional verification, including lint
  • Meridian for comprehensive and precise Clock Domain Crossing (CDC) verification, and
  • PureTime for comprehensive Synopsys Design Constraint (SDC) validation, including glitch-aware timing exceptions verification.

We all enjoyed the DVCon Twitter Tower. Even now, if you search Twitter for #dvcon or @dvcon, you will see pointers to blogs, commentary and opinions about the event and what was best at it.

We concur with the commentary from DVCon’s Chair:

“The conference was packed with valuable material all week,” commented Tom Fitzpatrick, DVCon General Chair. “More companies wanted to sponsor tutorials this year so we were able to accommodate the demand by adding a fourth day, giving attendees access to even more information and education. The exhibit halls and receptions were well-attended and the Twitter Tower added a social networking element to the traditional networking that is a mainstay of the conference. Many vendors also commented on the strength of the contacts they made during the week.”

Now we are getting ready for DATE in Dresden, Germany, next week, and for SNUG San Jose at the Santa Clara Convention Center in March.

Verifying CDC Issues in the Presence of Clocks with Dynamically Changing Frequencies

Monday, March 1st, 2010

Emerging systems have three dimensions of complexity when it comes to making them CDC-safe. First, the number of asynchronous clock domains can range from 10s to as many as 100s for complex systems with many components. Second, the primary clock frequencies vary per component. It is not uncommon for the ratio between the fastest and the slowest clocks to be greater than 10.  Third, the clock frequencies themselves can change dynamically during the course of operation of the chip (for example, when switched from one mode to another for saving power). As a result, CDC verification becomes critical to ensure that metastability is not introduced in the design.

The first and second complexity dimensions above are common causes of failures but can be overcome by customizing the CDC interfaces for the given set of clocks and their fixed frequencies. The third dimension, on the other hand, requires very strict hand-shaking or FIFO-based synchronization schemes that work across the entire range of frequencies. We focus on the third dimension here and provide an example.

Let us look at the schematic shown in Fig. 1. There are two clocks, xmitClk (pink) and rcvClk (yellow). For simplicity, we assume all flops in the design are posedge-triggered. There is a data transfer (shown as DATA) between the xmitClk and rcvClk domains that is controlled by the 3-flop toggle synchronizer (shown as SYNC). The toggle synchronizer has two flops that synchronize the control signal and a third flop that detects that a valid request signal has been sent from the xmitClk domain. Notice that the output of the second flop is fed back into the xmitClk domain (shown as FEEDBACK) to acknowledge that the DATA has been received. Once the FEEDBACK is seen, the xmitClk domain sends new DATA across the interface.

First, let us consider the case where the xmitClk domain is faster than the rcvClk domain by up to two times. Since the clocks are asynchronous to each other, there can be a maximum of two posedges of xmitClk within any given cycle of rcvClk. With Real Intent’s Meridian formal analysis, we can determine that the above feedback structure is sufficient for the CDC interface to work correctly (irrespective of the duty cycles and transition times of the clocks): by the time the xmitClk domain receives the feedback signal, the rcvClk domain will have received the data.

Now consider the case where the xmitClk domain is faster than the rcvClk domain by three times; in other words, there can be three posedges of xmitClk within a given cycle of rcvClk. In this case, the xmitClk domain receives the feedback signal before the rcvClk domain has received the data. The xmitClk domain can immediately proceed to transfer new data, which can corrupt the old data and/or cause metastability. The probability of failure increases rapidly as the xmitClk frequency increases. The interface can be made frequency-independent by taking the feedback signal from the third flop of the toggle synchronizer instead of the second (which guarantees that the rcvClk domain receives the data before a feedback is sent).
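The failure and the fix can be illustrated with a small cycle-level simulation in Python. This is a simplified sketch of the handshake described above, not Real Intent’s analysis; it assumes the xmitClk domain re-synchronizes the feedback through two flops of its own before sending new data, and that DATA is captured on the rcvClk edge where the third flop detects the toggle (valid = s2 XOR s3):

```python
# Cycle-level model of the toggle-synchronizer handshake (a sketch under
# the assumptions stated above). Values 1..n_transfers are sent across;
# corruption shows up as a wrong sequence on the receive side.
def simulate(tx_period, tr_period, feedback_from_third_flop,
             n_transfers=8, rcv_phase=3):
    horizon = (n_transfers + 4) * tr_period * 3
    edges = sorted([(t, "x") for t in range(0, horizon, tx_period)] +
                   [(t, "r") for t in range(rcv_phase, horizon, tr_period)])
    req = data_bus = 0          # xmitClk domain: toggle request and data bus
    fb1 = fb2 = 0               # xmitClk domain: 2-flop feedback synchronizer
    s1 = s2 = s3 = 0            # rcvClk domain: 3-flop toggle synchronizer
    next_val, captured = 1, []
    for _, clk in edges:
        if clk == "x":
            send = (fb2 == req) and next_val <= n_transfers
            fb1, fb2 = (s3 if feedback_from_third_flop else s2), fb1
            if send:            # previous transfer acknowledged: send next value
                data_bus, req, next_val = next_val, req ^ 1, next_val + 1
        else:
            if s2 != s3:        # toggle detected: capture the data bus
                captured.append(data_bus)
            s1, s2, s3 = req, s1, s2
    return captured

expected = list(range(1, 9))
print(simulate(10, 20, False) == expected)  # 2x faster, 2nd-flop feedback: OK
print(simulate(10, 30, False) == expected)  # 3x faster, 2nd-flop feedback: corrupted
print(simulate(10, 30, True) == expected)   # 3x faster, 3rd-flop feedback: OK
```

Under these assumptions, second-flop feedback survives a 2x frequency ratio but corrupts data at 3x, while third-flop feedback works regardless of the ratio, matching the analysis above.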

Without the fix, the CDC interface in the example clearly does not work over the entire operating range of frequencies. Predictability becomes even worse when the clock frequencies change dynamically during chip operation (in practice this is done by carefully applying appropriate software resets between the operating modes).

To ensure that the user does not miss any corner-case scenario, Real Intent has invented a new technology for verifying CDC interfaces with dynamic clock frequencies. It is available as the “free-running clocks” feature in the latest release of Real Intent’s Meridian (Meridian 3.0). By formally verifying the design over the entire frequency range in a single Meridian CDC run, it also eliminates the otherwise cumbersome process of running the tool for each combination of operating frequencies. We make the case that a CDC tool is incomplete for modern chips without this ability to handle free-running clocks.

