 Real Talk

Foundation for Success

February 21st, 2011 by Jin Zhang, Director of Technical Marketing

Real Intent has seen much success in the last few years despite the tough economic conditions facing the semiconductor and EDA (electronic design automation) industries during the recession. Real Intent’s revenue was up over 80% in 2010 and the size of the Real Intent team grew by 35% (and yes, we are still looking for talented individuals to join our team!). This stellar growth was built upon a solid foundation. When talking to Russ Henke, contributing editor for EDACafe’s EDA Weekly, Prakash Narain, CEO of Real Intent, detailed the reasons behind the company’s success:

“It pays to learn hard lessons in life early. In the early years of Real Intent, when we were focusing on harnessing the power of formal technology and automating it so that mainstream designers could use it easily, we learned a lot about how to extract the real intent of designs for complete and accurate analysis, how to optimize the formal analysis engines for high performance, how to simplify the user interface and reporting for maximum productivity, and how to architect the tool flow to enable sign-off.

Over the years we have accumulated a great deal of know-how and applied these techniques to different applications. For example, Meridian CDC has been used in production flows at dozens of companies and has helped tape out thousands of complex designs in the last 8 years. The ultimate goal for our customers is to be able to sign off on CDC verification. Therefore we have adapted our philosophy and approach to CDC verification based on some basic principles:

  1. We require more thorough environment setup because this provides the necessary confidence that CDC analysis will be accurate and complete, so our users can sign off on CDC verification in the end. The “setup-lite” approach taken by our competitors can leave holes in CDC verification and nasty surprises late in the cycle;

  2. We identify the root cause of each CDC issue and report only that, rather than all of its symptoms, so users don’t have to read through a huge report full of noise. It is impossible to sign off when one can’t even go through and examine each line in the report (a minimal illustration follows these remarks);
  3. Our structural and formal analyses are based on the actual design principles used to manage metastability, rather than on design templates, so Meridian CDC can handle more diverse design styles;
  4. We offer a layered approach to CDC verification at multiple levels, starting with correct environment setup and moving through precise structural analysis, customized formal analysis, and CDC verification using simulation. This layered approach ensures all holes are covered on the way to CDC sign-off.

Extending all the knowledge learned to simpler applications such as linting and constraints management is a much easier task. Other companies that started with simple technology like linting and are now trying to extend their solutions to formal face a much steeper learning curve and a much harder task to conquer. We benefited from climbing the tallest mountain early. Now we can stand high, see farther and go farther. I think that’s our strength.”
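As a minimal illustration of the root-cause reporting principle in item 2 above, here is a sketch of collapsing symptom-level CDC messages onto the crossing that causes them. It is purely illustrative and not Real Intent’s implementation; the signal and instance names are invented.

```python
from collections import defaultdict

# Hypothetical CDC violation records: (destination flop, root crossing).
# An unsynchronized crossing typically triggers one message per endpoint
# it reaches, which is what floods a symptom-level report.
violations = [
    ("u_core/pipe_r0", "clkA->clkB crossing at u_if/data_req"),
    ("u_core/pipe_r1", "clkA->clkB crossing at u_if/data_req"),
    ("u_core/pipe_r2", "clkA->clkB crossing at u_if/data_req"),
    ("u_dma/cfg_r",    "clkC->clkB crossing at u_csr/mode"),
]

def report_root_causes(violations):
    """Collapse symptom messages onto the crossing that causes them."""
    by_cause = defaultdict(list)
    for endpoint, cause in violations:
        by_cause[cause].append(endpoint)
    for cause, endpoints in by_cause.items():
        print(f"{cause}  ({len(endpoints)} affected endpoints)")

report_root_causes(violations)   # two lines to review instead of four
```

Four endpoint-level symptoms collapse into two actionable messages, which is the difference between a reviewable report and one nobody can sign off on.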

Real Intent is poised for continued success in the years to come. We thank our customers and industry partners for the support along the way. We will keep focusing on providing the best products and excellent customer support to contribute to the growth of the semiconductor and EDA industries and our society at large.

Fairs to Remember

February 8th, 2011 by Dr. Pranav Ashar, CTO

Real Intent participated in two events in Japan over the last couple of weeks – EDSFair 2011 and the Tokyo University Symposium. The following are some highlights from the two events.

EDSFair 2011

EDSFair is a moderate-sized trade show for EDA companies held in Yokohama, near Tokyo. Like DATE and DAC, it is accompanied by a parallel technical conference. It is an opportunity to network with electronic design companies in Tokyo, Osaka, Kyoto and other nearby high-tech centers.

The show was held on Jan 27 and 28 (Thursday and Friday). The attendance was reasonable. In fact, the show looked quite busy post-lunch on Friday.

Real Intent got great traction at the show by exhibiting its leading-edge software: Lint, Automatic Formal Verification, X-Verification, Clock Domain Crossing (CDC) verification, and Timing Constraints Management and Verification. In particular, the presentations on X-Verification and CDC verification were well received, with many serious follow-ups.  Many attendees from large semiconductor companies seeking better solutions in the front-end verification space were very impressed with Real Intent’s high-performance, high-capacity Lint and CDC solutions, which offer a 10X improvement over the competition. We got a lot of well-qualified leads, and it was a great show for Real Intent to start the year.

The next EDSFair will be held in October 2012 in conjunction with a semiconductor industry tradeshow, so this was the last one held in the cold Japanese winter. But it was well worth remembering.

Notes from Katsuhiko Sakano, General Manager of Real Intent K.K. in Japan



Tokyo University Symposium

The next stop after EDSFair was the “Advanced Design Methodology for VLSI Symposium” at Tokyo University graciously organized by Professor Masahiro Fujita. It was our privilege to participate. Tokyo University is a leading university in Japan and Prof. Fujita is a distinguished researcher in electronic design. 

The symposium brought together in one forum Real Intent with NextOp and SpringSoft. In my opinion, these three companies are the thought leaders today in advancing verification technology for the next generation of chips. 

NextOp presented its novel technology that finally makes available the automatic generation of assertions and functional coverage. SpringSoft presented new technologies for fast debug and verification closure. One of the ideas they presented had to do with mining the simulation output database in interesting ways for faster debug.

Real Intent gave the audience intuition about, and solutions for, two verification problems that have become critical bottlenecks in the design flow: (1) the problem of X’s in simulation, and (2) the problem of verifying the humongous number of asynchronous interfaces on today’s chips.

The program opened with a keynote by Maxeler Technologies on industrial-strength high-performance computing with an FPGA-based platform developed by the company.

All in all, the symposium was a very satisfying technical program that covered the state of the art in high-end design, specification, implementation verification and debug.

The audience consisted of faculty, students and electronic design professionals from local companies. Some of the companies in the audience were large design houses like Hitachi, Toshiba and Fujitsu as well as a number of smaller companies providing verification services, engineering recruitment and sales distribution. It was an excellent opportunity to network with the local professionals in terms of understanding their verification needs and projecting Real Intent as a key provider of enabling technologies for the verification of next-generation chips. 

The symposium finished on a high note with a drawing for an iPad. Appropriately, it was won by a student of Prof. Fujita’s.

EDA Innovation

January 31st, 2011 by Lauro Rizzatti - General Manager, EVE-USA

I recently came across this quote from Robert Noyce:  “Optimism is an essential ingredient for innovation.  How else can the individual welcome change over security, adventure over staying in safe places?” 

Noyce knew a thing or two about innovation and the alchemy to create it.  The “Mayor of Silicon Valley” co-founded Fairchild Semiconductor and Intel, and is credited, along with Jack Kilby, with inventing the integrated circuit.  He had both an impressive career and an impressive grasp on innovation.

Armed with this quote, Bob Noyce as a role model and a bit of innovative thinking, I went looking for innovation in EDA.  I’m happy to report that I found it, starting with many of the recipients of the Phil Kaufman Award.  Kaufman, who died in 1992 while on a business trip in Japan, was a creative and innovative force within the areas of hardware, software, semiconductors, EDA and computer architecture.  He was CEO of Quickturn Design Systems, now part of Cadence, and accelerated the use of emulation.  It’s easy to understand why a prestigious industry award carries his name.

The emulation and verification space is one segment of EDA that creates unlimited opportunities for innovative types.  The founders of my company EVE, for example, boldly redesigned the architecture of a hardware emulation platform and, in my humble opinion, transformed a market segment. 

Real Intent is another good example.  Formal verification is a hard and complicated problem.  That didn’t appear to deter Real Intent’s founders who pressed on and devised an innovative approach that makes the lives of many verification engineers much easier.

Entrepreneurial Rajeev Madhavan concluded in the late 1990s that synthesis needed to be linked with physical design.  He and his innovative team at Magma introduced the first physical synthesis tool and rocked the industry.  And, with Madhavan still at the helm, Magma is still innovating today.  More recently, Oasys Design Systems’ team introduced a new synthesis methodology known as Chip Synthesis, enabling designers to synthesize full chips and not just blocks.  That technology, too, is rocking the industry.

Over in Alameda, Calif., Verific Design Automation has taken the mundane task of developing hardware description language parsers and elaborators and built it into a successful business.  In the meantime, these tools have become the industry’s de facto standard front-end software for just about every imaginable EDA and FPGA company.  This is innovative thinking at its greatest.

Of course, anyone who has been in EDA for a while can point to pockets of tremendous optimism and enthusiasm that resonate throughout the industry.  Who needs security or a safe place when there is a big adventure with an innovative and entrepreneurial big thinker just waiting for you in the Silicon Valley office complex next door?

We’re heading into DVCon later this month and DAC in June where we will see many more examples of creative thinking, enthusiasm and optimism in EDA.  I am looking forward to being wowed.

Top 3 Reasons Why Designers Switch to Meridian CDC from Real Intent

January 24th, 2011 by Rick Eram, Sales & Marketing VP

In a meeting last week with a potential customer, I jotted down the following notes on their experience using another company’s CDC (clock domain crossing) tool:

  • The designs were 300k – 2M gates with 10 clock domains
  • They had lots of issues reading the design in, and had to write a bunch of wrappers to get the VHDL through
  • It took 10-15 days to set up each module for CDC analysis
  • FIFOs were not recognized by the tool
  • Many useless messages were hiding real design problems
  • They found 4 bugs in the CDC tool itself
  • After a long struggle, they could not verify CDC successfully on any of their designs
  • That’s why they are talking to Real Intent

In fact, I hear this kind of story at every company I visit. If this sounds like the painful experience you have with your current CDC tool, read on, because you should know that CDC analysis can be much simpler, with the right solution! That’s what customers who have switched to Real Intent are telling us! Check out the latest newsletter to see our user survey results!

Here are the top 3 reasons why companies are switching to Meridian CDC:

1)      Ease of setup – Setup is a very time-consuming step in our main competitor’s flow, taking almost 80% of the time, as stated by customers. Unfortunately, when you have garbage going in, you get garbage out. So you have to spend a lot of time setting up the other tool to get somewhat meaningful info out. One of my engineer friends recently told me that you almost have to know the answer before setting up the tool in order to get the results – WOW!!! Productivity goes out the door.


Meridian CDC, based on Real Intent’s years of experience in understanding designers’ intent, can automatically extract the clock/reset/constant/static intent from the design or the SDC file to ensure proper setup (a toy sketch of this kind of constraint extraction appears after this list of reasons). 90% of setup is done for you automatically! You’ll be getting real results from Meridian CDC while other engineers are still figuring out how to set up the competing CDC tool.


2)      Noise – This is primarily a consequence of poor setup. Since setup takes so much painful work and time with the other tool, designers under schedule pressure often have no choice but to forge ahead to the CDC analysis stage without complete setup, so that some progress and results can be shown to management. However, finding bugs in the mountain of erroneous messages is a formidable task. Many veteran CDC tool users have gone through tens of thousands of messages before giving up on the analysis altogether. This is a recurring theme I hear in my meetings!

Why is Meridian CDC better? Because of three underlying principles built into the tool: 1) Meridian CDC invests a lot of effort up front to automatically create the proper setup for users, so their manual effort is minimized. Users are much more willing to invest the remaining 10% of the effort to ensure complete setup; 2) Meridian CDC provides comprehensive feedback on user setup so refinements can be made easily; 3) Meridian CDC analysis is smarter in reporting the root causes of problems, not their many symptoms. As a result, quality and accuracy of results are easily achieved!

3)      Performance – Have you waited days to get CDC results? Wait no more! Meridian CDC is on average 10X faster than the competition! Finish your project early and take a vacation!


4)      Coverage – Oops, this is the fourth one! Well, you might at least expect good coverage from our competition when they report tens of thousands of messages!  NOT SO. Aside from false positives, they also produce a great many false negatives, or missed issues. There is nothing worse than a chip re-spin because a problem slipped through verification.

Meridian CDC offers a layered approach to CDC sign-off to make sure no stone is left unturned in finding sneaky CDC bugs and guaranteeing CDC-safe designs. Following Meridian CDC’s recommended methodology, you can rest assured that no CDC bugs will make it to silicon!
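To make reason 1 concrete, here is a toy sketch of pulling clock intent out of SDC constraints instead of asking the user to re-enter it. It is purely illustrative: the SDC fragment and names are invented, the regex handles only one simplified command form, and this is not how Meridian CDC’s automatic setup actually works.

```python
import re

# A toy SDC fragment; the clock names and periods are invented.
sdc = """
create_clock -name clk_core -period 2.0 [get_ports clk_core]
create_clock -name clk_io   -period 8.0 [get_ports clk_io]
"""

def extract_clocks(sdc_text):
    """Pull clock names and periods from create_clock commands."""
    clocks = {}
    for m in re.finditer(r"create_clock\s+-name\s+(\S+)\s+-period\s+([\d.]+)", sdc_text):
        clocks[m.group(1)] = float(m.group(2))
    return clocks

print(extract_clocks(sdc))   # {'clk_core': 2.0, 'clk_io': 8.0}
```

The point is simply that much of the setup information a CDC tool needs already exists in the design and its timing constraints, so a tool can harvest it rather than make the user type it in twice.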

The bottom line – Doing CDC verification takes a Real CDC tool architected to do the job, not a linter adapted to do CDC work. Perhaps that was OK 8-10 years ago, when a meager linter could do the job of finding possible clock crossings in a small design with 10-20 clock domains. However, today’s multi-million gate designs may have 100+ clocks and several layers of hierarchy. Using a linter on these is like playing tennis with a ping-pong paddle.

If it is painful setting up your CDC tool, if your CDC analysis takes a long time to finish, and if you are tired of weeding through tens of thousands of messages to find bugs, it is time to look at Meridian CDC! Many customers have done so successfully as evidenced by Real Intent’s rapid growth in 2010 (watch out for the press release coming out this week). So why not YOU?

Hot Topics, Hot Food, and Hot Prize

January 17th, 2011 by Jin Zhang, Director of Technical Marketing

February in Tokyo is one of the coldest months of the year, with an average high of 48F and low of 40F. To warm things up, Real Intent teamed up with SpringSoft, NextOp and Maxeler Technologies to offer a joint seminar at the Tokyo University VLSI Design and Education Center on Feb. 2, 2011, on “Hot topics in high-performance designs and their functional verification & debug”. The seminar features technical discussions by industry experts on problems and solutions in many hot verification areas:

10:00am Keynote: Acceleration of Verification and Verification of Acceleration

Oskar Mencer (CEO, Maxeler Technologies)

Acceleration and verification are mutually important. Acceleration of individual computer applications via special hardware/software extensions “benefits” from verification, i.e. making sure that the accelerated application still produces the correct result for all relevant input patterns. At the same time, verification can take a lot of time if there are very many such relevant inputs, and as a consequence acceleration is of key value. Maxeler provides acceleration solutions and we encounter a range of verification approaches, depending on the domain and people involved. In addition, the talk will show an instance-specific approach to accelerating key verification algorithms such as SAT using FPGAs.


10:40am Presentation: Chasing X’s Between RTL and Gate Level Efficiently

Pranav Ashar (CTO, Real Intent)

Designers must ensure that their gate-level netlist produces the same results as RTL simulation. X-propagation is a major cause of differences between gate-level and RTL functionality. It is a painful and time-consuming process to identify X sources and chase their propagation from RTL to gate. Logical equivalence checkers ignore X-propagation, and gate-level simulations are very slow. Such “X-Prop” issues often lead to dangerous masking of real bugs. This presentation explains the common sources of X’s, shows how they can mask real bugs that affect functionality and why they are difficult to avoid. It also discusses the challenges that Real Intent overcame in developing a unique and efficient solution to assist designers in catching bugs caused by X propagation.
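As background for the masking problem the abstract mentions, the sketch below models the X-optimism of RTL simulation: a Verilog `if` treats an X condition as false, while the silicon could resolve the unknown either way. This is a generic three-valued-logic illustration, not Real Intent’s tool; the values and scenario are invented.

```python
# Three-valued signals: 0, 1, or 'X' (unknown).
X = "X"

def rtl_if(cond, then_val, else_val):
    """RTL simulation is X-optimistic: an X condition silently takes the
    else branch, exactly as if cond were 0."""
    return then_val if cond == 1 else else_val

def hw_if(cond, then_val, else_val):
    """Real silicon resolves the unknown to 0 or 1; if the two branches
    differ, the result is genuinely unknown."""
    if cond == X:
        return then_val if then_val == else_val else X
    return then_val if cond == 1 else else_val

# An uninitialized mode register (X) selects between two behaviors.
mode = X
print("RTL sim :", rtl_if(mode, 1, 0))   # 0 -- looks fine, the bug is masked
print("Hardware:", hw_if(mode, 1, 0))    # X -- could be either value
```

The RTL simulation reports a clean 0 while the hardware value is genuinely unknown, which is exactly the kind of masking the presentation describes.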



11:20am Presentation: Getting You Closer to Verification Closure

Bindesh Patel (Technology manager, SpringSoft)

Today’s leading-edge designs are verified by sophisticated and diverse verification environments, the complexity of which often rivals or exceeds that of the design itself. Despite advancements in the area of stimulus generation and coverage, existing techniques provide no comprehensive, objective measurement of the quality of your verification environment. They do not tell you how good your testbench is at propagating the effects of bugs to observable outputs or detecting the presence of bugs. The result is that decisions about when you are “done” verifying are often based on partial data or “gut feel” assessments. These shortcomings have led to the development of a new approach, known as Functional Qualification, which provides an objective measure of the quality of your verification environment and guidance on how to improve it. If used effectively, Functional Qualification can help you in the early stages of verification environment development. This seminar provides background information on mutation-based techniques – the technology behind Functional Qualification – and how they are applied to assess the quality of your verification environment. We’ll discuss the problems and weaknesses that Functional Qualification exposes and how they translate into fixes and improvements that give you more confidence in the effectiveness of your verification efforts.
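The toy sketch below illustrates the general idea behind mutation-based techniques; it is not SpringSoft’s implementation, and the miniature “design,” mutation and checker are invented. A fault is injected into the design, and if the testbench never detects it, that points at a checking or observability hole.

```python
import random

def design(a, b, mutated=False):
    """A trivial 'design': 4-bit saturating adder. The injected mutation
    replaces saturation with wrap-around, a subtle functional bug."""
    s = a + b
    if mutated:
        return s & 0xF        # mutation: wraps instead of saturating
    return min(s, 15)

def run_testbench(dut, checks_saturation):
    """Random stimulus; whether the mutation is caught depends entirely on
    the quality of the checking, not on the stimulus."""
    random.seed(0)
    for _ in range(100):
        a, b = random.randint(0, 15), random.randint(0, 15)
        if checks_saturation and dut(a, b) != min(a + b, 15):
            return "mutation detected"
    return "mutation NOT detected -- checking/observability hole"

mutant = lambda a, b: design(a, b, mutated=True)
print(run_testbench(mutant, checks_saturation=False))
print(run_testbench(mutant, checks_saturation=True))
```

A surviving mutation does not mean the design is broken; it means the environment could not have noticed if it were, which is the objective measure Functional Qualification provides.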


2:10pm Presentation: Assertion Synthesis: Enabling Assertion-Based Verification For Simulation, Formal and Emulation Flows

Yunshan Zhu (CEO, Nextop)

Assertion-based verification (ABV) helps design and verification teams accelerate verification sign-off by enhancing RTL and test specifications with assertions and functional coverage properties. The effectiveness of ABV methodology has been limited by the manual process of creating adequate assertions. Assertion synthesis leverages RTL and testbench to automatically create high quality functional assertions and coverage properties, and therefore removes the bottleneck of ABV adoption. The synthesized properties can be seamlessly integrated in simulation, formal and emulation flows to find bugs, identify coverage holes and improve verification observability.
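A generic sketch of the kind of trace mining that underlies assertion synthesis follows; this is not NextOp’s algorithm, and the signals and trace are invented. Candidate properties are proposed and those never contradicted by simulation are kept as assertion or coverage candidates for review.

```python
# Simulated trace: each entry is one cycle of sampled signal values.
trace = [
    {"req": 0, "gnt": 0, "busy": 0},
    {"req": 1, "gnt": 0, "busy": 0},
    {"req": 1, "gnt": 1, "busy": 1},
    {"req": 0, "gnt": 0, "busy": 1},
    {"req": 0, "gnt": 0, "busy": 0},
]

def mine_implications(trace, signals):
    """Keep candidate single-cycle implications 'a==1 -> b==1' that are
    never contradicted anywhere in the trace."""
    candidates = {(a, b) for a in signals for b in signals if a != b}
    for cycle in trace:
        candidates = {(a, b) for (a, b) in candidates
                      if not (cycle[a] == 1 and cycle[b] != 1)}
    return candidates

print(mine_implications(trace, ["req", "gnt", "busy"]))
# {('gnt', 'req'), ('gnt', 'busy')}: every grant coincides with a request
```

In a real flow the surviving candidates would be emitted as SVA properties, reviewed by the designer, and reused across simulation, formal and emulation.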


3pm Presentation: SystemVerilog Testbench – Innovative Efficiencies for Understanding Your Testbench Behavior

Bindesh Patel (Technology manager, SpringSoft)

The adoption of SystemVerilog as the core of a modern constrained-random verification environment is ever-increasing. The automation and sophisticated stimulus and checking capabilities are a large reason why. The supporting standard libraries and methodologies that have emerged have made the case for adoption even stronger, and all the major simulators now support the language nearly 100%. A major consideration in verification is debugging, and naturally debug tools have to extend and innovate around the language. Because the language is object-oriented and more software-like, the standard techniques that have helped with HDL-based debug no longer apply. For example, event-based signal dumping provides unlimited visibility into the behavior of an HDL-based environment; unfortunately, such straightforward dumping is not exactly meaningful for SystemVerilog testbenches. Innovation is necessary. This seminar will discuss the use of message logging and how to leverage the transactional nature of OVM- and UVM-based SystemVerilog testbenches to automatically record transaction data. We’ll show you how this data can be viewed in a waveform or a sequence diagram to give you a clearer picture of the functional behavior of the testbench. For more detailed visibility into testbench execution, we will also discuss emerging technologies that will allow you to dump dynamic object data and view it in innovative ways, as well as using this same data to drive other applications such as simulation-free virtual interactive capability.
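To give a flavor of transaction-level recording, here is a toy sketch, not SpringSoft’s actual tooling; the transaction records and time units are invented. Once begin/end times and payloads are captured from the testbench, they can be rendered as a timeline rather than inspected as raw signal dumps.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    name: str
    begin: int   # simulation time units
    end: int

# Hypothetical records a logger might capture from an OVM/UVM sequence.
log = [
    Transaction("wr addr=0x10 data=0xAA", begin=100, end=140),
    Transaction("rd addr=0x10",           begin=150, end=210),
    Transaction("wr addr=0x14 data=0x55", begin=160, end=190),
]

def sequence_view(transactions, scale=10):
    """Render possibly overlapping transactions as a crude text timeline."""
    t0 = min(t.begin for t in transactions)
    for t in sorted(transactions, key=lambda t: t.begin):
        pad = (t.begin - t0) // scale
        width = max(1, (t.end - t.begin) // scale)
        print(" " * pad + "[" + "=" * width + "] " + t.name)

sequence_view(log)
```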


3:40pm Presentation: What do you need to know for effective CDC verification?

Pranav Ashar (CTO, Real Intent)

The complexity of clock architecture is growing with larger designs. Functionality that was traditionally distributed among multiple chips is now integrated into a single chip. As a result, the number of clock domains is increasing, and clock domain crossing (CDC) verification has become increasingly important and complex. For CDC analysis tools to be effective, designers and verification engineers must have good knowledge of a design’s clock/reset architecture so that complete and accurate constraints can be provided to the tools. This knowledge also helps them interpret CDC analysis results meaningfully and efficiently. This seminar discusses what designers and verification engineers need to know in order to perform effective CDC verification.


Demo and poster sessions will start at 4:20pm showcasing each company’s technology. A dinner reception with hot food will be served from 5 – 8pm. A hot prize drawing of an iPad will be conducted at the end.  Click here for more information and free registration. Hope to see you there, stay warm in Tokyo!

Satisfaction EDA Style!

January 10th, 2011 by Lauro Rizzatti - General Manager, EVE-USA

EVE’s founder and CEO Luc Burgun took home the spoils at DAC last June with his winning performance as an EDA360 Idol, the industry’s top talent show, during the Cadence/Denali Party.  Besting four other contestants, Luc delighted party goers by performing the Rolling Stones classic, “ (I Can’t Get No) Satisfaction,” with lyrics rewritten to appeal to DAC attendees. 

Luc had some fun with this.  His rewritten refrain laments, “I can’t get no satisfaction, I can’t get no bug reaction,” which makes you wonder if the lyrics played a significant role in his win.  After all, we’ve all heard verification engineers complain about the tools they have at hand and the amount of time verification takes out of the project budget. 

Let’s ask the judges.  “At the Denali Finale, all performers were exceptional,” says Judge Simon Davidmann, president and CEO of Imperas.  “Luc stood out for his stage presence, singing ability and a well-chosen song with lyrics everyone associated with EDA can relate to.  His guitar playing was pretty good, too.”

Judge Dennis Brophy, director of strategic business development at Mentor Graphics Corporation, weighs in with:  “Despite formidable competition, Luc Burgun showed us he really knows how to rock out.  His rendition of ‘Satisfaction’ told us that successful transactions are indeed the key to satisfaction!”

In another stanza, Luc sings, “When I’m drivin’ in my car, When EDA man comes on the radio, He’s tellin’ me more and more, About some useless simulation, Supposed to fire design acceleration.” Useless simulation?  Fire design acceleration?  Well, in the real world, we would never advocate that because each verification tool serves a purpose and works on a specific problem.  Real Intent’s verification solutions, for example, use innovative formal techniques in an easy-to-use methodology, solving critical problems with comprehensive error detection.

And, of course, Luc advocates the use of hardware emulation as a solution.  “Well, I’m doin’ billion cycles, And I’m tryin’ this and I’m trying that, And I’m tryin’ to find the weak bug kink, When boss says get emulation later next week, ‘Cause you see I’m on losing streak.”  After all, a new generation of hardware emulators, including EVE’s ZeBu, can handle a billion ASIC gates and offers flexible support for hardware verification, software development, and hardware/software co-verification across multiple SoC applications.  That should give some satisfaction!

In case you missed his performance, you can view it here:

Are you curious about the rewritten lyrics?  Here they are:

Satisfaction EDA Style


I can’t get no satisfaction
I can’t get no bug reaction

‘Cause I try and I try and I try and I try
I can’t get no, I can’t get no

When I’m drivin’ in my car

When EDA man comes on the radio
He’s tellin’ me more and more
About some useless simulation
Supposed to fire design acceleration
I can’t get no, oh no no no
Hey hey hey, that’s what I say

I can’t get no satisfaction
I can’t get no bug reaction

‘Cause I try and I try and I try and I try
I can’t get no, I can’t get no

When I’m workin’ my SoC

And Moore’s Law tells me
How fast my chips can be
But he can’t be a chip jock ‘cause he don’t use
The same ver’fication as me
I can’t get no, oh no no no
Hey hey hey, that’s what I say

I can’t get no satisfaction
I can’t get no bug reaction

‘Cause I try and I try and I try and I try
I can’t get no, I can’t get no

Well, I’m doin’ billion cycles
And I’m tryin’ this and I’m trying that

And I’m tryin’ to find the weak bug kink
When boss says get emulation later next week
‘Cause you see I’m on losing streak
I can’t get no, oh no no no
Hey hey hey, that’s what I say

I can’t get no, I can’t get no
I can’t get no satisfaction
No bug reaction, no satisfaction, no bug reaction

The King is Dead. Long Live the King!

January 3rd, 2011 by Dr. Pranav Ashar, CTO

The New Paradigm


Not long ago, functional simulation and static timing analysis were all there was to RTL verification. In fact, they were all that was needed, because the inner loop of computation and data transfer on a chip was one synchronous block. As chip complexities grew and gate-level simulation became unviable, formal equivalence checking stepped in to pick up the slack with orders-of-magnitude improvement in productivity when comparing gate and RTL representations. But the paradigm remained the same even as the methods changed – verification still needed to cover only the functional input space as comprehensively and efficiently as possible.

Then, somehow, things changed under the hood. Computation on a chip got fragmented out of necessity and with significant consequences. An illustrative example of this trend is the multicore chip by Tilera, Inc. shown here. It is a 64-core processor with a number of high-speed interfaces integrated on chip.



Tile64 Processor Block Diagram

For one, it has become impractical to send a signal from one end of the chip to another in one clock cycle, as well as to send the same clock to all parts of the chip with manageable and predictable skew. It is also energy-inefficient and practically impossible to keep raising the clock frequency. Higher performance can increasingly only be achieved with application-specific cores or on-chip parallelism in processors. As a result, computation is increasingly being done in locally synchronous islands that communicate asynchronously with each other on chip. This was predicted some time ago, but is now truly coming home to roost in the form of heterogeneous and homogeneous multicore chips. With fine-grain fragmentation, communication bandwidths and latencies between the computation islands have come under the design scanner, and protocols for transferring data and signaling between the islands are beginning to push the limits.


A second important change has been that energy and power optimization is now more aggressive than ever. Beyond parallelism-for-performance and custom cores, this trend has also brought once-arcane design techniques into the mainstream. Each island runs at its optimal frequency, and dynamic control of clocks, clock frequencies and Vdd is now par for the course.


Finally, chips are now true systems in that they integrate computation with real-world interfaces to peripherals, sensors, actuators, radios, and you name it. And, these interfaces must talk to the chip’s core logic at their own speeds and per their chosen protocols. Many of these interfaces are also pushing the performance limits of the core logic.


An apt analogy is that it is as if chips have transitioned from an orderly two-party political system to an Italian or Indian multi-party system in which the various parties must align with each other at periodic intervals to accomplish something and each party has its own chief whip to get the troops to toe the party line.


The implication of this trend for chip verification is that it has gotten messier – one can’t cleanly abstract timing from functional analysis any more, i.e., the functional space and the timing space must be explored together. Deterministic functional simulation with fixed clock frequencies and delays does not cover all failure modes, and static timing analysis neglects the dynamic and data-dependent nature of interaction between clock domains in the presence of unrelated clocks and variability. We are still not in the world where we must timing-simulate everything, but the new complexity is daunting nevertheless.
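A small model of why fixed-delay simulation misses these failure modes follows; it is purely illustrative, and the two-bit command and delay choices are invented. Two control bits cross domains through independent synchronizers, and metastability lets silicon delay either one by an extra destination cycle; deterministic simulation only ever sees the aligned case.

```python
import itertools

def crossing(bits_over_time, extra_delay):
    """Model a synchronized control bit: metastability can add 0 or 1
    destination-clock cycles of delay, chosen per bit by 'extra_delay'."""
    return [bits_over_time[max(0, t - extra_delay)] for t in range(len(bits_over_time))]

# A one-hot, two-bit command changes from (1,0) to (0,1) in the source domain.
bit0 = [1, 0, 0, 0]
bit1 = [0, 1, 1, 1]

print("fixed-delay sim:", list(zip(crossing(bit0, 0), crossing(bit1, 0))))  # always one-hot

# Explore all delay combinations the silicon could exhibit.
for d0, d1 in itertools.product([0, 1], repeat=2):
    seen = list(zip(crossing(bit0, d0), crossing(bit1, d1)))
    if (0, 0) in seen or (1, 1) in seen:
        print(f"delays {d0},{d1}: illegal intermediate value in {seen}")
```

Only the exhaustive exploration of delay combinations exposes the illegal intermediate values at the re-convergence point, which is precisely the job of CDC-aware structural and formal analysis rather than deterministic simulation.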


The New Signoff Solution


In order to mitigate this complexity, it is essential that the verification tool first decipher design intent to localize the analysis requirements. This exercise also helps make debug more precise. To be sure, this is harder as optimizations get more aggressive – the boundary between computation and interface blurs and designers resort to ever more innovative techniques. Real Intent was prescient in predicting the new verification paradigm many years ago. After much experimentation and interaction with design companies, we have demonstrated that automatic and reliable capture of design intent is indeed viable for clock domain crossings.


The design intent step triages the design, finds many types of bugs, and sets up local analysis tasks and models (potentially with special algebras to capture the timing and variability effects) for further formal analysis and simulation. I call this the verification four-step of intent extraction, structural analysis, formal analysis and simulation, all integrated into a systematic, hierarchical approach to analysis and reporting for scalability.




We find from our customers that the special verification requirement for clock domain crossings is now an essential part of the signoff process for all chips. Similar customized signoff is also called for in other contexts like DFT and power optimization for which failures cannot reliably be caught with functional simulation. Effectively, the old paradigm of “functional simulation + static timing analysis” is obsolete and the sign-off flow today looks more like the figure shown above.






Hardware Emulation for Lowering Production Testing Costs

December 20th, 2010 by Lauro Rizzatti - General Manager, EVE-USA

The sooner you catch a fault, the cheaper it will be, or so the user surveys tell us.  These surveys, conducted by various data gathering services, are meant to determine the cost of pinpointing design faults during the creation of chips.  Each one proves conclusively that costs increase by a factor of 10 at each step in the development cycle. 

It’s hard to find a better example than the infamous Pentium bug dating back to 1994.  The cost to fix the bug that found its way inside thousands of PCs was more than a billion dollars because the design fault made its way into a manufactured product.  Talk about breaking the budget and tarnishing a stellar technical reputation!

Of course, EDA companies have long touted their design-for-testability (DFT) methodologies.  Thorough and exhaustive functional verification during the development cycle is still a good strategy and an economical way to find and remove design faults, though it’s becoming less practical.  Systems-on-chip (SoCs) are populated with arrays of cores, including CPUs and DSPs, embedded memories, IP peripheral blocks, custom logic and so on.  With all of this, functional verification becomes a major bottleneck before tapeout, reinforcing the industry-wide consensus that functional verification consumes in excess of 70 percent of the development cycle. 

And, that may not be enough!  When undertaking functional verification using HDL simulators, the trade-offs between the amount of testing and the time allocated for the task often leave undetected faults inside the design.

Herein lies the conundrum.  Functional verification can detect faults early in the design cycle, reducing the cost of finding them.  And yet, a thorough job of cleaning a design would take so long that the cost would exceed any reasonable budget.

A new generation of hardware emulators is changing all of this.  Unlike traditional emulators that cost small fortunes, limiting ownership and adoption to a few units at large companies with equally large budgets, these new functional verification systems are much more cost effective.  They’re also faster. 

These emulators, implemented on small footprints, are powered by the latest FPGAs and driven by robust software.  They are accessible to SoC engineers and embedded software developers and can be used throughout the design cycle.  Designs target a variety of fast-paced markets, including networking, communications, multi-media, graphics, computer and consumer.

An example is ZeBu from EVE.  It supports a comprehensive test environment to exhaustively exercise all functions of a design.  Its interactive debugging, once the exclusive preserve of the software simulator, enables a higher degree of verification and testing than is possible with traditional software tools.

Design teams have finally found a means to uncover those nasty and difficult bugs, saving the budget and making management happy.  These new functional verification tools, such as emulation, offer orders of magnitude more testing than available using software tools but with the same financial investment.  Check the recent user surveys and see for yourself.

What do you need to know for effective CDC Analysis?

December 3rd, 2010 by Al Joseph, Sr. Applications Consulting Engineer

The complexity of clock architectures is growing with larger designs. Functionality that was traditionally distributed among multiple chips is now integrated into a single chip. As a result, the number of clock domains is increasing. Power management is a dominant factor that impacts clock architecture (gating, power domains, voltage scaling). Designing for multiple functional modes adds to clock architecture complexity. All of these issues add logic to the clock trees. As a result, it is becoming more complex to verify designs for glitch and metastability issues.

There are very few established standards/methodologies for managing clock architectures.  Even the few established standards such as UPF (Universal Power Format) for power management and synthesis for power don’t go far enough to be clock architecture-aware with respect to glitch, data stability and metastability issues.  For example, clock gating insertion is done without full awareness of asynchronous crossings.  In fact, there are a myriad of issues relating to asynchronous clock domains that don’t have established standards.  Some of these are:

  • Single bit synchronizers (a behavioral sketch follows this list)
  • Asynchronous FIFOs
  • Handshake structures
  • Clock Gating
  • Re-convergence
  • Design practices to mitigate glitches in asynchronous crossings
  • Asynchronous/Synchronous resets crossing domains
  • Reset Gating
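For readers less familiar with the first item, here is a behavioral sketch of a single-bit two-flop synchronizer. It is Python pseudocode for illustration only, not synthesizable RTL or any particular tool’s model; the stimulus is invented, and the first stage is allowed to resolve to the old value when the input changes near the destination clock edge.

```python
import random

class TwoFlopSync:
    """Behavioral model of a single-bit two-flop synchronizer. When the
    asynchronous input changes near the destination clock edge, the first
    flop may resolve to either the old or the new value; the second flop
    shields downstream logic from that uncertainty."""

    def __init__(self):
        self.ff1 = 0
        self.ff2 = 0

    def clock(self, async_in, changed_near_edge):
        self.ff2 = self.ff1                 # second stage
        if changed_near_edge and random.random() < 0.5:
            pass                            # first stage resolves to its old value
        else:
            self.ff1 = async_in             # first stage captures the new value
        return self.ff2                     # what downstream logic sees

sync = TwoFlopSync()
prev = 0
for t, value in enumerate([0, 0, 1, 1, 1, 1]):
    out = sync.clock(value, changed_near_edge=(value != prev))
    print(f"cycle {t}: in={value} out={out}")
    prev = value
```

The output is clean but arrives one or two destination cycles late, and the exact latency is nondeterministic, which is why multi-bit buses cannot simply be synchronized bit by bit.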

In order to manage the design, implementation and verification of clocks in a design, more members in the design team need to be “clock/reset architecture” and “clock/reset implementation” aware.   This awareness is necessary for verifying correct functionality of the clocks when using semi-automatic CDC analysis tools and/or manual processes such as design reviews.

The clock architecture needs to be understood to generate requirements for the clock/reset networks.  Design standards for implementation can be generated from these requirements.  The design standards drive verification strategy: what can be automated using CDC tools and what must be relegated to other methods.  An example of what cannot be verified by CDC tools is the selection of an invalid combination of clocks in functional mode.

The following components need to be considered with regard to how they affect clock/reset architecture:

  • Timing:  Static Timing Analysis & Clock Tree Synthesis
  • Mode Selection: Test/Functional Mode, Clock mode select (Multiple Functional Modes), Configuration registers
  • Power: Gating Control, Voltage Scaling
  • Testability: Clocks for Scan, Clocks for At-Speed, BIST, Lock-up latches
  • Quasi-static Domains

The clock/reset architecture specification needs to contain the following details in order to meet the requirements of design implementation and verification:

– CDC Implementation Style and Design Practice

  1. Single Bit Sync
  2. Common Enable Sync (Data Bus)
  3. Fast-to-Slow Crossings (FIFO: gray-code, read-before-write, write-before-read; see the gray-code sketch after this list)
  4. Multi-mode crossings (multiple frequency modes;  Data stability)
  5. Data Correlation (Handshake)
  6. Synchronizer cycle jitter management
  7. Re-Convergence management of control bit crossings
  8. Clock Gating management
  9. Internally generated reset management
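Item 3 relies on gray-coded FIFO pointers. The property that makes them safe to cross is that consecutive values differ in exactly one bit, so a destination-domain sample can never capture an inconsistent mix of old and new bits. A quick sketch of the encoding and of the property a CDC check depends on (illustrative only):

```python
def to_gray(n):
    """Binary to Gray code: adjacent counter values differ in one bit."""
    return n ^ (n >> 1)

def hamming(a, b):
    """Number of bit positions in which a and b differ."""
    return bin(a ^ b).count("1")

# The property a gray-coded FIFO pointer crossing relies on:
for i in range(15):
    assert hamming(to_gray(i), to_gray(i + 1)) == 1

print([format(to_gray(i), "04b") for i in range(8)])
# ['0000', '0001', '0011', '0010', '0110', '0111', '0101', '0100']
```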

– Clock Domain Specifications

  1. Synchronous Domains
  2. Asynchronous Domains
  3. Quasi-static Domains (very slow clocks)
  4. Exclusive Domains (clocks that are active when other related domains are static, such as configuration register writing)
  5. Resets and their Domains

– Functional Mode Configuration Specifications

  1. Mode Control Pins and logic states
  2. Configuration Registers settings
  3. For multiple functional modes, mode control settings

– Primary Input/Black Box Specifications

  1. Clock domains for the primary inputs
  2. Clock domains for black box outputs

– Design Initialization Specifications

  1. How to initialize the design (critical for CDC verification that requires formal verification)


The above specifications are critical to ensuring a setup that will result in a complete and accurate CDC analysis. This will minimize the most frequent complaints about CDC analysis tools: noise (voluminous messages), false violations and incomplete analysis. Also, by documenting the CDC specifications, all project engineers will be better equipped to review the validity of CDC analysis results.
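As one way to make such a specification reviewable and machine-usable, the sketch below captures a few of the items above in a hypothetical, tool-neutral form. The field names, values and the tiny completeness check are invented for illustration; they are not Meridian CDC’s constraint format or any other tool’s.

```python
# A hypothetical, tool-neutral capture of some of the specification items above.
clock_spec = {
    "clock_domains": {
        "clk_core": {"period_ns": 2.0,  "kind": "asynchronous"},
        "clk_io":   {"period_ns": 8.0,  "kind": "asynchronous"},
        "clk_cfg":  {"period_ns": 20.0, "kind": "quasi_static"},
    },
    "resets": {"rst_n": {"domain": "clk_core", "style": "async_assert_sync_deassert"}},
    "mode_pins": {"test_mode": 0, "scan_enable": 0},
    "primary_inputs": {"host_data": "clk_io"},
    "black_box_outputs": {"u_pll/clk_out": "clk_core"},
}

def lint_spec(spec):
    """Minimal completeness check: every primary input and black-box output
    must be assigned to a declared clock domain."""
    domains = spec["clock_domains"].keys()
    for name, dom in {**spec["primary_inputs"], **spec["black_box_outputs"]}.items():
        if dom not in domains:
            print(f"undeclared domain '{dom}' on {name}")

lint_spec(clock_spec)   # silent: the toy spec is self-consistent
```

Capturing the specification in one reviewable place, whatever the format, is what lets the whole team check the constraints fed to the CDC tool instead of trusting a single engineer’s setup.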

Even with the best specifications, translating them into constraints for the CDC tools requires a robust setup validation methodology to identify missing constraints.  Real Intent’s Meridian CDC tool has such a robust setup validation flow, with supporting graphical debug/diagnosis to provide guidance on the completeness and accuracy of constraint specifications.  Ease of setup has been cited as a key consideration by many of our recent customers who have switched to Meridian CDC.

In summary, CDC analysis and verification is increasing in complexity.   The effectiveness of CDC analysis tools requires that designers have detailed knowledge of the design’s clock/reset architecture so that complete and accurate constraints can be provided to CDC tools and designers can meaningfully and efficiently review the validity of CDC analysis results.

A version of this article was previously published by Chip Design at

The SoC Verification Gap

November 15th, 2010 by Mike Stellfox, Distinguished Engineer, Cadence

If you have been talking to anyone at Cadence, or others in the industry these days, I’m sure you have heard about the EDA360 vision.  If you are an engineer, you are probably saying – what is this “marketing fluff” and how does it help me?  Let me tell you what it means from my perspective, as somebody whose job it is to work with customers to understand the latest verification challenges and figure out what Cadence needs to do to address those challenges.  In short, think of EDA360 as a wake-up call, a heads-up that we understand there are big challenges our customers are facing in realizing SoCs/systems, and this requires something far beyond EDA-business-as-usual.  At this point, those of you who may know me are probably saying to yourself – “Is Mike Stellfox really talking about this EDA360 stuff, has he sold out…?”  The answer is “no,” I have not sold out, and let me tell you why.

I have been spending a lot of time lately with engineering teams developing really big SoCs, and I’ve realized we have a significant challenge here – there is a HUGE gap between how SoCs are verified today and what is needed in order to have a scalable and efficient SoC development process.  The challenge of bringing a new SoC to market is exactly why a colleague of mine coined the phrase “time to integration”.  Today’s SoCs are all about integration – integrating IP blocks, integrating analog content, and integrating more and more of the software stack.  While it is true that all of this integration work still includes design challenges, the bigger issues around improving time to integration are centered on improving the entire SoC verification process.  I have seen very few well-structured, methodology-based approaches to how customers are verifying their SoCs.  There are many ad-hoc processes and some internal tools and scripts that attempt to improve the situation, but when it comes to complex SoCs a much more structured and automated approach to verification is needed.  This opportunity to bring a more structured, methodology-based approach to integrating and verifying SoCs will likely need to be developed in a different way. I don’t think it will be feasible to simply understand the requirements, go back to Cadence R&D and ask them to develop some sort of silver-bullet “SoC Verification Tool”.  It is going to require a different approach, one that requires tight collaboration with customers developing these complex SoCs.

Within Cadence, we now have an organization known as the SoC and Systems Group, whose charter it is to define and drive solutions for improving SoC and System realization, where a significant focus will be on improving time to integration and verification of complex hardware and software systems.   Here are some of the key challenges I have seen with regard to integrating and verifying SoCs.

  • SoCs rely on several execution platforms in order to verify the integration, develop software, and verify that the application level use cases meet the requirements of the end system.  This includes a TLM-based Virtual Platform, RTL simulation, RTL Acceleration/Emulation, FPGA prototype, and links to the post-silicon environment.  It is a huge effort to develop and maintain the models and verification environments for each of these platforms, and it is not easy to reuse stimulus, checks, and coverage metrics across each platform. 
  • Debugging at the SoC level is like trying to find a needle in a haystack: the bug might be hidden somewhere in the hardware, the software, or the verification environment.  SoC-level debug is further complicated by the fact that it is often necessary to reproduce the bug on a different execution platform where there is much better debug visibility.
  • Today IP is not optimized for integration within the SoC.  There is a need to develop and deliver the verification content with the design IP in such a way that it is optimized for integration to reduce the time and effort for integration verification. 
  • Given the complexity of the software content for most SoCs, and all the ways the software might interact with the hardware, there is a need for better tools for automating the creation of software driven tests, and for debugging hardware and software together.
  • More and more analog content is being integrated into SoCs so there is a need to more thoroughly verify the integration between the digital and analog blocks by including reasonably accurate analog models in the IP and SoC verification environments.
  • In order to effectively manage a large-scale multi-geography SoC development project, there needs to be clear metrics and milestones for tracking and reporting the progress of all the SoC development activities. 

These are the core challenges that I see need to be addressed to close the SoC verification gap.  Admittedly, today the gap is rather wide, but I am confident that with the right focus and a complete understanding of our customers’ needs, we will align with much of what is behind the EDA360 vision and close this SoC verification gap in the coming years.

