 Real Talk

Archive for 2010

A Look at Transaction-Based Modeling

Monday, September 6th, 2010

A relatively new methodology for system-on-chip (SoC) project teams is transaction-based modeling, a way to verify at the transaction level, using SystemVerilog-based testbenches, that a design will work as intended with standard interfaces such as PCIe.


This methodology enables project teams to synthesize the processing-intensive protocols of a transaction-based verification environment into an emulation box, along with the design under test (DUT).  They can then accelerate large portions of the testbench with the DUT at in-circuit emulation (ICE) speeds.  Increasingly, this is done concurrently with directed and constrained random tests.  The adoption of this methodology has been accelerated by the advent of high-level synthesis from providers such as Bluespec, Forte Design Systems and EVE.


Today's emulators look and act nothing like previous generations.  They are fast, allowing project teams to simulate a design at high clock frequencies, and more affordable than ever.  For an emulator to be a complete solution, however, it must be able to interact effectively with designs without slowing them down.  This is where transaction-level modeling can help, by providing checkers, monitors and data generators with the throughput the DUT requires.


Benefits of transaction-level modeling include the speed and performance needed to handle bandwidth and latency requirements.  For example, the latest generation of emulators can stream data to and from a design at up to five million transactions per second.


Reuse is another benefit: emulation can separate protocol implementation from testbench generation in such a way that testbenches can be assembled from building blocks.


Various languages can be used to build transaction-based testbenches, including C, C++, SystemC or SystemVerilog with the Standard Co-Emulation Modeling Interface (SCE-MI) from Accellera.  Testbenches drive the data to register transfer level (RTL) design blocks. 


Project teams most frequently buy off-the-shelf transactors for common protocols and design their own for a unique interface or application.  Typically, a custom transactor for an interface is a Bus Functional Model (BFM) or Finite State Machine (FSM) written in Verilog register transfer level (RTL) code or in behavioral SystemVerilog using a transactor compiler.  Often, project teams already have a similar piece of code that can be converted into a transactor.
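
As a minimal sketch of what such a custom transactor might look like, consider the following SystemVerilog fragment (the bus protocol, module and signal names here are illustrative assumptions, not tied to any particular standard or SCE-MI implementation):

  module simple_write_bfm (
    input  logic        clk,
    input  logic        ready,   // DUT accepts the transfer when high
    output logic        valid,
    output logic [31:0] addr,
    output logic [31:0] data
  );
    initial valid = 1'b0;

    // One transaction in, several pin-level cycles out: the testbench (in C,
    // C++, SystemC or SystemVerilog) passes an (address, data) pair and the
    // BFM expands it into cycle-accurate bus activity for the DUT.
    task automatic send_write(input logic [31:0] a, input logic [31:0] d);
      @(posedge clk);
      valid <= 1'b1;
      addr  <= a;
      data  <= d;
      do @(posedge clk); while (!ready);  // hold until the DUT takes the beat
      valid <= 1'b0;
    endtask
  endmodule

In a co-emulation flow, the untimed testbench side would call a task like send_write across the transaction-level interface, while the BFM itself runs inside the emulator next to the DUT.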


Project teams have reported numerous benefits from this emerging methodology, especially that they can develop tests faster than with directed testing.  Moreover, they don't need in-depth knowledge of the SoC or protocol.  And testbenches can be reused when the target standard appears in another design.


Pay a visit to any project team anywhere in the world and you’ll find that they implement a whole host of verification and test methodologies on an SoC design.  More and more, transaction-based modeling is gaining widespread acceptance on even the most complex of designs, shortening time to market and easing the project team’s anxiety.

The 10 Year Retooling Cycle

Monday, August 23rd, 2010

I still remember the enthusiastic talk around the 10-year EDA retooling cycle in 2000.  There was optimism fueled by the dot-com boom. Moore's Law was in full force. The communications industry was in its infancy, ready for innovative new products. Products were evolving quickly, pressuring designers to produce more and more in less time. This, in turn, fueled an unprecedented demand for new and innovative EDA solutions.


Those were the days…  EDA startups were abundant. There were many trade shows, most notably DAC.  Hotels were sold out! The big 3 had huge parties, and oh yes, design engineers could learn of all the new developments over the week.  You really needed a good pair of walking shoes in those days… It was like going to a candy store!


From a methodology perspective, automation and re-use quickly became a big focus. Mixed signal designs, multiple clock domains and advanced power management schemes became the norm. Simulators did not have enough horsepower to test all aspects of a chip. Accelerators and emulators became more heavily used, but with them came additional issues.


Standards evolved around these key issues. The Verilog language evolved into SystemVerilog. Standards defined good coding practices, including re-use practices, and lint tools became more heavily utilized to improve design quality and to ensure that re-use guidelines were followed.


It is now 2010. The big EDA companies have adopted an all-inclusive volume sales model, putting the squeeze on the smaller companies that have to compete with their “free” software.  As a result, there are fewer EDA companies providing innovation. DAC is a much smaller show. And we don't hear much about this 10-year re-tooling cycle.


But Moore's Law is still active, albeit at a slower pace.  Chip sizes continue to grow and complexity continues to increase.  The time-to-market pressures are as strong as ever, if not stronger. Verification continues to pose key challenges that beg for automation. And, not surprisingly, the 10-year-old software has slowly aged and no longer meets today's design requirements.


Some lint tools run for tens of hours on designs that could be analyzed in minutes.  Some CDC tools run for days when hours are possible.  Some rule-checking tools produce hundreds of thousands of warnings – the wasted debugging effort can add up to an army of engineers.  The confluence of clock domains, power domains and DFT requirements has added significant pressure on design methodologies.


There may be fewer EDA companies these days but innovation is still going strong.  Products for the next 10 years are available and being adopted. Precise lint tools with blazing performance are available. Precise CDC tools make it possible to achieve reliable sign-off on today's designs. New innovations are underway for solving complex issues such as X-optimism and X-pessimism in simulation.  Automatic formal analysis tools quickly improve design quality with minimal effort.  SDC tools ensure the effectiveness of time-consuming STA efforts. The 10-year retooling cycle is in effect again.


So what tools are in your flow?  Are they current?  Are they working well?  Can your supplier respond to your needs?  Are you getting what you paid for?


You need today’s innovations to deal with tomorrow’s problems!

Hardware-Assisted Verification Usage Survey of DAC Attendees

Monday, August 2nd, 2010

Tradeshows and technical conferences serve as great places to survey the verification landscape and the Design Automation Conference in June was no exception.


EVE took the opportunity to poll visitors to its booth with a survey similar to the one used at EDSFair in Japan earlier in the year.  Interestingly enough, some of our findings in the DAC survey tracked with findings from EDSFair.  In other cases, they were widely dissimilar.


Our DAC attendees who took part in the survey included designers/engineers, managers, system architects, verification/validation engineers and EDA Tool Support or CAD managers.


Both sets of respondents noted that challenges are getting more complex as design teams merge hardware and software into systems on chip (SoCs).  The Verilog Hardware Description Language (HDL) wins out as the number one language for both ASIC and testbench design, with SystemVerilog a distant second.  DAC attendees ranked SystemC ahead of VHDL for ASIC design, while VHDL is used more than SystemC for testbench design.


Surprisingly, while more than 70% answered that they own between one and 100 simulation seats, 17% claimed to have more than 200 seats, compared to only 12% with between 100 and 200 seats.  Our conclusion is that very large simulation farms are more popular than merely large ones.


Unlike their counterparts at EDSFair, DAC attendees are less than satisfied with their current verification flow; almost 70% of EDSFair attendees claimed to be satisfied with theirs.


DAC attendees noted the same dissatisfaction with runtime performance and rated the setup time of their verification flow poorly.  Efficiency in catching corner cases and reusability were both ranked between less than satisfied and fairly satisfied.


When asked to rate the importance of various benefits of a hardware-assisted verification platform when making a purchasing decision, they chose runtime performance as most important, followed by price.  Visibility into the design and In-Circuit Emulation (ICE) came next.  Compilation performance, simulation acceleration and transaction-based design, while considered important, received lower grades than the other criteria.


While simulation acceleration doesn’t rank highly in purchasing criteria, those surveyed claimed that simulation acceleration is the mode they use most for their hardware-assisted verification platform.  ICE is listed as the second most used mode, and stand-alone emulation came in third.  Few use it for transaction-based emulation.  By comparison, the EDSFair survey revealed that transaction-based emulation was second after simulation acceleration and significantly more popular than stand-alone emulation and ICE.


The primary use for hardware-assisted verification is ASIC validation, with hardware/software co-verification a close second, a trend we also observed with EDSFair attendees and one most likely due to the move to include embedded software in SoCs.

Emulation can be used for hardware/software co-verification because it verifies the correctness of both hardware and embedded software simultaneously.  It can quickly process billions of verification cycles at high speed.  Unlike older generations, which were prohibitively expensive, today's emulators are competitively priced, a key consideration for EDSFair and DAC attendees.


The news from Japan in January was positive and I projected that the widespread adoption of hardware/software co-verification would be good for EDA's verification sector in 2010.  While the DAC survey didn't offer up the same encouraging signs, it did confirm that hardware/software co-verification is taking root.  At EVE, we consider that a plus for the hardware-assisted verification market segment.

Leadership with Authenticity

Monday, July 26th, 2010

An interesting title…What is leadership with Authenticity?

Well, let’s discover…first of all let’s break it down; we will start out by talking about leadership.

Is leadership telling people what they want to hear to keep them going the direction you think they should go? Or is leadership just taking flight and hoping that people follow?  Wikipedia defines it as “a process of social influence in which one person can enlist the aid and support of others in the accomplishment of a common task…”

Leadership is a big responsibility, but it is also something that a person needs to exercise with finesse.  If everyone is going in one direction and you decide to change the course dramatically, it can be very painful.

I try to think of it as steering a large passenger liner at full speed ahead. It takes a lot to turn a ship of that size, and if you turn too abruptly there are huge consequences.  There would be chaos because people wouldn't know what is happening, what to do or what to expect. If people were in the wrong place it could be catastrophic for them. They would be unprepared and might fall off the ship, along with valuable cargo… and then there goes your crew!

If this is the intention then you have accomplished the goal but usually it takes a lot to get the ship right again and sometimes it is impossible.  So remember, changing course abruptly is not a good practice when steering a ship or when running a business. 

Let’s bring Authenticity into the picture…What is Authenticity?

Again, I refer back to Wikipedia for a definition: It is “a particular way of dealing with the external world, being faithful to internal rather than external ideas.“

So authenticity means to uncover your true self.  “We live in a culture that is starving for authenticity.  We want our leaders, co-workers, friends, family members, and everyone else that we interact with to tell us the truth and to be themselves.  Most important, we want to have the personal freedom and confidence to say, do and be who we really are, without worrying about how we appear to others and what they might think or say about us.” (Mike Robbins)

Sadly, however, even though we may say we want to live in a way that is true to our deepest passions, beliefs, and desires, most of us don’t.   WHY? Starting at a very early age, we are taught by our parents, spouses, teachers, friends, co-workers, politicians and the media that it’s more important to be liked and to fit in than it is to be who we truly are.  In addition, many of us assume that who we are is not good enough and therefore we’re constantly trying to fix ourselves or to act like others who we think are better than us.

Oscar Wilde…a famous author and poet said… “Be yourself, everyone else is already taken.”  To me this summarizes authenticity.

Bringing the two together is an art and a process that you develop along the way.  I believe that the most successful leaders are the ones that are authentic.  We are all unique and so our styles differ but if the basic foundation is Authenticity or being Real, that is a fantastic start.  How you go about enlisting the aid and support of others is more effective when you do it in your style.  Have fun!  Lead with Authenticity.

Clock Domain Verification Challenges: How Real Intent is Solving Them

Monday, July 19th, 2010

With chip-design risk at worrying levels, a verification methodology based on just linting and simulation does not cut it. Real Intent has demonstrated the benefit of identifying specific sources of verification complexity and deploying automatic, customized technologies to tackle them surgically. Automatic and customized don't seem to go together at first glance: automatic is about maximizing productivity in setup, analysis and debug, while customized ensures comprehensiveness. That's the challenge for clock-domain verification as well as for the plethora of other failure modes in modern chips. Clock-domain verification is certainly a case in point. Its complexity has grown tremendously:

Signal crossings between asynchronous clock domains: The number of asynchronous domains approaches 100 for high-end SOCs optimized for performance or power. The chip is too large to distribute the same clock to all parts. Also, an SOC is more a collection of sub-components, each with its own clock. Given the large number of domains and crossings, the myriad protocols for implementing the crossings, and the corresponding large number of failure modes, writing templates to cover all scenarios is very expensive. Template-based linting on such chips, with millions of gates, is very slow, taking days. Additionally, the report from a template-based analysis is so voluminous that it challenges the team's ability to analyze it manually, causing real failures to be overlooked.

Widely disparate and dynamic clock frequencies: Analyzing for data-integrity and loss in crossings under all scenarios is non-trivial and beyond linting alone.

Proliferation of gated clocks: Power management and mode-specific gated clocks are now common, introducing a manifold verification problem. (1) Clock setup must be correct for meaningful verification. Detailed setup analysis highlights errors in clock distribution or the environment spec. (2) Functionally verify the designs with gated clocks. (3) The variety of gated clock implementations creates a variety of glitching possibilities. Clock glitches are very hard to diagnose. You want to know about this possibility as early as possible. Given the variety of gated-clock types and glitching modes, a template-based approach is a recipe for productivity loss and slow analysis.

Reset distribution: Power-up reset is much more complex now to optimize for power and routing. Full verification of the reset setup prior to subsequent analysis is essential.

Timing optimization: Optimizations like retiming may violate design principles causing glitch potential at the gate-level even if there was none in RTL. Glitch analysis must be an integral part of verification and the tool must operate on RTL as well as gates. Template methods make it harder since multiple templates may be required to support RTL and gate as well as mixed languages.

Clock distribution: Previously second-order issues, like clock jitter in data/control transfers, have more impact in deep-submicron (DSM) processes. Even synchronous crossings must now be designed carefully and verified comprehensively.

Full-chip analysis: Speed, scalability, precision and redundancy-control become key considerations in full chip analysis with many hierarchy levels and 100 million gates.

Real chip respins are revealing: (1) Asynchronous reset-control crossing clock domains but not synchronously de-asserted, caused a glitch in control lines to an FSM. (2) Improper FIFO-protocol controlling an asynchronous data crossing caused read-before-write and functional failure. (3) Reconvergence of non-gray-coded synced control signals to an FSM caused cycle jitter and an incorrect transition. (4) Glitch in a logic cone on an asynchronous crossing path that was latched into the destination domain corrupting captured data. (5) Gating logic inserted by power-management tools resulted in clock glitch.
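
As a minimal sketch of respin scenario (4), the fragment below (module and signal names are illustrative) registers the logic cone in the source domain and then passes the single registered bit through a standard two-flop synchronizer, so that a glitch on the crossing path is never captured by the destination clock:

  module cdc_sketch (
    input  logic clk_src, clk_dst,
    input  logic sel, req_a, req_b,
    output logic req_sync
  );
    logic req_src_q, sync_ff1;

    // Register the logic cone in the source domain: only a clean,
    // glitch-free flop output ever crosses into the other domain.
    always_ff @(posedge clk_src)
      req_src_q <= sel ? req_a : req_b;

    // Standard two-flop synchronizer in the destination domain gives
    // metastability a full cycle to resolve before the value is used.
    always_ff @(posedge clk_dst) begin
      sync_ff1 <= req_src_q;
      req_sync <= sync_ff1;
    end
  endmodule

Had the mux fed the clk_dst flop directly, a momentary glitch could be latched as corrupted data in the destination domain.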

CDC verification is not solved adequately by simulation or linting. It has become a true showstopper, and an effective solution is a must-have.  Real Intent's approach understands the failure modes from first principles in order to develop symbiotic structural and formal methods that cover them comprehensively and precisely. Structural and formal methods combine to check the clock and reset setup, metastability errors, glitching, data integrity and loss, and signal de-correlation. This approach allows us to auto-infer designer intent and the appropriate checks for each crossing and for the clock/reset distribution. As a result, our structural analysis runs 10x faster and does not require the designer to develop templates. Formal methods analyze for failures under all scenarios efficiently and comprehensively, without a laborious enumeration of scenarios. For example, our free-running-clock feature checks for data loss across all frequency ratios. We complete the solution with an automatic link to simulation that models metastability and adds checks to the testbench. These solutions are offered in Real Intent's Meridian product family.

Building Strong Foundations

Monday, July 12th, 2010

I recently joined Real Intent with over 10 years of experience developing and supporting assertion-based methodologies, and I have seen the technology move from research toward the mainstream.  Formal technologies have proven to have a lot of value for functional verification and for coverage, but having to learn evolving assertion languages and techniques has slowed their adoption.  I like Real Intent's approach of automating the verification effort.

In the very early stages of design, linting is a basic step. Lint checkers for HDL have been around for some time, and they continue to become more sophisticated.  Ascent Lint runs very fast because the checks are all static, and the user can easily configure which checks are desired.

In the next stage, also early in the process but after linting, Real Intent has what is my favorite tool – Implied Intent Verifier (IIV).  They have adapted formal verification techniques to automatically detect issues that can result in bugs that might be difficult to trigger and detect in simulation.  Think of this as automatically generated assertions. Formal verification without having to write assertions!  It is all automatic.  IIV goes beyond static linting to detect bugs that require sequential analysis.

An example of a significant IIV check is the one for state machine deadlocks. Deadlocks are the type of symptom that foreshadow bugs that can result in product recalls if not found. Finding them often depends on whether the testbench author thinks to test the scenario.  IIV provides detection of deadlock in one FSM and between two FSMs, without the need to write any testbench or assertions.  For example,
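
a minimal SystemVerilog sketch of such a deadlocked pair might look like this (module, signal and state names are purely illustrative):

  module fsm_a (
    input  logic clk, rst,
    input  logic b_ready,
    output logic a_ready
  );
    logic [1:0] state;

    // A only asserts a_ready after it has left state 00 ...
    assign a_ready = (state != 2'b00);

    always_ff @(posedge clk or posedge rst) begin
      if (rst)
        state <= 2'b00;
      else
        case (state)
          2'b00:   if (b_ready) state <= 2'b01;  // ... yet it waits here for B
          2'b01:   state <= 2'b10;
          default: state <= 2'b00;
        endcase
    end
  endmodule

  // fsm_b is the mirror image: it asserts b_ready only after leaving its own
  // state 00, where it waits for a_ready. After reset, neither signal is ever
  // asserted and both machines are stuck in state 00.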



This is the classic example of two state machines that are waiting on one another.  In this case a single-state deadlock (SSD) is reported for both state machines, and the deadlocked state is state 00.  This is because state machine A is waiting on a signal from state machine B and vice versa.

Many other errors are also reported that have the same root cause.  One of the unique features of IIV is that it distinguishes secondary failures. The report focuses your effort on the root cause of a failure, in this case the SSD, and you can ignore the secondary failures.

While this example is very simple for the purpose of illustration, you can imagine a similar scenario in protocols. Take, for example, a peer-to-peer handshake where both peers request to transmit at the same time, causing them both to go to a state where they wait for an acknowledge signal from the other.  This would be a fundamental state machine design issue.  Simulations would pass unless the corner case where both request simultaneously is tested. As shown in the simple example above, this can also happen as the result of a simple typo.

You can get a fast start in functional verification by exploiting the verification features provided in Real Intent’s tool suite.  Common bugs are quickly and automatically weeded out, building a strong foundation for the real work of verifying your specific design intent. Check out Real Intent’s complete product line at .






Celebrating Freedom from Verification

Monday, July 5th, 2010

Happy Fourth of July!  If you’re celebrating Independence Day today, chances are you have the time to do so because of a set of tools that freed you from the drudgery of endless verification cycles.

Yes, let’s give thanks as an industry to the plethora of commercial tools that reduce the amount of time consumed by laborious verification tasks.  They take many forms today, from hardware emulation and formal verification to simulation and acceleration, to name just a few.  All have been developed to reduce the verification portion of the design cycle –– purported to be in the range of 70% –– and to lessen the burden you carry.

Each year, the verification challenge gets worse as SoC design sizes and complexity increase, stressing and periodically breaking existing design flows.  New data shows that the average design size now exceeds 10 million ASIC-equivalent gates ––  don't get me started on what an ASIC-equivalent gate is, I'll save that for another post –– with individual blocks running between two and six million ASIC-equivalent gates.

Exercising each and every one of those gates by an old rule of thumb would require a number of cycles equal to the square of the number of gates.  For a 10-million-gate design, that is 100 trillion cycles –– yes, a one followed by fourteen zeros.  That's a lot of verification cycles and a lot of headaches.

And, lest we forget, the time-to-market push continues unabated.

How do we cope with this triple challenge of gates, cycles and time to market and tame the tiger?  Only functional verification can thoroughly debug a design before silicon availability, if you have the time to do it. 

Exhaustive functional verification carried out via an RTL simulator is no longer a practical or viable alternative because of its abysmal performance –– simulators are just too slow to fully analyze and verify larger chips.  And almost all of today's chips are large and getting larger.

Maybe not all is lost.  Emulators serve as a neat solution to the runtime problems that afflict these 25-year-old logic simulators.  They are used to identify bugs and can alleviate the functional verification bottleneck by executing at megahertz speeds.  They shorten the time needed to develop and validate hardware or embedded software within constantly shrinking schedules.  Emulators improve product quality by increasing the level of testing of a design to meet the quality standards expected in today's feature-rich electronic devices.

You can forget whatever you may have heard about the older “big box” emulators.  New generations of hardware emulators fit in small-footprint chassis and deliver execution speeds close to real time, making them useful as in-circuit test vehicles.  Their runtime performance is impressive, yet they are far less expensive, easier to use and flexible enough for the current SoC project or the next one.

Even with these tools, verification continues to be a time-consuming process and often the bottleneck, but many of them have given you the freedom to enjoy the day off.  Celebrate the holiday and let freedom ring!

My DAC Journey: Past, Present and Future

Monday, June 28th, 2010


I have a unique perspective on DAC since I have attended DAC in many different capacities over the last 15 years: as a poor student, a lucky customer, an excited vendor participant, an independent consultant, a free spirit and a hard working vendor organizer.  The following log describes the many DACs that I have attended and my impressions.

·1995 (San Francisco):  My first DAC as a graduate student. Our research group (Zhi Wang and me, led by Prof. Malgorzata Chrzanowska-Jeske) from Portland State University won a Design Automation Conference Scholarship Award for our project “Fine-Grain Locally-Connected FPGAs: Synthesis and Architecture”. It was an exciting event for me since I had been in the U.S. for only one year. Being able to participate in the academic sessions and meet with other researchers was simply fantastic!


·1996 (Las Vegas): As a CAD engineer from Lattice Semiconductor Corporation.  As a customer of EDA tools, I was treated to my very first expensive sushi dinner by a vendor's salespeople. The tradeshow floor was exciting and overwhelming. All the exhibitors, presentations, giveaways and magician shows stimulated all my senses. My colleague won a nice telescope in a drawing. Wow, it was amazing!


·1998 (San Francisco), 1999 (New Orleans), 2000(LA): As a core competency applications engineer from Cadence Design Systems. Those were good years at Cadence when the parties were lots of fun. I worked mainly in the suite to launch Cadence’s new equivalence checker. We were busy but I heard the floor traffic was down.


·2001 (Las Vegas): As a lead applications engineer from Real Intent. It was a very memorable DAC for me because Real Intent was a young startup at that time. We got a lot of attention from all kinds of people trying to learn about our “Intent Driven Verification” technology.


·2003 (Anaheim): As a free spirit.  I took time off after having my first daughter, Makana. Without any obligations, I had a great time seeing old friends and keeping up with new developments in the industry.


·2006 (San Francisco): As a new PhD graduate. I presented my research paper “Symmetry detection for large Boolean functions using simulation, satisfiability and circuit representation”, co-authored with Alan Mishchenko, Prof. Bob Brayton and Prof. Jeske. I also presented at the PhD forum on my thesis “Computing functional properties and network flexibilities for logic synthesis and verification”. I spent most of my time in academic sessions noticing the change of hot topics between years.


·2007 (San Diego): As an independent consultant. I was there to scout the market and see what’s new.


·2008 (Anaheim), 2009 (San Francisco): As a technical marketing manager for Real Intent. The product that I am responsible for, Meridian CDC, Real Intent's flagship asynchronous clock domain crossing verification tool, attracted great attention at these events. I remember talking nonstop for hours showcasing Meridian CDC's advanced capabilities.



This year at Anaheim, I attended DAC as the director of technical marketing for Real Intent. This is the first time that I have been involved in orchestrating all the behind-the-scenes work a vendor has to do to participate at DAC. I am struck by:

1.      How expensive it is to participate in DAC. Besides the huge cost of having a space at DAC, the costs of designing and building the booth, transporting it to and from the convention center, installing and dismantling it, and staff travel add up very quickly. Some of the costs are so outrageous that I am surprised we all put up with them every year: $90 per hour for floor union labor from 8am to 4:30pm and $150 per hour overtime?  $270 to vacuum a 900 SF area? $50 for a gallon of coffee with a $25 delivery charge? Why do the smart people in EDA pay so much money for so little service?


2.      The amount of time and effort needed to organize all the activities. A successful tradeshow is a concerted effort involving many groups of people: R&D to develop the next big thing to showcase at DAC; Sales to line up customer meetings; Marketing to create a theme and associated artwork, update product literature, and create product presentations and demonstrations; Media to tell the public what will happen; the booth design firm to design a booth with a prominent presence while saving cost; the promotional company to select giveaways and DAC attire; the logistics firm for transportation to and from the convention center and within it; union labor for booth installation and dismantling (their lack of efficiency drove us nuts); the hotel for staff; and many more. After doing all this, I now have great appreciation for the people who organize trade shows. There are a million details and tons of work.

The hard work paid off. Real Intent had a good show. We had many qualified people coming through our booth checking out our technologies. People liked our stylish booth design with wavy frosted panels and three different shirt colors (red, green and purple).  We were often asked about the different shirt colors as people walked into our booth, and we proudly pointed to the colors of our three product families: Ascent, Meridian and PureTime.



DAC released preliminary attendance numbers for this year: full conference 1554, exhibit attendees 3444, exhibitors and guests 2557. The total of 7555 participants is on par with last year's total of 7996 [1]. However, over the years most people would say that the number of companies exhibiting and the attendance have gone down from the good days. The following are some of the factors that have contributed to this trend:

·With the high cost and huge amount of work involved, smaller companies may reduce presence or pull out;

·With the other smaller regional tradeshows, e.g. DVCon and SNUG, potential customers have less of a need to travel to DAC to meet all the vendors;  

·With the advancement of the internet, all companies have an extensive web presence, so information is accessible at the fingertips of potential customers. The need for people to gather information at traditional tradeshows is somewhat reduced;

·The economy has definitely played a role in the trend we are seeing with DAC.


These make me ponder what value DAC brings and where the future lies. What are the goals for exhibitors and customers at DAC going forward?  And should DAC consider going virtual like FPGA Summit?

My answers, drawing on all the perspectives I have had over the years, are:

·DAC is a unique event in that it is for both academic researchers and end users. It bridges the gap between academic research and EDA tools. No other venue can bring the two together as conveniently as DAC does.

·Though overall attendance has declined, the key decision makers who attend the show have not changed. The quality of conversation has definitely improved.

·Despite the cost and effort involved, DAC offers a window for potential customers to gauge the financial health of a company and to get to know the hard-working technologists behind the scenes. It is also a great opportunity for R&D to hear customers' problems and issues first hand. This level of interaction and communication can't be achieved elsewhere.

·As Real Intent grows geographically, every year I meet in person for the first time people whom I have worked with over Skype and email. It is exciting to get to know my coworkers a bit more personally.

·Besides, DAC is an opportunity to connect with old acquaintances. After all, our industry is a very small world.


If I could offer any suggestions for the future, I would recommend that DAC adopt SNUG's approach with its recent Designer Community Expo (DCE). All the booths were designed and set up for the vendors; all we had to do was provide booth graphics. I know this removes the unique look and feel for vendors, but it was such an easy event for us to attend and the results were awesome. After all, it is the people and the technology that users care about most.


I certainly believe DAC will be around for many years to come. I will see you in San Diego!


[1] 47th DAC Announces Preliminary Attendance Numbers

Based on the math from years past, the definition of total attendees includes conference attendance and exhibit attendance. Last year that total was 5299. This year it should be 4998 (1554+3444), a mere 6% drop. The total number of 6001 given in the press release included exhibitors, not full conference attendees. If we compare total participants, which include all three categories, then last year's figure was 7996, again only slightly more than this year's 7555. Am I missing something?


Verifying Today’s Large Chips

Friday, June 18th, 2010

Today's chips are pushing the verification envelope with their size, integrated system-level functionality, and the nano-scale-driven bubbling up of previously second-order considerations. Also, diminishing returns from geometry shrinks force designers into ever more aggressive control optimizations for timing and power, and manufacturing-test considerations require fancier DFT structures on chip. The visible manifestation of these effects has been an increase in the variety of failure modes.

For example, new designs contain multiple clocks necessitated by a combination of clock-skew considerations and the diverse clocking requirements of SOC components. Consequently, failures from improper domain crossings are more common today. Similarly, low-power design techniques like clock and Vdd gating are now used more widely, creating new failure modes. Each new failure mode requires an additional verification step.

A key consideration in the design of verification tools and flows in the face of this challenge is that the many new verification steps are sequential and intertwined. It is the number of these iterative steps to the final working chip that kills productivity. In one pass of the verification flow, one must debug the clock domain interactions and timing constraints before full-chip functionality is verified, which, in turn, must be debugged before power management and DFT structures are verified. Any design fix for some failure mode requires that the entire pass be repeated – for example changes to functionality or a design resynthesis can perturb clock-domain crossings or timing constraints.

The more you postpone verification, the longer each step will be because it must analyze more of the design and, crucially, the manual debug process is less local to the failure location. Verification complexity grows exponentially with design size and the number of verification steps is greater for modern chips. Consequently, verifying later in the design cycle causes a substantial increase in the time to a working chip. Late-stage verification also forces more of the design to be reanalyzed post bug-fix than is truly necessary.

An intuitive solution is to verify early and to distribute the verification across design modules. With this, we achieve the dual goal of reducing the latency of each verification step and reducing the impact of sequentiality. By the time the design enters the later stages, the bugs that could have been found earlier should have been fixed, and verification can focus on truly full-chip failures. Consequently, each late-stage verification step will be shorter, the number of bugs found will be fewer, and fewer passes of the multi-step verification flow will be required.

Since early verification is the purview of designers, such tools must follow three important guidelines:

- Maximize automation

- Apply simulation and formal methods surgically for specific failure modes, so that the analysis time is commensurate with the emphasis on design rather than verification

- Always return actionable information to identify and diagnose failures and better understand the design

Real Intent products enable early verification for key failure modes. Its Ascent family finds bugs in control-dominated logic without the need for assertions or testbenches. It performs sequential formal analysis to identify deep bugs that require many clock cycles to manifest as symptoms. Meridian CDC finds bugs in clock and domain-crossing implementations. Meridian DFT performs testability analysis and finds bugs in the implementation of DFT structures. Finally, PureTime finds bugs related to improper timing constraints. The adoption of these early verification tools is essential today for designing working chips in an acceptable amount of time.


You Got Questions, We Got Answers

Monday, June 14th, 2010

Have you ever worried about:

  • Missing real bugs in a 10,000-line verification report?
  • Whether your design will function as intended?
  • Why there are RTL and netlist simulation mismatches?
  • When you can sign-off on clock domain crossing verification?
  • Whether your RTL has enough test coverage?
  • If your design constraints are correct?

DAC is an excellent time to connect with EDA vendors to get your concerns and questions answered!

Our team at Real Intent has worked very hard to create a comfortable space at DAC (booth 722) where you can come and meet with true technologists and attend Expert Corner Lectures to learn about the latest technology innovation in X-prop and CDC verification.

Real Intent's automatic formal verification solutions are known for their solid analysis engines, superior performance and low-noise reports.  Seeing is believing: come and check out our product demos showcasing our latest technologies at DAC. You will also walk away with some really cool gadgets!

See you at DAC!

Expert Corner Lectures
Monday June 14 & Tuesday June 15, 2010, 4pm – 5pm
Real Intent booth # 722
Topic: “Efficient and Practical Prevention of X-Related Bugs”

Abstract: It is painful and time consuming to identify X sources and chase their propagation between RTL and Gate representations. Such “X-Prop” issues often lead to a dangerous masking of real bugs. No clear solution has existed thus far to address this problem effectively. This lecture explains the common sources of X’s and shows how they can cause functional bugs. It then discusses the challenges that Real Intent has overcome in developing an efficient solution to assist designers in catching bugs caused by X propagation and ensuring X-robust designs.
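
As a simple illustration of the X-optimism problem described above, consider the hypothetical fragment below, in which a control register is never reset (the module and signal names are made up for this sketch):

  module x_optimism_example (
    input  logic       clk,
    input  logic [7:0] a, b,
    output logic [7:0] y
  );
    logic mode;  // missing reset: powers up as X in simulation

    always_ff @(posedge clk) begin
      // RTL simulation treats the X on "mode" as false and quietly takes
      // the else branch (X-optimism), so the missing reset never shows up
      // as a failure; a gate-level netlist would propagate X onto y instead,
      // and the RTL-versus-gate mismatch masks the real bug.
      if (mode)
        y <= a;
      else
        y <= b;
    end
  endmodule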

Monday June 14 & Tuesday June 15, 2010, 5pm – 6pm
Real Intent booth # 722
Topic: “Achieve 100% CDC Signoff with Advanced CDC Verification”

Abstract: Today’s SOCs have a multitude of components working with different clock-domains running at varying speeds. You have done CDC verification on your blocks, but how will you know you are done? This lecture highlights the advanced technologies that Real Intent has developed to help achieve 100% CDC Sign-off.

To register for both lectures, please visit
