Carol Hallett, VP of World Wide Sales, Real Intent
Carol Hallett is the Vice President of World Wide Sales and Marketing for Real Intent. Prior to Real Intent, Carol was the Vice President at Tharas, which was acquired by EVE. Prior to Tharas she was at Phoenix Technologies, Mentor Graphics and Tera Systems in sales roles. She began her career at …

Leadership with Authenticity

July 26th, 2010 by Carol Hallett, VP of World Wide Sales, Real Intent

An interesting title…what is Leadership with Authenticity?

Well, let’s discover…first, let’s break it down; we will start by talking about leadership.

Is leadership telling people what they want to hear to keep them going the direction you think they should go? Or is leadership just taking flight and hoping that people follow?  Wikipedia defines it as “a process of social influence in which one person can enlist the aid and support of others in the accomplishment of a common task…”

Leadership is a big responsibility, but it is also something that must be done with finesse.  If everyone is going in one direction and you decide to change course dramatically, the change can be very painful.

I think of it as steering a large passenger liner at full speed ahead. It takes a lot to turn a ship of that size, and if you turn too abruptly there are huge consequences.  There would be chaos because people wouldn’t know what was happening, what to do or what to expect. If people were in the wrong place it could be catastrophic for them. Unprepared, they might fall off the ship, along with valuable cargo…and then there goes your crew!

If that was your intention, then you have accomplished your goal, but it usually takes a lot to right the ship again, and sometimes it is impossible.  So remember: changing course abruptly is not good practice, whether you are steering a ship or running a business.

Let’s bring Authenticity into the picture…What is Authenticity?

Again, I refer back to Wikipedia for a definition: authenticity is “a particular way of dealing with the external world, being faithful to internal rather than external ideas.”

So authenticity means to uncover your true self.  “We live in a culture that is starving for authenticity.  We want our leaders, co-workers, friends, family members, and everyone else that we interact with to tell us the truth and to be themselves.  Most important, we want to have the personal freedom and confidence to say, do and be who we really are, without worrying about how we appear to others and what they might think or say about us.” (Mike Robbins)

Sadly, however, even though we may say we want to live in a way that is true to our deepest passions, beliefs, and desires, most of us don’t.   WHY? Starting at a very early age, we are taught by our parents, spouses, teachers, friends, co-workers, politicians and the media that it’s more important to be liked and to fit in than it is to be who we truly are.  In addition, many of us assume that who we are is not good enough and therefore we’re constantly trying to fix ourselves or to act like others who we think are better than us.

Oscar Wilde, the famous author and poet, said, “Be yourself; everyone else is already taken.”  To me, this summarizes authenticity.

Bringing the two together is an art and a process that you develop along the way.  I believe that the most successful leaders are the ones who are authentic.  We are all unique and so our styles differ, but if the basic foundation is Authenticity, or being Real, that is a fantastic start.  Enlisting the aid and support of others is more effective when you do it in your own style.  Have fun!  Lead with Authenticity.

Clock Domain Verification Challenges: How Real Intent is Solving Them

July 19th, 2010 by Dr. Pranav Ashar, CTO

With chip-design risk at worrying levels, a verification methodology based on linting and simulation alone does not cut it. Real Intent has demonstrated the benefit of identifying specific sources of verification complexity and deploying automatic, customized technologies to tackle them surgically. Automatic and customized don’t seem to go together at first glance: automatic is about maximizing productivity in setup, analysis and debug, while customized ensures comprehensiveness. That is the challenge for clock-domain verification, as it is for the plethora of other failure modes in modern chips. Clock-domain verification is certainly a case in point; its complexity has grown tremendously:

Signal crossings between asynchronous clock domains: The number of asynchronous domains approaches 100 for high-end SOCs optimized for performance or power. The chip is too large to distribute the same clock to all parts, and an SOC is more a collection of sub-components, each with its own clock. Given the large number of domains and crossings, the myriad protocols for implementing the crossings, and the correspondingly large number of failure modes, writing templates to cover all scenarios is very expensive. Template-based linting on such chips, with millions of gates, is very slow – it can take days. Additionally, the report from a template-based analysis is so voluminous that it challenges the team’s ability to analyze it manually, causing real failures to be overlooked.

Widely disparate and dynamic clock frequencies: Analyzing crossings for data integrity and data loss under all scenarios is non-trivial and beyond the reach of linting alone.

Proliferation of gated clocks: Power management and mode-specific gated clocks are now common, introducing a manifold verification problem. (1) The clock setup must be correct for verification to be meaningful; detailed setup analysis highlights errors in the clock distribution or the environment spec. (2) Designs with gated clocks must be functionally verified. (3) The variety of gated-clock implementations creates a variety of glitching possibilities. Clock glitches are very hard to diagnose, so you want to know about the possibility as early as possible. Given the variety of gated-clock types and glitching modes, a template-based approach is a recipe for productivity loss and slow analysis.

Reset distribution: Power-up reset is now much more complex, optimized for power and routing. Full verification of the reset setup prior to subsequent analysis is essential.

Timing optimization: Optimizations like retiming may violate design principles, creating glitch potential at the gate level even when none existed in the RTL. Glitch analysis must be an integral part of verification, and the tool must operate on RTL as well as gates. Template methods make this harder, since multiple templates may be required to support RTL, gates and mixed languages.

Clock distribution: Previously second-order issues like clock jitter in data and control transfers have more impact in deep-submicron (DSM) processes. Even synchronous crossings must now be designed carefully and verified comprehensively.

Full-chip analysis: Speed, scalability, precision and redundancy control become key considerations in full-chip analysis of designs with many hierarchy levels and 100 million gates.

Real chip respins are revealing: (1) An asynchronous reset control crossing clock domains, but not synchronously de-asserted, caused a glitch in the control lines to an FSM. (2) An improper FIFO protocol controlling an asynchronous data crossing caused a read-before-write and a functional failure. (3) Reconvergence of non-gray-coded synchronized control signals at an FSM caused cycle jitter and an incorrect transition. (4) A glitch in a logic cone on an asynchronous crossing path was latched into the destination domain, corrupting the captured data. (5) Gating logic inserted by power-management tools resulted in a clock glitch.
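Failure mode (3) is worth a closer look. A small model (a hypothetical Python sketch, not Real Intent code) shows why reconverging non-gray-coded control signals is dangerous: when more than one bit changes at once, per-bit synchronizer skew lets the destination domain sample phantom values that never existed in the source domain.

```python
from itertools import product

def sampled_values(old, new, width):
    """All values a destination domain might capture while a multi-bit
    signal changes from `old` to `new`: with one synchronizer per bit,
    each bit may independently resolve to its old or new value."""
    values = set()
    for choice in product((0, 1), repeat=width):
        v = 0
        for i, take_new in enumerate(choice):
            src = new if take_new else old
            v |= ((src >> i) & 1) << i
        values.add(v)
    return values

def to_gray(n):
    """Standard binary-to-gray conversion."""
    return n ^ (n >> 1)

# Binary count 1 -> 2 (01 -> 10) flips both bits, so the destination can
# sample the phantom values 0 (00) or 3 (11) that never occurred.
print(sorted(sampled_values(1, 2, 2)))                    # [0, 1, 2, 3]

# Gray-coded, the same count is 1 -> 3 (01 -> 11): one bit flips, so
# only the old or the new value can ever be sampled.
print(sorted(sampled_values(to_gray(1), to_gray(2), 2)))  # [1, 3]
```

With gray coding, successive values differ in exactly one bit, so the destination only ever sees the old or the new value, never an invalid intermediate code.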

CDC verification is not solved adequately by simulation or linting. It has become a true showstopper, and an effective solution is a must-have.  Real Intent’s approach understands the failure modes from first principles and develops symbiotic structural and formal methods to cover them comprehensively and precisely. Structural and formal methods combine to check the clock and reset setup, metastability errors, glitching, data integrity and loss, and signal de-correlation. This approach allows us to auto-infer designer intent and the appropriate checks for each crossing and for the clock/reset distribution. As a result, our structural analysis runs 10x faster and does not require the designer to develop templates. Formal methods analyze for failures under all scenarios efficiently and comprehensively, without a laborious enumeration of scenarios; for example, our free-running-clock feature checks for data loss under all frequency ratios. We complete the solution with an automatic link to simulation that models metastability and adds checks to the testbench. These solutions are offered in Real Intent’s Meridian product family.
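One of those all-scenario checks is easy to appreciate with a toy model (a hypothetical Python sketch, not the Meridian algorithm; it also ignores metastability and setup/hold margins): a control pulse held for too few source-clock cycles can fall entirely between the destination clock’s sampling points, depending on phase.

```python
def pulse_always_captured(pulse_cycles, ratio):
    """Is a pulse held for `pulse_cycles` source-clock cycles guaranteed
    to be sampled by a destination clock running `ratio` times slower?
    The destination samples every `ratio` source cycles; the pulse can
    start at any phase relative to those samples, so check every phase."""
    for phase in range(ratio):
        pulse_window = range(phase, phase + pulse_cycles)
        if not any(t % ratio == 0 for t in pulse_window):
            return False   # this alignment misses the pulse entirely
    return True

# A single-cycle pulse crossing into a domain 4x slower can be lost...
print(pulse_always_captured(1, 4))   # False
# ...but stretching it to at least the clock ratio guarantees capture.
print(pulse_always_captured(4, 4))   # True
```

A formal check proves the equivalent property for all frequency ratios and phases at once, rather than for one enumerated scenario at a time.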

Building Strong Foundations

July 12th, 2010 by Lisa Piper, Senior Technical Marketing Manager at Real Intent

I recently joined Real Intent. With over 10 years of experience developing and supporting assertion-based methodologies, I have seen the technology move from research toward the mainstream.  Formal technologies have proven to have a lot of value for functional verification and for coverage, but having to learn evolving assertion languages and techniques has slowed adoption.  I like Real Intent’s approach of automating the verification effort.

In the very early stages of design, linting is a basic step. Lint checkers for HDL have been around for some time and continue to become more sophisticated.  Ascent™ Lint runs very fast because the checks are all static, and the user can easily configure which checks are desired.

In the next stage, also early in the process but after linting, Real Intent has what is my favorite tool – Implied Intent Verifier (IIV).  Real Intent has adapted formal verification techniques to automatically detect issues that can result in bugs that might be difficult to trigger and detect in simulation.  Think of it as automatically generated assertions – formal verification without having to write assertions!  It is all automatic.  IIV goes beyond static linting to detect bugs that require sequential analysis.

An example of a significant IIV check is the one for state machine deadlocks. Deadlocks are the kind of bug that can result in a product recall if not found, and finding them often depends on whether the testbench author thinks to test the scenario.  IIV detects deadlocks within one FSM and between two FSMs, without the need to write any testbench or assertions.  For example,
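The interlock can be modeled in a few lines (a hypothetical Python sketch of the product-state analysis; IIV performs this style of sequential analysis on the RTL itself, with no testbench):

```python
def step(a, b):
    """One synchronous step of two interlocked FSMs, each reduced to a
    single state bit.  Machine A leaves its wait state 0 only when
    machine B is in state 1, and vice versa; from state 1 each machine
    returns to state 0 on the next cycle."""
    a_next = 1 if (a == 0 and b == 1) else 0
    b_next = 1 if (b == 0 and a == 1) else 0
    return a_next, b_next

# Enumerate the product state space and flag single-state deadlocks:
# states whose only successor is themselves.
states = [(a, b) for a in (0, 1) for b in (0, 1)]
deadlocks = [s for s in states if step(*s) == s]
print(deadlocks)   # [(0, 0)] -- both machines wait on each other forever
```

The analysis reports the product state (0, 0) as a single-state deadlock: A is waiting on a signal from B, and B on a signal from A, so neither can ever advance.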



This is the classic example of two state machines that are waiting on one another.  In this case a single-state deadlock (SSD) is reported for both state machines, and the deadlocked state is state 00, because state machine A is waiting on a signal from state machine B and vice versa.

Many other errors with the same root cause are also reported.  One of the unique features of IIV is that it distinguishes such secondary failures: the report focuses your effort on the root cause of a failure, in this case the SSD, and you can ignore the secondary failures.

While this example is very simple for the purpose of illustration, you can imagine a similar scenario in protocols. Take, for example, a peer-to-peer handshake where both peers request to transmit at the same time, causing both to go to a state where they wait for an acknowledge from their peer.  That would be a fundamental state-machine design issue, yet simulations would pass unless the corner case where both request simultaneously is tested. As the simple example above shows, it can also happen as the result of a simple typo.

You can get a fast start in functional verification by exploiting the verification features provided in Real Intent’s tool suite.  Common bugs are quickly and automatically weeded out, building a strong foundation for the real work of verifying your specific design intent. Check out Real Intent’s complete product line.






Celebrating Freedom from Verification

July 5th, 2010 by Lauro Rizzatti - General Manager, EVE-USA

Happy Fourth of July!  If you’re celebrating Independence Day today, chances are you have the time to do so because of a set of tools that freed you from the drudgery of endless verification cycles.

Yes, let’s give thanks as an industry to the plethora of commercial tools that reduce the amount of time consumed by laborious verification tasks.  They take many forms today, from hardware emulation and formal verification to simulation and acceleration, to name just a few.  All have been developed to reduce the verification portion of the design cycle –– purported to be in the range of 70% –– and to lessen the burden you carry.

Each year, the verification challenge gets worse as SoC design sizes and complexity increase, stressing and periodically breaking existing design flows.  New data shows that the average design size now exceeds 10 million ASIC-equivalent gates ––  don’t get me started on what an ASIC-equivalent gate is, I’ll save that for another post –– with individual blocks running between two and six million ASIC-equivalent gates.

Exercising each and every one of those gates would, by an old rule of thumb, require a number of cycles equal to the square of the number of gates.  That is 100 trillion cycles –– a one followed by fourteen zeros.  That’s a lot of verification cycles and a lot of headaches.
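Working that rule of thumb through explicitly (the 10,000 cycles-per-second simulator speed below is an illustrative assumption, not a measured figure):

```python
gates = 10_000_000          # ~10 million ASIC-equivalent gates
cycles = gates ** 2         # old rule of thumb: cycles ~ (number of gates)^2
print(cycles == 10 ** 14)   # True -- 100 trillion cycles

# At an assumed 10,000 simulation cycles per second:
years = cycles / 10_000 / (3600 * 24 * 365)
print(round(years))         # 317 -- centuries of nonstop simulation
```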

And, lest we forget, the time-to-market push continues unabated.

How do we cope with this triple challenge of gates, cycles and time to market and tame the tiger?  Only functional verification can thoroughly debug a design before silicon availability, if you have the time to do it. 

Maybe not all is lost.  Exhaustive functional verification carried out via an RTL simulator is no longer a practical or viable alternative because of its abysmal performance –– simulators are just too slow to fully analyze and verify larger chips.  And almost all of today’s chips are large and getting larger.

Emulation serves as a neat solution to the runtime problems that afflict these 25-year-old logic simulators.  Emulators are used to identify bugs and can alleviate the functional verification bottleneck by executing at megahertz speeds.  They shorten the time needed to develop and validate hardware or embedded software within constantly shrinking schedules, and they improve product quality by increasing the level of testing of a design to meet the quality standards expected of today’s feature-rich electronic devices.

You can forget whatever you may have heard about the older “big box” emulators.  New generations of hardware emulators fit in small-footprint chassis and deliver execution speeds close to real time, making them useful as in‑circuit test vehicles.  Their runtime performance is impressive, yet they are far less expensive, easier to use and flexible enough for the current SoC project or the next one.

Even with these tools, verification continues to be a time-consuming process and often the bottleneck, but many of them have given you the freedom to enjoy the day off.  Celebrate the holiday and let freedom ring!

My DAC Journey: Past, Present and Future

June 28th, 2010 by Jin Zhang, Director of Technical Marketing


I have a unique perspective on DAC since I have attended DAC in many different capacities over the last 15 years: as a poor student, a lucky customer, an excited vendor participant, an independent consultant, a free spirit and a hard working vendor organizer.  The following log describes the many DACs that I have attended and my impressions.

·1995 (San Francisco):  My first DAC, as a graduate student. Our research group (Zhi Wang and me, led by Prof. Malgorzata Chrzanowska-Jeske) from Portland State University won a Design Automation Conference Scholarship Award for our project “Fine-Grain Locally-Connected FPGAs: Synthesis and Architecture”. It was an exciting event for me since I had been in the U.S. for only one year. Being able to participate in the academic sessions and meet other researchers was simply fantastic!


·1996 (Las Vegas): As a CAD engineer from Lattice Semiconductor Corporation.  As a customer of EDA tools, I was treated to my very first expensive sushi dinner by a vendor’s salespeople. The tradeshow floor was exciting and overwhelming: all the exhibitors, presentations, giveaways and magician shows stimulated all my senses. My colleague won a nice telescope in a drawing. Wow, it was amazing!


·1998 (San Francisco), 1999 (New Orleans), 2000(LA): As a core competency applications engineer from Cadence Design Systems. Those were good years at Cadence when the parties were lots of fun. I worked mainly in the suite to launch Cadence’s new equivalence checker. We were busy but I heard the floor traffic was down.


·2001 (Las Vegas): As a lead applications engineer from Real Intent. It was a very memorable DAC for me because Real Intent was a young startup at that time. We got a lot of attention from all kinds of people trying to learn about our “Intent Driven Verification” technology.


·2003 (Anaheim): As a free spirit.  I took time off after having my first daughter Makana. Without any obligations, I had a great time seeing old friends and keeping up with new development in the industry.


·2006 (San Francisco): As a new PhD graduate. I presented my research paper “Symmetry detection for large Boolean functions using simulation, satisfiability and circuit representation”, co-authored with Alan Mishchenko, Prof. Bob Brayton and Prof. Jeske. I also presented at the PhD forum on my thesis “Computing functional properties and network flexibilities for logic synthesis and verification”. I spent most of my time in academic sessions noticing the change of hot topics between years.


·2007 (San Diego): As an independent consultant. I was there to scout the market and see what’s new.


·2008 (Anaheim), 2009 (San Francisco): As a technical marketing manager for Real Intent. The product I am responsible for, Meridian CDC, Real Intent’s flagship asynchronous clock domain crossing verification tool, got great traction at these events. I remember talking nonstop for hours showcasing Meridian CDC’s advanced capabilities.



This year at Anaheim, I attended DAC as the director of technical marketing for Real Intent. This is the first time that I have been involved in orchestrating all the behind-the-scenes work a vendor has to do to participate at DAC. I am struck by:

1.      How expensive it is to participate in DAC. Besides the huge cost of floor space at DAC, the costs of designing and building the booth, transporting it to and from the convention center, installing and dismantling it, and staff travel add up very quickly. Some of the costs are so outrageous that I am surprised we all put up with them every year: $90 per hour for floor union labor from 8am to 4:30pm, and $150 per hour overtime?  $270 to vacuum a 900 SF area? $50 for a gallon of coffee, with a $25 delivery charge? Why do the smart people in EDA pay so much money for so little service?


2.      The amount of time and effort needed to organize all the activities. A successful tradeshow is a concerted effort involving many groups of people: R&D to develop the next big thing to showcase at DAC; Sales to line up customer meetings; Marketing to create a theme and associated artwork, update product literature, and create product presentations and demonstrations; Media to tell the public what will happen; a booth design firm to design a booth with a prominent presence while saving cost; a promotional company to select giveaways and DAC attire; a logistics firm for transportation to, from and within the convention center; union labor for booth installation and dismantling (their lack of efficiency drove us nuts); a hotel for staff; and many more. After doing all this, I now have great appreciation for the people who organize trade shows. There are a million details and tons of work.

The hard work paid off. Real Intent had a good show, with many qualified people coming through our booth to check out our technologies. People liked our stylish booth design, with its wavy frosted panels, and our 3 different shirt colors (red, green and purple).  We were often asked about the shirt colors as people walked into our booth, and we proudly pointed to the colors of our 3 product families: Ascent, Meridian and PureTime.



DAC released preliminary attendance numbers for this year: full conference 1554, exhibit attendees 3444, exhibitors and guests 2557. The total of 7555 participants is on par with last year’s total of 7996 [1]. Still, most people would say that over the years the number of exhibiting companies and the attendance have come down from the good days. The following factors have contributed to this trend:

·With the high cost and huge amount of work involved, smaller companies may reduce presence or pull out;

·With the other smaller regional tradeshows, e.g. DVCon and SNUG, potential customers have less of a need to travel to DAC to meet all the vendors;  

·With the advancement of the internet, all companies have an extensive web presence, so information is accessible at the fingertips of potential customers. The need to gather information at traditional tradeshows is somewhat reduced;

·The economy has definitely played a role in the trend we are seeing with DAC.


These make me ponder what value DAC brings and where the future lies. What are the goals for exhibitors and customers at DAC going forward?  And should DAC consider going virtual like FPGA Summit?

My answers, drawing on all the perspectives I have had over the years, are:

·DAC is a unique event in that it serves both academic researchers and end users. It bridges the gap between academic research and EDA tools. No other venue brings the two together as conveniently as DAC does.

·Though overall attendance has declined, the key decision makers still attend, and the quality of conversation has definitely improved.

·Despite the cost and effort involved, DAC offers a window for potential customers to gauge the financial health of a company and get to know the hard-working technologists behind the scenes. It is also a great opportunity for R&D to hear customers’ problems and issues first hand. This level of interaction and communication can’t be achieved elsewhere.

·As Real Intent grows geographically, every year I meet people in person for the first time whom I have worked with over Skype and email. It is exciting to get to know my coworkers a bit more personally.

·Besides, DAC is an opportunity to reconnect with old acquaintances. After all, our industry is a very small world.


If I could offer any suggestion for the future, I would recommend that DAC adopt the approach SNUG took with its recent Designer Community Expo (DCE). All the booths were designed and set up for the vendors; all we had to do was provide booth graphics. I know this removes each vendor’s unique look & feel, but it was such an easy event for us to attend, and the results were awesome. After all, it is the people and the technology that users care about most.


I certainly believe DAC will be around for many years to come. I will see you in San Diego!


[1] 47th DAC Announces Preliminary Attendance Numbers

Based on the math from years past, the definition of total attendees includes conference attendance and exhibit attendance. Last year that total was 5299; this year it should be 4998 (1554+3444), a mere 6% drop. The total of 6001 given in the press release included exhibitors, not full conference attendees. If we compare total participants, which include all three categories, then last year’s 7996 is again only slightly more than this year’s 7555. Am I missing something?


Verifying Today’s Large Chips

June 18th, 2010 by Dr. Pranav Ashar, CTO

Today’s chips are pushing the verification envelope with their size, integrated system-level functionality, and the nano-scale-driven bubbling up of previously second-order considerations. Also, diminishing returns from geometry-shrinks force designers into ever more aggressive control optimizations for timing and power, and manufacture-test considerations require fancier DFT structures on chip. The visible manifestation of these effects has been an increase in the variety of failure modes.

For example, new designs contain multiple clocks necessitated by a combination of clock-skew considerations and the diverse clocking requirements of SOC components. Consequently, failures from improper domain crossings are more common today. Similarly, low-power design techniques like clock and Vdd gating are now used more widely, creating new failure modes. Each new failure mode requires an additional verification step.

A key consideration in the design of verification tools and flows in the face of this challenge is that the many new verification steps are sequential and intertwined. It is the number of these iterative steps to the final working chip that kills productivity. In one pass of the verification flow, one must debug the clock domain interactions and timing constraints before full-chip functionality is verified, which, in turn, must be debugged before power management and DFT structures are verified. Any design fix for some failure mode requires that the entire pass be repeated – for example changes to functionality or a design resynthesis can perturb clock-domain crossings or timing constraints.

The more you postpone verification, the longer each step will be because it must analyze more of the design and, crucially, the manual debug process is less local to the failure location. Verification complexity grows exponentially with design size and the number of verification steps is greater for modern chips. Consequently, verifying later in the design cycle causes a substantial increase in the time to a working chip. Late-stage verification also forces more of the design to be reanalyzed post bug-fix than is truly necessary.

An intuitive solution is to verify early and to distribute the verification across design modules. With this, we achieve the dual goal of reducing the latency of each verification step and reducing the impact of sequentiality. By the time the design enters the later stages, the bugs that could have been found earlier should have been fixed, and verification can focus on truly full-chip failures. Consequently, each late-stage verification step will be shorter, the number of bugs found will be fewer, and fewer passes of the multi-step verification flow will be required.

Since early verification is the purview of designers, such tools must follow three important guidelines:

- Maximize automation

- Apply simulation and formal methods surgically for specific failure modes, so that the analysis time is commensurate with the emphasis on design rather than verification

- Always return actionable information to identify and diagnose failures and better understand the design

Real Intent products enable early verification for key failure modes. Its Ascent family finds bugs in control-dominated logic without the need for assertions or testbenches, performing sequential formal analysis to identify deep bugs that require many clock cycles to manifest as symptoms. Meridian CDC finds bugs in clock and domain-crossing implementations. Meridian DFT performs testability analysis and finds bugs in the implementation of DFT structures. Finally, PureTime finds bugs related to improper timing constraints. The adoption of these early verification tools is essential today for designing working chips in an acceptable amount of time.


You Got Questions, We Got Answers

June 14th, 2010 by Jin Zhang, Director of Technical Marketing

Have you ever worried about:

  • Missing real bugs in a 10,000-line verification report?
  • Whether your design will function as intended?
  • Why there are RTL and netlist simulation mismatches?
  • When you can sign-off on clock domain crossing verification?
  • Whether your RTL has enough test coverage?
  • If your design constraints are correct?

DAC is an excellent time to connect with EDA vendors and get your concerns and questions answered!

Our team at Real Intent has worked very hard to create a comfortable space at DAC (booth 722) where you can come and meet with true technologists and attend Expert Corner Lectures to learn about the latest technology innovation in X-prop and CDC verification.

Real Intent’s automatic formal verification solutions are known for their solid analysis engines, superior performance and low-noise reports.  Seeing is believing: come and check out our product demos showcasing our latest technologies at DAC. You will also walk away with some really cool gadgets!

See you at DAC!

Expert Corner Lectures
Monday June 14 & Tuesday June 15, 2010, 4pm – 5pm
Real Intent booth # 722
Topic: “Efficient and Practical Prevention of X-Related Bugs”

Abstract: It is painful and time consuming to identify X sources and chase their propagation between RTL and Gate representations. Such “X-Prop” issues often lead to a dangerous masking of real bugs. No clear solution has existed thus far to address this problem effectively. This lecture explains the common sources of X’s and shows how they can cause functional bugs. It then discusses the challenges that Real Intent has overcome in developing an efficient solution to assist designers in catching bugs caused by X propagation and ensuring X-robust designs.

Monday June 14 & Tuesday June 15, 2010, 5pm – 6pm
Real Intent booth # 722
Topic: “Achieve 100% CDC Signoff with Advanced CDC Verification”

Abstract: Today’s SOCs have a multitude of components working with different clock-domains running at varying speeds. You have done CDC verification on your blocks, but how will you know you are done? This lecture highlights the advanced technologies that Real Intent has developed to help achieve 100% CDC Sign-off.

To register for both lectures, please visit

Will 70 Remain the Verification Number?

June 7th, 2010 by Lauro Rizzatti - General Manager, EVE-USA

It’s that time of year again.  The design automation community is about to descend on Anaheim for the yearly conference.  The build up of anticipation, the buzz and the extra effort preparing for our booth have me pondering the topic of verification.

With verification consuming 70% of the design cycle, will 70% of the exhibitors at DAC this year offer tools to solve the verification challenge?  We will see.  While the percentage may not reach 70, I am confident that many companies will offer a variety of new, old or repackaged techniques, methodologies and tools for a verification engineer’s consumption.

With an abundance of options and choices, could verification tools make up 70% of the EDA tool categories?  Well, that is our space –– hardware emulation –– and Real Intent’s, in the formal verification area.  Add acceleration, assertions, debug, prototyping, simulation, testbench generation, TLM models, validation, functional qualification and static verification, and the list keeps growing, though not quite overtaking the rest of the field.

Next are the attendees at this hallowed event.  One can’t help but wonder if 70% of attendees are verification engineers, given the mammoth effort to verify that a chip will work as intended.  Will 70% come from the U.S. or will we see some attendees from Europe, Asia and the rest of the world, as well?  What’s more, of this group, are they spending 70% of their time on the exhibit floor researching verification solutions and new technologies?  Or, for that matter, 70% of their CAD budget on verification tools?

And, lest we forget, does verification account for 70% of the yearly EDA revenue?  Not according to the EDA Consortium.  In 2009, Computer Aided Engineering (CAE) contributions to the EDA worldwide revenues were in the ballpark of 40%, which includes IC Physical Design and Verification, PCB and MCM, Semiconductor IP Products and Tools, and Services.  Within CAE, by adding all forms of verification, such as logic, formal, timing, analog and ESL, that number exceeds 70%.

Even if you’re not a verification engineer, verification must matter as SoC design sizes and complexity continue to outwit even the most sophisticated EDA design flow.  After all, the average design size is about 10-million ASIC gates, with individual blocks running between two- and four-million ASIC gates.  And, the push to get products to market is only increasing.

As DAC kicks off next week in Anaheim, the question is whether a company on the exhibit floor will have the breakthrough verification tool to crack the 70% barrier.  Many will have software and hardware that will help to reduce the insidious verification challenges.  Emulation, for instance, is emerging as a tool for debugging hardware and for testing the integration of hardware and software within SoCs ahead of first silicon.  Stop by EVE’s booth (#510) during DAC to see a range of hardware/software co-verification solutions, including super-fast emulation.  You’ll walk away with a greater understanding of ways to reduce the time consumed doing verification, a handy reusable tote bag, and chances to win one of two iPads or a $100 Visa check card. Stop by Real Intent’s booth (#722) to see how Real Intent’s solutions bridge the verification gap in Lint, CDC, SDC, DFT and X-Prop verification. They are giving away some really good-looking and useful carabiner flashlights and carabiner watches.

A Model for Justifying More EDA Tools

May 31st, 2010 by Tets Maniwa, Editor in Chief for M&E Tech

One of the overwhelming issues facing the EDA community is the need and desire to increase total sales. One of the greatest hurdles in the ongoing chase to get more seats is the inability to convert design software budget dollars into new seat licenses. Although most large companies have more than adequate dollars budgeted for software, less than a quarter of those dollars represent new tool acquisitions. The balance of the funds is for maintenance, training, and management functions like parceling out the limited number of seats available.

The inherent value of EDA tools is to provide more automation to the design task, thereby increasing the individual engineer’s productivity. As an example of the value of a tool, design-for-test tools reduce the time for test development and improve fault coverage over manual methods to more than 90 percent of all faults. The tool leads to better test coverage of the design, resulting in a higher probability of catching the rare or random errors that make the system fail. So the tools simultaneously reduce engineering time and improve test quality by enhancing internal node observability and controllability. As an added benefit, the window into the internal nodes makes system debug and integration much easier, due to the availability of the internal state data at the time of failure. So here an additional tool not only improves the risk-performance equation in its intended department, but also aids another group in performing the debugging work.

The EDAC work on ROI justification does a good job of addressing the investment parts of the equation. (See the presentation on the EDAC web page.) The problems with the standard financial models for return on investment (ROI), however, include the lack of a sense of time (ROI equals the average return divided by average investment) and the total lack of connection with the issues that most concern the engineering managers. The managers are most concerned with risk reduction, overall productivity, and net increases in total dollar sales, whereas the standard ROI measures only look at changes in the direct outputs from the investment. The greatest problem in approaching the issue from an investment perspective is the need to quantify the results from a change before the fact.

The EDAC analysis does a very good job of displaying the effects of delays in product release on costs and revenues, but suffers in this regard, because it requires the quantification of risk factors and clear estimates of productivity changes. These are exactly the values that people want to measure, but are also the most difficult values to determine.

In addition, the direct outputs for new tool acquisitions are changes in productivity, a metric that the engineering community abhors because it implies the design task is a quantifiable, fixed process and not the exercise in creativity and skill that the engineers say it is. Therefore, the attempts to assign weighting values in the financial analysis to adjust the productivity create a conflict for the person who will be reporting the numbers. A dramatic increase in productivity implies a large part of what the engineer does can be replaced by a piece of software. A small increase or a decrease in productivity implies the tool is not of great value. Neither of these results is desirable for the EDA community or for the engineer reporting the numbers.

One reason that the financial model breaks down in the ASIC world is that the return on investment depends on more than just the engineering department’s efforts. External factors like market position, pricing, profitability, and product features are all part of the return portion of the equation, but these factors are not in the control of the EDA tool purchase decision maker.  The overall history of ASICs has been, unfortunately, that although over 90 percent of all ASICs pass customer specifications on the first pass, less than half go into production. If a new product doesn’t go into production, the return on investment becomes a negative value that has no real relation to the measurement parameters of productivity.

Another reason that the basic financial models break down is the need to factor in some adjustment for risk. The relative productivity changes, as difficult as they are to measure, are much easier to quantify than risk reduction, because the level of risk may have no correlation to any dollar amounts. The addition of a tool may increase the risk due to the down time to learn the tool, or may cause a large enough change in the overall design methodology to expose other missing links in the tool chain. On the other hand, an incremental tool change can reduce the risk by enabling a more complete exploration of the design space, thereby ensuring a successful product design.  The risk reduction and productivity improvement are probably the most difficult parameters to quantify in assessing the value of a new tool, and the traditional financial analyses only point out the inability to predict a virtually unmeasurable future result.

New model

As an attempt to address some of the other issues in the valuation of tools, here is a simplified model that combines the traditional financial items like return on investment with some concepts from time to market analyses. The traditional inputs for ROI are the costs for the tools and the savings (in time and money) as a result of the tools. The new model also incorporates the estimated reduction in end-item unit volume and ASP for every month the product release is delayed from the best case schedule. Despite the statement that productivity and risk are hard to quantify, the model generates an ROI number as well as provides a means to evaluate a number of scenarios to bound the relative risk.

The model is in an Excel workbook with three worksheets. The assumptions and variables are entered into the first worksheet, called “Inputs”, which passes the data to another worksheet for cost, ROI, and productivity analysis. The final sheet shows the time-to-market effects of the tool purchase, in terms of total design costs, size of market, and product sales. The effects of new tool purchases show up in the “Impacts” worksheet, where relatively small changes in product development time have a significant effect on the company’s sales numbers. The variables contributing to the bottom line are too numerous for a general analysis, but are readily available for more detailed analysis within the company doing the design.
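The core logic of the workbook can be sketched in a few lines of Python. This is only an illustration of the two calculations the model combines, the time-blind traditional ROI and the time-to-market revenue impact; every number below is a hypothetical placeholder, not a value from the actual spreadsheet.

```python
# Sketch of the two halves of the model: traditional ROI vs. time-to-market impact.
# All inputs are invented for illustration.

def roi(total_savings, total_cost):
    """Traditional ROI: net return divided by the investment (no sense of time)."""
    return (total_savings - total_cost) / total_cost

def product_life_revenue(peak_units, asp, months_late,
                         unit_loss_per_month, asp_erosion_per_month):
    """Product-life revenue after shrinking unit volume and ASP for each
    month the release slips from the best-case schedule."""
    units = peak_units * (1 - unit_loss_per_month) ** months_late
    price = asp * (1 - asp_erosion_per_month) ** months_late
    return units * price

# A $100k tool purchase that saves $60k of engineering time looks like a loser...
print(roi(60_000, 100_000))  # -0.4

# ...but if it ships the product two months earlier (1M units at $25 ASP,
# losing 5% of volume and 2% of ASP per month of delay), the revenue
# preserved dwarfs the tool cost:
on_time = product_life_revenue(1_000_000, 25.0, 0, 0.05, 0.02)
late = product_life_revenue(1_000_000, 25.0, 2, 0.05, 0.02)
print(on_time - late)  # revenue preserved by avoiding the delay
```

This mirrors the point the model makes: small changes in development time swamp the direct cost/savings comparison.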

All of the inputs for the analysis are on the first page; these are the details you will need to get from the customer. The values are linked into the following sheets as variables in fairly simple equations. The pages are protected only to keep the formulas intact. If you find a better algorithm for the cost/benefit evaluation, feel free to modify the spreadsheet by turning protection off and making your changes.

Note that the “Costs” page shows fairly small changes in productivity and a negative ROI for most cases. This is the problem with the traditional measurements: one can’t always find much in the way of good news in productivity or ROI from a standard analysis. Only if a new tool makes a sufficiently large change in productivity does the ROI eventually go positive.

By combining the cost data and the effects on total product-life revenues, the model provides a means of identifying the total influence a tool purchase has on the company’s revenues. In the “Impacts” worksheet, we observe the effects of tool purchases on the release of the target IC. By adjusting costs and delays, a user can also get an estimate for the end-of-life function, which is the cross-over point in a late introduction where revenue goes below some threshold value.
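That cross-over search can be illustrated with a short sketch: find the first month of delay at which projected product-life revenue falls below a break-even threshold. The decay rate and threshold here are invented for the example, not taken from the workbook.

```python
# Hypothetical end-of-life cross-over estimate: the first month of delay
# at which projected revenue drops below a break-even threshold.

def months_until_crossover(peak_revenue, decay_per_month, threshold):
    """Return the first whole month of delay where revenue falls to or
    below the threshold, or None if it never does within 10 years."""
    revenue = peak_revenue
    for month in range(121):
        if revenue <= threshold:
            return month
        revenue *= (1 - decay_per_month)
    return None

# $25M peak product-life revenue eroding 7% per month of delay,
# with $10M needed to break even:
print(months_until_crossover(25_000_000, 0.07, 10_000_000))  # 13
```

If the design schedule already puts the release past this month, the model flags the program as a money-losing proposition before more resources are spent, which is exactly the early-stop indicator described above.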

For some scenarios, this cross-over point is before the design is completed, and therefore is a useful early indicator that a design program should be stopped early, rather than expending resources on a money-losing proposition. If the EDA tool can help a company recover from this situation, then the tool truly is of much higher value to the user than just the change in productivity or some ROI. The value of the tool might be the salvation of a company.

Mind the Verification Gap

May 24th, 2010 by Rick Eram, Sales & Marketing VP

Would you ever use a wrench to tighten a Phillips screw? Or hammer a square peg into a round hole?

Chip design today has become more of a verification task than a design task. Designers spend more than 50% of their time trying to come up with ways to verify their own designs or, worse yet, someone else’s. Despite the change in the nature of the work, designers keep using the same old design tools, hammering away trying to close the design-and-verification gap. Shouldn’t you Mind the Gap?

Over the past decade or so, design work has shifted from writing code to verifying IP and code. Most designers today are tasked with taking a piece of IP designed by someone who may no longer even be at the company, or a design so old that the original designer does not remember the details, or IP the company bought from a third party, and trying to make it satisfy the spec. All is well until you realize that the changes you made to the code have left many holes in the functionality that are not covered by the original vectors you got with the IP/design. In turn, the changes result in many unintended consequences that you could not have predicted from the IP/design spec. The issues only magnify once you put all the IP blocks together.

Well, that’s exactly what happens when you try to hammer a Phillips screw into place. Step back and take a good look at the techniques you use today! Are you still using the same simulation methods? Are you still relying on LEC to catch some of the problems? Are you tossing the verification work over the wall to the verification folks and calling it a day, since that’s their problem (until it comes back to you with an embarrassing bug)?

Over the last decade, design teams have added linting to their flows, and EDA vendors extended linting to cover ever more exotic checks. The tools helped managers become a design IRS and gain a little more visibility into design quality. But the verification tasks did not get any easier, nor did design quality improve as much as promised. Most designers used these tools only as a checklist. The unintended consequence was the extra work of deciphering linter reports, an activity that often has low ROI because of the noise, the difficulty of setup, and the burden of managing yet another set of files and results.

Even though designers are finding themselves doing more verification work than design, the tool of choice is still basically a big hammer (i.e. the simulator). Linters so far have helped managers more than the designers in the trenches.

It is perhaps time for more finesse and a bit of strategy. Next-generation tools can help designers better strategize their work and better target their simulations. With targeted simulation and functional checking of the design on the fly, designers can look deeper into the design and make sure they have not overlooked potential bugs.

What tools can help in this process?  Is it time to rethink strategies and retool? Perhaps it is time to address the design and verification gap. This means marrying verification and design activities together, and starting verification essentially at the outset. Perhaps it is also time to go beyond traditional simulation, linting and traditional verification techniques. Verification needs to move hand-in-hand with the design. Early verification will not only increase productivity and ROI, but will also push designers to cover as many functional scenarios as possible. Next-generation tools must also offer simple setup and super-fast analysis runtimes to incrementally check the design, help designers target simulation, debug the design on the fly, and provide feedback on potential holes left in the design as a result of recoding or other changes.

As your designs grow and you include more IP, your verification tasks will certainly grow. Be sure to Mind the Verification Gap.
