 Real Talk

Archive for 2010

Hardware Emulation for Lowering Production Testing Costs

Monday, December 20th, 2010

The sooner you catch a fault, the cheaper it will be, or so the user surveys tell us.  These surveys, conducted by various data-gathering services, are meant to determine the cost of pinpointing design faults during the creation of chips.  Each one reaches the same conclusion: costs increase by a factor of 10 at each step in the development cycle. 

It’s hard to find a better example than the infamous Pentium bug dating back to 1994.  The cost to fix the bug that found its way inside thousands of PCs was more than a billion dollars because the design fault made its way into a manufactured product.  Talk about breaking the budget and tarnishing a stellar technical reputation!

Of course, EDA companies have long touted their design-for-testability (DFT) methodologies.  Thorough and exhaustive functional verification during the development cycle is still a good strategy and an economical way to find and remove design faults, though it’s becoming less practical.  Systems-on-chip (SoCs) are populated with arrays of cores, including CPUs and DSPs, embedded memories, IP peripheral blocks, custom logic and so on.  With all of this, functional verification becomes a major bottleneck before tapeout, reinforcing the industry-wide consensus that functional verification consumes in excess of 70 percent of the development cycle. 

And, that may not be enough!  When undertaking functional verification using HDL simulators, the trade-off between the amount of testing and the time allocated for the task often leaves undetected faults inside the design.

Herein lies the conundrum.  Functional verification can detect faults early in the design cycle, reducing the cost of finding them.  And yet, a thorough job of cleaning a design would take so long that the cost would exceed any reasonable budget.

A new generation of hardware emulators is changing all of this.  Unlike traditional emulators that cost small fortunes, limiting ownership and adoption to a few units at large companies with equally large budgets, these new functional verification systems are much more cost effective.  They’re also faster. 

These emulators, implemented on small footprints, are powered by the latest FPGAs and driven by robust software.  They are accessible to SoC engineers and embedded software developers and can be used throughout the design cycle.  Designs target a variety of fast-paced markets, including networking, communications, multi-media, graphics, computer and consumer.

An example is ZeBu from EVE.  It supports a comprehensive test environment to exhaustively exercise all functions of a design.  Its interactive debugging, long a prerogative of the software simulator, enables a higher degree of verification and testing than is possible with traditional software tools.

Design teams have finally found a means to uncover those nasty and difficult bugs, saving the budget and making management happy.  These new functional verification tools, such as emulation, offer orders of magnitude more testing than available using software tools but with the same financial investment.  Check the recent user surveys and see for yourself.

What do you need to know for effective CDC Analysis?

Friday, December 3rd, 2010

The complexity of clock architectures is growing with larger designs. Functionality that was traditionally distributed among multiple chips is now integrated into a single chip, so the number of clock domains is increasing.  Power management (clock gating, power domains, voltage scaling) is a dominant factor that shapes clock architecture, and designing for multiple functional modes adds further complexity.  All of these issues add logic into the clock trees, and as a result it is becoming more complex to verify designs for glitch and metastability issues.

There are very few established standards or methodologies for managing clock architectures.  Even the few that exist, such as UPF (Unified Power Format) for power management and synthesis for power, don't go far enough to be clock-architecture-aware with respect to glitch, data stability and metastability issues.  For example, clock gating insertion is done without full awareness of asynchronous crossings.  In fact, there are myriad issues relating to asynchronous clock domains that have no established standards.  Some of these are:

  • Single bit synchronizers (see the sketch following this list)
  • Asynchronous FIFOs
  • Handshake structures
  • Clock Gating
  • Re-convergence
  • Design practices to mitigate glitches in asynchronous crossings
  • Asynchronous/Synchronous resets crossing domains
  • Reset Gating
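
To make the first item concrete, below is a minimal sketch of the classic two-flop synchronizer for a single-bit control signal; the module and signal names are illustrative only, not taken from any standard or tool.

  // Two-flop synchronizer sketch for a single-bit control signal
  // crossing from an asynchronous source domain into clk_dst.
  // Names (sync_2ff, d_async, q_sync) are illustrative.
  module sync_2ff (
    input  logic clk_dst,   // destination-domain clock
    input  logic rst_n,     // destination-domain reset, active low
    input  logic d_async,   // signal launched in another clock domain
    output logic q_sync     // synchronized copy, safe to use in clk_dst
  );
    logic meta;             // first stage; may go metastable

    always_ff @(posedge clk_dst or negedge rst_n) begin
      if (!rst_n) begin
        meta   <= 1'b0;
        q_sync <= 1'b0;
      end else begin
        meta   <= d_async;  // may capture a metastable value
        q_sync <= meta;     // second stage gives it a cycle to resolve
      end
    end
  endmodule

Note that this structure is only safe for single-bit, slowly changing control signals; multi-bit buses require the common-enable, handshake or FIFO schemes discussed below.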

In order to manage the design, implementation and verification of clocks, more members of the design team need to be “clock/reset architecture” and “clock/reset implementation” aware.   This awareness is necessary for verifying correct functionality of the clocks when using semi-automatic CDC analysis tools and/or manual processes such as design reviews.

The clock architecture needs to be understood to generate requirements for the clock/reset networks.  Design standards for implementation can be generated from these requirements.  The design standards drive verification strategy: what can be automated using CDC tools and what must be relegated to other methods.  An example of what cannot be verified by CDC tools is the selection of an invalid combination of clocks in functional mode.

The following components need to be considered with regard to how they affect clock/reset architecture:

  • Timing:  Static Timing Analysis & Clock Tree Synthesis
  • Mode Selection: Test/Functional Mode, Clock mode select (Multiple Functional Modes), Configuration registers
  • Power: Gating Control, Voltage Scaling
  • Testability: Clocks for Scan, Clocks for At-Speed, BIST, Lock-up latches
  • Quasi-static Domains

The clock/reset architecture specification needs to contain the following details in order to meet the requirements for design implementation and verification:

– CDC Implementation Style and Design Practice

  1. Single Bit Sync
  2. Common Enable Sync (Data Bus)
  3. Fast-to-Slow Crossings (FIFO; gray-code pointers, read-before-write, write-before-read; see the pointer sketch following this list)
  4. Multi-mode crossings (multiple frequency modes;  Data stability)
  5. Data Correlation (Handshake)
  6. Synchronizer cycle jitter management
  7. Re-Convergence management of control bit crossings
  8. Clock Gating management
  9. Internally generated reset management
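
As a small illustration of item 3, the read and write pointers of an asynchronous FIFO are normally converted to Gray code before being synchronized, so that only one bit changes per increment and a synchronizer in the other domain can never sample an inconsistent pointer value.  The functions below are a sketch with an assumed 4-bit pointer width; they would typically live in the FIFO module or a shared package.

  // Binary/Gray conversion for asynchronous FIFO pointers (4-bit example).
  // Because only one bit of a Gray-coded pointer changes per increment,
  // a two-flop synchronizer in the other domain never captures a mix of
  // old and new bits that decodes to a bogus pointer value.
  function automatic logic [3:0] bin2gray (input logic [3:0] bin);
    return (bin >> 1) ^ bin;
  endfunction

  function automatic logic [3:0] gray2bin (input logic [3:0] gray);
    logic [3:0] bin;
    bin[3] = gray[3];
    for (int i = 2; i >= 0; i--)
      bin[i] = bin[i+1] ^ gray[i];
    return bin;
  endfunction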

– Clock Domain Specifications

  1. Synchronous Domains
  2. Asynchronous Domains
  3. Quasi-static Domains (very slow clocks)
  4. Exclusive Domains (clocks that are active when other related domains are static, such as configuration register writing)
  5. Resets and their Domains

– Functional Mode Configuration Specifications

  1. Mode Control Pins and logic states
  2. Configuration Registers settings
  3. For multiple functional modes, mode control settings

– Primary Input/Black Box Specifications

  1. Clock domains for the primary inputs
  2. Clock domains for black box outputs

– Design Initialization Specifications

  1. How to initialize the design (critical for CDC verification that requires formal verification)


The above specifications are critical to ensure an accurate setup for CDC analysis, which in turn results in a complete and accurate analysis.   This will minimize the most frequent complaints about CDC analysis tools: noise (voluminous messages), false violations and incomplete analysis.   Also, by documenting the CDC specifications, all project engineers will be better equipped to review the validity of CDC analysis results.

Even with the best specifications, translating them into constraints for the CDC tools requires a robust setup validation methodology to identify missing constraints.  Real Intent’s Meridian CDC tool has such a robust setup validation flow, with supporting graphical debug/diagnosis to provide guidance on the completeness and accuracy of constraint specifications.  Ease of setup has been cited as a key consideration by many of our recent customers who have switched to Meridian CDC.

In summary, CDC analysis and verification are increasing in complexity.   Effective CDC analysis requires that designers have detailed knowledge of the design’s clock/reset architecture, so that complete and accurate constraints can be provided to the CDC tools and designers can meaningfully and efficiently review the validity of the analysis results.

A version of this article was previously published by Chip Design at

The SoC Verification Gap

Monday, November 15th, 2010

If you have been talking to anyone at Cadence, or others in the industry these days, I’m sure you have heard about the EDA360 vision.  If you are an engineer, you are probably asking what this “marketing fluff” is and how it helps you.   Let me tell you what it means from my perspective, as somebody whose job it is to work with customers to understand the latest verification challenges and figure out what Cadence needs to do to address them.  In short, think of EDA360 as a wake-up call, a heads-up that we understand there are big challenges our customers are facing in realizing SoCs and systems, and that this requires something far beyond EDA-business-as-usual.   At this point, those of you who know me are probably saying to yourselves, “Is Mike Stellfox really talking about this EDA360 stuff? Has he sold out?”   The answer is no, I have not sold out, and let me tell you why.

I have been spending a lot of time lately with engineering teams developing really big SoCs, and I’ve realized we have a significant challenge here: there is a HUGE gap between how SoCs are verified today and what is needed in order to have a scalable and efficient SoC development process.   The challenge of bringing a new SoC to market is exactly why a colleague of mine coined the phrase “time to integration”.  Today’s SoCs are all about integration: integrating IP blocks, integrating analog content, and integrating more and more of the software stack.  While it is true that all of this integration work still includes design challenges, the bigger issues around improving time to integration are centered on improving the entire SoC verification process.   I have seen very few well-structured, methodology-based approaches to how customers verify their SoCs.  There are many ad-hoc processes and some internal tools and scripts that attempt to improve the situation, but when it comes to complex SoCs a much more structured and automated approach to verification is needed.   This opportunity to bring a more structured, methodology-based approach to integrating and verifying SoCs will likely need to be developed in a different way. I don’t think it will be feasible to simply gather the requirements, go back to Cadence R&D and ask them to develop some sort of silver-bullet “SoC Verification Tool”.  It is going to require a different approach, one that requires tight collaboration with customers developing these complex SoCs. 

Within Cadence, we now have an organization known as the SoC and Systems Group, whose charter it is to define and drive solutions for improving SoC and System realization, where a significant focus will be on improving time to integration and verification of complex hardware and software systems.   Here are some of the key challenges I have seen with regard to integrating and verifying SoCs.

  • SoCs rely on several execution platforms in order to verify the integration, develop software, and verify that the application level use cases meet the requirements of the end system.  This includes a TLM-based Virtual Platform, RTL simulation, RTL Acceleration/Emulation, FPGA prototype, and links to the post-silicon environment.  It is a huge effort to develop and maintain the models and verification environments for each of these platforms, and it is not easy to reuse stimulus, checks, and coverage metrics across each platform. 
  • It is like trying to find a needle in a haystack to debug at the SoC level where the bug might be hidden somewhere in the hardware, software, or the verification environment.  SoC level debug is further complicated by the fact that it is often necessary to reproduce the bug on a different execution platform where there is much better debug visibility.
  • Today IP is not optimized for integration within the SoC.  There is a need to develop and deliver the verification content with the design IP in such a way that it is optimized for integration to reduce the time and effort for integration verification. 
  • Given the complexity of the software content for most SoCs, and all the ways the software might interact with the hardware, there is a need for better tools for automating the creation of software driven tests, and for debugging hardware and software together.
  • More and more analog content is being integrated into SoCs so there is a need to more thoroughly verify the integration between the digital and analog blocks by including reasonably accurate analog models in the IP and SoC verification environments.
  • In order to effectively manage a large-scale, multi-geography SoC development project, there need to be clear metrics and milestones for tracking and reporting the progress of all the SoC development activities. 

These are the core challenges that I see need to be addressed to close the SoC verification gap.  Admittedly, the gap today is rather wide, but I am confident that with the right focus and a complete understanding of our customers’ needs, we will align with much of what is behind the EDA360 vision and close this SoC verification gap in the coming years.

Building Relationships Between EDA and Semiconductor Ventures

Monday, November 8th, 2010

Let’s change topics this month from various approaches to solving the verification challenge to ways to support semiconductor startups.  For this, I turn to EVE’s Vice President of Sales Ron Burns.  He has written a compelling article that will appear in the December issue of GSA Forum on the importance of building relationships between EDA companies and semiconductor startups.  He advocates finding new ways to partner with these new ventures because our synergistic industries have a shared destiny.

From his perspective and one shared throughout EVE, today’s fabless startups are in search of external partners willing to work with them and help them become successful.  These partners need to have a solid understanding of the challenges facing a startup, from funding, product development and globalized teams to core competencies and time to market.  And, they need to be ready to offer assistance and support, as appropriate.  In fact, with some creativity, an EDA company can be a collaborative partner able to help a startup navigate some of these challenges. 

He writes convincingly about the need to understand the process.  What he means is this:  Before engaging, the most pressing consideration is to establish a goal for working with a startup.  Thinking in terms of a shared destiny helps clarify short-term revenue objectives and the lifetime value of the relationship, with regard to both cost of sales and support, and technology adoption. 

EDA business models have always had some elasticity, and discounted software or loaner programs are common.  Another option is a flexible payment model based on projects and key milestones, with the understanding that the EDA company will help the startup reach those milestones.  EVE, for example, offers a remote access model or peak-load onsite rentals.

Intellectual property (IP) is becoming more and more of an issue and many larger EDA companies are building portfolios useful for a new venture.  While that makes great strategic sense, the startup may not have the resources to buy it. 

There are other options, including several offered by EVE.  Our scalable emulation system supports a wide range of verification components that include transactors, memory models and speed-rate adapters.  The transactor catalog comprises, among others, PCIe, USB, FireWire, Ethernet, AHB, AXI, TLM 2.0, Video, HDMI, I2C, I2S and JTAG components.  The Memory Model catalog includes virtually all popular types of memories, such as DDR, DDR2, DDR3, GDDR5, mobile and flash parts.  A Speed-rate Adapter catalog offers PCI, USB and a multi-media card with a complete set of video protocols. 

We can license any verification component for the duration of a project and offer a program that lets startups swap certain pieces in and out over that period under the same license.

The possibilities for EDA companies to build supportive relationships with startups are endless, especially for creative companies willing to do something different.  The strategy is to align the startup’s goals with the EDA company’s tools, services and IP.

Watch your email inbox next month for your link to the GSA Forum article by Ron Burns, and be sure to let us know if you agree that EDA and semiconductor ventures have a shared destiny. 

Thoughts on Assertion Based Verification (ABV)

Monday, November 1st, 2010

Verification approaches or methodologies that increase the probability of designs being correct the first time and that can be easily integrated into the existing functional verification flow will find a ready market. If, in addition, such technology reduces both verification time and cost, it will be a major winner.

Assertion-Based Verification (ABV) is in that class of technology. However, ABV has not been widely adopted because of the time, cost and difficulty of deployment. The predominant language used today for coding assertions is the SystemVerilog Assertions (SVA) language. The difficulty of SVA, plus other factors, has limited most teams to simple assertions rather than the more useful and powerful ones.
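
As a rough illustration of that distinction, the sketch below shows a simple single-cycle check next to a more powerful temporal assertion that captures a multi-cycle handshake rule. The interface and signal names (req, gnt, clk, rst_n) are hypothetical, not taken from any particular design.

  // Hypothetical request/grant checks; such a module is typically attached
  // to the RTL with a 'bind' statement or embedded in the design itself.
  module req_gnt_checks (input logic clk, rst_n, req, gnt);

    // Simple assertion: a grant never appears without a pending request.
    assert_gnt_has_req: assert property (@(posedge clk) disable iff (!rst_n)
      gnt |-> req);

    // More powerful temporal assertion: once a request rises, it must be
    // held stable until a grant arrives within 1 to 4 cycles.
    assert_req_handshake: assert property (@(posedge clk) disable iff (!rst_n)
      $rose(req) |-> req throughout (##[1:4] gnt));

  endmodule

The second style is where much of the verification value lies, and it is also where the language's learning curve bites hardest.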

According to Harry Foster (verification methodologist at Mentor Graphics) in his paper Assertion-Based Verification: Industry Myths to Realities (Invited Tutorial…2008):

“… is a myth that ABV is mainstream technology. … What differentiates a successful team from an unsuccessful team is process and adoption of new verification methods. Unsuccessful teams tend to approach development in an ad hoc fashion, while successful teams employ a more mature level of methodology that is systematic. …”

Implementing ABV is somewhat of a “chicken and egg” situation. The industry accepts that ABV can reduce debug time by 50% and there is no question relative to its “goodness” for increasing first time design success. However, the implementation phase today is so open-ended and difficult that only “baby steps” have been taken. Even these small steps are useful; however, the industry hasn’t come close to attaining the full benefits of ABV.

A good analogy to ABV is exercise. Everyone knows the benefits are major and any level of exercise is good. Most of us have taken “baby steps” towards exercise. However, to attain the full benefits requires, in addition to believing in the benefits, a way to reduce the open ended nature. This can be done by creating an exercise program or approach that works within the user’s life style and becomes part of their weekly routine. This typically requires help in the form of books, trainers, equipment, etc. to get started and maintain the motivation for continuing the program and reap the benefits.

Clearly the opportunity exists to provide help for implementing ABV. The methodologies are already in place. We at Zocalo believe that automation plus metrics that take away or reduce the open-ended nature of the technology and work within the user’s flow will motivate projects to start and continue using ABV, thereby reaping the full benefits of implementation.

To access a free Assertions white paper (no registration required), please visit

A version of this article was published earlier this month at

Who is the master, who is the slave?

Monday, October 25th, 2010

I was recently involved in a panel of EDA vendors and EDA users. Several EDA vendors were present and the discussion was rather heated. The designers made the argument that tools don’t address the difficulties and challenges of today’s designs. The point was that designers have to deal with EDA tool shortcomings on a daily basis.

The vendors in their own defense had a simple argument: designers need to understand the limitations of the tools and design to the capability of the existing tools. Other reasons cited were that designers need more training or that they don’t fully appreciate the technology. Maybe the customer is not always right! Follow-up discussions with the designers made it clear that they were disappointed. They said the tools don’t work well together, and despite many attempts and claims of integration, the tools are at best a bunch of point tools strung together with many holes in the tool chain. The use models are patched up with scripts and the handiwork of designers. The chips have grown significantly, yet the tools have changed incrementally at best. Many tools in the flow are based on 10-20 year old technologies. A “sell what we have” mentality dominates a good number of EDA vendors, which are typically run by “industry veterans” who have forgotten about the innovative part of this business and don’t spend the time to understand their customers’ needs. Instead these “veterans” focus on slideware and selling processes, wasting their time and the designers’ time, raising cost and risk and not addressing the real problem in the end. They try to convince customers with fancy slideware that their fix-and-patch approach will address the design challenges, without even understanding what the customer does on a daily basis and where they get stuck.  The roadmap is often not much more than a repetition of the symptoms gathered from designers, not a detailed understanding of the problem.   So the vendor band-aids the symptoms and misses the mark on the root cause. So where is the innovation? The business that was born from innovation is dominated by recycling and rehashing decades-old technologies. What a pity.

Well, it is a matter of simple economics. Changes to the flows can be costly and can potentially cause significant downtime risk for design teams. On the other hand, on the EDA side, the cost of tool development is high, and even with the best tools the company may miss the mark, adding to risk and delaying revenues!  So what is the problem? How does one fix the broken model?

If you ask any experienced designers, they can quickly point out a bunch of issues with the tools they use today. These issues are often rooted in the EDA tool provider’s limited understanding of the problem and of the tasks performed by designers. Designers often notice that EDA tools are designed by people who don’t understand SoC design.  The software folks don’t always translate hardware-related issues into the right applications. This simple fact causes many issues and problems with the tools’ completeness and effectiveness. All of it is nothing but added cost and risk in the development and deployment of EDA tools.

Generally, once an EDA vendor finds a formula that works, they stick to it and try to milk it for all it’s worth.  The tools often address only a portion of designers’ needs. Then the vendors apply the same engine in more ways than imaginable to solve mutually exclusive problems faced by designers. The result is discontinuity, noise, inaccuracy and lack of interoperability, causing nothing but pain for designers. The design flows are littered with such tools. Bottom line: the vendor must understand the most detailed issues the customer faces before designing tools. That’s when innovation takes place. We are all familiar with tools that share nothing but a name, “The Brand”, and not much more. Either the integration is nothing but a few slides, or the problem space is so far apart between the various options of the tools that, despite marketing tricks, it is impossible to integrate. Nothing but a marketing ploy. Tool vendors are interested in solving big problems, some of which are not even problems from the designer’s perspective. Sometimes simple observation of the bottlenecks designers deal with on a daily basis will reveal a wealth of information and opportunities to improve the tools, often missed by tool vendors.  A simple change, a small innovation in tool implementation, can have a huge impact on designer satisfaction.

Back to the original question: who is the master and who is the slave? If the tool designers understand the problems faced by the designers in detail and get beyond superficial problem-statement slides, then the goal of building better SoCs can be met at lower cost and risk. Bottom line: the chip has to be designed, and that’s what makes our world go around. Efficiency and accuracy come at a cost, and if the gains are shared on both the tool and design side, the result is higher quality chips, better processes, as well as lower risk. The designer needs to be aware of innovation and recognize a slap-and-patch approach compared with tools designed on sound engineering fundamentals. This saves everyone cost and increases profits.

Economics of Verification

Monday, October 11th, 2010


In light of the ups and downs of the world economy, it is interesting to review how the principles of economics apply in the IC design industry, in particular with respect to verification. How much do the day-to-day decisions people make in design and verification reflect the principles of economics?  In this blog, we will look at three microeconomic principles and see how we can make the best choices by following them.

The 1st Principle – People face tradeoffs

The resources of our planet are scarce; therefore nobody can have all they want. Everybody has to face tradeoffs in making a decision. In today’s economy, the tradeoffs and choices one has to make can be particularly important.

Most managers in the IC design industry have been in a very tough situation the last few years. They have been faced with increased complexity of designs, reduced staff, tighter budgets, shortened project schedules and greater pressures from the market to perform. The choices they have to make under these constraints are challenging.

For example, at a higher level, managers may need to decide on:

  1. With a reduced staff, how many people should I put on the design team vs. the verification team? Or does one person do both jobs?
  2. With a shortened project schedule, which part of the design and verification cycle can be shortened?
  3. With a tighter budget, what kind of EDA tool investments will bring the best ROI?

At a lower level, decisions related to verification could be:

  1. Given that verification takes 70% of the whole design cycle, what technology can help reduce the verification bottleneck?
  2. How much verification can we afford to perform on the block-level vs. the system-level?
  3. How much verification is enough to deliver confidence?

Failure to make the right choices in these decisions could potentially lead to lower quality of product, loss of profit or even bankruptcy in the current economic climate.

To best assess the tradeoffs in making these decisions, one should look at the opportunity cost involved. That brings us to principle number 2.

The 2nd Principle – The cost of something is what you give up to get it

In evaluating each choice, the rule is to see which choice has the least opportunity cost. Opportunity cost is simply what you must give up (the next best alternative) in order to get what you want. For example, you have 2 hours of free time. You could either watch a movie or take a nap. The opportunity cost for taking a nap is the enjoyment from the movie you would have otherwise had. Similarly, the opportunity cost for watching the movie is the much needed rest you would have gotten otherwise. The decision comes down to what is most important to you. It is worth noting that opportunity cost is often hard to measure and depends very much on the individual and situation involved. Nonetheless, opportunity cost is useful when evaluating the cost and benefit of choices, and the choice to go with should be the one with the least opportunity cost.

Given that verification takes 70% of the design cycle and 60% of chip re-spins are due to logical/functional errors (Trends in ASIC Prototyping), it is important to invest in technology that can improve verification confidence and reduce the overall verification cycle. We will use the following hypothetical scenario to illustrate how opportunity cost comes into play in the decision-making process.

The project manager at company ABC is deciding between buying more simulation licenses to do more system-level verification and adding automatic functional verification software to the methodology for more block-level verification. The current verification methodology is such that limited block-level simulation is performed by designers, due to the effort involved in creating block-level testbenches. Most verification is done at the system level by verification engineers. The company recently had a chip re-spin due to a functional error found in silicon. The project manager sees the need for more verification at both the block level and the system level. However, due to a limited budget, they can only invest in one area. To make the best decision, they must evaluate the verification ROI at the block level vs. the system level and go with the option with the most benefit, i.e., the least opportunity cost.

More and more companies are seeing the benefit of block-level verification using automatic functional verification tools. These tools operate with no testbench, therefore requiring little time and effort to set up and run. They employ formal technology to exhaustively verify the RTL blocks and catch bugs such as unreachable states, single or pair-wise state deadlocks, dead code, and synthesis pragma violations in the designs. By performing this kind of verification early in the design cycle, finding and fixing bugs becomes easier. This improves the overall quality of the RTL design before system-level verification begins and, as a result, reduces the verification requirement at the system level. It is estimated that employing automatic functional verification tools can catch 50% of the design bugs early while saving 15% of the overall project cycle. This is the opportunity cost that company ABC would have to forgo if the project manager goes with more simulation at the system level.

Similarly, additional simulation at the system level could also lead to improved verification confidence. However, most things face the law of diminishing returns (also called the law of increasing opportunity cost). For example, in a production system with fixed and variable inputs (such as equipment and labor), beyond some point each additional unit of variable input yields less and less output. The same law holds for increased levels of simulation. The benefit of additional simulation at the system level is not as pronounced because significant system-level simulation is already in the current methodology. Therefore, the opportunity cost of investing in the automatic functional verification tool is lower, and that is where the decision should go.

With this decision made, the next question is how much verification should be done at the block level. To answer this question, we need to examine principle number 3.

The 3rd Principle – “How much” decisions are made at the margin

Some decisions in life involve either-or choices, like the one we made earlier. Other decisions involve “how much” choices, which require analysis at the margin. One needs to look at the marginal cost and marginal benefit and find the equilibrium to arrive at the optimum solution. Marginal cost is the additional cost imposed when performing one more unit of an activity. Similarly, marginal benefit is the additional benefit received when performing one more unit of an activity. The point where marginal cost and marginal benefit cross is where we achieve the most efficiency.

Following our hypothetical scenario, suppose the following table shows the marginal cost and marginal benefit for each additional week of automatic functional verification performed at the block level. It is easy to understand that fewer bugs will be found as time progresses. The marginal benefit ($) is calculated as the number of bugs found per week multiplied by the cost to find one bug at the system level (assumed to be $200 in our analysis). The marginal cost is simply the salary cost of having the designer perform block-level verification. Comparing the two, in week 3 the marginal benefit of $2,000 still exceeds the $1,200 marginal cost, while in week 4 it drops to $1,000, so the optimal amount of block-level verification lies between weeks 3 and 4.

Week   Total Bugs Found   Bugs Found Per Week   Marginal Benefit ($)   Total Cost ($)   Marginal Cost ($)
1            25                  25                   5,000                1,200             1,200
2            40                  15                   3,000                2,400             1,200
3            50                  10                   2,000                3,600             1,200
4            55                   5                   1,000                4,800             1,200
5            58                   3                     600                6,000             1,200



Even though we may not know these microeconomic principles explicitly, most of our everyday decisions are made by implicitly evaluating the opportunity cost and doing marginal analysis. By understanding these principles, one can form a clear framework and plug in real numbers on which to base decisions. This is all the more important in the current economic climate, because a bad decision could lead to some very undesirable consequences.


Hardware-Assisted Verification Tackles Verification Bottleneck

Monday, October 4th, 2010

An often-repeated industry mantra is that verification takes up about 70 percent of the development cycle, making it the most time-consuming piece of chip design today.  Every indication that we’ve seen over the past 10 years confirms this number.  And while a host of software-based verification tools have been deployed to tackle the verification bottleneck, design teams are turning to hardware-assisted verification platforms to accelerate hardware debugging and software test and integration.  As a result, they’re often successful at reducing their verification budget and beating time-to-market pressures.

Let’s examine this move toward hardware-assisted verification.

Software development can’t wait for working silicon, which means that design teams need a fail-safe way to verify that their chips will work as intended as they run embedded software.  All the while, they’re grappling with shortened development cycles and designs that reach billions of application-specific integrated circuit (ASIC) gates and millions of lines of code. 

 This means that a design team needs to create a working prototype for software development as early as possible and before the end of the hardware design cycle.  The prototype must fit into the general hardware design flow or the design team risks extending the design cycle.

More and more, hardware-assisted verification platforms are used to simultaneously validate hardware and software and, generally, fall into either emulation or field programmable gate array (FPGA) prototyping categories. 

Emulation has had a reputation for offering large capacity and good hardware debug capabilities, but is reputed to be slow, expensive and poorly suited for validating embedded software.  Conversely, FPGA prototypes are cheaper and faster, but do not have hardware debug capabilities and take longer to build and test. 

Many design teams with large budgets use both approaches.

That’s changing with the latest generation of hardware-assisted verification platforms able to offer features and benefits of both.  Suppliers of these platforms have combined speed for embedded software validation with hardware visibility and debug, giving design teams a way to verify hardware and software as a fully operational embedded system.

One popular emulator based on an FPGA architecture is used for simultaneous hardware and embedded software verification.  It has the speed to validate embedded software and the ability to provide full internal signal visibility for effective hardware debug.

In general, ASIC prototypes require manual code changes for FPGA implementation, followed by logic synthesis and manual partitioning across multiple FPGAs, then place and route.  Designers repeat these steps each time the design is changed, making the prototype ineffective for hardware verification.  This latest generation emulator automatically completes these steps without modifying the original system-on-chip (SoC) source code.  It handles complex clock processing, memory generation, multiplier/ALU logic, bus resolution and multiple-data-rate (XDR) wrapper generation. 

Further, it can compile incremental changes to either the testbench or design under test (DUT).  And, it uses the same hardware and models across the design cycle, making it a single platform for hardware and software verification.

Hardware-based verification platforms are giving design teams a way to break the verification bottleneck and reduce the verification budget.  They’re finding that they can now use a single platform to handle hardware/software architectural tradeoff analysis, hardware debug, hardware regression, software integration and embedded software validation.  Now, that’s a mantra worth repeating.

Excitement in Electronics

Monday, September 27th, 2010

The year was 1972; I had just graduated from high school.  It was decided that I should be working…I was not sure what I was supposed to do for work.  I picked up a newspaper and there was a big article saying that National Semiconductor was hiring.  I decided to get a job there.  I was not sure what they did, but they were hiring and I needed a job, so it seemed like a fit to me. 

I went into the lobby of the main building (at that time there were only three) and asked for a job application.  The receptionist gave me one and I sat down in the chair to fill it out.  There were lots of people coming and going through the lobby.  One gentleman came up to me and asked me what I was doing.  I answered, “Filling out an application for a job”.  He asked me why, and I said, “to get a job” (thinking this was a trick question).   He looked puzzled and said, “Why, you already work here”!  I assured him that I did not but I wanted to.  He smiled and said, “Well, your twin works here then, come with me.”  He continued, “You just got yourself a job”.

That is how I got into Electronics.

On my first day on the job my boss introduced me to the girl that he thought looked so much like me.  She had long straight hair (we all did back then), was my size and build but she was much prettier than me.  I was very grateful that she worked there and I thanked her for helping me to get my first job.  Needless to say we were fast friends and like most twins, inseparable.

I worked at National Semiconductor for 8 years.  National Semiconductor was great about education.  They sent me to Electrical Engineering classes; I was the only girl there.  My bosses wanted me to be an engineer.  The best part was that most of the classes were held right there on the premises.  I could take college classes at work, get college credits and get paid at the same time.  I loved the classes because they were well organized and well taught, and I could usually relate them back to the work I was doing…so it made it very interesting.

When I first started at National I worked in the test area on swing shift and ran a TAC tester.  The goal was to get as many units tested as possible…oh, a goal.  Cool, I can do that!  Each night I tested more units than the night before.  I streamlined the input and output of the machine so that I never let the machine stop.  I organized the paperwork so that it was completed as the parts were being tested.  I learned how to fix my machine so that I did not need to wait for maintenance if my machine went down.  I did preventive maintenance on my machine so that it was working better than any of the other machines on the line.  Everyone hated me; I kept increasing their quotas because I could do more.  Soon I was made lead of the area and I taught everyone else to be more productive.

National was a wonderful place to work.  Each time, I got bored or wanted to learn something new, there was always that opportunity.  After a while I did not have to petition for jobs, I had managers coming to me to ask me to help with a new department, organize a production flow or train others to be more effective.  I worked in Masking, Diffusion, Design, Engineering, and Mask Making and got to be an expeditor, which was fabulous…, it matched my personality…a runner!  As an expeditor, I needed to produce a new product (for example: the very first Ladies LED watch was made by me and I still have it in my jewelry box), fast and without a production line.  So I needed to come up with the flow to produce the item, get time on different lines so that I could do the work and not interrupt their production flow…while at the same time making my schedule.  I met with the product line managers, made a deal with them to use their machines and a time schedule as to when I would need them…and made it all fit my product schedule.  Then I ran from one production line to another to meet or exceed my target…I loved it.

The other memorable position that I had was offered to me by Pierre Lamond.  Pierre was the Executive VP of R&D at the time.  He had heard about me and was pulling together a team of people to open up the “Bubble Memory” production line.  He asked me if I wanted to join.  I said YES, of course!

I had no idea what to expect.  I left my current job and department without question and on Monday morning went to HR to find out where I should report.  They told me the room number.  I thought it was odd because I knew this building very well and the room number she had given me was an empty part of the building.  When I arrived…I was right, it was empty.  The team that was assembled started to arrive and then Pierre came in.  He said that he had chosen us to build the line from the floor up and he meant it literally.  We were in a room with no walls.  We drew up the plans for the production line, met with vendors to get the right equipment, worked with the plumbers, electricians etc to build out the space as per our specifications.  When needed we went to Sears to buy tools, pipe whatever to keep the project on schedule.  Then one day we were able to run our first wafer through the line…it was really an exciting time.

Achieving Six Sigma Quality for IC Design

Friday, September 17th, 2010

The manufacturing industry has seen significant improvement in quality over the last few decades, due to the implementation of Lean Manufacturing processes and Six Sigma quality control measures.

Lean Manufacturing, also called just-in-time (JIT), was pioneered by Toyota to reduce non-value-added waste in the manufacturing process through continuous improvement and producing only when needed, with minimum inventory of raw materials and finished goods. Six Sigma is a well-known, data-driven set of standards that uses in-depth statistical metrics to eliminate defects and achieve exceptional quality at all levels of the supply chain. Lean Manufacturing and Six Sigma quality (Lean Six Sigma) have merged in theory and practice [1]. This new paradigm requires each employee to assume responsibility for the quality of their own work. To create higher quality, defects need to be detected and fixed at the source. Quality is built and assured at each step in the process rather than through inspection at the end. Adoption of Lean Six Sigma in production resulted in the high quality of goods and services that we all enjoy today.

These same principles and philosophy can be directly applied to the IC design industry to improve the quality of chips. Defects discovered in silicon at the end of the manufacturing process are costly, inefficient and wasteful. Instead, bugs should be detected at the RTL source where they are created. The traditional way of designers writing the HDL code, performing a minimum amount of verification and throwing it over the wall to the verification team is the ultimate cause of poor quality, long project cycles and wasted money for investors and stockholders alike. It is time the IC design industry adopted the Lean Six Sigma philosophy to build quality designs from the very beginning.

There are a couple of reasons that account for the divide between design and verification. First is the notion that it is better to have another pair of eyes to examine and verify the HDL design rather than trusting the designers who write the RTL. The second is the low verification ROI achieved by using the traditional simulation technique to perform block level verification. A lot of time and effort is needed to create the verification infrastructure, thus negating the productivity gains from early verification.

The first factor requires a change of attitude, as happened in the manufacturing industry. People need to be made responsible and accountable for the quality of their own work. Detecting failures at the source costs the least amount of time, money and effort. Quality can only improve when individuals are held responsible and results are measurable.

The second factor can be eliminated with the advancement of formal verification technology. Formal verification requires no testbench, therefore reducing the requirement on building verification infrastructure; it performs exhaustive analysis and can often catch corner case bugs that are hard to find through simulation. Debugging at this stage is more efficient because of the intimate knowledge the designer has of the code, the limited scope of logic involved and the fact that formal tools show the source of the problem through error traces. Using these tools early in the design flow can detect bugs at the source and thus significantly improve the design quality.

There are two types of formal functional verification tools on the market. The first is automatic functional verification. Automatic functional verification tools take the RTL design alone and perform exhaustive formal analysis to catch design bugs that show up as symptoms such as dead code and single and pair-wise state machine deadlocks. This significantly improves the quality of the design with zero effort, offering the best verification ROI.
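
As a hedged sketch of the kind of bug such tools flag, the hypothetical FSM fragment below declares an ERROR state that no transition ever reaches, so its recovery branch is dead code; an automatic formal tool would report the unreachable state from the RTL alone, without any testbench.

  // Hypothetical controller FSM with an unreachable state.
  // No branch ever assigns state <= ERROR, so the ERROR arm is dead code.
  module ctrl_fsm (
    input  logic clk, rst_n, start, done,
    output logic busy
  );
    typedef enum logic [1:0] {IDLE, RUN, ERROR} state_t;
    state_t state;

    always_ff @(posedge clk or negedge rst_n) begin
      if (!rst_n)
        state <= IDLE;
      else
        case (state)
          IDLE:  if (start) state <= RUN;
          RUN:   if (done)  state <= IDLE;  // never transitions to ERROR
          ERROR:            state <= IDLE;  // unreachable: dead code
          default:          state <= IDLE;
        endcase
    end

    assign busy = (state == RUN);
  endmodule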

Another type of formal functional verification is property verification (also called model checking). Designers write assertions in the RTL to describe the constraints of the environment and the desired behavior of the block. Property verification tools perform exhaustive formal analysis to detect situations that violate the desired design behavior. They produce error traces that show the sequence of events leading to each violation. Designers can debug and fix the errors easily because verification is performed within a limited scope at the block level.
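
As a minimal sketch of this style (the FIFO signal names below are hypothetical, and the properties could equally be written inline in the RTL), an assumption constrains the environment while an assertion states the desired behavior; the model checker then searches exhaustively for any legal input sequence that violates the assertion.

  // Hypothetical block-level properties for formal property verification.
  module fifo_props (input logic clk, rst_n, push, full, empty);

    // Environment constraint: the surrounding logic never pushes into a full FIFO.
    assume_no_push_when_full: assume property (@(posedge clk) disable iff (!rst_n)
      full |-> !push);

    // Desired behavior: the FIFO must never report full and empty simultaneously.
    assert_not_full_and_empty: assert property (@(posedge clk) disable iff (!rst_n)
      !(full && empty));

  endmodule

If the assertion can fail, the tool returns an error trace from reset to the failing cycle, which is exactly the debugging aid described above.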

If every design team adopts these early functional verification (EFV) tools in the design stage and creates accountability measures to make designers responsible for the quality of their own code, we will see significant improvement in design quality, as we have seen in the manufacturing industry. This in turn leads to a reduced project cycle, saved investment and even competitive advantage in the marketplace. Achieving Six Sigma quality in IC design is possible with early functional verification.

[1] F. Jacobs, R. Chase, N. Aquilano, Operations & Supply Management, 12th Edition, McGraw-Hill.
