 Real Talk

Archive for October, 2010

Who is the master who is the slave?

Monday, October 25th, 2010

I was recently involved in a panel of EDA vendors and EDA users. Several EDA vendors were present, and the discussion was rather heated. The designers argued that the tools don’t address the difficulties and challenges of today’s designs, and that designers have to deal with EDA tool shortcomings on a daily basis.

The vendors, in their own defense, had a simple argument: designers need to understand the limitations of the tools and design to the capability of the existing tools. Other reasons cited were that designers need more training or don’t fully appreciate the technology. Maybe the customer is not always right! Follow-up discussions with the designers made it clear that they were disappointed. They said the tools don’t work well together, and that despite many attempts and claims of integration, the tools are at best a bunch of point tools strung together, with many holes in the tool chain. The use models are patched up with scripts and the handiwork of designers. The chips have grown significantly, yet the tools have changed incrementally at best, and many tools in the flow are based on 10-20 year old technologies.

A “sell what we have” mentality dominates a good number of EDA vendors, which are typically run by “industry veterans” who have forgotten about the innovative part of this business and don’t spend the time to understand their customers’ needs. Instead, these “veterans” focus on slideware and selling processes, wasting their own time and the designers’ time, raising cost and risk, and in the end not addressing the real problem. They try to convince customers with fancy slideware that their fix-and-patch approach will address the design challenges, without even understanding what the customer does on a daily basis and where they get stuck. The roadmap is often little more than a repetition of the symptoms gathered from designers rather than a detailed understanding of the problem. So the vendor band-aids the symptoms and misses the mark on the root cause. So where is the innovation? A business that was born from innovation is dominated by recycling and rehashing decades-old technologies. What a pity.

Well, it is a matter of simple economics. Changes to the flows can be costly and can cause significant downtime risk for design teams. On the other hand, on the EDA side, the cost of tool development is high, and even with the best tools a company may miss the mark, adding risk and delaying revenues. So what is the problem? How does one fix the broken model?

If you ask any experienced designers, they can quickly point out a bunch of issues with the tools they use today. These issues are often rooted in the EDA tool provider’s basic understanding of the problem and of the tasks performed by designers. Designers often notice that EDA tools are designed by people who don’t understand SOC design. The software folks don’t always translate hardware-related issues into the right applications. This simple fact causes many problems with the tools’ completeness and effectiveness, all of which adds to the cost and risk of developing and deploying EDA tools.

Generally, once an EDA vendor finds a formula that works, they stick to it and try to milk it for all it’s worth. The tools often address only a portion of the designers’ needs. The vendors then apply the same engine in more ways than imaginable to solve mutually exclusive problems faced by designers. The result is discontinuity, noise, inaccuracy and lack of interoperability, causing nothing but pain for designers. Design flows are littered with such tools. Bottom line: the vendor must understand the most detailed issues customers face before designing tools. That’s when innovation takes place. We are all familiar with tools that share nothing but a name, “The Brand,” and not much more. Either the integration is nothing but a few slides, or the problem spaces of the various options of the tools are so far apart that, despite marketing tricks, it is impossible to integrate them. Nothing but a marketing ploy. Tool vendors are interested in solving big problems, some of which are not even problems from the designer’s perspective. Sometimes simple observation of the bottlenecks designers deal with on a daily basis will reveal a wealth of information and opportunities to improve the tools, opportunities often missed by tool vendors. A simple change, a small innovation in tool implementation, can have a huge impact on designer satisfaction.

Back to the original question: who is the master and who is the slave? If the tool designers understand the problems faced by the chip designers in detail and get beyond superficial problem-statement slides, then the goal of building better SOCs can be met at lower cost and risk. Bottom line: the chip has to be designed, and that’s what makes our world go around. Efficiency and accuracy come at a cost, and if the gains are shared on both the tool and design sides, the result is higher-quality chips, better processes and lower risk. The designer needs to be aware of innovation and to recognize a slap-and-patch approach versus tools designed on sound engineering fundamentals. This saves everyone cost and increases profits.

Economics of Verification

Monday, October 11th, 2010


In light of the ups and downs of the world economy, it is interesting to see how the principles of economics apply to the IC design industry, in particular with respect to verification. How much do the day-to-day decisions people make in design and verification reflect these principles? In this blog, we will look at three microeconomic principles and see how we can make the best choices by following them.

The 1st Principle – People face tradeoffs

The resources of our planet are scarce; therefore nobody can have everything they want. Everybody faces tradeoffs in making decisions. In today’s economy, the tradeoffs and choices one has to make can be particularly important.

Most managers in the IC design industry have been in a very tough situation the last few years. They have been faced with increased complexity of designs, reduced staff, tighter budgets, shortened project schedules and greater pressures from the market to perform. The choices they have to make under these constraints are challenging.

For example, at a higher level, managers may need to decide on:

  1. With a reduced staff, how many people should I put on the design team vs. the verification team? Or does one person do both jobs?
  2. With a shortened project schedule, which part of the design and verification cycle can be shortened?
  3. With a tighter budget, what kind of EDA tool investments will bring the best ROI?

At a lower level, decisions related to verification could be:

  1. Given that verification takes 70% of the whole design cycle, what technology can help reduce the verification bottleneck?
  2. How much verification can we afford to perform on the block-level vs. the system-level?
  3. How much verification is enough to deliver confidence?

Failure to make the right choices in these decisions could potentially lead to lower product quality, loss of profit or even bankruptcy in the current economic climate.

To best assess the tradeoffs in making these decisions, one should look at the opportunity cost involved. That brings us to principle number 2.

The 2nd Principle – The cost of something is what you give up to get it

In evaluating each choice, the rule is to see which choice has the least opportunity cost. Opportunity cost is simply what you must give up (the next best alternative) in order to get what you want. For example, you have 2 hours of free time. You could either watch a movie or take a nap. The opportunity cost for taking a nap is the enjoyment from the movie you would have otherwise had. Similarly, the opportunity cost for watching the movie is the much needed rest you would have gotten otherwise. The decision comes down to what is most important to you. It is worth noting that opportunity cost is often hard to measure and depends very much on the individual and situation involved. Nonetheless, opportunity cost is useful when evaluating the cost and benefit of choices, and the choice to go with should be the one with the least opportunity cost.

Given that verification takes 70% of the design cycle and 60% of chip re-spins are due to logical/functional errors (Trends in ASIC Prototyping), it is important to invest in technology that can improve verification confidence and reduce the overall verification cycle. We will use the following hypothetical scenario to illustrate how opportunity cost comes into play in the decision-making process.

The project manager at company ABC is deciding between buying more simulation licenses to do more system-level verification and adding automatic functional verification software to the methodology for more block-level verification. The current verification methodology is such that only limited block-level simulation is performed by designers, due to the effort involved in creating block-level testbenches; most verification is done at the system level by verification engineers. The company recently had a chip re-spin due to a functional error found in silicon. The project manager sees the need for more verification at both the block level and the system level. However, due to a limited budget, they can only invest in one area. To make the best decision, they must evaluate the verification ROI at the block level vs. the system level and go with the option that offers the most benefit, i.e. the least opportunity cost.

More and more companies are seeing the benefit of block-level verification using automatic functional verification tools. These tools operate without a testbench and therefore require little time and effort to set up and run. They employ formal technology to exhaustively verify RTL blocks, catching bugs such as unreachable states, single or pairwise state deadlocks, dead code, and synthesis pragma violations. Performing this kind of verification early in the design cycle makes finding and fixing bugs easier. It improves the overall quality of the RTL before system-level verification begins and, as a result, reduces the verification requirement at the system level. It is estimated that employing automatic functional verification tools can catch 50% of design bugs early while saving 15% of the overall project cycle. This is the benefit, and hence the opportunity cost, that company ABC would have to forgo if the project manager goes with more simulation at the system level.

Similarly, additional simulation at the system level could also lead to improved verification confidence. However, most things face the law of diminishing returns (also called the law of increasing opportunity cost). For example, in a production system with fixed and variable inputs (such as equipment and labor), beyond some point each additional unit of variable input yields less and less output. The same holds for increased levels of simulation: the benefit of additional simulation at the system level is not as pronounced, because significant system-level simulation is already part of the current methodology. Therefore, the opportunity cost of investing in the automatic functional verification tool is lower, and that is where the decision should go.
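
To make the either-or comparison concrete, here is a minimal sketch in Python. The benefit figures are illustrative assumptions for the example, not data from the scenario; only the qualitative ranking (block-level formal verification offering more incremental value than yet more system-level simulation) comes from the discussion above.

# Hypothetical either-or comparison for company ABC (illustrative numbers only).
# "Benefit" is a rough relative estimate of verification value gained; the
# block-level figure reflects the ~50% early bug catch / ~15% schedule saving
# cited above, the system-level figure reflects diminishing returns.
options = {
    "block_level_formal_tool": 100,   # assumed relative benefit
    "more_system_level_sim":   40,    # assumed relative benefit (diminishing returns)
}

def opportunity_cost(choice):
    # Opportunity cost = benefit of the best alternative given up.
    return max(benefit for name, benefit in options.items() if name != choice)

for name, benefit in options.items():
    print(f"{name}: benefit={benefit}, opportunity cost={opportunity_cost(name)}")

# The block-level option is the one whose benefit exceeds its opportunity cost,
# matching the conclusion in the text.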

With this decision made, the next question is how much verification should be done at the block level. To answer this question, we need to examine principle number 3.

The 3rd Principle – “How much” is decided at the margin

Some decisions in life involve either-or choices, like the one we made earlier. Others involve “how much” choices, which require analysis at the margin. One needs to look at the marginal cost and marginal benefit and find the equilibrium to arrive at the optimum solution. Marginal cost is the additional cost imposed by performing one more unit of an activity; similarly, marginal benefit is the additional benefit received from performing one more unit. The point where marginal cost and marginal benefit cross is where we achieve the most efficiency.
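
Stated symbolically, as a small sketch of the rule just described (MB and MC are shorthand for the marginal benefit and marginal cost of the n-th unit of an activity):

% Keep adding units while the benefit of the next unit covers its cost;
% the optimum n* is the last unit for which this still holds.
\[
  \text{perform unit } n \iff MB(n) \ge MC(n),
  \qquad
  n^{\ast} = \max \{\, n : MB(n) \ge MC(n) \,\}.
\]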

Following our hypothetical scenario, suppose the following table shows the marginal cost and marginal benefit for each additional week of automatic functional verification performed at the block level. It is easy to understand that fewer bugs will be found as time progresses. The marginal benefit ($) is calculated as the number of bugs found per week multiplied by the cost of finding one bug at the system level (assumed to be $200 in our analysis). The marginal cost is simply the salary cost of having the designer perform block-level verification. Comparing the marginal benefit and marginal cost, it is easy to see that the optimal amount of block-level verification lies between weeks 3 and 4.

Weeks of Block-Level Verification | Total Bugs Found | Bugs Found Per Week | Marginal Benefit ($) | Total Cost ($) | Marginal Cost ($)
1 | 25 | 25 | 5,000 | 1,200 | 1,200
2 | 40 | 15 | 3,000 | 2,400 | 1,200
3 | 50 | 10 | 2,000 | 3,600 | 1,200
4 | 55 |  5 | 1,000 | 4,800 | 1,200
5 | 58 |  3 |   600 | 6,000 | 1,200
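
For readers who prefer to see the arithmetic spelled out, here is a minimal Python sketch that reproduces the analysis above. The bug counts, the assumed $200 cost per bug found at the system level, and the $1,200 weekly designer cost are taken directly from the table.

# Marginal analysis for block-level verification, using the figures above.
bugs_found_per_week = [25, 15, 10, 5, 3]   # weeks 1..5, from the table
cost_per_bug_at_system_level = 200          # dollars per bug (assumed in the text)
weekly_cost = 1200                          # dollars per designer-week

for week, bugs in enumerate(bugs_found_per_week, start=1):
    marginal_benefit = bugs * cost_per_bug_at_system_level
    marginal_cost = weekly_cost
    verdict = "worthwhile" if marginal_benefit >= marginal_cost else "not worthwhile"
    print(f"week {week}: MB=${marginal_benefit}, MC=${marginal_cost} -> {verdict}")

# Weeks 1-3 are worthwhile (MB of $5,000, $3,000, $2,000 vs MC of $1,200),
# week 4 is not ($1,000 vs $1,200), so the optimum lies between weeks 3 and 4.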



Even though we may not know these microeconomic principles explicitly, most of our everyday decision making is done by implicitly evaluating opportunity costs and performing marginal analysis. By understanding these principles, one can form a clear framework and plug in real numbers on which to base decisions. This is all the more important in the current economic conditions, because a bad decision could lead to some very undesirable consequences.


Hardware-Assisted Verification Tackles Verification Bottleneck

Monday, October 4th, 2010

An often-repeated industry mantra is that verification takes up about 70 percent of the development cycle, making it the most time-consuming piece of chip design today.  Every indication that we’ve seen over the past 10 years confirms this number.  And while a host of software-based verification tools have been deployed to tackle the verification bottleneck, design teams are turning to hardware-assisted verification platforms to accelerate hardware debugging and software test and integration.  As a result, they’re often successful at reducing their verification budget and beating time-to-market pressures.

Let’s examine this move toward hardware-assisted verification.

Software development can’t wait for working silicon, which means that design teams need a fail-safe way to verify that their chips will work as intended while running embedded software.  All the while, they’re grappling with shortened development cycles and designs that reach billions of application-specific integrated circuit (ASIC) gates and millions of lines of code.

 This means that a design team needs to create a working prototype for software development as early as possible and before the end of the hardware design cycle.  The prototype must fit into the general hardware design flow or the design team risks extending the design cycle.

More and more, hardware-assisted verification platforms are used to simultaneously validate hardware and software and, generally, fall into either emulation or field programmable gate array (FPGA) prototyping categories. 

Emulation has had a reputation for offering large capacity and good hardware debug capabilities, but is reputed to be slow, expensive and poorly suited for validating embedded software.  Conversely, FPGA prototypes are cheaper and faster, but do not have hardware debug capabilities and take longer to build and test. 

Many design teams with large budgets use both approaches.

That’s changing with the latest generation of hardware-assisted verification platforms, which offer the features and benefits of both.  Suppliers of these platforms have combined the speed needed for embedded software validation with hardware visibility and debug, giving design teams a way to verify hardware and software as a fully operational embedded system.

One popular emulator based on an FPGA architecture is used for simultaneous hardware and embedded software verification.  It has the speed to validate embedded software and the ability to provide full internal signal visibility for effective hardware debug.

In general, ASIC prototypes require manual code changes for FPGA implementation, followed by logic synthesis and manual partitioning across multiple FPGAs, then place and route.  Designers repeat these steps each time the design is changed, making the prototype ineffective for hardware verification.  This latest generation emulator automatically completes these steps without modifying the original system-on-chip (SoC) source code.  It handles complex clock processing, memory generation, multiplier/ALU logic, bus resolution and multiple-data-rate (XDR) wrapper generation. 

Further, it can compile incremental changes to either the testbench or design under test (DUT).  And, it uses the same hardware and models across the design cycle, making it a single platform for hardware and software verification.

Hardware-based verification platforms are giving design teams a way to break the verification bottleneck and reduce the verification budget.  They’re finding that they can now use a single platform to handle hardware/software architectural tradeoff analysis, hardware debug, hardware regression, software integration and embedded software validation.  Now, that’s a mantra worth repeating.

