 Real Talk

Archive for May, 2010

A Model for Justifying More EDA Tools

Monday, May 31st, 2010

One of the overwhelming issues facing the EDA community is the need and desire to increase total sales. One of the greatest hurdles in the ongoing chase for more seats is the inability to convert design software budget dollars into new seat licenses. Although most large companies have more than adequate dollars budgeted for software, less than a quarter of those dollars represent new tool acquisitions. The balance of the funds goes to maintenance, training, and management functions like parceling out the limited number of seats available.

The inherent value of EDA tools is to bring more automation to the design task, thereby increasing the individual engineer’s productivity. As an example of the value of a tool, design-for-test tools reduce the time for test development and can improve fault coverage to over 90 percent of all faults, well beyond what manual methods achieve. The tool leads to better test coverage of the design, resulting in a higher probability of catching the rare or random errors that make the system fail. So the tools simultaneously reduce engineering time and improve test quality by enhancing internal node observability and controllability. As an added benefit, the window into the internal nodes makes system debug and integration much easier, because the internal state data is available at the time of failure. So here an additional tool not only improves the risk-performance equation in its intended department, but also aids another group in performing the debugging work.

The EDAC work on ROI justification does a good job of addressing the investment parts of the equation. (See the presentation on the EDAC web page www.edac.org/EDAC/EDACHOME/) The problems with the standard financial models for return on investment (ROI), however, include the lack of a sense of time (ROI equals the average return divided by average investment) and the total lack of connection with the issues that most concern the engineering managers. The managers are most concerned with risk reduction, overall productivity, and net increases in total dollar sales, whereas the standard ROI measures only look at changes in the direct outputs from the investment. The greatest problem in approaching the issue from an investment perspective is the need to quantify the results from a change before the fact.

The EDAC analysis does a very good job of displaying the effects of delays in product release on costs and revenues, but suffers in this regard, because it requires the quantification of risk factors and clear estimates of productivity changes. These are exactly the values that people want to measure, but are also the most difficult values to determine.

In addition, the direct outputs of new tool acquisitions are changes in productivity, a metric the engineering community abhors because it implies the design task is a quantifiable, fixed process and not the exercise in creativity and skill that engineers say it is. Therefore, attempts to assign weighting values in the financial analysis to adjust for productivity create a conflict for the person who will be reporting the numbers. A dramatic increase in productivity implies a large part of what the engineer does can be replaced by a piece of software. A small increase or a decrease in productivity implies the tool is not of great value. Neither of these results is desirable for the EDA community or for the engineer reporting the numbers.

One reason that the financial model breaks down in the ASIC world is that the return on investment depends on more than just the engineering department’s efforts. External factors like market position, pricing, profitability, and product features are all part of the return portion of the equation, but these factors are not in the control of the EDA tool purchase decision maker. The overall history of ASICs has been, unfortunately, that although over 90 percent of all ASICs pass customer specifications on the first pass, less than half go into production. If a new product doesn’t go into production, the return on investment becomes a negative value that has no real relation to the measurement parameters of productivity.

Another reason that the basic financial models break down is the need to factor in some adjustment for risk. The relative productivity changes, as difficult as they are to measure, are much easier to quantify than risk reduction, because the level of risk may have no correlation to any dollar amounts. The addition of a tool may increase the risk due to the down time to learn the tool, or may cause a large enough change in the overall design methodology to expose other missing links in the tool chain. On the other hand, an incremental tool change can reduce the risk by enabling a more complete exploration of the design space, thereby ensuring a successful product design.  The risk reduction and productivity improvement are probably the most difficult parameters to quantify in assessing the value of a new tool, and the traditional financial analyses only point out the inability to predict a virtually unmeasurable future result.

New model

As an attempt to address some of the other issues in the valuation of tools, here is a simplified model that combines traditional financial items like return on investment with some concepts from time-to-market analyses. The traditional inputs for ROI are the costs of the tools and the savings (in time and money) that result from the tools. The new model also incorporates the estimated reduction in end-item unit volume and ASP for every month the product release is delayed from the best-case schedule. Despite the statement that productivity and risk are hard to quantify, the model generates an ROI number and provides a means to evaluate a number of scenarios to bound the relative risk.

The model is in an Excel workbook with three worksheets. The assumptions and variables are entered into the first worksheet, called “Inputs”. This passes the data to another worksheet for cost, ROI, and productivity analysis. The final sheet shows the time-to-market effects of the tool purchase, in terms of total design costs, size of market, and product sales. The effect of new tool purchases shows up in the “Impacts” worksheet, where relatively small changes in product development time have a significant effect on the company’s sales numbers. The variables that contribute to the bottom line are too numerous for a general analysis, but are easily available for more detailed analysis within the company doing the design.
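To make the workbook’s flow concrete, here is a minimal sketch in Python of the kind of arithmetic involved: a traditional ROI figure on one side, and the revenue impact of a delayed release on the other. The function names, formulas, and figures below are hypothetical stand-ins for illustration, not the workbook’s actual algorithms.

    # Hypothetical sketch: tool ROI plus the revenue impact of schedule slip.
    # All names and numbers are illustrative, not the workbook's actual formulas.

    def traditional_roi(tool_cost, annual_savings, years=1):
        """Traditional ROI: average return divided by average investment."""
        return (annual_savings * years) / tool_cost

    def revenue_with_delay(peak_units, asp, unit_loss_per_month,
                           asp_loss_per_month, months_late):
        """Product-life revenue after eroding unit volume and ASP for each month of delay."""
        units = max(peak_units - unit_loss_per_month * months_late, 0)
        price = max(asp - asp_loss_per_month * months_late, 0.0)
        return units * price

    # Example: a $50K tool saving $30K a year looks weak on ROI alone (0.6)...
    print(traditional_roi(tool_cost=50_000, annual_savings=30_000))

    # ...but two months of slip on a 100K-unit, $25-ASP product puts far more revenue at risk.
    on_time = revenue_with_delay(100_000, 25.0, 5_000, 0.50, months_late=0)
    two_late = revenue_with_delay(100_000, 25.0, 5_000, 0.50, months_late=2)
    print(on_time - two_late)

Even this toy version shows why the “Impacts” view matters more than the “Costs” view: small changes in development time move revenue by far more than the cost of the tool purchase.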

All of the inputs for the analysis are available on the first page, and are the details you will need to get from the customer. The values are linked into the following sheets as variables in fairly simple equations. The pages are protected only to keep the formulas intact. If you find a better algorithm for the cost/benefit evaluation, please feel free to modify the spreadsheet by turning protection off and making your changes.

Note that the “Costs” page shows fairly small changes in productivity and a negative ROI for most cases. This is the problem with the traditional measurements: one can’t always find much in the way of good news in productivity or ROI from a standard analysis. If a new tool makes a sufficiently large change in productivity, the ROI eventually goes positive.

By combining the cost data and the effects on total product-life revenues, the model provides a means of identifying the total influence a tool purchase has on the company’s revenues. In the “Impacts” worksheet, we observe the effects of tool purchases on the release of the target IC. By adjusting costs and delays, a user can also get an estimate for the end-of-life function, which is the cross-over point in a late introduction where revenue goes below some threshold value.
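As a rough illustration of that end-of-life estimate, the cross-over point can be found by walking forward month by month until the remaining product-life revenue drops below the chosen threshold. Again, the names and numbers here are hypothetical, not the workbook’s own formulas.

    def crossover_month(total_life_revenue, revenue_loss_per_month,
                        threshold, max_months=60):
        """First month of delay at which remaining product-life revenue
        falls below the threshold; None if it stays above it."""
        for month in range(max_months + 1):
            if total_life_revenue - revenue_loss_per_month * month < threshold:
                return month
        return None

    # Example: a $10M product losing $400K of lifetime revenue per month of delay
    # crosses a $7M viability threshold at month 8.
    print(crossover_month(10_000_000, 400_000, 7_000_000))

If that month lands before the projected design completion date, the program is already on the losing side of the threshold, which is exactly the early-warning case described below.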

For some scenarios, this cross-over point is before the design is completed, and therefore is a useful early indicator that a design program should be stopped early, rather than expending resources on a money-losing proposition. If the EDA tool can help a company recover from this situation, then the tool truly is of much higher value to the user than just the change in productivity or some ROI. The value of the tool might be the salvation of a company.

Mind the Verification Gap

Monday, May 24th, 2010

Would you ever use a wrench to tighten a Phillips screw? Or hammer a square peg into a round hole?

Chip design today has become more of a verification task than a design task. Designers spend more than 50% of their time trying to come up with ways to verify their designs or, worse yet, someone else’s design. Despite the change in the nature of the design work, designers keep using the same old design tools, hammering away trying to close the Design and Verification Gap. Shouldn’t you Mind the Gap?

Over the past decade or so, design work has shifted from writing code to verifying IP and code. Most designers today are tasked with taking a piece of IP designed by someone who may no longer be around in the company, or a design so old that the original designer does not even remember the details, or IP your company bought from a third party, and trying to make it satisfy the spec. All is well until you realize that the changes you made to the code have left many holes in the functionality that are not covered by the original vectors you got with the IP/design. In turn, the changes resulted in many unintended consequences that you could not have predicted based on the IP/design spec. The issues only magnify once you put all the IP blocks together.

Well, that’s exactly what happens when you try to hammer a Phillips screw into place. Step back and take a good look at the techniques you use today! Are you still using the same simulation methods? Are you still relying on LEC to catch some of the problems? Are you tossing the verification work over the wall to the verification folks and calling it a day – that’s their problem (until it comes back to you with an embarrassing bug!)?

Over the last decade, design teams have added linting to their flow. EDA vendors extended linting to cover even more exotic checks. The tools helped managers become a design IRS and gain a little more visibility into the quality of the design. But the verification tasks did not get any easier, nor did design quality improve as much as promised. Most designers used these tools only as a checklist. The unintended consequence was the amount of extra work spent deciphering linter reports. This activity often has a low ROI because of the noise, the difficulty of setup, and the burden of managing yet another set of files and results.

Even though designers are finding themselves doing more verification work than design, the tool of choice is still basically a big hammer (i.e. the simulator). Linters so far have helped managers more than the designers in the trenches.

It is perhaps time for more finesse and a bit of strategy. Next-generation tools can help designers better strategize their work and better target their simulations. With targeted simulation and on-the-fly functional checking, designers can look deeper into the design and make sure they did not overlook potential bugs.

What tools can help in this process? Is it time to rethink strategies and retool? Perhaps it is time to address the Design and Verification Gap. This means marrying verification and design activities together and starting verification essentially at the outset. Perhaps it is also time to go beyond simulation, linting and other traditional verification techniques. Verification needs to move hand-in-hand with the design. Early verification will not only increase productivity and ROI, but will also push designers to cover as many functional scenarios as possible. Next-generation tools must also offer simple setup and super-fast analysis runtimes to incrementally check the design, help designers target simulation, debug the design on the fly, and provide feedback on the potential holes left in the design as a result of recoding or other changes.

As your designs grow and you include more IP, your verification tasks will certainly grow. Be sure to Mind the Verification Gap.

ChipEx 2010: a Hot Show under the Hot Sun

Monday, May 17th, 2010

May 4th, 2010. Airport City, Israel. The weather forecast promised rain, so we all came dressed for a storm, only to find a big sun smiling above us! It turned out to be a very hot and dry day. And it was the day of ChipEx.

 

“ChipEx, what is that?” you might ask. Don’t feel bad for not knowing; this is only the second year that ChipEx has been in the trade show business.

 

ChipEx is an annual international event of the Israeli semiconductor industry, sponsored by TAPEOUT magazine in cooperation with the Global Semiconductor Alliance (GSA). ChipEx consists of three main parts – a vendor exhibition, a technical conference and a GSA executive forum. Given that the economy of Israel fared better than that of many other western countries, and that Israelis are known for their technical innovation (the World Economic Forum has designated Israel as one of the leading countries in the world in technological innovation; no surprise that all the latest Intel microprocessors were developed in Israel), the show was hot and sizzling with activity. Over 800 people participated, with about 50 EDA & IP companies presenting, and keynotes were delivered by industry heavyweights such as Rajeev Madhavan (CEO of Magma) and Gary Smith (industry analyst).

 

For the second year, Real Intent joined our Israeli distributor Satris at ChipEx, and we couldn’t be happier with our success. Our booth was hot with activity the entire time. Besides the normal networking among exhibitors and friends, many senior engineers and managers were hunting for next-generation technology, and Real Intent has it! It all fits: innovative people are always seeking out innovative technologies that can help them stay at the leading edge.

 

You will also find that a trade show in Israel is a very different experience than in other places in the world. People hardly want to hear any of the “marketing stuff” (often described in stronger words), and that’s what they call our presentations, brochures, etc. Instead, people would step in and ask about the technology. You would hardly finish two sentences before the next question came in, as if to say – ‘we have no time to waste, give us the highlights and we’ll decide here and now if we want to hear more!’ In most cases, people did not want to see a demo or a short presentation. If they were interested, they’d ask to see it on a follow-up visit. And this fits too, as innovative people usually have little patience for nonsense and are hot on the heels of the very best solutions.

 

We had over 40 visitors in total and many follow-up visits scheduled. We definitely hit the “Real” needs in the design community with our automatic formal verification solutions targeting early functional verification, clock domain crossing verification and timing exception verification.

 

At the end of the day, after all the grueling questions under the constant pressure of keeping poised and technical, I was hot and tired! But it was well worth the effort, and it was also great fun for me to engage in intelligent conversation with smart people having real needs.

 

Thanks to the Real Intent marketing team (even though I didn’t use their “marketing stuff” :-)), Satris and the ChipEx 2010 committee for a successful show under the hot sun!

 

See you next year!!

We Sell Canaries

Monday, May 10th, 2010

When someone asked me the other day what Real Intent does, I told him, only half in jest, that we make and sell canaries. If you think about it, the verification tools we develop are the proverbial canaries for the chip-design coal mine. Their role is to go in with the advance party and give early warnings of bugs lurking in the chip. Used in this manner, our tools prevent late-stage blow-ups in chip functionality that can potentially ruin profit margins and maybe even subvert an entire business model.

 

Talking about business models makes me think of start-up companies. It is very hard today for a start-up company to get venture funding if it has a significant chip design component in its development roadmap. This bias is not wholly without reason. Hardware design is expensive, and having to design your own chips makes it more so. While getting the product wrong the first time around is expensive for any start-up, it is especially so for a hardware company. If you need to reposition your hardware product or fix problems in it, it is all the more difficult and expensive if that involves redesigning a complex homegrown chip. The realization of the company’s product concept, and indeed the entire business model, becomes a prisoner of the chip design latency. You must get the chip right enough, quickly enough, to leave any wiggle room in the business model.

 

The risk is scary, but so is mining coal. Coal continues to be mined despite its risks and so must entrepreneurial initiative in chip design be perpetuated. As in coal mining, systematic processes must be instituted in chip design to mitigate risk. Accidents cannot be done away with, but can certainly be reduced in frequency.

 

One of the important technologies with the potential to significantly mitigate chip design risk is the application of pre-simulation static verification tools that target chip design errors in the context of specific failure mode classes. The technology has matured enough in the last decade to provide tangible value today. If I were evaluating a chip-design-heavy business proposal at a venture capital firm, I would certainly gate the funding on whether the founders have experience with static verification tools and have instituted their use as an integral part of their chip design process and roadmap.

 

Real Intent has been a pioneer in this space and provides pre-simulation static verification tools that address some of the key failure modes. Real Intent’s Ascent product family finds bugs in control-dominated logic without the need to write assertions or testbenches. Because Ascent tools perform sequential formal analysis, they can even identify deep bugs that take many clock cycles to manifest as observable symptoms. Our Meridian tool family finds bugs in the implementation of clocks and clock-domain crossings. These bugs result from a confluence of timing and functionality and can be so subtle as to require a specific combination of process parameters to materialize. If ever there was a canary for chip design, it is Meridian. Finally, our PureTime tool family finds bugs related to incorrect timing constraint specifications. Like clock-domain crossing bugs, these too arise from a confluence of timing and functionality. Real Intent continues to develop new tools of this ilk to target additional failure modes. Our goal is to help make chip design risk acceptable again.

 

The adoption of these tools is up to you. Do you have a canary in your design flow?

Celebrating 10 Years of Emulation Leadership

Monday, May 3rd, 2010

            EVE is celebrating its 10th anniversary this year.  It has been quite a ride for all of us associated with this industry disrupter out of Paris.  Many of the same team members from April 2000 are key members of today’s EVE team and wouldn’t have missed any of the excitement of these past 10 years.

Exciting, it’s been.  It’s especially gratifying to know that our basic assumptions that served as EVE’s foundation when we started the company have turned out to be right.  I am talking about taking a novel approach to hardware-assisted verification by selecting a commercial FPGA instead of designing a custom ASIC as the building block of the emulator.  Similarly, we prioritized speed of execution to address the hardware/software integration stage of SoC verification.

            As for the rationale behind our first criterion, we concluded early on that custom silicon would not scale and would be an excessively expensive approach for addressing an overall market in the ballpark of $200 million.  Redesigning a chip every two to three years at smaller and smaller technology nodes would be economically disastrous.  We instead chose the best FPGA on the market and have continued to do so.

            As for the second assumption, we thought that speed of execution should not be compromised, particularly if we wanted to move outside the traditional space of hardware emulation.

            Over time, we have addressed all of the other important parameters that make an emulator a best-in-class tool.  They include fast compilation, thorough design debugging and scalability to accommodate a large spectrum of designs from a few million ASIC gates to one or more billion ASIC gates.  Equally, we have addressed energy efficiency by reducing the emulator’s footprint, energy consumption and air cooling requirements.  We did all of this by devising an architecture that is simple, elegant and efficient, and, even more important, by developing stacks of unique software.

            This focus on off-the-shelf FPGA parts and speed has paid off with installations at nine of the top 10 semiconductor companies and more than 60 customers.  Our hardware emulator ZeBu is used to verify designs of almost every conceivable consumer electronic product.

            The mention of ZeBu brings me to another point about our strategy –– how we came up with ZeBu.  Well, a best-in-class verification tool needs to support a best-in-class design … with zero bugs.  Zero Bugs, ZeBu.  Got it?

            It’s been a heady trip for the entire EVE team.  You’ll forgive us if our sense of pride seems outrageously boastful, but 10 years of solid achievement and growth is no small accomplishment.  We look forward to the years to come, confident that we will continue the growth we have enjoyed in the past and, more important, that we will keep supporting current and future design teams with the best-in-class emulation system.  Let’s raise our glasses and toast ZeBu and the team behind it.
