September 03, 2007
Are EDA Companies Getting Fair Value for their Software?
Jack Horgan - Contributing Editor


Does the OPC software have to change? Does the foundry have to provide additional data?

This is a standard Boolean operation that the OPC flow can do. The only thing that needs to change is the script that drives the OPC, so that it picks up this layer and knows what to do with it. We have offered to do this for the foundries. But it turns out that when we had a meeting with one of the foundries, the OPC engineers implemented the change while we were talking. It is very unobtrusive to OPC.
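The Boolean step he describes, intersecting the annotation layer with the gate layer inside the OPC script, can be illustrated with a toy sketch. The rectangle model and the `layer_and` helper below are hypothetical illustrations; a real OPC deck would express this in its own scripting language.

```python
# Hypothetical sketch of the Boolean step an OPC script would add:
# AND the gate-biasing annotation layer with the poly (gate) layer to
# find which gate shapes receive a longer drawn gate length.
# Layer shapes are modeled as axis-aligned rectangles (x1, y1, x2, y2).

def intersect(a, b):
    """Return the overlap rectangle of two rects, or None if disjoint."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

def layer_and(poly_layer, bias_layer):
    """Boolean AND of two layers: all pairwise overlap rectangles."""
    out = []
    for p in poly_layer:
        for b in bias_layer:
            r = intersect(p, b)
            if r:
                out.append(r)
    return out

poly = [(0, 0, 2, 10), (5, 0, 7, 10)]  # two gate shapes
bias = [(0, 0, 3, 10)]                 # annotation covering the first gate
print(layer_and(poly, bias))           # [(0, 0, 2, 10)] -> first gate only
```

The point of the sketch is that the foundry-side change is just one more layer operation of a kind OPC scripts already perform routinely.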

But the foundries have to make a change regardless of how easy and how unobtrusive?

The foundries will have to make the change to the script, but they do other Boolean operations as well, so it is not hard. You are trying to drive home the point of what we are expecting from the foundries: we do expect them to implement this change to the script, plus tell us what change in gate length they feel comfortable with.

Is that data fed back into the software?

We need to know how much change in gate length we are allowed, because we will operate from that. The allowable change in gate length tells us what power savings exist and, from that, what timing changes the transistor will have. You basically start Blaze MO with a timing-closed design, and you need to end up with the same timing-closed design. From the timing closure standpoint we haven't made any changes. You need to know what the timing impacts are.
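The tradeoff he describes, where the foundry's allowed gate-length change bounds both the leakage savings and the timing impact, can be sketched with a toy model. The exponential leakage model, the linear delay model, and every constant below are illustrative assumptions, not Blaze's actual algorithms or numbers.

```python
import math

# Illustrative-only model: leakage falls roughly exponentially with
# drawn gate length, while delay rises roughly linearly with it.

def leakage(dL_nm, L0=1.0, k=0.05):
    """Relative leakage after lengthening the gate by dL_nm nanometers."""
    return L0 * math.exp(-k * dL_nm)

def delay_penalty(dL_nm, per_nm=0.002):
    """Relative delay increase per nanometer of added gate length."""
    return per_nm * dL_nm

def pick_bias(slack_frac, allowed_dL_nm):
    """Largest bias (whole nm, up to the foundry's limit) whose delay
    penalty still fits inside the available timing slack."""
    best = 0
    for dL in range(allowed_dL_nm + 1):
        if delay_penalty(dL) <= slack_frac:
            best = dL
    return best

dL = pick_bias(slack_frac=0.01, allowed_dL_nm=8)  # foundry allows up to 8 nm
print(dL, round(1 - leakage(dL), 2))  # 5 0.22 -> ~22% leakage saved
```

The structure mirrors the answer: the design starts timing-closed, so the bias is chosen so the timing budget is never exceeded, and the foundry's allowed gate-length change caps the achievable savings.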

Have you made arrangements with some, most or all the foundries?

I cannot say all, because I do not know how many companies consider themselves to be foundries. But we have made silicon with TSMC, IBM, Chartered, Samsung and UMC, as well as a host of IDMs. We have a wide reach on this.

You have verified the 20% improvement?

In fact there is a paper by Qualcomm where they were nice enough to tell what the impact is. We feel comfortable saying 20%; 30% should be the norm when we go to more aggressive geometries. When we get more 45 nm data, we should see further improvements. The leakage problem gets bigger as you go to finer geometries.

How do you price and package MO?

The list price for a single license (one-year TBL) is $275K. However, if you think about where we put ourselves in the flow, it doesn't make sense for any customer to buy just one license. You are right before you are going to go into production, and you are going to use MO in a massively parallel fashion. Every single deal we have done has been either a defined site or an entire company. But the nominal price for Blaze MO is $275K.

If you approach a prospect and want to convince that prospect of your capabilities, what can you or they do other than produce a new chip?

There are two things for that. The first is that we will run the design part of our flow against whatever the prospect's golden signoff tool is, as the very last step, to prove that the design is equally closed now as it was when we started. The prospect is not going to risk us unraveling the timing closure they have already achieved. That is a necessary thing to do. Again, if you are talking about not disturbing the design flow, I don't think you can sell if you had to argue that they should trust our timing analysis versus whatever they have as golden signoff tools today. That's one thing. The other thing is that we actually give you accurate estimates of what the leakage saving will be. By now we have enough design data that we can show how good the correlation on that is, and we will share that with the prospect.

How long does it take to do this proof of concept with a prospect (days, weeks, months)?

If we are selling to a fabless semiconductor company, the foundry will basically back us up that this works. In this case the proof of concept is fairly short. If you go to an IDM, typically the sales cycle will involve producing a test chip before they commit themselves. So the sales cycle for an IDM is longer than for a fabless semiconductor company.

So the “trick” (not the right word) is to adjust the gate length on a lot of transistors.

That's the secret. It sounds very, very simple. But if you think about it, to do the optimization you need a very sensitive optimizing engine, because each transistor's contribution is very, very small. It is only when you add up the millions of transistors you adjust that you get this 20% improvement in leakage.
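The arithmetic behind "millions of tiny adjustments" is easy to sketch. All numbers here are made up for illustration; only the structure, a vanishingly small per-device share adding up to a meaningful chip-level total, reflects the point being made.

```python
# Sketch of the "millions of tiny adjustments" point: each biased
# transistor saves only a tiny share of total chip leakage, but the
# chip-level sum is substantial. All numbers are illustrative.

N = 10_000_000            # transistors on the chip
biasable = 0.6            # assumed fraction off the critical path, safe to bias
per_device_saving = 0.35  # assumed leakage saved on each biased device

total_saving = biasable * per_device_saving   # chip-level fraction saved
per_device_share = total_saving / N           # one device's share of the total

print(f"chip leakage saved: {total_saving:.0%}")           # 21%
print(f"one device's contribution: {per_device_share:.2e}")
```

A single device contributes on the order of one part in a hundred million, which is why the optimization engine has to be sensitive enough to account for each tiny contribution individually.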

How long does it take to run the software on the ever-difficult-to-define typical chip?

If you talk about a typical chip at 65 nm, say 10 million gates, you run it overnight. Maybe a little more than that, but definitely within a day and a night. Because you are typically right at the back end of doing tapeout, you will want to do this while you are making other adjustments. You will do this in an iterative fashion. You will also do it hierarchically, which means that as each block gets ready from the design, you run that one, and at the end you run a top-level complete optimization. One way of looking at this timing-wise: in terms of wall clock, we are typically hiding behind the time it takes to run physical verification, which is also done at the back end. We take less time than that.

What about the IF product?

IF is mainly targeted at 65 nm and below. As you know, you need metal fill on the chip in order to get a smooth surface. That has typically been done in a rather dumb fashion, where you basically fill every place where there is room for it. If you are not careful about that, you will actually create capacitive loads on nearby nets. You will slow down the design. We use the same timing analysis that we have in MO to identify nets that are sensitive, and we stay away from those nets with the metal fill. We can generate whatever patterns are good for CMP. We do have a topography analysis tool that tells us what to do. We tend to incorporate the CMP simulation from the foundry. The idea is to apply enough metal so you won't have the problems, but not as much as with a dumb fill, where you affect the timing.
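The timing-aware fill selection he describes can be sketched as a keep-out filter: candidate fill sites that sit too close to timing-critical nets are skipped, and the rest are filled. The point-based geometry and the `plan_fill` helper are assumptions for illustration, not Blaze IF's actual algorithm.

```python
# Sketch of timing-aware metal fill (assumed behavior, not Blaze IF's
# actual algorithm): candidate fill sites near timing-critical nets are
# skipped; everything else can be filled to meet the density target.

def plan_fill(candidates, critical_nets, keepout=1.0):
    """Keep a fill site only if it lies farther than `keepout` from
    every timing-critical net (nets and sites are (x, y) points here)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return [c for c in candidates
            if all(dist2(c, n) > keepout ** 2 for n in critical_nets)]

sites = [(0.0, 0.0), (0.5, 0.5), (3.0, 3.0), (5.0, 1.0)]
critical = [(0.0, 0.0)]  # location of one timing-sensitive net
print(plan_fill(sites, critical))  # [(3.0, 3.0), (5.0, 1.0)]
```

This is the contrast with blind fill: blind fill keeps all four sites, while the timing-aware filter drops the two sites that would load the sensitive net.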

In the case of IF are you adding an annotation layer to the GDSII file?

No. In this case you actually add to the metal layer. OPC can't work from a metal blob annotation; it needs to be metal in the first place. We make a change to the layout.

Do you know of any competition to either of these products?

Not directly! There are other ways to control leakage power besides MO. The most typical way is Vt assignment: HVT cells or whatever. We have proven that we are additive to all the tricks done there, because we are exposing another dimension. Even when people believe they have highly optimized for power, we can still add something more. In fact, there was an investigation published by STARC (Semiconductor Technology Academic Research Center), in Japanese, showing that even with a design highly optimized for power we could still add over 20%. While there are other methods to control power and parametric yield, we can add more. We are unique and have patent protection for that. The article is in Japanese. My wife is Japanese, and she says it is better I stick with English.

Is there any competition on the IF side?

With fill there are other products. Very specifically, Calibre does fill, but intelligent fill it is not. It is blind fill, where you fill up everything that is available. The key here is that, since we have timing analysis, we are able to select where to do fill and where not to. Calibre doesn't have any information like that.



-- Jack Horgan, Contributing Editor.

Review Article
  • Are they really willing to change? September 04, 2007
    Reviewed by 'r36579'
There has certainly been talk about value-based pricing for some time now, especially in the area of DFM, where such tools should apparently save millions or even billions.
    Here the opportunity of value-based pricing is discussed again, but in the end a traditional EDA model is being applied to sell the software. So why settle for the traditional again?
o Because it is more predictable short-term, satisfying analysts and shareholders?
    o Because value-based models are not accepted, foremost by the users, but also by analysts and shareholders?
    You state:
    "[...] It is hard for any industry to change its basic pricing model. However, the EDA industry has moved in recent years from perpetual license to time based license and subscriptions. SIP firms have had success with a combination of licensing fees and royalties. The industry might be willing to consider new pricing innovations. [...]"
    New pricing innovations?
    What does this mean? We already see higher prices, segmentation, no more volume bundling of all products by large vendors, debit card principles,...
    But is this really new? Or does it look more like variations of the established TBL, commitment, backlog principles?
    Really new would be value-based models with shared risk/ reward! Success components! Flexible volumes!
While it fluctuates heavily, the IC industry outgrows the EDA industry. Yes, it seems like a riskier investment, but it has traditionally shown higher returns than the stable, slow-growth, (for analysts) very predictable EDA industry.
    Are EDA companies really willing to entertain really new business models? New ways of working with their customers to realize and demonstrate value on the product level? Tapping into new sources of incoming streams other than license/ consulting sales? Establishing some solution orientation?
Is there an inflection point that has to happen before these models can change? If yes, which one? Maybe a new generation of EDA leaders who are not stuck in old ways of doing things? Less influence by VCs and serial entrepreneurs who look more towards cash than towards solving the problems of EDA customers?
    Right now the EDA industry rather seems determined to capitalize by all means possible on the increased R&D spending of the IC industry. Like a leech.
    The EDA business/market is increasing, alright. But at what hidden "cost" to its user base? Are they all 'satisfied' with this new 'hunger'/demand of the EDA industry to grow ferociously because the IC industry grows?
    I would think that the current EDA accounting principles and the analyst expectations for predictable and long-term growth as well as the short-term expectations to capitalize on the increasing IC R&D budgets are creating a tremendous innovation bottleneck in terms of new models! And therefore cutting off new possible streams of income such as value-based models, success components.
    The cited move from perpetual to TBL licenses was a concerted effort by both the EDA industry and the EDA users.
    It is now time to sit together again to think about the next new model change.



