I had an opportunity to interview Jacob Jacobsson, CEO of Blaze DFM. His company has a product, Blaze MO, which significantly reduces leakage power and thereby improves parametric yield. It fits into the design flow after tapeout and right before OPC. Near the end of the interview we discussed whether Blaze, and by extension other EDA companies, derive the appropriate amount of value for the benefits their products and services provide to their customers.
Would you give us a brief biography?
That would take two hours. As you can hear, I have an accent. I was born in Sweden. My first degree was actually in languages, but you can't make a living out of that, so I went for engineering and computer science. I moved to Silicon Valley in '83 and joined the first true EDA company, Daisy Systems, where I was lucky enough to see it go public in very short order after I came. I went to Cadence after that. My background is actually as a silicon designer. I then went to work for Xilinx. I was General Manager for one of their divisions until Xilinx got so big that I could not know everyone by their first name. I ran a fabless semiconductor company called SCS. Then I was president of Forte Design Systems, which made high-level synthesis tools. I was recruited to Blaze roughly a year and a half ago for the opportunity here.
Forte is an ESL company.
Yes, it is an ESL company. As ESL companies go it is somewhat successful. The company felt a need to reorient itself in the direction of an IP company rather than as an ESL company. Neither I nor the Board felt that I was the right guy to do that. So we parted as friends.
Every time I talk to a current or former ESL company executive I say that the value proposition is compelling and that industry analysts have predicted great things for ESL but the success has yet to happen. What is your explanation?
We're not here to talk about that, but I will give you some general observations. I think it is absolutely inevitable that ESL methodology will eventually prevail; it is more a matter of timing. The industry has been good enough at dividing and conquering design size up to now that ESL has not become absolutely essential. But the day will come for ESL technology. The companies are there, basically waiting to see when that happens. Some of the better companies will make it happen faster because the barrier to entry gets lower for the customer. In my opinion it will happen, but the problem has not yet grown big enough.
So there is insufficient pain at this point in time?
The pain will come. Clearly several companies, including Forte, believe that the pain is already there. I think we are teetering on it, but it has not happened yet. Interestingly enough, there is a kind of synergy between the problems we see in the back end going to 65 nm and 45 nm and ESL on the front end. I think 65 nm systems are becoming big enough that the pain will become really high. One of the things we have observed is that it took longer to leave 90 nm and go to 65 nm. That was felt by the ESL companies.
Turning to Blaze DFM. You joined them a year and a half ago. When and where did Blaze start?
Blaze DFM was formed at the end of 2005 by three people. Professor Andrew Kahng of UC San Diego was one of the founders. His reason for founding Blaze was to exploit several of the ideas he had. He felt that DFM was underserved in the area of addressing parametric yield issues. He joined with David Reed, an EDA veteran most recently from Monterey; some time before that David was with Cooper & Chyan Technology. Kahng also recruited Dr. Puneet Gupta, who was one of his lab students. They basically exploited the ideas the professor had.
The reason for my joining was that by then they were ready to go to market and they wanted someone who had experience.
That’s why they recruited you. What attracted you to Blaze?
One of the things Blaze did early on, and I cannot take any credit for it since they figured it out themselves, was to choose where to sit in the flow. There are different ways of doing DFM. One way is to try to inject yourself into the manufacturing flow, which is a slow route with a high barrier to entry. Another is to fundamentally change the EDA flow by injecting yourself into place and route or something like that, which is doable, but drastic changes to the design flow take time to implement as well. Blaze decided to layer themselves right after you have taped out to GDSII and before you go to OPC (Optical Proximity Correction). For me it was important that this is non-disturbing to the design flow, and as a result it would be easy to capitalize on what the company did. Also, by putting yourself there, the effect of what you do is very easy to measure. You can measure the parametric yield impact of what you do versus what happens if you don't, for example by doing an A to B reticle comparison in your tapeout. There is no discussion about the value of what you do. I found all of that very fascinating.
According to a company press release Blaze DFM raised $10 million in March.
That is enough for us to become profitable. Given the uncertainty of the capital markets lately, it wasn't genius, but we were fortunate that we could raise enough in that round. We are going to put it to the best use.
Is that $10 million the total funding or were there earlier rounds?
There was another round before that. That round was around $6 million.
How big a company is Blaze DFM?
We have about 40 people.
You have two products, MO and IF. Does MO stand for Multiple Optimizations?
No, it does not. When the company was formed the first idea was to do mask optimization, to make mask generation cheaper. As part of the research of the company, they realized the same technology could be used to reduce power and improve on the parametric yield that way. The MO name prevailed but the target today for that product is power reduction. The other product is called IF for Intelligent Fill which is an intelligent way of generating fill patterns on the chip.
You may have read that earlier this year we acquired a company called Aprio. We did that to get access to a lithography engine that we didn't have in the analysis space. Later this year we will launch a product that exploits that litho engine to improve parametric yield.
Would you expand on what MO does?
MO has a very accurate timing analyzer. It uses information about which paths in a design are timing critical to find transistors that are not on the critical path. For those transistors we adjust the gate length as much as the foundry feels comfortable with. By doing that you get a slightly slower transistor, which is why you need to know the timing, but with a profound improvement in the amount of leakage power that the transistor wastes. We have published customer testimonials that the improvement in leakage power can be 20% or better.
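To make the mechanism concrete, here is a toy sketch of gate-length biasing (my own illustration, not Blaze's actual algorithm; the exponential leakage model, the 10 nm decay constant, the 6 nm bias and all slack numbers are invented for this example):

```python
# Illustrative sketch of gate-length biasing for leakage reduction.
# The leakage model and every number here are assumptions for
# illustration only, not Blaze's actual algorithm or data.
import math

def leakage(gate_len_nm, nominal_nm=90.0, i0=1.0):
    """Toy model: leakage falls off roughly exponentially as the
    drawn gate length grows beyond the nominal node length."""
    return i0 * math.exp(-(gate_len_nm - nominal_nm) / 10.0)

def bias_gate_lengths(transistors, max_bias_nm=6.0):
    """Lengthen the gate of each transistor that has positive timing
    slack, up to the foundry-approved bias; critical-path transistors
    (slack <= 0) are left untouched so timing closure is preserved."""
    total_before = total_after = 0.0
    for t in transistors:
        total_before += leakage(t["len_nm"])
        if t["slack_ps"] > 0:              # off the critical path
            t["len_nm"] += max_bias_nm
        total_after += leakage(t["len_nm"])
    return 1.0 - total_after / total_before  # fractional leakage saved

# A design where most transistors have slack sees a large aggregate
# saving even though each individual contribution is small.
design = [{"len_nm": 90.0, "slack_ps": s} for s in (0, 120, 300, 45, 0, 80)]
print(f"leakage saved: {bias_gate_lengths(design):.0%}")
```

The point of the sketch is the one Jacobsson makes below: each biased transistor changes only slightly, and the headline savings come from summing millions of tiny per-transistor reductions while leaving critical-path timing alone.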
From your website I see the approach is to add some annotation to the GDSII file.
That's the barrier to entry argument again. There have been DFM companies that say the entire flow will have to fundamentally change. Maybe that will happen some time in the future, but when you do that, it delays your introduction significantly. What we chose to do is add an annotation layer to the GDSII, which is picked up by the OPC flow and used to hit another target for these transistors. Basically you give the OPC a target saying, for this particular transistor I don't want you to hit the nominal 90 nm (or whatever) target; I want 96 nm, or whatever number the foundry is comfortable with.
Does the OPC software have to change? Does the foundry have to provide additional data?
This is a standard Boolean operation that the OPC flow can do. The only thing that needs to change is the script that drives the OPC, so that it picks up this layer and knows what to do with it. We have offered to do this for the foundries. But it turns out that at a meeting with one of the foundries, the OPC engineers implemented the change while we were still talking. It is very unobtrusive to OPC.
But the foundries have to make a change regardless of how easy and how unobtrusive?
The foundries will have to make the change to the script but they do other Boolean operations as well so it is not hard. You are trying to drive home the point of what we are expecting from the foundries. We do expect them to implement this change to the script plus tell us what change in gate length they feel comfortable with.
Is that data fed back into the software?
We need to know how much change in gate length we are allowed, because we operate from that. The allowed change in gate length tells us what savings in power exist and, from that, what timing changes the transistor will have. You basically start Blaze MO with a timing-closed design, and you need to end up with the same timing-closed design. From the timing closure standpoint we haven't made any changes. That is why you need to know what the timing impacts are.
Have you made arrangements with some, most or all the foundries?
I cannot say all, because you do not know how many consider themselves to be foundries. But we have made silicon with TSMC, IBM, Chartered, Samsung and UMC, as well as a host of IDMs. We have a wide reach on this.
You have verified the 20% improvement?
In fact there is a paper by Qualcomm where they are nice enough to tell what the impact is. We feel comfortable saying 20%. 30% should be the norm when we go to more aggressive geometries. When we get more 45 nm data, we should see more improvement. The leakage problem gets bigger as you go to finer geometries.
How do you price and package MO?
The list price for a single license (a one-year time-based license) is $275K. However, if you think about where we sit in the flow, it doesn't make sense for any customer to buy just one license. You are right before going into production, and you are going to use MO in a massively parallel fashion. Every single deal we have done has been for either a defined site or an entire company. But the nominal price for Blaze MO is $275K.
If you approach a prospect and want to convince that prospect of your capabilities, what can you or they do other than produce a new chip?
There are two things. The first is that, as the very last step, we will run the design coming out of our flow through whatever the prospect's golden signoff tool is, to prove that the design is just as timing-closed as it was when we started. The prospect is not going to risk us unraveling the timing closure they have already achieved. That is a necessary thing to do. Again, if you are talking about not disturbing the design flow, I don't think you can sell if you have to argue that they should trust our timing analysis over whatever they have as golden signoff tools today. That's one thing. The other thing is that we give accurate estimates of what the leakage saving will be. By now we have enough design data that we can show how good the correlation is, and we will share that with the prospect.
How long does it take to do this proof of concept with a prospect (days, weeks, months)?
If we are selling to a fabless semiconductor company, the foundry will basically back us up that this works. In this case the proof of concept is fairly short. If you go to an IDM, typically the sales cycle will involve producing a test chip before they commit themselves. So the sales cycle for an IDM is longer than for a fabless semiconductor company.
So the “trick” (not the right word) is to adjust the gate length on a lot of transistors.
That's the secret. It sounds very, very simple, but if you are going to do this optimization, you need a very sensitive optimizing engine, because each transistor's contribution is very, very small. It is only when you add up the millions of transistors being adjusted that you get this 20% improvement in leakage.
How long does it take to run the software on the ever difficult to define typical chip?
For a typical chip at 65 nm, say 10 million gates, you run it overnight. Maybe a little more than that, but definitely within a day and a night. Because you are typically right at the back end of doing tapeout, you will want to do this while you are making other adjustments, in an iterative fashion. You will also do this hierarchically, which means that as each block of the design gets ready you run that one, and at the end you run a top-level complete optimization. One way of looking at the timing, in wall-clock terms, is that we typically hide behind the time it takes to run physical verification, which is also done at the back end. We take less time than that.
What about the IF product?
IF is mainly targeted at 65 nm and below. As you know, you need metal fill on the chip in order to get a smooth surface. That has typically been done in a rather dumb fashion where you basically fill every place where there is room for it. If you are not careful about that, you will actually create capacitive loads on the signal nets and slow down the design. We use the same timing analysis that we have in MO to identify nets that are sensitive, and we stay away from those nets with the metal fill. We can generate whatever patterns are good for CMP. We have a topography analysis tool that tells us what to do, and we tend to incorporate the CMP simulation from the foundry. The idea is to apply enough metal so you won't have the problems, but not as much as with a dumb fill, where you affect the timing.
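The contrast between blind fill and timing-aware fill can be sketched in a few lines (again my own toy illustration, not Blaze IF's implementation; the one-dimensional site list, the keep-out distance and the net positions are all invented):

```python
# Toy contrast between "dumb" fill and timing-aware fill on a 1-D row
# of candidate fill sites. Coordinates, the keep-out radius, and the
# example "sensitive net" positions are invented for illustration only.

def dumb_fill(sites):
    """Blind fill: drop metal at every available site."""
    return list(sites)

def intelligent_fill(sites, sensitive_nets, keep_out=1.0):
    """Timing-aware fill: skip sites within keep_out of any net that
    timing analysis flagged as sensitive, so the added metal does not
    couple capacitively into critical signals."""
    return [s for s in sites
            if all(abs(s - net) >= keep_out for net in sensitive_nets)]

sites = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
sensitive = [2.5, 6.0]   # positions of timing-critical nets

print(len(dumb_fill(sites)))                   # fills all 8 sites
print(len(intelligent_fill(sites, sensitive))) # fills fewer, safer sites
```

The design choice the sketch highlights is the one Jacobsson describes: the fill decision is driven by timing analysis (which nets are sensitive) rather than by geometry alone (where there happens to be room).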
In the case of IF are you adding an annotation layer to the GDSII file?
No. In this case you actually add to the metal layer. The OPC can't create a metal blob from an annotation; it needs to be metal in the first place. We make a change to the layout.
Do you know of any competition to either of these products?
Not directly! There are other ways to control leakage power besides MO. The most typical is Vt assignment, using high-Vt (HVT) cells and the like. We have proven that we are additive to all the tricks done there, because we are exposing another dimension. Even when people believe they have highly optimized for power, we can still add something more. In fact there was an investigation published by STARC (Semiconductor Technology Academic Research Center), in Japanese, showing that even with a design highly optimized for power we could still save over 20%. While there are other methods to control power and parametric yield, we can add more. We are unique and have patent protection for that. The article is in Japanese; my wife is Japanese and she says it is better that I stick with English.
Is there any competition on the IF side?
With fill there are other products. Specifically, Calibre does fill, but intelligent fill it is not; it is blind fill, where you fill up everything that is available. The key here is that since we have timing analysis, we are able to select where to do or not to do fill. Calibre doesn't have any information like that.
Is this technology also patent protected?
It is patent protected in this case as well. It is essentially the same patents as in MO.
Would you provide some background on the Aprio acquisition?
I was rather new with the company at the time. My fundamental belief is that if you are going to make money, you need to have optimizers. You need to change something in the design in order to derive full value, and to do that you need a good number of analyzers to give you the objective function. From the work of Professor Kahng and from inside Blaze we had good analysis in the timing area and very good analysis in the topography area for the metal fill case. But we had no experience whatsoever in lithography, where you are trying to be faithful to whatever shapes you have. Fairly soon after I came on board, we made a typical make-versus-buy decision. We figured it was going to take so much effort and time to develop this lithography capability; I wasn't happy because we came up with 2 to 3 years to do it. So we said let's go out and see if there is something to buy. At the time there wasn't, and we were busy exploiting the Blaze MO product. Eventually we hooked up with Aprio and acquired them in March of this year, one year after I came on board. What we get from that is a prediction of what happens when you really pattern the design in manufacturing, which allows us to be even more accurate about the timing and power impact of what we do. We use that to be even more aggressive about optimizing for power and timing. What the Aprio acquisition did for us is that Blaze MO can now also take litho effects into account, so you can be more aggressive about the optimization MO does. That's one part. The other part is more of a byproduct: we get a very fast hot spot checker. We really believe that optimizing is what we want to do. Just reporting to designers that they have hot spots doesn't deserve as much value as actually doing something about it.
As an aside, you know that Clear Shape, which was really in the area of hot spot checking, was acquired by Cadence yesterday. That is probably good for them, because analysis is only fully exploited if you couple it with optimization. I assume that Cadence plans to build an optimizer around it.
Blaze has a great advantage in that nothing needs to change in the design flow. In all the interviews I have done that is always a big issue. If you can improve performance, yield or whatever but have to change the design flow, there would be considerable resistance to adoption.
I based that a little bit upon my ESL experience, where there was a drastic change in how things were being done; the pain level would have to be much higher before you break through. Blaze is built on a kind of tactical decision in order to go to market faster.
You have that advantage, you have the ability to demonstrate objective improvement in power reduction, and you are in an area of considerable interest. What are the obstacles to considerable economic success?
There is no real big obstacle to speak of, though I alluded to one earlier. When you want to sell to IDMs, even if the reward can be big, they will require silicon proof. That takes time; you have to have some patience. There is another one, although it is not an obstacle but more an opportunity. The Semiconductor Industry Association (I think they are the ones) published the fact that the loss due to parametric yield was around $10 billion: the stuff being scrapped because it did not fit the parametric spec. We believe that we can address about 25% of that $10 billion, so Blaze MO alone addresses ~$2.5 billion. It is really, really hard for an EDA company to derive that kind of value. I think one of the things you are going to see me doing is a lot of initiatives where maybe we move to more of a service model to capture a bigger and bigger percentage of that $2.5 billion. If you talk about obstacles, the inefficiency in the system, measured by the $10 billion in waste due to parametric yield, is so big that it is not an obstacle; it is a matter of how you execute in order to derive as much as you can.
A little bit more on the service model. I can easily see how that would work if a customer needed to run your software only once. But you described how it is run on individual blocks and run after every change. How do you provide that as a service?
I am not really ready to say more than what I want to do. I would like to come up with a model where we somehow participate in the gain the customer gets. In that case a software license is not fully satisfactory. If we can in one way or another participate in the back end of that design and actually derive some of the benefit, the company will grow bigger and healthier than it would on a traditional EDA model. The license price of the product is $275K; maybe you can get over $1 million for a design center. If you can save the customer $30 to $40 million in one tapeout, then even though a $1 million sale is a good sale, it is somewhat of a challenge for me to accept that we should have only that small a percentage of it. We will try to figure out other ways, so that the customer sees we have delivered more of the value and we derive a greater portion of the value as well.
When it comes to pricing models, one model, the one you would prefer, is the value received by the customer. Another model is competitive pricing. If someone can provide something comparable to your product but at a lower price, it is hard to command a premium.
I think that is a problem that has beset the EDA industry forever. In general I believe that patents are hard to defend. But we are unique in having both an interface between design and manufacturing and patent protection at the same time, so we are somewhat immune from being undersold by other companies.
What happens is that the perceived value of an EDA product is influenced by the pricing of all the other EDA products, whether similar or not.
I agree with that. Sitting at the interface is somewhat of a luxury in this case. The other thing is that the derived value will be so high that it is not hard to convince the customer of the value. If you are better at delivering that value than anyone else, you should be able to command a premium for it. This is a tricky discussion. When we have something we will communicate it. Today it is more musing about how you can get a bigger fraction of that $10 billion waste.
I remember working for Applicon in the 70s. The firm believed that pricing should be based on target hardware margins. If a customer or prospect pushed back on price, they would give them more software to preserve the hardware margin. Of course, that would not work today, as the EDA and MCAD firms are pretty much software-only companies.
The way I look at it is that we sit right between design and manufacturing. From the designer's point of view, when we optimize for leakage power we have created what you can think of as a super process node tailored specifically to that design, because we improve the leakage for exactly the transistors where speed doesn't matter, and that is unique to that design. Think of it as a pseudo-node concept: the process node the design is addressing is better than a vanilla node. That is the mental picture I am trying to exploit. But how do you go about actually getting value for that? I don't know the answer today.
When we figure it out, I can promise you we will prominently talk about it. Today this is more like an observation. We provide a tremendous value for our customers and yet we are selling it at nominal EDA license prices. I am not saying that this is a bad business. The money we have taken is enough even with the traditional EDA pricing model to get to profitability. But it seems to me that we are delivering more value than we are getting back for it.
Anything to add?
The value of doing optimization versus analysis. When we acquired Aprio, someone asked, "What is the exit strategy for a company like Blaze?" I said that there is room for one independent DFM company that exploits what we call electrical DFM, which is really the parametric yield issue. We made one acquisition with Aprio, and we will probably look at other things. We definitely intend to become this aggregator, the independent DFM company. We will do that by running faster than everyone else and being smart about what we build ourselves and what we acquire.
That can be a challenge because VCs have limited patience on how long they will wait to get their desired return on investment, typically five times their money.
As I said we are in a unique position to say that we are not going to take more money in order to become profitable. The key to that is maximizing growth here.
Over the last few months most of us have seen a drop in the fair market value of our greatest financial asset, our home, for reasons related to the subprime mortgage market rather than anything to do with our particular house. Home Depot saw the sale price of its construction-supply business to a private equity consortium drop by around $2 billion because of difficulty raising capital. In the past we have witnessed precipitous drops in stock prices with the dot-com bust and with the Enron scandal. This should call into question the real value of just about everything.
We live in a capitalist economy where the fair market price, the value, of a product or service is what a willing buyer and a willing seller freely negotiate. Before signing the final agreement the seller is free to seek other buyers and the buyer is free to seek other suppliers. Either party may decide to simply wait. Synopsys paid a $10 million termination fee to walk away from its acquisition agreement with MoSys. By this definition the question of the value of EDA products, or anything else for that matter, is largely moot. Nevertheless, we shall go on.
When buying or selling a house, your realtor will look at comparable prices to set proper expectations. If a house in the neighborhood recently sold for X, the seller will not sell for much less than that amount, nor will the buyer pay much more, regardless of the reasons behind the price: a foreclosure or a seller needing cash quickly on the one hand, or a buyer from another state or country where prices are much higher on the other. Oregonian buyers are always complaining about Californians who move into their state and drive up housing prices.
The primary purpose of a watch is to tell time. I have a $30 watch. It has kept pretty good time for many years needing only a $5 battery. I am sure that readers have or know someone that has spent 10 times, even 100 times as much for a watch. Perhaps they saw additional value in the watch as jewelry or as a status symbol (think Rolex). The same logic explains why some prefer BMWs and Mercedes while others buy a Hyundai. Different buyers will perceive different value in the same or similar items or between different items in the same general category.
Some companies may use a lower price to gain market share or improve their stock price. Some may sell a product at a lower price as a loss leader: to get higher-margin add-ons or service contracts, to gain endorsements from influential customers, or to prevent a competitor from gaining a foothold with one's existing customers. In the old days Gillette gave away the razor and made money on the razor blades. Today cell phone carriers give away or heavily discount cell phones to get service subscriptions.
In the early days of MCAD when systems sold for over $100K a seat, salesmen had spreadsheets showing ROI based upon full ownership costs but with fewer personnel. In reality few designers or draftsmen were let go. Today productivity tools are promoted more on the basis of time-to-market and better end products rather than the savings from personnel reductions. This is easy to understand in the abstract but more difficult to quantify in the particular.
The above paragraphs provide some of the ways and some of the reasons why price fluctuates from the nominal value due to circumstances.
It is hard to imagine designing and manufacturing an IC without EDA software. It is equally hard to imagine electronics products (consumer, automotive, military, and so on) with today's functionality and price points without ICs. Nuts and bolts are needed to keep a car together, but the contribution of EDA software to the design and manufacture of ICs goes well beyond that: it is a fundamental enabler. The inverted pyramid below shows the not-to-scale relationship between the 2006 global revenues of the EDA industry, the semiconductor industry and the electronics industry. Market size estimates are given at the end of this article.
Given this comparison it is easy to be sympathetic to the claim that EDA firms have received the short end of the bargain although it is nevertheless a bargain. On the day I am writing this article the market capitalizations of the three leading EDA firms are Mentor Graphics $1.2 billion, Synopsys $3.85 billion and Cadence $5.78 billion. Not too shabby.
It is hard for any industry to change its basic pricing model. However, the EDA industry has moved in recent years from perpetual licenses to time-based licenses and subscriptions. SIP firms have had success with a combination of licensing fees and royalties. The industry might be willing to consider new pricing innovations.
Most people would acknowledge that the reward should be proportional to the risk. It costs around $5 billion to construct a fab. By contrast, Mr. Jacobsson says that Blaze can reach profitability, even within the traditional EDA model, with a total investment of under $20 million.
In fairness to Mr. Jacobsson, he is not arguing about the derived value of EDA software in general but about the value of a very specific program that will significantly and measurably improve parametric yield. Time will tell whether he can discover a way to get the derived value he seeks.
Some Market Size Estimates for 2006:
According to EDAC (Electronics Design Automation Consortium) EDA revenue was $5.274 billion.
According to SEMI the capital equipment expenditures for semiconductor manufacturers was $40 billion.
According to the World Semiconductor Trade Statistics (WSTS) the global semiconductor market was $247 billion. According to FSA (Fabless Semiconductor Association) there are ~1,350 fabless companies with revenue of $49.7 billion, which equated to 20 percent of the semiconductor sales. According to Electronic Business and to iSuppli the top 25 Semiconductor Manufacturers had revenues of around $190 billion.
According to Consumer Electronics Association (CEA) factory-to-dealer sales of consumer electronics are projected to exceed $155 billion in 2007 or seven percent growth from the $145 billion in 2006.
The world OEM automotive electronics industry was $86.5 billion.
The US electronic display market in 2006 was $11 billion.
The top articles over the last two weeks as determined by the number of readers were:
New Kit From Cadence Cuts Risk and Time for Adopting Functional Verification Methodology The kit provides complete example verification plans, transaction-level and cycle-accurate models, design and verification IP, scripts and libraries -- all proven on a wireless segment representative design and delivered through applicability consulting.
The SoC Functional Verification Kit includes design and verification IP from Cadence and third parties, including an accurate high-speed model of the ARM968E-S processor, AMBA® PrimeCell IP® including interconnect and peripherals, and the ARM® RealView® Development Suite debugger, USB 2.0 from ChipIdea, and 802.11 from WiPro. The kit includes three main flows: architectural, RTL block to chip, and system-level. Users can implement the entire kit as an integrated flow, or may select flows individually. Also included are 13 workshop modules and over 40 hands-on labs which engineers can use to incrementally improve their verification productivity.
Mentor Graphics Reports Second Quarter Results Mentor Graphics Corporation announced second quarter revenue of $205.7 million, up 15% over the prior second quarter. On a GAAP basis, earnings were $.03 per share, up from a loss of $.01 in the year ago second quarter, despite $4.1 million of in-process R&D charges related to the Sierra Design Automation acquisition. On a non-GAAP basis, earnings were $.15 per share, up from $.09 a year ago. These results incorporate the change in the fiscal year with the second quarter running from May 1 to July 31.
Accellera Approves Functional Design Verification Standard Accellera announced that its Board of Directors, representing semiconductor, IP, EDA companies and systems houses, approved Accellera's Open Verification Library (OVL) 2.0 as an Accellera verification standard last month. OVL improves electronic design quality and supports Assertion-Based Verification (ABV) with Verilog, SystemVerilog, VHDL and the Property Specification Language (PSL).
The Accellera OVL standard includes a library of assertion checkers provided as an open standard. It improves electronic design verification when using Hardware Description Languages (HDLs) and results in better quality designs by enabling effective use of ABV methodologies.
Solido Receives $6.5M Second Round Financing Solido Design Automation today announced second round financing of $6.5 million, bringing total funding in the company to $9 million. The funding will be used to help accelerate the development and deployment of Solido's transistor-level statistical design and verification software technology, which aims to ensure robust analog/mixed-signal, custom digital, and memory IC designs.
Open SystemC Initiative Launches New SystemC Community Website The Open SystemC Initiative (OSCI), an independent non-profit organization dedicated to supporting and advancing SystemC as an industry standard language for electronic system-level design, announced the launch of its new website for the SystemC community. Featuring fresh technical content, a range of free downloads, links to the activities of worldwide SystemC user groups, and user-friendly navigation, the new site furthers the organization's goal of making the worldwide SystemC community more vibrant, connected, and well informed. Visit the new site at: www.systemc.org.
Other EDA News
Other IP & SoC News