December 11, 2006
Model Based Approach to DFM – Clear Shape Technologies
Please note that contributed articles, blog entries, and comments posted on EDACafe.com are the views and opinion of the author and do not necessarily represent the views and opinions of the management and staff of Internet Business Systems and its subsidiary web-sites.
| by Jack Horgan - Contributing Editor
Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us!
Clear Shape Technologies is a three-year-old startup with $10 million in venture funding that has developed a model-based approach to DFM, as opposed to the traditional rule-based methodology. The firm has already established strong ties with TSMC, UMC and the Chartered-IBM-Samsung Common Platform. The company has two initial products: InShape, a model-based full-chip design manufacturability checker that predicts accurate silicon shapes, giving designers the ability to do fast, accurate DFM hotspot detection of catastrophic failures; and OutPerform, a complete, silicon-correlated electrical DFM analysis and optimization product that enables designers using sub-90 nm processes to optimize and control the impact of lithography, mask, etch, RET, OPC, and CMP effects on their chip parameters.
I recently had an opportunity to discuss the company and its products with Atul Sharan, President, CEO and one of the founders.
Would you provide a brief biography?
I have been in the semiconductor industry for over 20 years now. The first half I spent more or less on the manufacturing side. I worked for companies like Integrated Device Technology, and then I was in on the early stage of VLSI Technology. For the last 10 years I have been more on the design and design automation side. VLSI Technology actually had a subsidiary in their tools division called Compass Design Automation, which they spun off. As you remember, in the old days before the likes of Cadence and Synopsys, the large semiconductor companies, the ASIC companies, did their own tools. At VLSI Technology we had our own design tools. When the industry segmented out, they spun that out as a separate company called Compass Design Automation, which was subsequently acquired by Avanti. In any event, from there I went to Ambit Design Systems, a logic synthesis company acquired by Cadence. Then I was at Numerical Technologies for six years along with Yao-Ting Wang, who is also a founder of Clear Shape. That company went through an IPO and was subsequently acquired by Synopsys. I ran the sales and marketing force for Numerical, and I was the VP for DFM at Synopsys. Then I spent some time in residence at a venture capital firm, Mohr Davidow Ventures. Three years ago Yao-Ting, myself and a gentleman by the name of Fang-Cheng Chang, who was VP of Engineering at Numerical, started Clear Shape Technologies.
How was your experience in the world of venture capital?
I was not there long. The idea was to look into where the next opportunities might be. One of the areas I identified was DFM. I looked at a lot of business plans of different DFM startup companies. Since then some of them have gotten funded. It was a good experience. It gave me a different perspective on what investors were looking for. That's when you realize that there are a lot of good ideas that are not necessarily franchiseable and cannot be turned into companies.
Did the technical guys come to you with the idea behind Clear Shape or did
you see the market opportunity and approach your former colleagues?
A few of us from Numerical saw the problems associated with manufacturing as they were going to be manifest on the design side. It really just evolved out of seeing that what we thought would be the right solution wasn't being pursued by anybody. We all decided to jump in. This was a good opportunity and we believed that we could make it work.
Did you self-fund, get seed capital, ..?
We got going and then got funded by USVP (U.S. Venture Partners). Cadence was also an investor, as was Asia Tech Management. My partner Yao-Ting was actually resident at Asia Tech, another venture fund with an Asian bent, if you will. We got venture funding in April 2004. One of the things we realized was that in this economic environment it is important to build something that is capital efficient. We could have raised a lot more, but we raised $5.1 million at that time. We actually raised the same amount a second time about a year and a half later. Intel Capital led that round, which at the time was a little unusual for them. KLA-Tencor also invested at that time.
I noticed that Mohr-Davidow where you were in residence is not listed as a Clear Shape investor.
Bill Davidow was on our board at Numerical and was retiring at that time. The semiconductor specialist there was Rob Lobilinski. He has since started his own fund. Things were changing at Mohr Davidow.
What was the problem you identified that Clear Shape was going to solve?
That's a good question. What we did at Numerical, and what you hear a lot about, is called improving the resolution problem in lithography: all this OPC (Optical Proximity Correction) and RET (Resolution Enhancement Technology). Because feature sizes were below the wavelength of the light source used to manufacture them, which was the 193 nm stepper at that time, people were focused on improving resolution on the manufacturing side, after the designer has signed off and taped out the design. As it turns out, one of the big changes in the last five years is that the lithography roadmap for the industry is, for the first time, really fixed. A few years ago people were hoping for 157 nm steppers. Of course, that is off the roadmap, as are all the other advertised techniques people were pursuing. It was really 193 nm with a lot of resolution enhancement technologies like phase shifting and OPC, including immersion. The only other technology on the horizon that could be a savior is EUV, but it is clear that is way out there, 22 nm at the earliest by most forecasts. Clearly, no matter what you do on the manufacturing side, you are going to have a resolution problem. That's what people are focused on. From a designer's point of view, what that really means is that I am going to have a variability problem. What I have designed and modeled ideally is not what is going to be produced in silicon. That is primarily because of the lithography issues, but also because of the copper interconnect or CMP (chemical mechanical polishing) issue that is coming downstream, as well as strained silicon, which adds a different kind of variability. From a designer's standpoint, it did not matter what he did; he was going to have variability. In fact, if I could make my design more robust, I could actually alleviate the resolution problem, because I wouldn't have to work as hard to manufacture tighter tolerances everywhere. That was sort of the fundamental observation. Then you have to look at what it takes to solve the problem. That is where we made some of the technology picks early on. How do you model the lithography, the etch and all the associated impact in a manner that is accurate, fast and visible on the design side, and that can be tied not just to catastrophic yield issues but also to electrical issues like timing, signal integrity and leakage power? The development of those technologies would be disruptive in nature but not disruptive to the design flow. The incumbents like Synopsys and Mentor have their OPC and RET market today not primarily but solely through the acquisitions they have made. It wasn't something they developed on their own. Their approach is to really shove what is used on the manufacturing side upstream, which is a non-starter. We had the luxury of building everything from the ground up: a variability platform, developing the technology that could account for variability upstream in a manner that is accurate and fast.
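The sub-wavelength gap described above can be quantified with the standard Rayleigh resolution criterion, R = k1·λ/NA: with λ stuck at 193 nm, the minimum printable feature shrinks only as the process factor k1 falls and the numerical aperture NA rises. A minimal sketch follows; the specific k1 and NA values are illustrative assumptions, not figures from the article:

```python
def rayleigh_resolution(wavelength_nm: float, k1: float, na: float) -> float:
    """Minimum printable feature size per the Rayleigh criterion: R = k1 * lambda / NA."""
    return k1 * wavelength_nm / na

# A conventional 193 nm process (k1 ~ 0.6, NA ~ 0.75) resolves only ~154 nm:
conventional = rayleigh_resolution(193, 0.6, 0.75)

# Aggressive RET/OPC drives k1 toward its theoretical floor of 0.25,
# and immersion pushes the effective NA above 1.0, reaching ~45 nm:
aggressive = rayleigh_resolution(193, 0.28, 1.2)

print(f"conventional: {conventional:.0f} nm, aggressive RET + immersion: {aggressive:.0f} nm")
```

This is why the interview stresses that resolution enhancement alone cannot close the gap: every step toward the k1 floor makes the printed shapes more sensitive to their neighborhood, which is exactly the pattern-dependent variability Clear Shape targets.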
Would you expand on the disruptive technology that you have developed?
If you look at OPC, for example, it is really for mask design. It is really manipulating the behavior of light through the mask so that you get what you need on the silicon. It is a little like when you take a picture: it is not so much improving the camera as it is improving the development process. What we are doing, in a sense, is looking at those technologies. By the way, it is almost an accident that the EDA companies are supplying the OPC tools, because they are really mask design tools. Having said that, all of these corrections that are now required on the mask to manipulate the behavior of light are of no interest at all to the designer. They have already taped out. It is like any other manufacturing step in the fab, except it just happens to be an operation that is performed on the mask. Serendipity has it that it is supplied by EDA companies, but it is really a post-GDSII manufacturing operation. We realized that what you needed to do is go in one step from the layout to what the shapes were going to be in silicon, and find a way to model the problem mathematically, physically and analytically in a manner that would not require going so painfully through the steps of generating the corrections to the mask, which by the way today takes 5 to 6 days per layer of the chip. That is not something you can do on the design side. We model the entire process, including the highly non-linear process of applying OPC and RET corrections. By doing that, we can be accurate and very fast; we can do in hours what used to take days. Our tool is not usable, nor is it intended to be used, for mask design. You still have to go through OPC and post-GDSII tools, but from a designer's point of view it gives them the information they need.
What would they do with this information?
Good question. A lot of this information is really useless to them. All they want to know is where there are going to be problems. In the past, rules have been enough to dictate where the problem was going to be. So here are my rules, mostly minimum width and spacing kinds of rules. If my layout is violating the rules, then I know I will have problems and I will have to fix them. You build those into place and route and into DRC. What our InShape tool is able to do is hide all the complexity I just talked about; from a designer's point of view, it looks like the end result of the rule check they perform today. It is a model-based checking tool. It flags where the problems are going to be. What is also unique with our technology is the ability, with the way we do the modeling, to tell them how to fix the problem, because it is not a simple fix in lithography. You could have a problem, and then a few geometries away you have to move an edge to fix it, because of the way light behaves. From a designer's point of view, we do the work with the fabs to validate everything else. The designer puts in any layout, including full chip. Again, that is something our technology can do uniquely fast. It highlights where catastrophic problems may be created and how to fix them. Under the hood we also generate silicon shapes that are read by our electrical tool, OutPerform, which then uses those shapes to perform the variational analysis electrically, to figure out the difference between the ideal and the actual shape, to identify areas where you are going to have timing failures or leakage power failures due to variation, and to update the information required to close timing with more accurate silicon-based information. There is a lot going on under the hood, but from the designer's standpoint there is really a short learning curve. In fact, some of our users have written up reviews and benchmarks of our tools in the Deepchip and ESNUG forums.
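For contrast, the "minimum width and spacing" kind of rule check described above can be sketched in a few lines. This is a hypothetical toy checker over axis-aligned rectangles, with all names and thresholds invented for illustration; it is not Clear Shape's algorithm. A model-based checker like InShape instead simulates the printed silicon shapes, which is why it can flag pattern-dependent hotspots that no fixed width/spacing rule anticipates:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    # Layout rectangle in nm, with (x1, y1) the lower-left corner.
    x1: float
    y1: float
    x2: float
    y2: float

    @property
    def width(self) -> float:
        """Minimum drawn dimension of the rectangle."""
        return min(self.x2 - self.x1, self.y2 - self.y1)

def spacing(a: Rect, b: Rect) -> float:
    """Edge-to-edge gap between two rectangles (0 if they touch or overlap)."""
    dx = max(b.x1 - a.x2, a.x1 - b.x2, 0.0)
    dy = max(b.y1 - a.y2, a.y1 - b.y2, 0.0)
    return max(dx, dy) if (dx == 0.0 or dy == 0.0) else (dx**2 + dy**2) ** 0.5

def rule_check(shapes, min_width=90.0, min_space=90.0):
    """Flag minimum-width and minimum-spacing violations, DRC-style."""
    violations = []
    for i, s in enumerate(shapes):
        if s.width < min_width:
            violations.append(("width", i))
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            if 0.0 < spacing(shapes[i], shapes[j]) < min_space:
                violations.append(("spacing", i, j))
    return violations

# An 80 nm-wide wire next to a neighbor 50 nm away: both rules fire.
layout = [Rect(0, 0, 80, 400), Rect(130, 0, 260, 400)]
print(rule_check(layout))
```

The point of the interview is that such checks only see drawn geometry: a layout can pass every rule and still print badly because of how nearby patterns interact through the optics, which is what a simulated-shape check catches.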
What do they see?
With the InShape tool, what they essentially see is flags of where the problems are going to be, much like with a DRC tool, except it is running a model-based check inside. It also shows them guidelines on how to fix these in their layout. We have developed an automated flow to go with the Cadence Chip Optimizer. That was an announcement Cadence made a month ago about its use by one of our mutual customers, ATI. This fixing can be done automatically using the electrical optimizer tool from Cadence, or it can be done with any layout editor the customer is using. That's the catastrophic part, the physical part. That essentially means you are capturing the failures before you would see them on the mask or, worse, on silicon. Not only would it improve yield, but it would dramatically reduce respins and prevent failures. From the design standpoint, the OutPerform tool taps into the routing, extraction and timing information. It does all the computations required to update this information to make it silicon accurate, so designers can do timing closure, and it also points out any failures associated with timing, signal integrity or leakage power that are caused by variation, which they can then go back into the standard flow and fix. It is an important point, because what is unique about a variability platform is that it can be used in any design flow, no matter which tools you are using from any EDA company. We do not require you to replace any tool.
You have relationships with a number of foundries.
We are the only DFM company that is certified by all three major foundry platforms. If you look at our press releases, you will see an endorsement by TSMC. Earlier this year at their technology forum they also announced that we are qualified. The same is true for UMC. In fact, we have a joint press release with UMC saying we have been working with them for 18 months to develop a DFM-based flow. And also with the IBM, Samsung and Chartered Common Platform; they announced a few months ago that we are the only solution for full chip.
Did the foundries provide you with data?
Absolutely! We worked with them diligently and went through a very rigorous qualification process. Not only did they provide information, but we also installed our tools there. They had to satisfy themselves, using information they provided in a very secure manner, which they do not release to their customers, that the tool will do what we say it will do and predict with the accuracy we claim. This is an important point, because keep in mind that although the information shared is a huge step forward, it is still limited: it does not allow you to fully complete the mask design steps and things like that, so any technology that requires doing OPC in the standard traditional way is not going to work. There is not enough information for that.
How did you manage early on to convince the foundries to work with you?
Obviously by highlighting the problem. Designers over the last few years have become very cognizant of the problem. People are seeing the issue from the design side with variation, as are the foundries. These are real technology issues that everybody is starting to see. It was more a matter of showing them that we have the technology to solve the problem. I think the recognition that these problems exist and have to be addressed started to happen a couple of years ago. The question really was, "How are you going to do it?" On what needs to be done, there was pretty much a consensus starting to form; I would say it exists today. It still wasn't clear how it needed to be done, and that's where we came in.
What is the pricing for your products?
Both of our tools, InShape and OutPerform, have a list price of $300,000 per year. The other point about these tools is that, because of their architecture, they scale linearly with distributed computing, so they can be scaled with the number of CPUs.
What is the typical customer environment?
65 nm is just starting to ramp up. That's where these technologies start to be adopted. For example, we expected them to use these tools, and in fact they are using these tools, for cell design to begin with, as you would expect. You want your cells to be variation robust. We expect them to be used at the block level, and of course we expect them to be used for doing chip assembly. In fact, another part of the Cadence announcement is that we are integrating our tools into the SoC Encounter routing platform. Clearly, for us, these tools are not only used by designers but also get integrated into the implementation environment. The use model will get resolved. We expect people will use this in the same context as when they are doing DRC for cell, block and chip level designs.
When were these products first available?
InShape is a fully released product being used in production environments. It was released in Q3. The OutPerform product is in beta right now, and we expect it to be released in the next few months.
How many customers are there now?
We are working with all the leading IDMs. I have already talked about the foundries, because the foundries have made announcements themselves. As part of those announcements we have talked about Qualcomm and NEC. There are a lot of endorsements by NEC in our announcements. Cadence announced earlier that ATI is using our product in their flow. Those are the ones we are announcing right now, but I can tell you we have already validated several other customers as well.
Obviously these products have relevance at 65 nm and below. Are there certain types of companies or applications where they are particularly well suited?
We do have some engagements at 90 nm. If you think about it, the ultimate goal here, in general and perhaps in particular, is to extract the maximum value from your fab or your process technology platform. You could do designs at 65 nm and 45 nm and get working silicon, and perhaps even high yield. The question is how much performance and area is left on the table. Even at 90 nm, people start making compromises with recommended rules and this explosion of rules. There are applications for our technology at 90 nm, and people are using it there. But you are right: the sweet spot is 65 nm, and going down to 45 nm. Because it is a fundamental problem dictated by manufacturing, it is difficult to segment by type of application; anybody who is doing design and wants to extract the most from their process technology platform would use it. Of course, the leading edge guys will get there earlier, because they are the first ones to delve into the technology.
You stated that several companies supplying OPC and RET tools are doing so as mask editing
tools. Is there anybody out there with a similar approach more on the design side?
No, not in the manner we do it. We haven't seen anybody tackling it like we are. A lot of so-called DFM startups are actually trying to improve the OPC and RET side, and the ones who are already there, like Mentor and Synopsys, are trying to move it upstream. But that just doesn't work. Like I said before, that was part of the genesis of our company. We had looked at most of these companies because they are all from the same timeframe. We looked at what they were trying to do; nobody was doing what we are doing, and that is how we got going in the first place.
Is this technology patented?
Absolutely! That's something we are very cognizant of. We do have significant patents. We continue to file more to protect these technologies.
What do you see as the main challenge for the company moving forward?
The challenge is more of a macro challenge. To be candid, I am not sure there is much a company like ours can do about it. I'll make a couple of points here. First, we are a company that obviously had to work very hard with the fabs to get qualified, both at IDMs and at foundries, and then solve the problem on the design side, yet the business model that EDA companies have makes it harder and harder, even for the large companies, to capture the value that is due for the complex problems being solved. That's number one. Number two is, of course, the general macroeconomic environment of where the semiconductor industry is today. I think those are more the challenges than anything else.
Would you expand on the problem of the EDA business model?
EDA companies have to invest where the money and the problems are today. In my opinion, they can't, and don't really, invest where the problems are going to be, because that requires placing bets on what the problems and the solutions are going to be and getting it right. They all fight for the current dollars. But large, leading-edge design companies at some point are looking ahead at those design problems. That's where they have to look at startups. The problem arises because these companies also have multiple-year agreements with their technology suppliers, sort of all-you-can-eat deals. The fight these EDA companies have among themselves to gain dollars decreases the value customers are willing to pay. Of course, as a startup you have the luxury, because you have been investing for three years, and as an entrepreneur the first thing you have to do is hope you are approximately right about the problem and the solution. If you are, you can command a premium. The macro environment is one that makes it hard.
So you are saying that prospects have already committed dollars to multiyear contracts to address what was, and may still remain, a problem, and therefore don't have money left.
They have money left. I want to speak a little bit about the fact that DFM is a little unique on the design side. Not only are dollars committed, but the EDA companies are competing not on technology that would solve future problems; they are competing on the structure and the transactions of these arrangements. However, what we have seen recently, which is why we are doing as well as we are, is that companies are starting to create separate buckets or budgets for DFM. They recognize that these are real problems and that the solutions are not available within their current structure. I was speaking more of the macro environment, where the EDA companies, in my opinion, are not getting their value for the complex solutions they provide, even for current problems. It just makes it a little challenging. Like I said, for DFM people recognize this is not replacing an existing tool. In most instances it actually makes their existing tools work better. They are starting to create separate budgets for these kinds of tools.
Do you have a vision of where Clear Shape is going in terms of product direction over the next few years?
I do. We obviously have a plan, but at this point we are not talking about that. Clearly we are developing the technology in the two products I have talked about. Of course we are leveraging what we have done.
Can I make one point on the design side? We have found over the last 12 to 18 months, and it is also relevant to your question on the foundries, that designers, increasingly on the electrical side, are forced to tackle variation issues across the board. They have to add more margin on extraction, on the library, on SPICE models and, of course, when they are doing timing closure. Fundamentally, what OutPerform allows them to do is silicon-accurate design, and then to review these margins. That leads directly to getting the maximum out of your technology. Margins were traditionally a way of addressing random variation. A lot of the lithography issues I talked about are what are called systematic variations, which are pattern dependent. If you try to use the previous methods, like margining and other things more suited for random variations, you end up giving up too much performance and area.
I'm out of questions.
What do you think of DFM?
I come from an environment where from 50,000 feet the earlier you can detect and avoid a problem the better off you are. Waiting until the design is done and hoping that changes in manufacturing will improve things is not intellectually satisfying.
I think that is probably a better summary of what we are doing than I gave.
Proliferating more and more rules addresses but does not solve the problem. It just creates more and more margin. It works, but leaves a lot on the table.
I think that is right on. In fact, a tool like InShape would obviate the need to keep creating these rules. You can always create more patterns; the model ultimately takes the place of any number of rules. I think people are seeing that potential. Moving from a rule-based methodology to a model-based approach is what we recognized needed to be done. But it had to be done in a manner that would be fast enough not only to be used on the design side but to be integrated into routing and things like that.
The top articles over the last two weeks as determined by the number of readers were:

Synopsys Posts Financial Results for Fourth Quarter and Fiscal Year 2006
For the fourth quarter, Synopsys reported revenue of $283.4 million, an 11 percent increase compared to $254.8 million for the fourth quarter of fiscal 2005. Revenue for fiscal year 2006 was $1.096 billion, an increase of 10.4 percent from the $991.9 million in fiscal 2005. GAAP net income for the fourth quarter of fiscal 2006 was $9.6 million, or $0.07 per share, compared to a net loss of ($13.5) million, or ($0.09) per share, for the fourth quarter of fiscal 2005. GAAP net income for the fiscal year ended October 31, 2006 was $24.3 million, or $0.17 per share, compared to a net loss of ($15.5) million, or ($0.11) per share, for fiscal 2005.
LSI Logic and Agere Systems to Merge in All-Stock Transaction Valued at Approximately $4.0 Billion LSI Logic Corporation and Agere Systems Inc announced that they have entered into a definitive merger agreement under which the companies will be combined in an all-stock transaction with an equity value of approximately $4.0 billion. Under the terms of the agreement, Agere shareholders will receive 2.16 shares of LSI for each share of Agere they own. Based on the closing stock price of LSI on December 1, 2006, this represents a value to Agere
shareholders of $22.81 per share.
The combined company, to be called LSI Logic Corporation, will offer a comprehensive set of building block solutions including semiconductors, systems and related software for storage, networking and consumer electronics products. The companies had combined revenue of $3.5 billion for the 12 months ended September 30, 2006. The companies operate in more than 20 countries, with a combined workforce of approximately 9,100 employees, including nearly 4,300 engineers. The companies together own a substantial patent portfolio consisting of more than 10,000 issued and pending U.S. patents.
Open SystemC Initiative Announces Proposal for Significant Extensions to Transaction-Level Modeling (TLM) Standard
The Open SystemC Initiative (OSCI) announced the delivery of the Draft SystemC Transaction-Level Modeling (TLM) 2.0 kit, containing proposed extensions to OSCI TLM application programming interface (API) standards, an open-source library implementation, and interoperable modeling examples for worldwide public review by the SystemC community.

Cadence Enterprise System-Level Verification Enables Predictable Software, Hardware and System Quality
Cadence announced a solution for ESL verification, which combines automated hardware, embedded software and system-level verification with system-wide management and new high-performance engines. This solution, combined with the Cadence® Incisive® Plan-to-Closure Methodology, extends the traditional electronic system-level approaches, focused only on systems engineers and C-level tools, to the rest of the enterprise, with a path from an executable plan to system-level closure. It enforces the system requirements across all engineering functions doing design and verification, from an abstract system-level model and verification plan to in-system IP verification, systems integration, validation and closure.
Leakage Power Reduced 240X Using Sequence/Dongbu Electronics Advanced Power-Gating Flow
Sequence Design, EDA's power-aware SoC design technology leader, and Korea's Dongbu Electronics Inc., one of the world's largest pure-play wafer foundries, announced test results demonstrating a 240X reduction in leakage power using their jointly developed, advanced MTCMOS power-gating flow. Power-gating design techniques significantly reduce leakage power, which can easily consume up to one-half of a modern SoC's power budget if left unchecked.

Other EDA News
You can find the full EDACafe event calendar here
To read more news, click here
-- Jack Horgan, EDACafe.com Contributing Editor.