September 04, 2006
Mentor Calibre nmDRC
| by Jack Horgan - Contributing Editor
Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com.
On July 10th Mentor Graphics announced the availability of the Calibre nmDRC tool for physical verification, which dramatically reduces total cycle time through hyperscaling and incremental verification and integrates critical elements such as critical area analysis and critical feature identification. Calibre nmDRC is part of a new platform from Mentor, the Calibre nm Platform, which leverages next-generation technology in litho-friendly design (LFD), DRC, resolution enhancement technology (RET), and post-layout parasitic extraction and analysis to help design teams transition efficiently from a rule-based approach to a model-based approach. I had a chance to discuss Calibre nmDRC (pronounced "Calibre nanometer DRC") before the official announcement with Joe Sawicki, VP and GM of the Design to Silicon Division.
What is Mentor announcing?
The thing we are announcing today is really exciting for us. It is a brand new version of Calibre that we are naming Calibre nmDRC, and it completely redefines performance characteristics for design rule checking. It is the center of our Calibre nanometer platform, which includes all the capabilities to add Design for Manufacturability to the whole signoff process. The thing that is really exciting about this is how it breaks the ground rules of how this industry works. The common myth is that all innovation happens in startups. Big companies eventually buy the startups, but then, because we are intransigent, incompetent and can't figure out how to roll out of bed in the morning, we eventually destroy that software. New startup companies come into play, and that becomes the next platform. Calibre, by contrast, was home grown to begin with. It was developed at Mentor Graphics, and this new version is also a home grown version of the tool.
What was the motivation for this release?
Frankly, whenever someone says that we've got a new version that puts in place a whole new definition of performance, the fundamental question is why users need that. Essentially the reason they need it is that over the last few technology nodes, as we've gone from 180nm to now the first designs at 65nm, we have moved away from a world where random effects dominated to one where systematic effects are a lot more significant. At 180nm, if you had designed your circuit correctly, yield was all based upon particle defects, random defects in the manufacturing process. You looked for compliance to manufacturing by doing geometry checks that are mapped against those sorts of defects. If you are looking for particle defects, you look at minimum width, because that tells you whether or not you are going to be susceptible to opens caused by particles, and at minimum spacing, because that tells you whether or not you are going to be susceptible to bridging. You also run a bunch of overlay checks that make sure that from region to region you are getting proper alignment between the two areas and that you are maintaining connectivity.
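The width and spacing checks described above can be sketched in a few lines. This is only an illustrative toy, not Calibre's implementation: a real DRC engine operates on full polygon layouts against a foundry rule deck, and the rectangle model, rule values, and function names here are all hypothetical.

```python
# Toy sketch of classic geometric DRC checks. Shapes are modeled as
# axis-aligned rectangles (x1, y1, x2, y2); rule values are made up.

MIN_WIDTH = 3    # hypothetical minimum width, in layout grid units
MIN_SPACING = 2  # hypothetical minimum spacing

def width_violations(shapes, min_width=MIN_WIDTH):
    """Flag shapes whose narrowest dimension is below the minimum width
    (susceptible to opens caused by particle defects)."""
    return [s for s in shapes
            if min(s[2] - s[0], s[3] - s[1]) < min_width]

def horizontal_gap(a, b):
    """Edge-to-edge horizontal distance between two rectangles
    (negative or zero if they overlap horizontally)."""
    return max(a[0], b[0]) - min(a[2], b[2])

def spacing_violations(shapes, min_spacing=MIN_SPACING):
    """Flag pairs of shapes closer than the minimum spacing
    (susceptible to shorts/bridging). O(n^2) pairwise scan; real
    engines use spatial indexing."""
    bad = []
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            gap = horizontal_gap(shapes[i], shapes[j])
            if 0 <= gap < min_spacing:
                bad.append((i, j))
    return bad

shapes = [(0, 0, 10, 2), (10.5, 0, 20, 4), (30, 0, 40, 4)]
print(width_violations(shapes))    # shape 0 is only 2 units tall
print(spacing_violations(shapes))  # shapes 0 and 1 are 0.5 units apart
```

The point of the sketch is how mechanical these rule-based checks are: pure geometry, with no model of the lithography that actually prints the shapes, which is exactly the gap Sawicki describes opening up at subwavelength nodes.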
Because of the fact that we are now dealing with subwavelength manufacturing you are seeing very complex effects. A chip with yield problems might have three lines that are virtually identical in terms of width and spaces, yet only one of them is the one causing the yield problem as you start to move within the manufacturing variability because of this very large optical context that causes what is called a pinching issue.
What that has driven, in terms of the overall environment of sign-off and physical verification, is a great increase in the complexity of that operation. Up until 180nm the primary effect you worried about going from node to node was design size and geometry count. If you just do the basics of Moore's Law and plot it out over a 10 year horizon, the mathematics are that you get about 100 times the data you need to verify every 10 years. This is why we went from Dracula to Calibre in the transition from .35 to .25: that geometry count could not be handled by a flat tool. What has been happening in addition, starting at 130nm, is an increase in the number of basic DRC rules. The rule count was fairly flat going from .35 to .25 to .18, but at 130, 90 and now 65, the number of rules you have to run against those geometries is also drastically increasing. This is because there has been an attempt to encapsulate these more complex failure mechanisms in design rule checks. In addition, especially coming on with 65 and 45, we are adding all these additional tasks to sign-off: things like recommended rules, which say "Okay, I can meet a certain spacing but it is preferable if you had a wider spacing"; litho-friendly design, which brings simulation of lithography hot spots into the design space; and critical area analysis, which takes a model based approach to looking at the particle defect problems.
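The 100x-per-decade figure quoted above follows directly from compounding a Moore's Law doubling. A quick back-of-the-envelope check, assuming a doubling period of roughly 18 months:

```python
# Back-of-the-envelope check of the "about 100 times the data every
# 10 years" figure: a doubling roughly every 18 months compounds to
# about 100x over a 10-year horizon.
doubling_period_months = 18
horizon_months = 10 * 12
growth = 2 ** (horizon_months / doubling_period_months)
print(round(growth))  # ≈ 102, i.e. about 100x the geometry to verify
```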
This explodes the amount of analysis, computation and debug work that a designer needs to do as part of sign-off. At the same time, design cycles are decreasing. The last time I worked for a living, doing microprocessor design in the late 80s, a two-year design process was perfectly acceptable. With more consumer driven SoC approaches, a year is a horrendous failure; people have to squeeze that down to 6 to 9 months. Simply put, you have to fit the same amount plus all this additional analysis into a shortened design window, and that drives the requirement for a new performance paradigm. That brings us to introduce Calibre nanometer DRC.
How has Calibre evolved over 5 generations?
Let me give you a quick tour of three major architecture points in the tool. The first version was all targeted at a single CPU. What we would do is take the hierarchy inherent in the design and create some additional hierarchy, either by adding combinations or by breaking up large flat regions. You then ran those operations in sequence from the bottom of the hierarchy to the top: ran them for one operation in the rule deck and then ran them for the next. The reason Calibre became the standard tool for deep submicron verification is that we did this breaking up better than anyone else out there and thereby had less work to do.
The next version, released in the 2000 to 2001 timeframe, was a multiple CPU version with data partitioning. It was the first version to take advantage of multiple processors by executing threads on a shared memory multiprocessor box. We put data interlocking in place such that we could take those cells and bins and parallelize their execution across multiple CPUs. On the most common hardware available at that time, with 4, 8 or maybe 16 processors, this gave great results and wonderful scalability. It did have one issue when you start to look toward a larger CPU cluster, which is that by definition the best compute time you can get is the sum of the longest cells within each operation. We could add another 3,000 processors to a task and it would get no shorter. The magic under the hood in the new version is that we have added operation parallelism to data parallelism. So now, in addition to operating on each of the cells within an operation in parallel, we are actually able to execute multiple operations in parallel as well. This allows us, while the biggest bin is still completing on the previous operation, to pump 5 to 10 operations out into the cluster, radically decreasing the overall run time and increasing the scalability.
Since we have built this up around a core hierarchical engine that has the ability to thread and distribute tasks, we are able to optimally take advantage of all three common architectures out there, whether it be a single CPU running small blocks and cells, an SMP box like someone's desktop (two Opterons with 4 cores), or one of these 40 to 50 CPU Linux clusters that are being put in place. This is a real advantage, because most other approaches being talked about that claim to scale out to multiple processors are configured in such a way that they will operate well only on these multicore clusters.
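The scheduling difference Sawicki describes can be illustrated with a toy simulation. This is not Mentor's scheduler; the greedy list-scheduler, cell run times, and CPU count below are assumptions, chosen to show why a per-operation barrier caps run time at each operation's longest cell, while mixing independent operations' cells keeps the cluster busy:

```python
# Toy illustration: data parallelism alone makes each DRC operation a
# barrier (its longest cell gates the start of the next operation), so
# extra CPUs stop helping. Adding operation parallelism lets cells from
# independent operations fill the idle CPUs.
import heapq

def makespan(tasks, cpus):
    """Greedy longest-first list-scheduling of task durations onto
    `cpus` workers; returns when the last worker finishes."""
    workers = [0.0] * cpus
    heapq.heapify(workers)
    for t in sorted(tasks, reverse=True):
        heapq.heappush(workers, heapq.heappop(workers) + t)
    return max(workers)

# Hypothetical per-cell run times (seconds) for two independent
# DRC operations; op1 has one dominant "biggest bin" cell.
op1 = [8.0, 1.0, 1.0, 1.0]
op2 = [1.0, 1.0, 1.0, 1.0]
cpus = 4

# Data parallelism only: operations run one after the other,
# each bounded below by its longest cell.
serial_ops = makespan(op1, cpus) + makespan(op2, cpus)

# Data + operation parallelism: cells of both operations share the pool.
pipelined = makespan(op1 + op2, cpus)

print(serial_ops, pipelined)  # 9.0 vs 8.0 on this toy workload
```

On this toy workload the barrier version takes 9.0 time units while the pipelined version takes 8.0, bounded only by the single 8-second cell; the gap widens as more independent operations become available to overlap, which is the scalability effect described above.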
Do you have any hard data on performance?
-- Jack Horgan, EDACafe.com Contributing Editor.