Mentor Calibre nmDRC

Yes! Runs that would take 2 to 3 days on a single CPU execute in 2 hours or less on up to 40 CPUs with hyperscaling. Although the architecture was put in place to take advantage of large clusters, it also gives customers a big win on SMP boxes, where on average we were seeing 2X runtime improvements.

We have been out to 23 or 24 customers with beta software, spanning a wide range of IDM, fabless, and foundry customers, with results for SoC, microprocessor, and memory designs. We have published results for ten of these, ranging from 70 to 130 nanometer designs running on anywhere from 8 to 40 CPUs, where we get runtimes under 2 hours.

People, especially software vendors, will talk about scalability across multiple CPUs. If you think about it, no customer runs their design and says, "Great, it ran in three hours. I would like to run it again on more CPUs and get it to run in an hour and a half." What they do is put a cluster in place, and then they want to be sure that whatever design they throw against it, they get reasonable turnaround. We did a study at a major foundry with three designs ranging from 8.2 GB to 27.3 GB, roughly a 3X increase in data file size. They ran those on 32 CPUs and the runtime increased only 20%. This means that with an established cluster, irrespective of which design goes in, we get very good runtime scalability across multiple design sizes.
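To put those foundry numbers in perspective, here is a quick back-of-the-envelope comparison in Tcl. This is purely illustrative arithmetic, not part of the tool:

```tcl
# Compare the observed runtime growth with what naive linear scaling predicts.
set small_db_gb 8.2     ;# smallest design in the study, in GB
set large_db_gb 27.3    ;# largest design in the study, in GB
set runtime_growth 1.2  ;# observed: runtime grew only ~20% on the same 32 CPUs

set data_growth [expr {$large_db_gb / $small_db_gb}]
puts [format "Data grew %.1fx; linear scaling would predict ~%.1fx runtime growth." \
          $data_growth $data_growth]
puts [format "Observed runtime growth was only %.1fx." $runtime_growth]
```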

One of the key points that enabled us to go out to 24 beta customers (as you know, beta customers can be very difficult to support with a new tool) is that Calibre nmDRC just drops into the existing environment. Although it has a new rule file language, it is backward compatible with all the old rule files people have spent years building. It fits into the same design environment, using the same scripts and the same reporting. Someone gets the 2-hour runtime rather than the overnight runtime simply by loading the new software.

What we have talked about is being able to radically decrease the turnaround time on a chip design. But that is not the only story. You are going to have errors, which means you need a very efficient debug environment. There are two new capabilities we are introducing with nmDRC that greatly facilitate that. The first is dynamic result visualization. Traditionally you would run a verification tool, and only when the tool was done would you have access to all the errors. Instead, the new tool sends errors out to the user environment as it finds them. While the tool is still running, the user can go to an error, find where it is in the design database, and fix it. The next error comes in, the user fixes that one, and so on.
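A rough sketch of that streaming idea in Tcl is below. The results file name, record format, and the "DONE" marker are all hypothetical; this illustrates consuming errors while the run is still in progress, not Calibre's actual results interface.

```tcl
# Hypothetical sketch: poll a growing results file and hand each violation to
# the user's environment as soon as it appears, instead of waiting for the run
# to finish. File name and record format are illustrative only.
proc watch_results {resultsFile onError} {
    set fh [open $resultsFile r]
    while {1} {
        if {[gets $fh line] >= 0} {
            if {$line eq "DONE"} break          ;# checker signals it has finished
            lassign $line rule x y              ;# one record: rule name and location
            {*}$onError $rule $x $y             ;# hand the error to the viewer now
        } else {
            seek $fh 0 current                  ;# clear EOF state on the channel
            after 500                           ;# no new data yet; poll again shortly
        }
    }
    close $fh
}

# Example consumer: report each error as it streams in so it can be fixed
# while the rest of the chip is still being checked.
watch_results "drc_results.txt" {apply {{rule x y} {
    puts "New violation of $rule at ($x, $y) -- go fix it now"
}}}
```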

In conjunction with that, you want to give the user the capability to fix these errors without causing additional errors that can only be found with a full-chip run. Calibre nmDRC gives you an incremental DRC capability. Users can go off, make a change, and tell Calibre to run the entire chip. Calibre will determine what changes have been made and check, within full-chip context, a halo around that region defined by the largest area of influence of any particular rule. So you might have this two-hour runtime. You execute that and get your first error, fix it, and rather than having to run for another two hours, you can do a 20-second run.
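The halo idea can be sketched in a few lines of Tcl. This is a conceptual illustration of the recheck window, not Calibre's internals; the coordinates and rule distances are made up.

```tcl
# Conceptual sketch of an incremental recheck window: expand the bounding box
# of the edit by the largest distance over which any rule in the deck can
# interact, and recheck only that region in full-chip context.
proc halo_region {editBBox ruleDistances} {
    lassign $editBBox x1 y1 x2 y2
    set maxDist 0
    foreach d $ruleDistances {
        if {$d > $maxDist} { set maxDist $d }
    }
    return [list [expr {$x1 - $maxDist}] [expr {$y1 - $maxDist}] \
                 [expr {$x2 + $maxDist}] [expr {$y2 + $maxDist}]]
}

# Example: a 2x2 um edit with rule interaction distances up to 1.5 um becomes
# a 5x5 um recheck window instead of a full-chip run.
puts [halo_region {10.0 10.0 12.0 12.0} {0.13 0.20 1.50}]
```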

Previously you might have an overnight run, debug that for perhaps half a day, and start another full-chip run. If you did everything right, you would be done; if you didn't, you would have to run yet another cycle. It could take days to weeks. With the new version, between the collapse of runtime, the dynamic results visualization, and the incremental capability, someone can realistically finish physical verification in two runs, which means you are looking at 4 to 6 hours.

We have also put in a new front end. This is essentially a Tcl front end. Conceptually, it takes you from the equivalent of an assembly language for geometry to a higher-level programming language for geometry. As an example, a set of rules describing a metal stack that takes 500 lines in the old version takes only about 64 lines in the new one. At some level it is not going to matter how much typing you have to do. The important point is that because it is a high-level programming language, your ability to support development as you iterate on and clean up these rules, as well as port them to the next process, is significantly enhanced.
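The kind of compression described here comes from being able to loop and parameterize. The sketch below is generic Tcl that prints placeholder checks; the emit_rule command and the rule text are invented for illustration and are not Calibre's actual rule syntax.

```tcl
# Why a Tcl front end shrinks a rule deck: repetitive per-layer checks become
# a loop. emit_rule and the rule text are placeholders, not Calibre syntax.
proc emit_rule {name description} {
    puts "RULE $name : $description"
}

set metal_layers {metal1 metal2 metal3 metal4 metal5 metal6}
set min_space    {0.13   0.14   0.14   0.14   0.42   0.42}
set min_width    {0.12   0.14   0.14   0.14   0.40   0.40}

# One loop body stands in for a hand-written block of checks that used to be
# copied and edited for every layer of the metal stack.
foreach layer $metal_layers space $min_space width $min_width {
    emit_rule "${layer}.S.1" "spacing of $layer must be >= $space um"
    emit_rule "${layer}.W.1" "width of $layer must be >= $width um"
}
```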

Is there any way to take the old rules and automatically convert them to this new high level language?
No. It's from here forward. Whenever people get done and have what is called a golden rule file, there is little impetus to go off and change it. Even if it were an automatic process, the qualification time would be just too long. This is something people will do going forward.

Can Calibre nmDRC be used with design systems from other vendors?
I have often described Calibre as the Switzerland of EDA. Because Mentor does not participate in the place and route market, we integrate with virtually every design creation tool on the planet. What is different with Calibre nmDRC is that we have also added the capability to do native database read and write from OpenAccess as well as from the MilkyWay environment.

So signoff has gone from a simple DRC check to a multimode analysis concentrating on manufacturability. What are some of the problems in doing this?
The DRC metal spacing rule for 90 nm would be 130 nm metal1 spacing. The DFM rule is that if you did this at 200 nm, the yield is better. This is the equivalent of saying that if you throw a 180 nm design against this process, it will yield better than a 90 nm design. Consider the typical results of a DRC run. Because you are doing verification at all the integration points along the way, it is rare to see more than a handful of errors, especially after you have screened them for hierarchical redundancy. That is fairly easy to conceptualize - going off to your layout editor and fixing those. The traditional result of a recommended rule file, because it is essentially a recommendation that you design to a larger process, is thousands and thousands of errors. You might ask yourself, "What does a designer do with this?" They close that window and pretend they never saw it, because there is nothing you can do with that level of data.

One of the other things that makes this challenging is that as yield detractors become more complex, it can become more and more difficult to express them as a geometrical check. What happens instead is that we move them to a model-based check. The two major ones to date are critical area analysis, where you include a model of your defect density as you do this yield check, and litho-friendly design, a tool we launched back in the spring that lets you simulate the lithographic environment of the manufacturing line to determine complex yield effects that are impossible to code geometrically. The good news is that these are a much terser and more reliable way to describe the issue. The difficult part is that, because they are not a simple measurement but essentially a simulation, they are far more computationally complex than any type of geometry check.
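To make the contrast with a geometric check concrete, here is a much-simplified Tcl sketch of the kind of computation a critical-area check performs: a textbook Poisson yield model with a 1/x^3 defect-size distribution. The functions and constants are invented for illustration and are not the models shipped with the tool.

```tcl
# Simplified model-based check: average the layout's critical area over a
# 1/x^3 defect-size distribution, then apply a Poisson yield model
# Y = exp(-D0 * Aavg). Constants and the toy critical-area function are
# illustrative only.
proc critical_area_yield {criticalAreaFn d0 x0 xmax} {
    set step 0.001
    set a_avg 0.0
    for {set x $x0} {$x < $xmax} {set x [expr {$x + $step}]} {
        set acr [{*}$criticalAreaFn $x]                     ;# critical area at defect size x
        set pdf [expr {2.0 * $x0 * $x0 / ($x * $x * $x)}]   ;# defect-size probability density
        set a_avg [expr {$a_avg + $acr * $pdf * $step}]
    }
    return [expr {exp(-$d0 * $a_avg)}]
}

# Toy critical-area curve: shorts become possible once a defect is larger
# than the 0.13 um spacing, and the susceptible area grows from there.
proc toy_acr {x} { expr {$x > 0.13 ? ($x - 0.13) * 1000.0 : 0.0} }

puts [format "Predicted yield for this block: %.3f" \
         [critical_area_yield toy_acr 0.05 0.05 2.0]]
```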

People in the industry often talk about the need to do via doubling. Consider a really simple piece of layout with two single vias. What are the options? You can take one and double it, or you can take the other one and double it with an x clip. You can extend the two line ends so that your enclosure coverage is a little greater. Or you could make both metals a little wider to expand the enclosure around the via in three directions. Or some combination of these. The fundamental question becomes: which one are you supposed to do? That is not even beginning to address how you trade this off against the ability to spread those wires and make those metals fatter. It becomes very complex to look at. If you are just going off and doing via doubling, how do you figure out, in the context of all the recommended rules, critical area analysis, all the litho analysis, and all the other analyses you need to apply, what is the best way to deal with the problem? If you go sequentially, it is difficult to determine whether you are actually moving to a point where you are getting better or worse yield. There was a paper that got a lot of play at PSY where someone did a very simple via doubling experiment and showed they could actually get lower yield by doubling vias, because they ran up against other effects. The goal is that you want to end up with one of the gold dots, not one of the green.
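One way to picture the decision problem is below: each candidate fix is scored jointly against several analyses rather than one at a time, since a fix that helps via coverage can hurt critical area or litho. Everything in this Tcl sketch - the candidate names and the yield-impact numbers - is invented for illustration.

```tcl
# Hypothetical joint scoring of candidate fixes for one via, across several
# analyses at once. All names and yield-impact numbers are invented.
set candidates {
    {double_via_left    {via 0.8  critical_area -0.1  litho  0.0}}
    {double_via_right   {via 0.8  critical_area -0.3  litho -0.2}}
    {extend_line_ends   {via 0.3  critical_area  0.0  litho -0.4}}
    {widen_both_metals  {via 0.4  critical_area -0.2  litho  0.1}}
}

set bestFix ""
set bestScore -1e9
foreach cand $candidates {
    lassign $cand name impacts
    set score 0.0
    foreach {metric delta} $impacts {
        set score [expr {$score + $delta}]   ;# combine impact across analyses
    }
    if {$score > $bestScore} {
        set bestScore $score
        set bestFix $name
    }
}
puts [format "Best combined option: %s (score %.2f)" $bestFix $bestScore]
```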
