September 26, 2005
New Physical Verification System from Cadence


by Jack Horgan - Contributing Editor


There is also the complexity of the rules themselves. There are literally thousands of rules in one of these rule decks. One reason is that the languages used to express these rules are low-level languages, somewhat like assembly code: Boolean operations, edge-to-edge references, spacing checks. There isn't much in the way of high-level abstraction or expression of intent. One particular check, say a latchup check, might take 500 to 600 lines of DRC code to express. That is a problem for the person who has to write the rules, and it is also a problem for the person who is looking at errors found by that rule and has to refer back to the code to figure out what broke. And certainly not least, the rework cycle is quite long and unfortunately wholly serialized, meaning that when a job is run, you have to wait for the end of the job before you can look at the errors, start to understand what was found, and begin to fix them, grade them or waive them if they were false errors.
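To give a feel for the "assembly-like" character described above, here is a minimal Python sketch. The layer data and primitive functions are invented for illustration and are not the syntax of any real rule deck language; the point is simply that even a trivial check is a chain of low-level geometric steps with no statement of intent, and a real latchup check strings together hundreds of such steps.

# Hypothetical illustration only: layer data and primitives are invented.
# Shapes on a layer are simplified here to 1-D intervals (start, end) in microns.
NWELL = [(0.0, 2.0), (5.0, 7.0)]
PDIFF = [(0.5, 1.5), (5.2, 6.8)]

def bool_and(layer_a, layer_b):
    """Boolean AND of two layers (interval intersection)."""
    out = []
    for a0, a1 in layer_a:
        for b0, b1 in layer_b:
            lo, hi = max(a0, b0), min(a1, b1)
            if lo < hi:
                out.append((lo, hi))
    return out

def spacing_errors(layer, min_space):
    """Edge-to-edge spacing check between neighbouring shapes on one layer."""
    shapes = sorted(layer)
    return [(e1, s2) for (s1, e1), (s2, e2) in zip(shapes, shapes[1:])
            if s2 - e1 < min_space]

# Two primitive steps of what, in a real deck, can grow to 500-600 lines:
diff_in_well = bool_and(NWELL, PDIFF)        # derived layer
print(spacing_errors(NWELL, min_space=4.0))  # -> [(2.0, 5.0)]: wells too close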


What is really new with this physical verification system?

The thing we have done that is truly remarkable here is the optimizing compiler. We have created a software architecture that is highly modular in nature. At the beginning of its execution, the first thing this optimizing compiler does is look at three different things, simultaneously analyze them, and optimize that particular run for those three entities. They are the rule deck, the incoming data from OpenAccess (we also support a GDSII stream, as you might imagine), and the actual computer resources available. For the rule deck, it examines the dependency tree and analyzes whether the deck can be broken down into independent sub-decks that don't require any cross calculations between sections. For the design data, it looks at the hierarchy of the design, the replication, and the use of arrayed structures inside the chip. Finally, it looks at the available machines on your server array or on your network; this part of the flow actually stops and does a quick performance check to ascertain exactly what performance levels the machines are capable of. It takes these three factors, creates an optimized run deck, if you will, for the particular job, then launches it and spreads it across the server array.
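As a rough illustration of the kind of planning step described here, consider the Python sketch below. The data structures, function names and scheduling heuristic are my own assumptions for illustration, not Cadence's implementation: it groups rules into independent sub-decks from a dependency graph and then spreads them across servers weighted by a quick benchmark score.

# Hedged sketch: partition a rule deck into independent sub-decks
# (connected components of the rule dependency graph) and greedily
# assign them to machines in proportion to a measured speed score.
from collections import defaultdict
import heapq

def independent_subdecks(rules, depends_on):
    """Group rules into sub-decks with no cross-dependencies.
    rules      : list of rule names
    depends_on : dict rule -> set of rules it shares derived layers with
    """
    graph = defaultdict(set)
    for r, deps in depends_on.items():
        for d in deps:
            graph[r].add(d)
            graph[d].add(r)
    seen, subdecks = set(), []
    for r in rules:
        if r in seen:
            continue
        stack, group = [r], []
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            group.append(cur)
            stack.extend(graph[cur] - seen)
        subdecks.append(group)
    return subdecks

def schedule(subdecks, cost, machine_speed):
    """Greedy placement: give the next-heaviest sub-deck to the machine
    projected to finish its current load soonest."""
    heap = [(0.0, m) for m in machine_speed]   # (projected finish time, machine)
    heapq.heapify(heap)
    plan = defaultdict(list)
    for deck in sorted(subdecks, key=cost, reverse=True):
        finish, m = heapq.heappop(heap)
        plan[m].append(deck)
        heapq.heappush(heap, (finish + cost(deck) / machine_speed[m], m))
    return plan

# Toy run: rules A and B share derived layers, C is independent.
decks = independent_subdecks(["A", "B", "C"], {"B": {"A"}})
plan = schedule(decks, cost=len, machine_speed={"srv1": 1.0, "srv2": 2.0})
print(decks, dict(plan))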


The net result of this architecture is that we are able to deliver remarkable levels of performance improvement. The industry-standard solution from one of our competitors sets the baseline, and our customers have told us that if they are going to consider a change, they need to see a performance improvement in the neighborhood of 10X.


Mark shared with me a graph showing the speedup for three different designs at 16, 32 and 64 CPUs. The first two designs were 130 nm and the third was a 90 nm design, a big processor. The GDSII file sizes were 396 MB, 3,947 MB and 662 MB. The times for a single CPU to process the deck were 121 min, 447 min and 750 min. The performance scaled linearly for all three designs.


These are pretty big chips, by the way, not little things. The point is that with 16 CPUs we are able to meet or beat the 10X requirement. A job that traditionally runs overnight, one that you launch at 5 PM and come in the next morning to look at the results, you will now be able to run over lunch. As one of our associates said, it had better be a short lunch. If you happen to have more computer resources available, the environment scales linearly, as shown by the performance of the 32- and 64-CPU configurations.
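As a back-of-the-envelope sanity check of the 10X claim (my own arithmetic, using only the single-CPU run times quoted above and assuming the reported linear scaling holds exactly):

# Single-CPU run times from the graph above; design labels are mine.
single_cpu_minutes = {"130 nm design A": 121, "130 nm design B": 447, "90 nm processor": 750}
CPUS = 16
for name, t in single_cpu_minutes.items():
    print(f"{name}: {t} min on 1 CPU -> ~{t / CPUS:.0f} min on {CPUS} CPUs")
# Under linear scaling, the overnight 750-minute job drops to roughly
# 47 minutes, comfortably past the 10X threshold customers asked for.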


The basic idea is linear scalability, massive parallelism and an optimizing compiler that simultaneously optimizes the rule deck, the incoming data stream and the available resources.


What is the relative performance with only one CPU?

The most commonly available industry solution (Mentor Graphics Calibre) is the baseline. Mark showed me another chart comparing CPU time and memory consumption for six different 90 nm customer designs. In all but one case, Cadence's new physical verification system tied or beat the baseline CPU time. Memory consumption was also very competitive, no more than 20% greater in any case.


The press release contains the following quote from Shoji Ichino, general manager, LSI Technology Development at Fujitsu:


"The Cadence Physical Verification System is the leading solution that addresses Fujitsu's needs for advanced sub-90-nanometer designs and that also delivers the performance scalability we require to reach 65 nanometers and below. The system offers outstanding performance, concurrent results reporting, and superior integration with the Virtuoso platform and OpenAccess. The Cadence Physical Verification System is in production use by our worldwide design teams for 90- and 65-nanometer physical verification and its extensibility will be used in the future to address manufacturing and yield optimization."


This underscores the fact that although we're not releasing this product for volume distribution right now, it is in production use. Fujitsu, our development partner, is using it in production at both 65 and 90 nm and achieving splendid results at this point.


I understand that PVS is a highly modular environment.

By modular in nature we mean that we have a whole variety of different engines in our architecture. As the optimizing compiler works through the rule deck, it can call a number of different execution engines depending upon the nature of the rule and the nature of the incoming data stream it is looking at. We have high-performance flat engines, hierarchical engines, and a whole family of what we are calling "dedicated" engines, which are special-purpose executables targeted specifically at some of the ugliest and most complex checks we described earlier, like latchup checks, antenna checks, width-dependent checks and density gradient checks, as well as programmatic integration with our RC extraction, our litho solutions, our RET product family and mask data prep. This thing was architected so that we will be able to live with it for 15 years. As you can imagine, this is a pretty big project; we can't afford to do it every year. It was designed for modularity, extensibility, and scalability, not just in performance but in the ability to be internally enhanced, extended and improved.
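As a loose illustration of what calling different execution engines might look like, here is a short Python sketch. The engine names and selection heuristics are assumptions made for this example; the article does not disclose the product's actual engine set or dispatch criteria.

# Illustrative-only dispatcher: route each rule to a flat, hierarchical,
# or dedicated engine based on the rule's character and the design data.
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    kind: str            # e.g. "spacing", "latchup", "antenna", "density_gradient"
    touches_arrays: bool = False

def pick_engine(rule: Rule, design_is_hierarchical: bool) -> str:
    dedicated = {"latchup", "antenna", "width_dependent", "density_gradient"}
    if rule.kind in dedicated:
        return "dedicated:" + rule.kind     # special-purpose executable
    if design_is_hierarchical and rule.touches_arrays:
        return "hierarchical"               # exploit cell replication and arrays
    return "flat"                           # brute-force, high-throughput engine

deck = [Rule("M1_SPACE", "spacing"),
        Rule("LATCHUP_GUARD", "latchup"),
        Rule("M2_DENSITY", "density_gradient", touches_arrays=True)]
for r in deck:
    print(r.name, "->", pick_engine(r, design_is_hierarchical=True))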


It is tightly integrated with Encounter, our digital IC design tool set, as well as with Virtuoso, our custom IC design tool set.


How would you summarize?

The net result is that this solution dramatically reduces the time it takes to run a full-chip, signoff-level DRC. We are able to scale the performance linearly up to 100+ CPUs with no sacrifice whatsoever in accuracy. In fact, there are a lot of cases where we are able to improve accuracy: the dedicated engines can use much more accurate algorithms, as opposed to the low-level, assembly-like checking commands I mentioned earlier. This is integrated within all the standard Cadence flows, our custom IC and digital flows, and in the end it should provide our customers with a much higher level of performance and a lower cost of ownership and support.


How is this solution packaged?

The packaging comes in three different configurations; you might think of them as L, XL and GXL. There are a number of different options depending upon your level of interest in some of the modules I described, like RET and mask data prep (MDP); you might or might not want to use our solutions in all of those categories. It is being broken up so that you can pick and choose components. The primary packages will be centered around a baseline version, an extended-performance version, and a version that is scalable to a virtually unlimited number of CPUs and includes a set of advanced technologies for yield enhancement and optimization.


What operating systems does this support?

Right now, pretty much every Linux variant you can think of, on 32-bit and 64-bit machines, as well as Sun Solaris. All the platforms you would expect to find at any of our major semiconductor design customers.


-- Jack Horgan, EDACafe.com Contributing Editor.

