September 26, 2005
New Physical Verification System from Cadence

by Jack Horgan - Contributing Editor
Posted every four weeks or so, EDA WEEKLY delivers to its readers information on the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Feature articles on selected public or private EDA companies are presented frequently. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us.


On September 12, 2005 Cadence introduced its Physical Verification System (PVS) for rapid turnaround of DRC and LVS. The system's massively parallel approach enables multiple design turns per working day, even for the largest designs at 90 nanometers, 65 nanometers and below that would otherwise require overnight or multi-day runs. Cadence claims that PVS delivers near-linear performance scaling across very large numbers of CPUs and, compared with conventional tools, significantly decreases physical verification cycle time as well as the overall number of cycles required. Cadence had lost its leadership position in DRC/LVS to Mentor Graphics some time ago. The new offering is based in part on technology acquired from eTop, a Beijing-based EDA firm. I had an opportunity to discuss it with Mark Miller, Cadence VP of Business Development for DFM.

How does DFM fit into Cadence from an organizational point of view?

The big split at the top is between the field sales channel side and the product development side. The whole branch that does product development is called PRO, the Product and Technology Organization. That's headed by Jim Miller, no relation to me (at least not that I'm willing to admit). He has a couple of operating groups plugging into him, including the Virtuoso team, the Encounter team and the DFM team. The DFM business unit is run by Mark Levitt, the VP of DFM, and I work for him.

Is physical verification a subset of DFM and if so what percentage (some, most, all)?

It's a significant portion. At Cadence it includes all of the RC extraction tools for high precision transistor level extraction, power analysis technology, all of the Voltage Storm stuff, as well as all the physical verification and yield optimization technologies.

What would be considered part of DFM but not part of physical verification?

Basically anything that would be used for final chip analysis, optimization, signoff and tapeout. All the signoff level accuracy tools. Analysis and extraction teams all report through here. Also the lithography optimization and treatment.

What functions are included in physical verification?

Generally DRC (Design Rule Check), LVS (Layout versus Schematic) and EDRC (Electrical Design Rule Check).

What is the importance of these applications, and are they becoming even more important nowadays?

Traditionally these technologies have been used to sign off literally every design as it approaches completion of the design phase, before handing it over to manufacturing. This stage of design rule checking ensures that the physical implementation of the design is indeed manufacturable. In other words, it ensures that all the tolerances, the spacing and the edge-to-edge relationships between all of the geometries in the database are at or above the minimums that the manufacturing technology you are targeting can support. It is a crucially important step, and it is done for virtually every design that takes place. The complexity of that task grows exponentially as you move from 180 nm to 130 nm to 90 nm to 65 nm. The rule deck, the collection of rules that you check the design against, grows dramatically in size, and the rules themselves become much more complicated. As a result there has been a big increase in run time, in CPU time consumed, and in the number of iterations that design teams have to go through to reach DRC closure on their designs.
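To make the edge-to-edge idea concrete, here is a minimal Python sketch of a single flat spacing check. The Rect class, the layer name, the 0.14-micron rule value and the function names are illustrative assumptions on my part; real rule decks are written in vendor-specific languages and contain hundreds of interacting rules per layer.

# Minimal sketch of one flat DRC spacing check -- illustrative only, not any
# vendor's rule-deck format. The Rect class, the layer name and the 0.14 um
# minimum-spacing value are hypothetical.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Rect:
    layer: str
    x1: float
    y1: float
    x2: float
    y2: float

def spacing(a: Rect, b: Rect) -> float:
    """Edge-to-edge separation of two axis-aligned rectangles (0 if they touch or overlap)."""
    dx = max(b.x1 - a.x2, a.x1 - b.x2, 0.0)
    dy = max(b.y1 - a.y2, a.y1 - b.y2, 0.0)
    return (dx * dx + dy * dy) ** 0.5

def check_min_spacing(shapes, layer, min_space):
    """Return every pair of same-layer shapes that sit closer than the rule allows."""
    on_layer = [s for s in shapes if s.layer == layer]
    violations = []
    for a, b in combinations(on_layer, 2):
        gap = spacing(a, b)
        if 0.0 < gap < min_space:   # touching/overlapping shapes would be a different rule
            violations.append((a, b, gap))
    return violations

if __name__ == "__main__":
    layout = [
        Rect("M1", 0.0, 0.0, 1.0, 0.2),
        Rect("M1", 0.0, 0.3, 1.0, 0.5),   # only 0.10 um from the shape above it
        Rect("M1", 0.0, 1.0, 1.0, 1.2),
    ]
    for a, b, gap in check_min_spacing(layout, "M1", min_space=0.14):
        print(f"spacing violation: {gap:.2f} um < 0.14 um")

Even this toy check is quadratic in the number of shapes on a layer, which hints at why production engines need smarter data structures, hierarchy and many CPUs as designs and rule decks grow.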

Where does physical verification fit in the design flow?

There are sort of two variations on the theme. First, as people are creating data, as they are building cell libraries or blocks, or placing a couple of blocks together and wiring them up, they need to incrementally check their work as they create new data. That's what is generally thought of as incremental or interactive DRC. The other variation on the theme is when you start getting large quantities of data, for example a large block that has just finished being placed and routed inside a P&R system, and now you want to add the cell expansion from the cell library to the place-and-route data and run a high-precision DRC on that entire block. That might run for hours, and in some cases for days if you are trying to do the whole chip. That second category is large-job or whole-chip batch physical verification, and that's really where this new product we're talking about today is targeted. It's aimed at that large-block or full-chip physical verification challenge, the one that has grown explosively in terms of its complexity and the amount of time necessary to perform it.

What is the source of the rule deck?

In the end, the design teams themselves, as it turns out. But in general the base rule deck comes from whoever is going to be manufacturing the chip. If you're inside an IDM like Intel or STMicro designing the chip internally, you're going to get it from the CAD organization that supports the product. If you're manufacturing at a foundry like TSMC, UMC or Chartered, you're going to get the rule deck from your foundry support team. In the case of a fabless company that's using a foundry to manufacture the design, they will often augment the rule deck with a lot of internal expertise; they may feel strongly that they can do more, or do an even better job. Really, in the end, if you do this thoroughly you ensure that your design performs at process-nominal yield and process-nominal parametric values. But if your team happens to be extra smart or extra experienced, they may add secret-sauce techniques or tweaks to the deck, on top of the baseline, that allow them to squeeze an extra few points of yield out of it. At least that's the theory.

Is there any benefit from prior experience, prior runs or hierarchical organization, or does one start from scratch every time a physical verification run is made?

By nature these runs are often somewhat incremental, meaning that each time you finish a run you see results. You perform a series of checks and then make changes or modifications to the design to fix the errors that are found. If you've made a localized change, you may want to run just one or two cells to make sure you've got it right before you launch the big batch run again. So the answer is sort of both. In the end you're still going to want to run the entire design in batch from top to bottom, but people do use it incrementally on occasion to perform checks like that. There are also more traditional tools from Cadence inside Virtuoso that provide the capability to do small incremental checks.
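As a purely hypothetical illustration of that incremental-then-batch pattern (not Cadence's actual flow), the sketch below re-checks only the cells touched by a localized edit before launching the full run; the cell names, the dirty-cell bookkeeping and the run_drc_on_cells stand-in are all assumptions.

# Hypothetical sketch of the incremental-then-batch pattern described above.
# run_drc_on_cells is a stand-in for a real DRC engine invocation; the cell
# names and the dirty-cell tracking are invented for illustration.

def run_drc_on_cells(cells):
    """Pretend DRC engine call: just report what would be checked."""
    print(f"checking {len(cells)} cell(s): {sorted(cells)}")
    return []   # pretend no violations were found

all_cells = {"sram_bitcell", "sram_array", "alu", "io_pad", "chip_top"}
dirty_cells = {"sram_bitcell"}   # cells edited since the last clean run

# Quick incremental pass over just the modified cells...
run_drc_on_cells(dirty_cells)

# ...then the full whole-chip batch run from top to bottom before signoff.
run_drc_on_cells(all_cells)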

What was the motivation for developing this new product?

With batch physical verification, the rule decks get much larger and the individual rules get much more complicated with each process-node transition.

This whole area of design rule verification and DRC tools has been around for a while. It was part of what originally formed Cadence. We kind of invented this game when ECAD and SDA Systems merged to form Cadence in 1988; ECAD was basically a company that wrote just DRC tools. We've had a history as long as anybody in the industry with this technology in particular. Over the years, with Moore's Law and all, design sizes have increased rather dramatically on a regular schedule, and the type of jobs these verification engines are expected to do has grown.

One of the first fundamental breakthroughs, maybe 10 to 15 years ago, was when people started to process their designs hierarchically. Originally all of this was done flat, meaning that, just like every other process in the design, you looked at one layer at a time and processed all the geometry as one big flat pile of polygons, rectangles and pads. That worked great until designs reached a certain size and the rule decks increased a little bit in complexity. Suddenly you were looking at untenably long run times, multiple days in a lot of cases, multiple weeks in the worst cases. As you know, there are a lot of duplicate structures in a big chip design. If you're looking at a memory chip, every one of the core memory cells is a replication of the one next to it. So the tools would leverage that replication and take shortcuts, if you will, to dramatically shorten the run time.

That worked great until about 5 years ago, when we started running into subwavelength-lithography-related rules, meaning we were trying to print features smaller than the wavelength of the light we print them with. Now we are using 193 nm light. The issue is that lithography effects are all about who's next to you, who's your next-door neighbor, because it is frequency-domain analysis. The relationships that cause violations depend on whether there is another object nearby, meaning they are all about interference patterns. That, as it turns out, broke hierarchical processing. You go to all the trouble of breaking the design down hierarchically to get to the bottom level of data you are trying to process, and then realize that there is a piece of data right next door, from a different branch of the hierarchy, so you have to throw the hierarchy away, flatten it, and process it the old way anyway in order to get the right answers. That's one of the reasons run times have exploded, kind of gone in a very ugly direction.

The second reason is that the existing solutions haven't really taken advantage of multiprocessing architectures, specifically distributed processing. About 10 to 15 years ago, SMP, or symmetric multiprocessing, and multithreading were the vogue thing to do; if you look at the architectures of all the Sun machines being sold at that point in time, they were 4 to 8 CPUs with one common memory and one common disk subsystem that they would talk through. That multithreading, SMP approach is great at accelerating one particular algorithm up to a point, but unfortunately it runs out of gas at about 8 to 10 CPUs because of the shared memory, the shared disk subsystem and the interprocess communication.
There is a different architecture that has come to prominence in the last several years, basically grid-oriented computing: blade servers, with hundreds to thousands of them available. You basically break the design down into tiles or windows and process those in parallel. The solutions that are out there right now really haven't taken advantage of this most recent approach of distributing the processing of the design.
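The window-plus-halo, distribute-across-CPUs idea can be sketched in a few lines of Python. This is purely illustrative and not Cadence's implementation: the Rect type, the check_window helper, the toy spacing rule and the use of a local process pool in place of a compute grid are all assumptions.

# Illustrative sketch of tiled, distributed DRC -- not Cadence's implementation.
# The chip is cut into windows, each window is expanded by a halo so that
# neighbor-dependent (proximity) rules still see shapes just outside it, and
# the windows are checked in parallel.
from concurrent.futures import ProcessPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x1: float
    y1: float
    x2: float
    y2: float

def check_window(args):
    """Run a toy minimum-spacing check on the shapes overlapping one halo-expanded window."""
    shapes, (wx1, wy1, wx2, wy2), min_space = args
    local = [s for s in shapes
             if s.x2 > wx1 and s.x1 < wx2 and s.y2 > wy1 and s.y1 < wy2]
    violations = 0
    for i, a in enumerate(local):
        for b in local[i + 1:]:
            dx = max(b.x1 - a.x2, a.x1 - b.x2, 0.0)
            dy = max(b.y1 - a.y2, a.y1 - b.y2, 0.0)
            if 0.0 < (dx * dx + dy * dy) ** 0.5 < min_space:
                violations += 1
    return violations

def tiled_drc(shapes, chip_w, chip_h, tile, halo, min_space, workers=4):
    """Split the chip into tiles, expand each by the halo, and farm the tiles out to a pool."""
    jobs = []
    x = 0.0
    while x < chip_w:
        y = 0.0
        while y < chip_h:
            jobs.append((shapes, (x - halo, y - halo, x + tile + halo, y + tile + halo), min_space))
            y += tile
        x += tile
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(check_window, jobs))

if __name__ == "__main__":
    layout = [Rect(0.0, 0.0, 1.0, 0.2), Rect(0.0, 0.3, 1.0, 0.5)]   # 0.10 um apart
    total = tiled_drc(layout, chip_w=2.0, chip_h=2.0, tile=1.0, halo=0.2, min_space=0.14)
    print("violations found (duplicates across overlapping halos not yet merged):", total)

In a real system the halo width would be set by the longest interaction distance in the rule deck, the tiles would be dispatched to a farm of blade servers rather than a local process pool, and violations found twice in overlapping halos would be merged afterwards; that is what makes near-linear scaling across large numbers of CPUs plausible.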


-- Jack Horgan, Contributing Editor.



