September 26, 2005
New Physical Verification System from Cadence
the code to figure out what broke. And certainly not least, the rework cycle is quite long and, unfortunately, wholly serialized: when a job is run, you have to wait for the end of the job before you can look at the errors, start to understand what was found, and begin to fix them, grade them, or waive them if they were false errors.
What is really new with this physical verification system?
calculations between these sections of the deck, the hierarchy of the design, the replication, the use of arrayed structures inside the chip and, finally, the available machines on your server array or on your network. This part of the flow actually stops and does a quick performance check to ascertain exactly what performance levels the machines are capable of. It takes these three factors, creates an optimized run deck, if you will, for the particular job, then launches it and spreads it across the server array.
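The scheduling idea described above can be sketched in a few lines. This is a hypothetical illustration, not Cadence's actual implementation: each machine gets a relative speed score from a quick benchmark, and rule-deck checks (with estimated costs) are assigned greedily to whichever machine would finish its current load soonest. The function and data names are invented for the example.

```python
# Hypothetical sketch of weighted work partitioning across a heterogeneous
# server array, in the spirit of the flow described in the article.
import heapq


def partition_jobs(jobs, machines):
    """Greedy longest-processing-time-first assignment.

    jobs:     dict of check name -> estimated cost (arbitrary units)
    machines: dict of machine name -> relative speed (higher = faster),
              as measured by the quick performance check
    Returns a dict of machine name -> list of assigned check names.
    """
    # Min-heap of (projected finish time, machine name).
    heap = [(0.0, name) for name in machines]
    heapq.heapify(heap)
    assignment = {name: [] for name in machines}
    # Place the most expensive checks first to balance the load.
    for job, cost in sorted(jobs.items(), key=lambda kv: -kv[1]):
        finish, m = heapq.heappop(heap)
        assignment[m].append(job)
        # A faster machine accrues less wall-clock time per unit of cost.
        heapq.heappush(heap, (finish + cost / machines[m], m))
    return assignment
```

A usage example: with checks costing 10, 6, and 4 units and one machine twice as fast as the other, the fast machine receives the largest and smallest checks while the slow one takes the middle check, roughly equalizing finish times.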
The net result of this architecture is that we are able to give you remarkable levels of performance improvement. The industry-standard solution from one of our competitors sets the baseline, and our customers have told us that if they are going to consider a change, they need to see a performance improvement in the neighborhood of 10X.
Mark shared with me a graph showing the speedup for three different designs on 16, 32, and 64 CPUs. The first two designs were 130 nm and the third was a 90 nm design, a big processor. The GDSII file sizes were 396 MB, 3,947 MB, and 662 MB. The times for a single CPU to process the deck were 121 min, 447 min, and 750 min. The performance scaled linearly for all three designs.
These are pretty big chips, by the way, not little things. The point is that with 16 CPUs we are able to meet or beat the 10X requirement. A job that traditionally runs overnight, one that you launch at 5 PM and come in the next morning to look at the results, can now be done over lunch. As one of our associates said, it had better be a short lunch. If you happen to have more compute resources available, the design environment scales linearly, as shown by the performance of the 32- and 64-CPU configurations.
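The "over lunch" claim follows directly from the linear scaling the article asserts. A small sketch of the arithmetic, using the single-CPU times quoted above and assuming ideal linear speedup (an idealization, not a measured result):

```python
# Illustrative arithmetic only: projected wall-clock time under the
# perfect linear speedup described in the article.
def runtime_minutes(single_cpu_min, n_cpus):
    """Projected runtime if the job scales linearly across n_cpus."""
    return single_cpu_min / n_cpus


# Single-CPU times quoted for the three designs.
for t in (121, 447, 750):
    print(f"{t} min on 1 CPU -> about {runtime_minutes(t, 16):.0f} min on 16 CPUs")
```

Even the largest job, 750 minutes (12.5 hours) on one CPU, drops to about 47 minutes on 16 CPUs, which is why a traditionally overnight run fits into a (short) lunch break.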
The basic idea is linear scalability, massive parallelism, and an optimizing compiler that simultaneously optimizes the rule deck, the incoming data stream, and the available resources.
How is the relative performance with only one CPU?
The most common available industry solution (Mentor Graphics Calibre) is the baseline. Mark showed me another chart showing CPU time and memory consumption for six different 90 nm customer designs. In all but one case, Cadence's new physical verification system tied or beat the baseline CPU time. Memory consumption was also very competitive, no more than 20% greater in any case.
The press release contains the following quote from Shoji Ichino, general manager, LSI Technology Development at Fujitsu:
"The Cadence Physical Verification System is the leading solution that addresses Fujitsu's needs for advanced sub-90-nanometer designs and that also delivers the performance scalability we require to reach 65 nanometers and below. The system offers outstanding performance, concurrent results reporting, and superior integration with the Virtuoso platform and OpenAccess. The Cadence Physical Verification System is in production use by our worldwide design teams for 90- and 65-nanometer physical verification and its extensibility will be used in the future to address manufacturing and yield optimization."
This underscores the fact that although we're not releasing this product for volume distribution right now, it is in production use. Fujitsu, our development partner, is using it in production at both 90 and 65 nm and achieving splendid results at this point.
I understand that PVS is a highly modular environment
as well as programmatically integrating with our RC extraction, our litho solutions, our RET product family, and mask data prep. This thing was architected in a way that we would be able to live with it for 15 years. As you can imagine, this is a pretty big project; we can't afford to do this every year. It was designed for modularity, extensibility, and scalability, not just performance scalability.
Internal enhancements can be extended and improved.
Tightly integrated with Encounter, our digital IC design tool set, as well as with Virtuoso, our custom IC design tool set.
How would you summarize?
How is this solution packaged?
What operating systems does this support?
Right now, pretty much every Linux variant you can think of, both 32-bit and 64-bit Linux machines, as well as Sun Solaris. All the stuff you would expect to find at any one of our major semiconductor design customers.
-- Jack Horgan, EDACafe.com Contributing Editor.