May 28, 2007
Timing and Signal Integrity – CLK Design Automation


by Jack Horgan - Contributing Editor

Have you benchmarked performance on real designs?


Yes.  One design is a 60 million gate full chip: 32 GB of parasitic files compressed, 9.1 million instances, with some very large macros.  On 8 CPUs this takes a little under 2 hours.  On 16 CPUs it takes around an hour.  The other tool today takes about 18 hours to run.  That means that designers' behavior can fundamentally change.  You can go through several design revisions per day as you try to get to design closure, not three design revisions per week with the other tool.  The other key thing is incremental analysis.  On a 6.5 million gate block with 1.5 million instances, it took 11 minutes on four CPUs to do the baseline run.  They swapped out 50,000 cells to check for some voltage variation.  We can now run that in around a minute.  If you were to change a cell or something like that, the answer would be back in seconds.  Again, that is a major breakthrough in behavior, because the current flow says that whether you change 5 cells or 50,000 cells, you wait 3.5 hours.  With CLK you can make that change and get an answer before you can get back with a cup of coffee.
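As a rough editorial illustration of what those runtimes suggest, the sketch below fits the two quoted figures, roughly 2 hours on 8 CPUs and 1 hour on 16, to a simple Amdahl's-law model to estimate the workload's serial fraction. The numbers are the round figures from the interview, not measured benchmark data.

```python
# Back-of-envelope scaling check using Amdahl's law: T(n) = T1 * (s + (1 - s) / n),
# where s is the serial fraction and T1 the single-CPU runtime.
def serial_fraction(n1, t1, n2, t2):
    """Solve for s from two (cpu_count, runtime_hours) observations.

    Eliminating T1 from t1/t2 = (s + (1 - s)/n1) / (s + (1 - s)/n2) gives:
        s = (t2/n1 - t1/n2) / (t1 - t1/n2 - t2 + t2/n1)
    """
    return (t2 / n1 - t1 / n2) / (t1 - t1 / n2 - t2 + t2 / n1)

# ~2 h on 8 CPUs, ~1 h on 16 CPUs for the full-chip design described above.
s = serial_fraction(8, 2.0, 16, 1.0)
print(f"implied serial fraction: {s:.2f}")   # ~0.00, i.e. near-linear scaling
```

A serial fraction near zero is consistent with the runtime roughly halving as the CPU count doubles, which is the behavior the quoted figures describe.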

The summary is that we are one of the next-generation tools in EDA that have to come out to build a flow that can deal with 10 million, 20 million, or 30 million placement points in a design, or 600 million or a billion transistors, however you want to measure it.  Those flows have to take advantage of the presence of multiple processors in one machine.  They have to be able to scale to the amount of change being made.  That's what the next-generation tool set does.  It is a ground-up architectural development from the bottom to the top.  You can't take an existing tool and retrofit it; maybe the algorithms, but certainly not the data structures and the architecture.
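As a minimal sketch of the "multiple processors in one machine" point, the example below farms independent partitions of a design out to a pool of worker processes. The partition names, sizes, and the placeholder update function are hypothetical; nothing here reflects CLK's actual architecture.

```python
# Coarse-grained parallelism sketch: one worker process per CPU,
# each updating timing on its own partition of the design.
from multiprocessing import Pool

def update_partition(partition):
    """Stand-in for recomputing timing on one partition of the design."""
    name, cell_count = partition
    # A real tool would propagate arrival times through the partition here.
    return name, cell_count

# Hypothetical blocks of a large design and their cell counts.
partitions = [("blk_a", 1_200_000), ("blk_b", 2_500_000), ("blk_c", 1_800_000)]

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # one worker per available CPU
        results = dict(pool.map(update_partition, partitions))
    print(results)
```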

It has been shipping for a little while to our initial partner.  We are now doing additional commercial engagements.  That’s the story.

The pricing starts at $25K.  Is that for one CPU?

That is for one processor with up to two cores in it for the base timing system.

How does the pricing go up with the number of processors?

Basically, we go from one system to a system with all the functionality on as many processors as you want, at a substantially higher price.  The idea was that if a CAD manager is trying to compare this with their existing tool for static timing, they should see a good price-performance advantage from the baseline going up.  But most of the customers we are working with right now are looking at this not necessarily as a replacement for their existing signoff tool, but as a larger-quantity purchase that is going to become their mainstay engineering tool for getting timing and signal integrity closure.  We are not trying to deal with the whole world.  We are not trying to sell to every logo out there.  We are trying to sell to big logos that have a chance to engage with us.

You mentioned that very early on, when you went to prospects without a product, they said statistical, which was a new thing at the time, would be nice but that they really needed to significantly improve their existing tools.  My observation over the years is that people typically look at their existing tools wishing they were faster, more robust, more functional and so on, believing they would then be better off.  They rarely have the vision for a quantum leap because they are struggling with their existing problems on a daily basis.  Now that it is three years later and you are going back to the same prospects, are they still saying the same thing, or are they now saying that they realize the importance of and need for statistical?

Actually, for the most part, it has not changed.  If anything, they have been able to digest where statistical does or does not fit.  It seems to occupy some important niche responsibilities.  There are some harder things people have learned as they have gotten closer to the problem which suggest that in one of those contexts statistical is a feature, not the solution.  Let me break that down.  There is a manufacturing engineering problem that has to do with people who see lots of designs and are trying to figure out whether the process is drifting or not.  This is the sort of stuff that PDF does today.  In that context, statistics are meat and potatoes.  They have always done statistics.  That is how they track processes.  In that context statistical tools have meaning.  In the engineering domain where we work, for reasons both technical and from an adoption standpoint, it turns out that statistical is a feature, not a product.  In fact, statistical really has to sit within the framework of a classical corner-space environment.

It turns out that manufacturing processes are not unimodal.  In fact, if you look at the manufacturing data, it tends to be tightly clustered around a number of different points.  A lot of the variation you see is from line to line in a fab, or from fab to fab if you are a multi-fab shop, or, within a die, it has to do with reticle-to-reticle variation.  Across these different modes, the classical Gaussian curve, the normal curve, is a unimodal curve, but if you actually plot the data, you find clusters.  Those clusters turn out to be what we call corners.  So corners turn out, from a statistical and manufacturing-data standpoint, still to be the best way to represent information to the users.  Now, within a corner, things are very tightly correlated, so you can look at statistical information, but the variation is much more tightly wound; it is not as wide.  In that context, statistical becomes a utility that you can use, say, instead of on-chip variation, or as a follow-up tool to look for outliers that the corner analysis did not reveal.  But it is the case that in the engineering mode, statistical turns out to be a feature that you turn on after you have completed your corner analysis, to reveal additional information about your margin.
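The clustering point can be made concrete with a small, self-contained sketch: a population of delay measurements drawn from three tight modes looks wide when treated as a single distribution, but a simple clustering pass recovers centers that play the role of corners, with much tighter variation inside each one. All numbers below are invented for illustration, not fab data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are normalized gate-delay measurements from three fab lines:
# three tight modes rather than one wide Gaussian.
data = np.concatenate([
    rng.normal(0.95, 0.01, 500),   # "fast" cluster
    rng.normal(1.00, 0.01, 500),   # "typical" cluster
    rng.normal(1.08, 0.01, 500),   # "slow" cluster
])

# Minimal 1-D k-means: the recovered centers play the role of corners.
centers = np.array([0.9, 1.0, 1.1])
for _ in range(20):
    labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
    centers = np.array([data[labels == k].mean() for k in range(3)])

print("corner-like cluster centers:", np.round(centers, 3))
print("within-cluster sigma:", np.round([data[labels == k].std() for k in range(3)], 4))
print("whole-population sigma:", round(float(data.std()), 4))
```

The spread inside each cluster comes out far smaller than the spread of the whole population, which is the "tightly wound" variation within a corner described above.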

People do see statistical in their engineering roadmap as something they would like to use within the context of a classic corner-based analysis.  Now, there are other issues that have to do with how foundries develop their information, how they present that information, and how engineers learn to use statistical data, which once again say, from an adoption standpoint, that it is much easier if users can see the statistical data side by side with their corner information.  That just makes the adoption curve easier.  From a technical standpoint, it turns out that corners are still the best way to represent information.  There have been people smarter than I am who have made that observation.  And there are adoption reasons that say getting people to adopt statistical, where it is appropriate, is best done from inside a classical corner analysis.  That, again, is from the engineering perspective.  There is another domain, manufacturing engineering and yield analysis, where statistical is part of the meat and potatoes of what they do.  But that is not the market we serve.

You said one of your board members was at ClearCase.

Paul Levine.  He was the founding CEO of Atria Software.  He took Atria public and merged it with Pure Software. Pure Software ultimately merged with Rational before Rational was acquired by IBM.

As I recall, ClearCase is a software source control system.  What these systems typically do is determine what needs to be recompiled and relinked given the header files and source modules that have been modified.  This saves substantial compilation and linking time.  But the resulting software executable still has to be completely retested.  Why is it different with signal integrity when cells have been swapped or …?

The key is not that we are doing less signal integrity analysis so much as that we are doing all the signal integrity analysis needed to guarantee it is the same answer.  The way our system works, and we are applying for patents on this, is that we have all the results information inside our system.  We know what the previous signal integrity analysis did.  When you make a change, our incremental algorithms kick in and see what other things were affected by that.  We know enough about how the previous calculation worked to figure out what else has to get pulled in to do the calculation.  We also know where the effect dampens out and you can stop doing the calculation.  It is not an algorithm thing; it is inherent in the data structures and data architecture.  We can figure out automatically the calculation we need to do to get to the point where you are getting the same result as if you ran the whole design structure.  All this is done in the background, all automatically.  That's how our tests are run.  We run circuits and we do signal integrity calculations.  We have a system that just scrambles the design, making arbitrary changes, and we literally do a binary-to-binary compare to make sure we are getting the same answer.
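The mechanism described above, keeping the previous results, re-evaluating only what a change can reach, stopping where the effect dampens out, and checking against a full run, can be illustrated with a toy sketch. The graph, delays, and update rule below are invented for illustration and are not CLK's patented implementation.

```python
from collections import deque

# A tiny made-up timing graph: cell -> delay, and cell -> fanin cells.
delays = {"in": 0.0, "a": 1.0, "b": 2.0, "c": 1.5, "out": 0.5}
fanin  = {"in": [], "a": ["in"], "b": ["in"], "c": ["a", "b"], "out": ["c"]}
fanout = {n: [m for m, fi in fanin.items() if n in fi] for n in fanin}

def arrival(cell, cache):
    """Arrival time = cell delay + worst fanin arrival (computed on demand)."""
    if cell not in cache:
        worst = max((arrival(f, cache) for f in fanin[cell]), default=0.0)
        cache[cell] = delays[cell] + worst
    return cache[cell]

# Full baseline analysis: compute and store every arrival time.
results = {}
for cell in delays:
    arrival(cell, results)

def incremental_update(changed_cell, results):
    """Re-evaluate only the fanout of the change; stop where values are unchanged."""
    queue = deque([changed_cell])
    while queue:
        cell = queue.popleft()
        new = delays[cell] + max((results[f] for f in fanin[cell]), default=0.0)
        if new != results[cell]:           # the effect has not dampened out yet
            results[cell] = new
            queue.extend(fanout[cell])

delays["a"] = 1.2                          # "swap" cell a for a slower variant
incremental_update("a", results)

# In the spirit of the binary-to-binary compare described above: the incremental
# answer must match a from-scratch run over the whole graph.
assert results == {cell: arrival(cell, {}) for cell in delays}
print(results)
```

Here the change to cell a stops propagating at cell c, whose worst-case fanin arrival is unaffected, so everything downstream is reused rather than recomputed: the "dampening out" behavior described in the answer.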






-- Jack Horgan, EDACafe.com Contributing Editor.

