Introduction
Altos
Design Automation is a startup that provides ultra-fast, fully-automated
characterization technology for the creation of library views for timing,
signal integrity and power analysis and optimization.
I recently had an opportunity to talk with Jim McCanny, CEO and co-founder.
Would you provide us with a brief biography?
I have
been in EDA for my entire career. I
started my professional life as an EDA developer at Texas Instruments.
You said that you went over to the dark side. Which side was more interesting, more challenging, more satisfying?
They both
have equal amounts of interesting stuff.
Having spent so many years on the technical side, it was interesting to
try something new but still be able to leverage what I had learned on the
technical side. On the technical side
you tend to be point focused on one customer, one user. You can add a feature or make a change that
helps one user and it is satisfying that you are having an impact. But when you go into marketing, you can make
pretty broad strokes that have impact across a whole market. That was new, exciting and very fulfilling,
especially at a small company like CadMOS where we had to evangelize signal integrity,
which was a new issue at the time. People would rather ignore some of these emerging issues, so you get some resistance. Not only did we build a successful company, we transformed the digital design flow.
They had to have signal integrity as part of the flow. We had a lot of influence on how signal
integrity was measured and how it could be dealt with in the design flow.
The marketing side appeals to those sorts of broad strokes. Occasionally you miss the one-on-one connection with the end user, particularly at a place like Cadence where you are doing things on a broad scale. When you make contact with a customer, you never get to see them through an entire project. At EPIC we
were working a lot with Intel and AMD, going down and spending a couple of days
a week, working onsite with these guys, seeing their issues and the progress
they were making on their big microprocessors.
That was very satisfying too. I
like both. The good thing about being in
a small startup is that you stay close to the technology without having to
spend all the evenings and hours writing code.
Once you start writing code, it sort of drags you down. You have to spend so much time thinking about
it. It can become very isolating after
some time. So I think the dark side has
its benefits in the sense that it is a little more social and you get to
interact with more people.
Large firms often acquire smaller
firms for their technology. This gives
the founders and the investors an opportunity to cash in. However, it would appear that a lot of
employees eventually drift away from the larger employers because of the
difference in environments.
Yes. I think the big companies are nice in terms
of the security, and the hours are better (not a lot, but a little). There are some more benefits, but everybody
likes to feel useful. At times at a big
company like Cadence, you may work with a customer and then the sales guy will
do an all-you-can-eat deal. You have no
idea whether the little piece you worked on was important or not. You can’t relate what you do to the bottom
line. In small companies everybody is
involved. Everyone gets to experience
the highs and lows. They understand that what they do really matters. Once you have
had that kind of excitement, that drug, it is hard to give it up. You can wear yourself out in a startup and
then go spend a few years in a big company, make some good contacts and find
your feet again. Then you just feel like
it is time to do something new again, to get out there. At least that was it for me. I felt
that Cadence was very good to me. I had
no complaints. They had great people
there. There were interesting things to
work on but I missed the excitement of being at a smaller company, really
interacting with the customers and the developers, and working with a very
focused team. The reason EDA has so many
startups is not only the potential to get acquired. You can grow the company to a large size and
make good money. It is very exhilarating
to solve real problems for customers and work very closely with the technical
team. You feel like you are changing the
world a little bit.
Is Altos self-funded?
Initially we were self-funded. Last December we took a small amount of Series A funding ($1.5 million). Most of that was from a private investor. We got some money from Jim Hogan at Vista Ventures.
How big a firm is Altos?
It is still very small. We are 7 people. Now we are trying to add a few more.
You said that there was a problem that Altos was addressing that others were not. What is that problem?
The
general problem of characterization.
People have made efforts to solve this problem before. There are solutions in the marketplace. It is not that there have not been products for
characterization. What has happened is that a lot of new factors have all come along at the same time, after looming on the horizon, and have put the existing characterization solutions under undue stress.
This is just going to put a big hole in the whole ecosystem of people
using existing digital design flow. Things
like low power. As you introduce low
power you start to do new things. You
need to look at multiple voltages on a chip, which means you have to
characterize libraries at multiple voltages.
You start seeing thermal effects such as temperature inversion, where worst-case corners no longer occur at the highest temperature. You may get worst cases occurring at lower temperatures. You see people using
multiple threshold devices which increase the size of the library typically by
3x and doing power shut off, things like state retention flops. In addition there are new model formats for
more accurate modeling like CCS and ECSM.
People are also starting to look at yield, trying to come up with an alternative set of libraries that would trade off performance for yield. All these factors were exploding the number
of potential library views that you are going to need. The complexity of the models is going up
too. You have the kind of perfect storm
of more complex cells like some of these state retention flops, and more
complex models like current based models.
Then looming on the horizon of course is statistical timing. The complexity in generating statistical
models is kind of like the hurricane.
The other factors are more like gale force winds. Together all these are like a double perfect
storm. This is the area where people are
sort of making do with older technology and getting by living with huge run
times, large computer farms and dedicated teams. A lot of people are doing it in house with a
lot of homegrown tools. We just felt
that it was a time to take a fresh run at this.
I think we are starting to see that this was the right decision.
What do the acronyms CCS and ECSM stand for?
CCS stands for Composite Current Source and ECSM stands for Effective Current
Source Model. They are new delay models which
use a current source. These give you more accuracy than the table lookup model that Synopsys introduced in the late 80's or early 90's and that has been the industry standard for 15 years.
CCSN is the extension of the Synopsys CCS model to address signal integrity. Synopsys had an equivalent to the NLDM model
called Liberty SI. That has been deemed
to be very hard to characterize and takes a very long time and may not be as
accurate as some people need at 65 nm and below. CCS Noise is a much more accurate model and
it takes less time to characterize.
However it is a more complex characterization task as it requires a lot
of internal details not just the boundary information. A lot of in-house tools are written for treating
cells as black boxes.
Are CCS, ECSM simply generic terms or are they formal standards?
There are
competing standards. ECSM comes from Cadence plus some contributions from Magma. CCS is the Synopsys equivalent.
Editor: Si2 is
an organization of over 100 semiconductor, systems, EDA, and manufacturing
companies focused on improving the way integrated circuits are designed and
manufactured in order to speed time to market, reduce costs, and meet the
challenges of sub-micron design. Si2 focuses on developing practical technology
solutions to industry challenges.
The Open
Modeling Coalition (OMC) was formed by Si2 in mid-2005 to address critical
issues - such as accuracy, consistency, security, and process variations - in
the characterization and modeling of libraries and IP blocks used for the
design of integrated circuits.
The OMC technical objectives are to define a consistent modeling and
characterization environment in support of both static and dynamic library
representations for improved integration and adoption of advanced library
features and capabilities, such as statistical timing. The system will
support delay modeling for library cells, macro-blocks and IP blocks, and
provide increased accuracy to silicon for 90nm and 65nm technologies, while
being extensible to future technology nodes. Technology contributions
from Cadence Design Systems, IBM, Magma Design Automation, Synopsys, and other
companies are in support of these goals.
Tell us about the Altos products.
Since our
inception we have built two products.
The first one we call Liberate, which is a standard cell and IO library characterizer.
What is the main differentiation of your product?
The main
differentiation of our products is that we do a lot of things to make
characterization go faster. Basically characterization
was a bottleneck with all the different views and models that people were
starting to require. It was becoming self-evident that characterization was so costly that people would start cutting corners and would not do certain things.
Statistical timing would not become a reality unless models were readily
available. That’s how we can play a role
and add value. It is very easy to
use. A lot of characterization tools require the user to tell them, "I want to characterize it this way." There is a lot of manual intervention, a
lot of setting up vectors and conditions. We automate all of that. We track the optimal set of vectors that you
need to fully characterize the cell. We
can filter out duplicate vectors exercising the same path. Because we are automated we have found that
we do better than a lot of other people do with a more manual approach. Things may be missing with the manual
approach. With about 90% of the libraries
we get from other people, we are able to pinpoint some holes, some areas they
have missed.
We
support the latest models like CCSN.
CCS Noise is a model which is very familiar to us because it is similar
to what we were doing when we did CeltIC at CadMOS. We are very familiar with signal integrity
models. We are probably ahead of
everyone else in the market, certainly in supporting these models in an
automated way.
What is the vision of the company?
Statistical
timing is very useful for people at 65 nm, but it is essential at 45 nm. The reason is that the worst-case corner method that people use today is way too pessimistic. This has two very bad side effects. One is that you spend time trying to meet a
target that you have already met. You
are wasting your design resources doing timing closure when you are easily meeting
the marketing target. There is a very large on-chip variation (OCV) factor in use today. What OCV says is that for every slow path, I
am going to add 10% to 15% for every delay and subtract 10% to 15% from the
clock path delays and still try to meet my setup time. What on-chip variation is trying to do is account
for all the different delay effects that could possibly happen. It’s sort of like Murphy’s Law. You could have lithography effects, you could
have dishing, random particle effects, non-uniform doping, and so on and so
forth. They kind of lump everything into some worst-case number. What happens
even if you can meet timing under these extreme conditions is that you end up
blowing your power budget because you are making your gates much bigger than they need to be. You have a lot of lower
Vth devices that increase your leakage.
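The flat-derate arithmetic described above can be sketched in a few lines. This is an illustrative toy, not Altos or any vendor's code; the path delays and the 15% derate are invented numbers.

```python
# Toy sketch of a flat OCV setup check: derate the data path up and the
# capturing clock path down, then compute slack. All numbers are invented.

def setup_slack(clock_period_ps, data_path_ps, clock_path_ps,
                setup_time_ps, ocv_derate=0.15):
    """Setup slack under a symmetric flat derate (positive slack = passes)."""
    derated_data = data_path_ps * (1.0 + ocv_derate)    # launch path slowed
    derated_clock = clock_path_ps * (1.0 - ocv_derate)  # capture clock sped up
    return (clock_period_ps + derated_clock) - (derated_data + setup_time_ps)

# A path with 130 ps of nominal slack...
print(setup_slack(1000, data_path_ps=920, clock_path_ps=100,
                  setup_time_ps=50, ocv_derate=0.0))
# ...goes negative once the flat 15% derate is applied, so the optimizer
# upsizes gates and burns power to "fix" a path that was already fine.
print(setup_slack(1000, data_path_ps=920, clock_path_ps=100,
                  setup_time_ps=50, ocv_derate=0.15))
```

The point of the sketch is that the derate applies the same pessimism to every path, which is exactly the lumping the interview describes.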
With SSTA you are getting more realistic models. If you model statistically the things that do
change in the process, then you get a much better idea of where you are with
respect to the yield you are after. You
have the potential of catching some of these corner cases that sit outside those worst-case guard bands. You
benefit on both sides of the coin.
As an
example Renesas saw approximately 6% improvement in clock frequency and 35%
reduction in the number of paths they had to fix once they got through timing
closure. As you get to bigger size blocks, you will see these numbers
approaching 15% to 20% potential increase in clock frequency, and the number of hold violations will drop considerably, which means a lot fewer ECOs required at
the end of the design cycle.
The promise
of statistical timing has been talked about by many companies, by all the big
EDA companies and by the foundries.
However there is a piece missing.
It is like everyone is talking about electric cars but no one is
building the fuel cell to make it all work.
We see our role is providing accurate libraries quickly so that people
can make the transition from corner based models to statistical models. You are starting to see a lot of people pushing
statistical signoff. There are SSTA
tools from the big guys at Cadence, Synopsys and Magma. There are also startups such as Extreme DA
pushing statistical signoff. Once that
gets acceptance, I think you will see SSTA as part of the implementation flow,
probably at 45 nm. One of the great
things about statistical analysis is it will give you a better optimized design
if you are trying to optimize across multiple metrics such as power or yield as
well as timing.
Today our
product supports standard cells for statistical characterization. We are planning to go to IO, memories and
cores because on any given chip you will have all these different components. You need these models for everything. As a byproduct of doing characterization, we
characterize each cell’s sensitivity to variations, even down to each transistor within a cell, i.e. how sensitive it is to minor perturbations in the process
like a change in channel length or a change in Vth. We can feed that back to standard cell
designers. You can tell them that they
can make this channel a little wider; it won’t impact delay and it will improve leakage and power consumption.
What type of variations do you
cover?
In terms
of cell characterization we look at two types of variations. Some people call it global and local. Most people refer to them as systematic and
random. For systematic inter-cell
variation the process varies in the same direction by the same amount for each
transistor in the cell. Everything kind
of moves in one direction. For example, your channel length may vary by 5 nm. You
characterize the cell with nominal conditions and vary the length by 5 nm,
characterize the cell and capture the sensitivity to that parameter. You do that for any parameter the user thinks
is significant. Random variation is more
challenging because it is similar to mismatch in the analog world where you are
actually looking at the sensitivity of each transistor within the cell to a
particular type of variation, things like Vth variation. We have to vary each transistor. We characterize and capture the
sensitivity. That would be very, very slow if you ran a full-blown SPICE simulation for every transistor.
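The perturb-and-measure loop described above can be sketched as follows. The delay model here is a made-up analytic stand-in for an actual SPICE measurement, and every coefficient is invented for illustration.

```python
# Sketch of finite-difference sensitivity characterization. delay_model is
# a toy stand-in for "simulate the cell in SPICE and measure the delay".

def delay_model(length_nm=45.0, vth_mv=300.0):
    # Invented linear toy: delay in ps as a function of two process knobs.
    return 50.0 + 0.8 * (length_nm - 45.0) + 0.05 * (vth_mv - 300.0)

def sensitivity(param, sigma, nominal_args):
    """Delay shift (ps) per +1-sigma move of one parameter: re-measure with
    the parameter perturbed and difference against the nominal run."""
    base = delay_model(**nominal_args)
    perturbed = dict(nominal_args, **{param: nominal_args[param] + sigma})
    return delay_model(**perturbed) - base

nominal = {"length_nm": 45.0, "vth_mv": 300.0}
# Systematic variation: shift channel length for the whole cell by 1 sigma (5 nm).
print(sensitivity("length_nm", 5.0, nominal))   # about 4 ps per +1 sigma of L
```

Systematic characterization perturbs one knob for the whole cell at once; the random (mismatch) case described above would have to repeat this loop per transistor, which is where the cost comes from.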
The only difference between statistical and regular characterization is the input: basically, variations and sigmas for the parameters you want to look at. IDMs
have that information. You type it in as
a series of commands and we run the characterization. Other people like the large foundries have
been collecting this information in the past mostly for use in analog and
statistical SPICE models where they will actually measure some parameters and
their 1σ variation. Sometimes those
parameters are captured as principal components to ensure that each of the
parameters can be modeled as independent effects. If you know the correlation between different
parameters, you can give us that information and we will account for that
during characterization. Mostly what we
have seen so far is that people are treating parameters as being either fully
correlated or fully uncorrelated. We
support both.
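The fully-correlated versus fully-uncorrelated treatment just mentioned has a simple consequence for how per-parameter sensitivities combine into a total sigma; a sketch with invented numbers:

```python
# Correlated parameters shift delay together, so their 1-sigma sensitivities
# add linearly; independent (uncorrelated) parameters add in quadrature.

import math

def total_sigma_ps(sensitivities_ps, fully_correlated):
    if fully_correlated:
        return sum(sensitivities_ps)                         # linear sum
    return math.sqrt(sum(s * s for s in sensitivities_ps))   # root-sum-square

sens = [4.0, 3.0]  # invented L and Vth delay sensitivities of one timing arc
print(total_sigma_ps(sens, fully_correlated=True))    # 7.0 ps
print(total_sigma_ps(sens, fully_correlated=False))   # 5.0 ps
```

Partial correlation falls between the two, which is why supplying the actual correlation (or principal components) tightens the result.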
We
generate our own internal database.
Essentially we characterize once and store the characterization data in
our own internal generic format. In a
post processing step we can spit it out in different formats for all of the
vendors. Right now there is no standard
in this area. Once again each SSTA tool
is doing its own thing.
How do you validate your models?
Generally we have people trying to validate SSTA by generating long chains of gates of the same cell or a mixture of cells.
What is unique about what you do?
As I
mentioned with random variation you are looking at the sensitivity of delay,
leakage or any parameter in your library to variation on each transistor. On average, in the libraries we have seen
there are about 25 transistors per cell.
We have some small ones, inverters that have 2 or 4. Some of the large cells have 100 or even up
to 200 transistors. If you did it in a
brute force method, even assuming linear sensitivity, you end up with 25x average
run time increase. We do not get
that. We get 3x to 4x run time
increase. The reason is that we do
something we call the inside view. We
understand the cell being characterized and we understand the paths through
that cell, and which transistors are sensitive to which type of variation; for
delay it can be a different set of transistors than for capacitance or
leakage. We have a number of techniques
to try to avoid having to do ~25 simulations.
We have validated what we are doing against brute-force SPICE simulation.
What kind of performance do you get?
As shown in the table below, for the first library of almost 400 cells it took 6 hours on 8 CPUs (48
CPU hours) to characterize three systematic parameters and 1 random
parameter. If you just do the nominal
characterization, we are able to run this in 1 hour on 8 CPUs (8 CPU hours), which
is a phenomenal improvement in run time over what people are running today
which is on the order of 30 CPU days to characterize a typical library cell. If you look at the 6 hours versus 1 hour,
there is 1 hour for nominal, 3 hours for systematic and 2 hours for random. The second example we modeled 9 systematic
parameters, 1 random parameter. The run
time was 2 hours for nominal, 18 hours for systematic and 7 hours for
random. There is a 3 ½ x runtime
overhead for the random parameter.
Library   | CPUs | Variety Runtime | Systematic Parameters | Random Parameters | Liberate Runtime
387 cells | 8    | 6 hrs           | 3                     | 1                 | 1 hr
504 cells | 16   | 27 hrs          | 9                     | 1                 | 2 hrs
This is
the whole difference between what we do and what others do. In the past people have used shell scripts,
just big wrappers around running SPICE.
We do not do that. We actually
read in the SPICE circuits, read in the models, analyze the circuits and try to
figure out how to optimize the circuits for characterization. We have our own built-in SPICE engine to do this analysis. You can use this for characterization, or we can do a kind of final characterization using the golden
SPICE simulators HSPICE, Spectre or ELDO.
We have interfaces to all of those.
This piece is what gives us a big performance boost.
Basically
when we characterize, you can specify what the variation is for anything that
is named in the SPICE model. It can be
1σ or 2σ. That is completely up to you. If you combine parameters in the command, they are assumed to be 100% correlated. If you keep them distinct, they are assumed
to be 100% uncorrelated. We can also
account for the fact that things like random variation are related to the size
of the transistor, basically proportional to one over the square root of the
area of the transistor. We will account
for the fact that larger transistors have less random variation than smaller
transistors. It is up to you how many parameters you want to specify; what we are typically seeing is L, W, Tox (oxide thickness) and Vth as the major contributors to process variation.
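The area dependence just described (random sigma proportional to one over the square root of transistor area) is usually written in the Pelgrom form; a sketch with an invented mismatch coefficient, not a foundry number:

```python
# Pelgrom-style scaling: sigma(Vth) = A_Vt / sqrt(W * L). The coefficient
# here is invented for illustration only.

import math

def vth_sigma_mv(width_um, length_um, a_vt_mv_um=3.0):
    """1-sigma random Vth mismatch (mV) for one transistor."""
    return a_vt_mv_um / math.sqrt(width_um * length_um)

small = vth_sigma_mv(1.0, 0.045)   # minimum-ish device
large = vth_sigma_mv(4.0, 0.045)   # 4x the width, hence 4x the area
print(small, large)                # quadrupling the area halves the sigma
```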
When we
do sensitivity analysis, we can do linear or non-linear. It is
linear if I increase or decrease by 5% and the sensitivity is the same. In a lot of cases it is different. Right now we can do multiple points, typically
at least 2 points, and run some simulation to determine if it is linear or
non-linear. The formats right now are
still supporting only linear, so there is work to be done in the formats to
support the full blown non-linear sensitivity.
That is coming. We have seen some
proposals for that.
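The multi-point check described above can be sketched as: perturb the parameter in both directions and compare the one-sided slopes. The tolerance and the toy delay functions are assumptions for illustration.

```python
# Linearity test for a sensitivity: if the +sigma and -sigma slopes agree
# to within a relative tolerance, a single linear coefficient is enough.

def is_linear(delay_fn, nominal, sigma, rel_tol=0.05):
    d0 = delay_fn(nominal)
    slope_up = (delay_fn(nominal + sigma) - d0) / sigma
    slope_down = (d0 - delay_fn(nominal - sigma)) / sigma
    avg = 0.5 * (abs(slope_up) + abs(slope_down))
    return abs(slope_up - slope_down) <= rel_tol * avg

print(is_linear(lambda L: 50 + 0.8 * L, 45.0, 5.0))        # True: linear toy
print(is_linear(lambda L: 50 + 0.02 * L * L, 45.0, 5.0))   # False: quadratic toy
```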
We felt
it was important to establish a base with our Liberate product that shows that
we can do all the regular non-SSTA characterization. Our belief is that with SSTA, if you set variation equal to zero, you should get the same results as regular characterization. If you do not have the same library characterization
system with the same assumptions and same mindset, it is very difficult to
ensure that consistency. We support
multiple vendors right now. You will see
people with mixed flows where one vendor’s tool will be used for signoff and
another vendor’s for implementation.
That situation has been quite common in the past. I think it will happen with SSTA. Again our ability to do random variation very
quickly remains the main distinguishing factor of our tool. People believe that
random variation will become a larger component of the total variation as
people understand more and more how to deal with systematic variation: basically
design it away or remove it from the actual process. People generally accept that there is always
going to be some element of randomness and that random contribution will
probably increase over time. To be able to model it very efficiently is very important.
Who are you going after in terms of market?
We see
the market in three different segments.
There are obviously the IP providers and foundries, people who deliver libraries. They are very interested in this stuff. One of our first customers was Virage Logic
who use our Liberate product for standard cell characterization. We are also working with IDMs. Renesas was the first customer for our
statistical characterization tool. We
also see quite a bit of interest in the COT market. If you look at the top 10 COT companies, they
all either develop their own libraries or they re-characterize somebody
else’s. There are probably good reasons
for that. With the volume of business
they are doing, it makes sense to reduce margins and to improve the quality of the
cell whether it is actually on the layout itself or in the model.
What is the pricing and availability?
Variety is available now.
The top articles over the last two weeks as determined by the number of readers were:
Magma and Synopsys Agreement Narrows Delaware Case
The companies jointly stipulated to: Synopsys withdrawing infringement claims against Magma with regard to two of the three Synopsys patents at issue; Magma withdrawing infringement claims against Synopsys with regard to one Magma patent at issue; and Magma withdrawing claims of antitrust violation by Synopsys.
Synopsys Posts Financial Results for First Quarter Fiscal Year 2007
For the first quarter, Synopsys reported revenue of $300.2 million, a 15 percent increase compared to $260.2 million for the first quarter of fiscal 2006. Net income for the first quarter of fiscal 2007 was $23.4 million, or $0.16 per share, compared to $1.7 million, or $0.01 per share, for the first quarter of fiscal 2006.
HP Reports First Quarter 2007 Results
HP announced financial results for its first fiscal quarter ended Jan. 31, 2007, with net revenue of $25.1 billion, representing growth of 11% year-over-year from $22.7 billion. Net earnings were $1.5 billion, up 26% from $1.2 billion in the year-ago quarter.
Magma Agrees to Drop All Anti-Trust Claims Against Synopsys
Synopsys announced that Magma Design Automation has requested that the Court dismiss all antitrust claims against Synopsys. In return, Synopsys agrees not to pursue Magma for malicious prosecution or any other claims related to making these anti-competitive accusations against Synopsys. The Court has been asked to dismiss all of these claims 'with prejudice,' meaning they cannot be revived.
Analog Devices Announces Financial Results for the First Quarter of Fiscal Year 2007
Total revenue for the first quarter of fiscal 2007 was $692 million, which included $657 million of product revenue and $35 million of revenue from a one-time technology license. Product revenue for the first quarter of fiscal year 2007 increased approximately 6% compared to the same period one year ago and increased approximately 2% compared to the immediately prior quarter.
Net income for the first quarter of fiscal 2007, under generally accepted accounting principles (GAAP), was $153 million, or 22% of total revenue, compared to $121 million for the same period one year ago and $138 million for the immediately prior quarter.
Other EDA News
Other IP & SoC News
-- Jack Horgan, EDACafe.com Contributing Editor.