Is Altos self-funded?
Initially we were self-funded. Last December we took a small amount of Series A funding ($1.5 million). Most of that was from a private investor. We got some money from Jim Hogan at Vista Ventures.
How big a firm is Altos?
It is still very small. We are 7 people. Now we are trying to add a few more.
You said that there was a problem that Altos was addressing that others were not. What is that problem?
The general problem of characterization. People have tried to solve this problem before, and there are solutions in the marketplace, so it is not that there have not been products for characterization. What has happened is that a lot of new factors have all come along at the same time and put the existing characterization solutions under undue stress. Together they are going to put a big hole in the whole ecosystem of people using the existing digital design flow.

Things like low power. As you introduce low power you start to do new things. You need to look at multiple voltages on a chip, which means you have to characterize libraries at multiple voltages. You start seeing thermal effects such as temperature inversion, where worst-case corners no longer occur at the highest temperature; you may get worst cases occurring at lower temperatures. You see people using multiple-threshold devices, which typically increase the size of the library by 3x, and doing power shut-off with things like state-retention flops. In addition there are new model formats for more accurate modeling, like CCS and ECSM. People are also starting to look at yield, trying to come up with an alternative set of libraries that would trade off performance for yield.

All these factors were exploding the number of potential library views you are going to need. The complexity of the models is going up too. You have a kind of perfect storm of more complex cells, like some of these state-retention flops, and more complex models, like current-based models. Then looming on the horizon, of course, is statistical timing. The complexity of generating statistical models is like the hurricane; the other factors are more like gale-force winds. Together they are like a double perfect storm. This is an area where people are making do with older technology and getting by with huge run times, large compute farms and dedicated teams.
A lot of people are doing it in-house with a lot of homegrown tools. We just felt that it was time to take a fresh run at this. I think we are starting to see that this was the right decision.
What do the acronyms CCS and ECSM stand for?
CCS stands for Composite Current Source and ECSM stands for Effective Current Source Model. They are new delay models that use a current source. These give you more accuracy than the table-lookup model, which Synopsys introduced in the late 80's or early 90's and which has been the industry standard for 15 years.
CCS Noise is the extension of the Synopsys CCS model to address signal integrity. Synopsys had an equivalent of the NLDM model called Liberty SI. That has been deemed very hard to characterize; it takes a very long time and may not be as accurate as some people need at 65 nm and below. CCS Noise is a much more accurate model and takes less time to characterize. However, it is a more complex characterization task because it requires a lot of internal details, not just the boundary information, and a lot of in-house tools are written to treat cells as black boxes.
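To make the contrast concrete, the traditional table-lookup (NLDM) approach mentioned above can be sketched in a few lines: delay is stored in a 2-D table indexed by input slew and output load, and points between table entries are bilinearly interpolated. The axis points and delay values below are invented for illustration, not taken from any real library.

```python
# Sketch of NLDM-style table lookup: cell delay indexed by
# (input slew, output load) with bilinear interpolation.
# All numbers are illustrative, not from a real .lib file.

from bisect import bisect_right

SLEWS = [0.01, 0.05, 0.20]        # input slew axis, ns
LOADS = [0.001, 0.010, 0.050]     # output load axis, pF
DELAY = [                          # DELAY[i][j] in ns at (SLEWS[i], LOADS[j])
    [0.020, 0.045, 0.150],
    [0.030, 0.055, 0.160],
    [0.060, 0.085, 0.190],
]

def nldm_delay(slew: float, load: float) -> float:
    """Bilinearly interpolate delay from the 2-D lookup table."""
    def bracket(axis, x):
        # Find the table interval containing x and the fractional position in it.
        i = min(max(bisect_right(axis, x) - 1, 0), len(axis) - 2)
        t = (x - axis[i]) / (axis[i + 1] - axis[i])
        return i, t

    i, u = bracket(SLEWS, slew)
    j, v = bracket(LOADS, load)
    d00, d01 = DELAY[i][j], DELAY[i][j + 1]
    d10, d11 = DELAY[i + 1][j], DELAY[i + 1][j + 1]
    return (d00 * (1 - u) * (1 - v) + d01 * (1 - u) * v
            + d10 * u * (1 - v) + d11 * u * v)

print(round(nldm_delay(0.05, 0.010), 3))  # exact grid point -> 0.055
```

A current-source model such as CCS or ECSM replaces each scalar table entry with a characterized output-current waveform, which is why the data volume and characterization effort grow so much.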
Are CCS and ECSM simply generic terms, or are they formal standards?
There are competing standards. ECSM comes from Cadence, plus some contributions from Magma. CCS is the Synopsys equivalent. CCS is part of the Si2 Open Modeling Coalition (OMC).
Editor: Si2 is an organization of over 100 semiconductor, systems, EDA, and manufacturing companies focused on improving the way integrated circuits are designed and manufactured in order to speed time to market, reduce costs, and meet the challenges of sub-micron design. Si2 focuses on developing practical technology solutions to industry challenges. The Open Modeling Coalition (OMC) was formed by Si2 in mid-2005 to address critical issues - such as accuracy, consistency, security, and process variations - in the characterization and modeling of libraries and IP blocks used for the design of integrated circuits.
The OMC technical objectives are to define a consistent modeling and characterization environment in support of both static and dynamic library representations for improved integration and adoption of advanced library features and capabilities, such as statistical timing. The system will support delay modeling for library cells, macro-blocks and IP blocks, and provide increased accuracy to silicon for 90nm and 65nm technologies, while being extensible to future technology nodes. Technology contributions from Cadence Design Systems, IBM, Magma Design Automation, Synopsys, and other companies are in support of these goals.
Tell us about the Altos products.
Since our inception we have built two products. The first one we call Liberate, which is a standard-cell and I/O library characterizer. It builds
What is the main differentiation of your product?
The main differentiation of our products is that we do a lot of things to make characterization go faster. Characterization was becoming a bottleneck with all the different views and models people were starting to require. It was becoming self-evident that it was so costly that people would start cutting corners and would not do certain things. Statistical timing will not become a reality unless models are readily available. That's how we can play a role and add value. The product is also very easy to use. A lot of characterization tools require the user to tell them exactly how to characterize each cell; there is a lot of manual intervention, a lot of setting up vectors and conditions. We automate all of that. We find the optimal set of vectors needed to fully characterize the cell, and we can filter out duplicate vectors that exercise the same path. Because we are automated, we have found that we do better than a lot of people do with a more manual approach, where things may be missing. With about 90% of the libraries we get from other people, we are able to pinpoint holes, areas they have missed.