Jim McCanny, co-founder and CEO of Altos Design Automation, Inc., is one of the most outspoken voices on characterization technology and the trends coming down the chip-design pike.
I caught up with Jim to talk about where EDA is heading, how characterization technology plays into those trends, and the chip-design challenges that come with them.
……………
Ed: I was at an event, recently, where the premier investor in EDA startups cited Altos as one of his startups that did it right. Altos also got mentioned in Paul McLellan’s book, EDAgraffiti, as a company that did it right. What did Altos do that was “right?”
Jim: The things we did right? Well, I’d say that we focused on a real need – characterization run-time was too long to support the electrical analysis needs of 90nm and below. We used an experienced team and got a product to market quickly. And finally, we took only a relatively small amount of funding, relied mostly on organic growth, and kept control of the company.
This last item, I think, is the one that has resonated with private investors. It made us somewhat immune to the big economic downturn in early 2009, as we had always been operating in a very fiscally responsible way.
Ed: Good point.
Jim: Finally, while it was nice to be mentioned as a company that did it right, I don’t think we can be the “model” for every EDA startup. We did it right for the particular market we were going after and the current economy. Other target markets at another time might require a different approach.
Ed: I’m still fuzzy on what characterization is. Can you give me the 30-second elevator explanation?
Jim: I’d be glad to lessen some of the mystery, Ed. Characterization elevates the behavior of a group of related analog transistors to a higher level of abstraction that is fundamental to digital design. For example, a simple NAND gate typically has four unique analog transistors. Characterization enables that NAND gate to be modeled as a single cell with equivalent timing, power and noise characteristics, which is a 4X reduction in the size of the circuit to be analyzed.
Ed: So how big are we talking about?
Jim: For complex cells and blocks there can be hundreds or even thousands of transistors, and for memory instances there are often millions, so the abstraction dramatically reduces the number of distinct elements that the digital design tools have to work with. Without characterization, there would be no synthesis, place and route or static timing analysis. There would be no IP reuse, basically no SoC design flow.
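(To make the abstraction concrete, here is a minimal Python sketch of what a characterized cell looks like to the digital tools: a small table of delay numbers indexed by input slew and output load, interpolated at analysis time. The table values, axes and function names are invented for illustration and are not taken from any real library.)

```python
# Illustrative sketch (not Altos's implementation): a characterized cell is,
# conceptually, a small lookup table of timing numbers indexed by input slew
# and output load, instead of four (or four thousand) transistors.
# All numbers below are made up for illustration.

from bisect import bisect_right

# Hypothetical NAND2 rise-delay table (ns); rows = input slew (ns), cols = load (pF)
SLEW_AXIS = [0.01, 0.05, 0.20]
LOAD_AXIS = [0.001, 0.010, 0.050]
DELAY_TABLE = [
    [0.020, 0.045, 0.140],   # slew = 0.01 ns
    [0.028, 0.055, 0.155],   # slew = 0.05 ns
    [0.050, 0.080, 0.190],   # slew = 0.20 ns
]

def _bracket(axis, x):
    """Return the index pair bounding x on an axis (clamped at the edges)."""
    i = min(max(bisect_right(axis, x) - 1, 0), len(axis) - 2)
    return i, i + 1

def cell_delay(slew_ns, load_pf):
    """Bilinear interpolation of the characterized delay table."""
    i0, i1 = _bracket(SLEW_AXIS, slew_ns)
    j0, j1 = _bracket(LOAD_AXIS, load_pf)
    ts = (slew_ns - SLEW_AXIS[i0]) / (SLEW_AXIS[i1] - SLEW_AXIS[i0])
    tl = (load_pf - LOAD_AXIS[j0]) / (LOAD_AXIS[j1] - LOAD_AXIS[j0])
    d00, d01 = DELAY_TABLE[i0][j0], DELAY_TABLE[i0][j1]
    d10, d11 = DELAY_TABLE[i1][j0], DELAY_TABLE[i1][j1]
    top = d00 + (d01 - d00) * tl
    bot = d10 + (d11 - d10) * tl
    return top + (bot - top) * ts

print(f"NAND2 delay at slew=0.03 ns, load=0.02 pF: {cell_delay(0.03, 0.02):.3f} ns")
```

A static timing tool evaluates tables like this millions of times; the point of characterization is to fill them in once, accurately and quickly, from the transistor-level description.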
Ed: So characterization is obviously extremely significant to chip design. I recall that Altos started off five or six years ago touting the onset of statistical static timing analysis (SSTA) and how characterization would be a required element in SSTA-based design flows. Adoption hasn’t really been overwhelming, yet it appears that characterization helps with conventional static-timing-analysis-driven chip design as well as SSTA-driven chip design. What’s the difference in productivity and value that characterization brings to static timing analysis and to SSTA-based chip design?
Jim: SSTA is one of the areas that we saw as driving the need for faster characterization.
Ed: Now, can you remind me what SSTA is again?
Jim: Sure. SSTA is a methodology for predicting the impact of process variation on the performance of your design. It requires an accurate library that captures the effect of variation on timing (delay, slew, constraints, etc.). Creating accurate models in a reasonable time frame is a big challenge. For example, the most accurate method is to use Monte-Carlo simulation, but that would take thousands of times longer than “nominal” characterization (which itself can take days or even weeks). Clearly this “brute-force” approach wasn’t going to work if SSTA was to be feasible. We are able to create an SSTA library hundreds of times faster than using Monte Carlo, but still with great accuracy. Without this capability, SSTA would not get anywhere.
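(For readers who want a feel for the arithmetic, here is a toy Python sketch, not Altos’s method, of why Monte-Carlo characterization is so expensive and why a sensitivity-based alternative can be orders of magnitude cheaper. The spice_delay function stands in for a real circuit simulation, and every parameter and number in it is made up.)

```python
# A toy sketch of why Monte-Carlo library characterization is so expensive,
# and why a sensitivity-based approach can be orders of magnitude cheaper.
# "spice_delay" is a stand-in for a real circuit simulation; the parameters
# and numbers are invented for illustration only.

import random
import statistics

def spice_delay(dvth=0.0, dleff=0.0):
    """Pretend SPICE run: delay (ns) as a function of two process parameters."""
    return 0.100 * (1.0 + 3.0 * dvth + 1.5 * dleff) + 0.02 * dvth * dleff

SIGMA_VTH, SIGMA_LEFF = 0.02, 0.01   # assumed parameter sigmas (illustrative)

# Brute force: thousands of "simulations" per timing arc.
mc = [spice_delay(random.gauss(0, SIGMA_VTH), random.gauss(0, SIGMA_LEFF))
      for _ in range(5000)]
print(f"Monte Carlo : mean={statistics.mean(mc):.4f}  sigma={statistics.stdev(mc):.4f}  (5000 runs)")

# Sensitivity-based: one nominal run plus one perturbed run per parameter,
# then combine the (assumed independent) sensitivities analytically.
nominal = spice_delay()
s_vth  = spice_delay(dvth=SIGMA_VTH) - nominal
s_leff = spice_delay(dleff=SIGMA_LEFF) - nominal
sigma_est = (s_vth**2 + s_leff**2) ** 0.5
print(f"Sensitivity : mean={nominal:.4f}  sigma={sigma_est:.4f}  (3 runs)")
```

With only a handful of variation parameters per arc, the sensitivity route needs a few simulations where brute force needs thousands, which is the kind of gap Jim is describing.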
Ed: So is the push to smaller manufacturing process nodes a factor in the increasing use of SSTA?
Jim: Yes! We are now starting to see serious usage at 28nm. You actually bring up a good point. There are several methods for accounting for process variation, such as “corner” analysis or “advanced on-chip-variation” (AOCV). Both of these solutions require either more characterization or longer characterization run-time, so our “ultra-fast” characterization technology is still very relevant whether SSTA is used or not.
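(Below is a small, hypothetical illustration of the depth-based derating idea behind AOCV: short paths get a larger per-stage derate than deep paths, because random stage-to-stage variation partially averages out over many stages. The table format and derate values are assumptions for illustration, not any particular tool’s or foundry’s data.)

```python
# Illustrative sketch of depth-based derating in the spirit of AOCV.
# The derate values and the simple table format are assumptions for
# illustration, not any particular tool's or library's format.

from bisect import bisect_right

# Hypothetical late derate vs. logic depth (number of stages on the path)
AOCV_DEPTHS  = [1,    2,    4,    8,    16]
AOCV_DERATES = [1.12, 1.09, 1.06, 1.04, 1.03]

FLAT_OCV_DERATE = 1.12   # a single worst-case derate applied to every path

def aocv_derate(depth):
    """Pick the derate for the closest characterized depth at or below 'depth'."""
    i = min(max(bisect_right(AOCV_DEPTHS, depth) - 1, 0), len(AOCV_DEPTHS) - 1)
    return AOCV_DERATES[i]

def path_delay(stage_delays_ns, use_aocv=True):
    derate = aocv_derate(len(stage_delays_ns)) if use_aocv else FLAT_OCV_DERATE
    return sum(stage_delays_ns) * derate

deep_path = [0.05] * 12   # twelve stages of 50 ps each
print(f"flat OCV : {path_delay(deep_path, use_aocv=False):.3f} ns")
print(f"AOCV     : {path_delay(deep_path, use_aocv=True):.3f} ns  (less pessimism on deep paths)")
```

The extra characterization Jim mentions comes from having to populate tables like this for every cell and corner, which is where fast characterization pays off even without full SSTA.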
Ed: As we get down to finer processes, what problems will chip/SoC designers encounter?
Jim: For most of today’s designs, the key challenge is optimizing both power and timing. Variation can play havoc with this process, which is why SSTA is starting to get some traction. If you add too much margin, you can kill your power budget. However, if you don’t account for variation, you can have a dead part on your hands or suffer from low yield.
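(A back-of-the-envelope sketch of that margin trade-off: stacking worst-case margins on every stage of a path demands much more speed, and therefore power, than a statistical treatment of the same variation would. All figures below are invented for illustration.)

```python
# Worst-case margin stacking vs. a statistical treatment of the same variation.
# All numbers are invented for illustration.

import math

N_STAGES = 20
NOMINAL_STAGE_DELAY = 0.050    # ns
STAGE_SIGMA = 0.005            # ns, assumed independent stage-to-stage variation

nominal_path = N_STAGES * NOMINAL_STAGE_DELAY

# Worst case: add 3 sigma to every stage, as if they all varied the same way.
worst_case_path = N_STAGES * (NOMINAL_STAGE_DELAY + 3 * STAGE_SIGMA)

# Statistical: independent variations partially cancel, so the 3-sigma path
# delay grows only with the root-sum-square of the stage sigmas.
statistical_path = nominal_path + 3 * math.sqrt(N_STAGES) * STAGE_SIGMA

print(f"nominal path      : {nominal_path:.3f} ns")
print(f"worst-case margin : {worst_case_path:.3f} ns  (+{(worst_case_path/nominal_path - 1)*100:.0f}%)")
print(f"statistical margin: {statistical_path:.3f} ns  (+{(statistical_path/nominal_path - 1)*100:.0f}%)")
```

The gap between the two margins is the speed (and power) the design has to buy back when variation is handled by blanket worst-casing instead of statistically.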
Ed: What else will crop up?
Jim: Another key challenge is what to do with all the available silicon real estate. The most obvious thing is to integrate more and more components on-chip. To get to market quickly, this means using off-the-shelf IP. Making sure all the IP works together in a consistent way is tough. If you rely on pre-built models from the IP vendor, you may suffer from excessive guard-banding, or the models may simply not be up to date with the version of the process you are using. The best way around this is either to re-characterize everything to a single, well-defined set of characterization criteria or to run an independent validation of your IP before using it.
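(One simple form the “independent validation” Jim mentions could take is a consistency check on the operating conditions each IP library was characterized at, before mixing them on one chip. The sketch below uses hypothetical library metadata, not a real vendor format.)

```python
# A minimal sketch of one "independent validation" step before integrating IP:
# check that every library mixed on one chip was characterized at a consistent
# set of operating conditions. The records below are hypothetical metadata.

from dataclasses import dataclass

@dataclass
class LibCondition:
    name: str
    process: str        # e.g. "ss", "tt", "ff"
    voltage: float      # volts
    temperature: float  # degrees C

libraries = [
    LibCondition("stdcell_ss_0p81v_125c", "ss", 0.81, 125.0),
    LibCondition("sram_ss_0p81v_125c",    "ss", 0.81, 125.0),
    LibCondition("io_ss_0p90v_125c",      "ss", 0.90, 125.0),   # mismatched voltage
]

reference = libraries[0]
for lib in libraries[1:]:
    issues = []
    if lib.process != reference.process:
        issues.append(f"process {lib.process} != {reference.process}")
    if abs(lib.voltage - reference.voltage) > 1e-3:
        issues.append(f"voltage {lib.voltage}V != {reference.voltage}V")
    if abs(lib.temperature - reference.temperature) > 0.1:
        issues.append(f"temperature {lib.temperature}C != {reference.temperature}C")
    status = "OK" if not issues else "MISMATCH: " + "; ".join(issues)
    print(f"{lib.name}: {status}")
```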
Ed: IP quality is definitely a challenge. Harking back to that EDA investor, he seems to be saying that the valued technology will be in the front end, going forward. What’s your take and how does characterization play into that supposed trend?
Jim: There has always been value at both ends in EDA. Layout verification, layout editing, place & route, post-layout simulation and static timing analysis are all back-end solutions and major EDA markets. Sure, integrating systems and software has huge potential, but so does any solution that can make sure your chip will work in silicon or can improve its yield.
Ed: So what’s ahead for EDA? Is it a stagnant, mature industry, as so many people were saying a year or two ago? Or maturing but still vital in the semiconductor supply chain?
Jim: I don’t think it’s mature. There is simply too much churn in customer needs. Current tools are continuously being enhanced and new tools are always coming on the market. Just look at the SPICE simulation market. Three years ago, I think everyone would have said it’s stagnant. But look at all the new players and new capabilities that have come out in the last few years.
Ed: What do you see here?
Jim: There have been big improvements in performance, capacity, new models, new integrations into other solutions and innovative use of distributed processing.
Ed: So what is the technology development/adoption cycle for EDA?
Jim: I think EDA has cycles of about 8-10 years from the leading-edge adopters to trailing-edge users. There were a lot of new solutions around 2000 that have served the industry well for the past decade, but they are now aging. Obviously, sometimes the EDA industry gets ahead of itself and has to go through a few lean years, like we have just done. The danger is that when the industry needs new tools and solutions, they won’t be there. The past year and a half has been pretty brutal, and instead of investing in the future, many of the big EDA companies had to make cuts. Key areas such as analog automation, IP integration and verification, and system and software design still need a lot of work.
Ed: Keeping in mind that there could be a reduction in new tools, what technologies do you see rising above the others in terms of user need, value added and just plain necessary for, say, 28nm designs that are full of complex IP blocks, many of which don’t integrate easily with one another?
Jim: Tools that truly enable IP integration and verification. By verification I don’t mean “will the IP work stand-alone” but “will it work as desired in the integrated system,” e.g. at the voltage levels being used, at the process corners being used, with the expected amount of process variation, and so on.
Ed: And what issues will we see rise to crisis level in power? Timing? How will they get fixed?
Jim: Power is really a dynamic problem, but timing is usually analyzed statically. How do you model dynamic, temporal effects such as IR drop, crosstalk and substrate noise using static methods without gross “worst-casing”? In addition, noise effects can produce very analog-like waveforms that break the assumptions of today’s delay models, which assume a linear or piecewise-linear ramp. There is room for better timing models, smarter ways to statistically model the impact of dynamic effects like noise and IR drop, and possibly hybrid static-dynamic analysis tools.
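(A small numeric sketch of the waveform-shape point: a delay measurement keyed to a clean ramp can be thrown off by a crosstalk-like glitch mid-transition. The waveforms below are synthetic and purely illustrative.)

```python
# Synthetic illustration: the 50% crossing of a clean piecewise-linear edge
# vs. the same edge distorted by a crosstalk-like glitch. Purely illustrative.

import math

VDD = 1.0
T = [i * 0.001 for i in range(0, 501)]   # 0 .. 0.5 ns in 1 ps steps

def clean_ramp(t, t0=0.10, slew=0.10):
    """Piecewise-linear rising edge starting at t0 with a 0-to-VDD slew of 'slew' ns."""
    return min(max((t - t0) / slew, 0.0), 1.0) * VDD

def noisy_ramp(t):
    """Same edge with a crosstalk-like dip injected mid-transition."""
    glitch = -0.25 * math.exp(-((t - 0.16) / 0.02) ** 2)
    return min(max(clean_ramp(t) + glitch, 0.0), VDD)

def crossing(wave, level=0.5 * VDD):
    """Interpolated time of the last upward crossing of 'level' (not production code)."""
    t_cross = None
    for a, b in zip(T[:-1], T[1:]):
        va, vb = wave(a), wave(b)
        if va < level <= vb:
            t_cross = a + (level - va) / (vb - va) * (b - a)
    return t_cross

print(f"50% crossing, clean ramp : {crossing(clean_ramp):.3f} ns")
print(f"50% crossing, noisy ramp : {crossing(noisy_ramp):.3f} ns  (shift a ramp-based model would miss)")
```

A delay model that only knows the nominal slew of the edge has no way to see the extra tens of picoseconds the glitch adds, which is the gap Jim says better models or hybrid analysis would have to close.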
Ed: So what’s ahead for characterization technology? For Altos?
Jim: Our focus is on “enabling a world of IP.” By that, we mean that we want to make reuse of any form of IP highly productive, be it cells, complex I/Os, embedded memory or custom blocks. To do this, we are working on bringing the same kind of automation and performance we have brought to complex cell characterization to IP block characterization. We also see characterization as more than model creation; it is also a means to validate IP. A characterization tool tells you how a block will perform under a range of different conditions, but it doesn’t tell you whether it performs as expected or how much margin you have to deal with the “unexpected.” We are on a path to change that.
Ed: Seems promising! I look forward to hearing more on this front down the road. Thanks, Jim, for taking time out of your busy day to share your viewpoints on these topics.
– end –