Stealth Strategy - Apache Style

This is what we have been doing in the past. Early on, everybody was using statics, and statics was pretty gross. If you look at the published material, Simplex started in the 1996 timeframe, so it has been around going on 10 years now. It was a technique that at least let you see grossly, just looking at resistive effects of the grid and average currents, whether you had any major problems with the grid. It also took so long. If you went through verification, found a problem late in the cycle, and had to fix it, it had a major impact on time to market: you basically had to re-floorplan and redo some of the chip. Since Simplex was a verification solution, you ran it at the end, before you went to the fab. People didn't want to find a problem, so they tended to over-design the hell out of the grid to make sure that when they went to verification there was no issue. The problem with that is that it wastes a lot of resources. Whenever you use metal for power and ground, it eliminates those tracks from routing, which means you have a more difficult route to design. It obviously wastes area, which means that if you are in a very competitive consumer market, you will have difficulty. It can also impact performance. There are a lot of tradeoffs, but nobody had a better solution and it lived for a long time.

In the last few years you would not believe the number of customers that have come to us with chips that passed the latest flow, which at that point was still static. The chip goes through the final verification step, it passes on static, and it comes back. Either it doesn't work at all, or it doesn't work in scan mode (which seems like an easy problem but is really a worst case of the simultaneous-switching type of problem), or it has no yield and they can't figure out why. It turns out that it is because of the inaccurate analysis inherent in the static approach, where you are looking only at average currents and the resistance of the network. You cannot see some of the effects that are actually happening, especially in these more advanced processes. That's what Andrew saw ahead of time. We were able to predict accurately what was going to happen, i.e. that the issue would become a major point of concern for customers, and when we got there, people started coming to us. That's kind of where we are today.

The next kind of solution to arise was the iterative solution. We also played in this area. We focused on power integrity, looking at dynamic effects, the simultaneous-switching effects on the power. If you have multiple drivers on a bus on the chip, what is the impact of those multiple busses changing at the same time on the voltage drop? We call it DVD, dynamic voltage drop. How does that impact the timing, since the drive to some of these gates has changed? How do you analyze and determine exactly what the voltage looks like at that point, and what the wavefront is? Not just the voltage but what the wavefront looks like. How do you highlight the waveform's effect on the timing and delay of that gate? That's what we have been focused on. Other people have focused on signal integrity. Everybody has been analyzing the pieces separately. Now we are moving to this Sidewinder product, which analyzes the concurrent impact on the timing and operation of the chip. The chip doesn't really care which noise it is; it just knows it has noise from multiple sources, and what matters is the impact on the timing and operation of the chip based upon that noise. We are moving more and more towards this concurrent approach. The difficulty is that the complexity goes up, and you need more accuracy in these approaches, which changes the scope of the job. If you look at our particular market in dynamics, one of the things that happened was that the big guys, especially Cadence and Synopsys, which were basically splitting the static-analysis market between AstroRail and VoltageStorm, underestimated the task of taking it from static to actually looking at the dynamic characteristics.
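The link between supply droop and gate delay described above can be illustrated with a first-order model. This is only a sketch: the alpha-power-law delay model and all the numbers below are textbook illustrations, not Apache's actual delay calculation.

```python
def gate_delay(vdd, vth=0.3, alpha=1.3, k=1.0):
    """First-order (alpha-power-law style) gate delay: delay grows as the
    supply vdd droops toward the threshold vth. Parameters are illustrative."""
    return k * vdd / (vdd - vth) ** alpha

nominal = gate_delay(1.0)    # delay at the nominal 1.0 V supply
drooped = gate_delay(0.85)   # delay after a 150 mV dynamic voltage drop
# drooped > nominal: the same gate is slower on a drooped supply,
# which is why DVD feeds directly into timing analysis.
```

Under this model a 150 mV droop on a 1.0 V supply lengthens the gate delay noticeably, which is the mechanism by which dynamic voltage drop shows up as a timing problem rather than just a supply-quality number.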

Cadence and Synopsys were literally promising customers for 2 years, with a 6-month rolling forecast, that this capability would be there. They promised they would be there, and it has yet to happen with a viable product, which is why we pretty much own this particular market. The customers view this area as very critical, and the big guys haven't been able to perform because of its complexity. They treat statics as Ohm's law: V=IR, very simple; I for average current, R for resistance only. What we are solving is a dynamic waveform that looks at the instantaneous current effects. Since it is instantaneous, it has edges and transitions, and the di/dt works against the inductance on the chip and on the package. Also, decap is built into the chip intrinsically; just by the fact that you built the chip, there is a certain decap that is included. If you characterize the chip, you can get what the decap is for each cell, for the power grid, and everything else, and include those types of effects as well. It's just a tremendously different complexity level. Of the 5 other startups I have done, this is the most complex challenge we have ever undertaken. I think the other guys really underestimated the problem.
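The contrast between the two analyses can be written down directly: static checking is average current times resistance, while a dynamic estimate adds an inductive L·di/dt term driven by the current's edges. This is a minimal sketch with made-up component values, not the actual tool's solver, which works on full waveforms over the extracted grid.

```python
def static_ir_drop(i_avg, r_grid):
    """Static check: Ohm's law with average current and resistance only."""
    return i_avg * r_grid

def dynamic_drop(i_peak, r_grid, l_grid, di, dt):
    """Dynamic estimate: instantaneous (peak) current through the grid
    resistance plus an inductive L*di/dt term from the current edge."""
    return i_peak * r_grid + l_grid * (di / dt)

# Illustrative numbers (assumptions, not from the article):
r = 0.05       # ohms of effective grid resistance
l = 10e-12     # henries of package/grid inductance
v_static = static_ir_drop(0.5, r)                  # 0.5 A average current
v_dynamic = dynamic_drop(2.0, r, l, 1.5, 0.2e-9)   # 2 A peak, 1.5 A swing in 200 ps
# v_dynamic is several times v_static: the edge-driven di/dt term
# dominates, which a static average-current check never sees.
```

Even in this toy version the dynamic figure comes out several times larger than the static one, which is the qualitative point: averaging away the transitions hides exactly the effect that kills chips.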

One of the keys to analysis is looking at each and every level of characterization. You need to handle full chip. As you know, some of the designs for graphics vendors (e.g. ATI) and for communication networks get immense, up to 50M to 60M gates, if not more in some cases. It's just a tremendous complexity. There are full-chip issues: one area of the chip can affect another part of the chip, so you can't really separate it and look at different sections one at a time. It really needs to be done on a full-chip basis, and it needs to be done accurately. The way people have done full chip in the past is to extract and simplify. We had to do the same thing to some extent, but we are careful to then take the lower level and characterize it extremely carefully at the transistor level, so that when we abstract things away, we have a very accurate model of what the switching current looks like. We have the exact waveform for each instance, for each gate, based upon the load and the supply that it happens to see. That's how we try to maintain transistor-level accuracy even though we are working at cell-based capacity. The other problem people have on a full-chip basis is that they don't know exactly which vectors are going to create the worst-case dynamic voltage drop and therefore impact the timing the most. It's too complex a problem for an engineer to work through even a reasonably sized block, much less a full chip, and really establish that a given vector set is the worst case and is guaranteed to cover all the conditions.
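The characterization idea above, pre-computing each cell's switching-current waveform at the transistor level and then looking it up per instance by the load and supply it actually sees, can be sketched as a simple table lookup. The table keys, waveform samples, and nearest-neighbor selection below are all hypothetical illustrations; a real library characterizes many more points and interpolates.

```python
# Pre-characterized table (illustrative data): (output load in fF,
# supply in V) -> switching-current waveform in mA, one sample per 10 ps.
waveform_table = {
    (10, 1.0): [0.0, 1.2, 2.8, 1.5, 0.3, 0.0],
    (10, 0.9): [0.0, 1.0, 2.4, 1.3, 0.3, 0.0],
    (20, 1.0): [0.0, 0.9, 2.1, 1.9, 0.8, 0.1],
    (20, 0.9): [0.0, 0.8, 1.8, 1.7, 0.7, 0.1],
}

def instance_waveform(load_fF, vdd_V):
    """Pick the characterized waveform closest to the load and supply this
    instance actually sees (nearest neighbor; real flows interpolate)."""
    key = min(waveform_table,
              key=lambda k: abs(k[0] - load_fF) + 10 * abs(k[1] - vdd_V))
    return waveform_table[key]
```

Because the table entries were produced by careful transistor-level simulation, the full-chip run can stay at cell granularity for capacity while each instance still contributes a realistic current waveform, which is the accuracy-versus-capacity tradeoff the text describes.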

We had to develop an approach that first looks at the effects and creates statistically valid scenarios: based upon the timing information we have, which instances can even switch together at the same time? We carry that forward to get a realistic worst-case expectation without needing detailed vectors from the customer. If they have vectors, we can run them; sometimes this is a good way to correlate things. But in general, vectors do not give you a true worst-case picture. We cover that area as well.
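The scenario idea can be sketched with timing windows: each instance can only switch inside its timing window, so the heaviest group of instances whose windows overlap gives a realistic worst-case simultaneous-switching estimate without any customer vectors. The cell list, window width, and the simple sweep below are hypothetical illustrations, not Apache's algorithm.

```python
# (instance name, switching-window start in ps, end in ps, peak current in mA)
cells = [
    ("u1",   0, 120, 4.0),
    ("u2", 100, 220, 3.5),
    ("u3", 400, 520, 5.0),
    ("u4",  90, 150, 2.0),
]

def worst_case_window(cells, window=50):
    """Slide an alignment window across time; instances whose switching
    windows overlap it are assumed able to switch together. Return the
    window start with the largest combined peak current."""
    best_t, best_i = 0, 0.0
    times = sorted({t for _, s, e, _ in cells for t in (s, e)})
    for t in times:
        i = sum(ipk for _, s, e, ipk in cells
                if s <= t + window and e >= t)
        if i > best_i:
            best_t, best_i = t, i
    return best_t, best_i
```

On this toy data the worst case is around 90 ps, where u1, u2, and u4 can all switch together; u3 sits alone in time and never joins the peak. Real scenario generation additionally weighs the statistical likelihood of the combinations, as the text notes.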

We have also been focused on sign-off. We have actually become a sign-off requirement for dynamic at a number of different customers now. Different customers create different criteria for doing this, and it is interesting to watch. Some are timing-based: ultimately they do not care what the DVD is, as long as the timing of the chip works and it functions correctly. Some people put a limit on it. They use a 5% to 10% limit on static, and by necessity a much higher limit on dynamic drop, maybe 3X larger than static drop or more. We have learned a lot over time. That's the thing that I think keeps us ahead. The experience you get from doing 150 to 250 tapeouts, working with real customers on real projects, gives you a lead such that even when other guys come out with a solution, it is really hard for them to catch up with that experience. It has been interesting to watch.
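The limit-based sign-off criterion described above (a 5% to 10% static budget, with a dynamic budget roughly 3X that) reduces to a pair of threshold checks. The function and its default limits are an illustrative sketch of that style of criterion, not any specific customer's sign-off rule.

```python
def signoff(vdd, static_drop, dynamic_drop,
            static_limit=0.10, dynamic_mult=3.0):
    """Limit-style sign-off sketch: static drop must stay within a fraction
    of Vdd, dynamic drop within a multiple of that static budget.
    Limits are illustrative, per the article's 5-10% / ~3X figures."""
    ok_static = static_drop <= static_limit * vdd
    ok_dynamic = dynamic_drop <= dynamic_mult * static_limit * vdd
    return ok_static and ok_dynamic

# On a 1.0 V supply with a 10% static budget: 80 mV static drop passes,
# and dynamic drop is judged against a 300 mV budget.
```

The other sign-off style in the text is timing-based: instead of fixed drop budgets, the drooped waveforms are fed into delay calculation and the chip signs off if timing still closes, regardless of the raw DVD numbers.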
