October 17, 2005
The Case of the Missing Via: Sequence Design

by Jack Horgan - Contributing Editor
Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com.


On September 26 Sequence Design introduced CoolCheck for formal power-grid verification. The company claims that CoolCheck is able to detect errors that elude both static and dynamic voltage-drop analysis. CoolCheck employs a vector-independent approach to examine power grids early in the design cycle, verifying both electrical and physical connectivity.

I had an opportunity to discuss this with company CTO and VP of Advanced Development Jerry Frenkil. Jerry was one of the founders of Sente, Inc. Sequence was formed in June 2000 by the merger of Frequency Technology and Sente. While at Sente, Mr. Frenkil was the Vice President of Low Power Design, where he led the services and applications activities. Prior to co-founding Sente, Mr. Frenkil was an independent consultant focused on IC design.

Would you give me an overview of CoolCheck?

We have a new technology, a new product that we're calling CoolCheck. Basically, it is a tool for verifying the connectivity of your power grid. It's a power grid analyzer, but it does not operate in the power domain; it's not going to give you voltage drop. It's going to give you connectivity results: it verifies the connectivity of all the cells to the grid, and it does this without any simulation vectors. That's why we are calling this a formal approach. The whole idea is to catch errors that vector-dependent methods might fail to catch. In that regard it really complements existing techniques, from us and from other vendors, in both the static and dynamic analysis domains.

In CoolCheck we trace the path of every cell to the power grid and compute how well connected that cell is to the grid. Our output points out where the electrical connections are weak. Because we are doing this without any kind of vector-based power analysis, we can do it earlier: we can point out issues and errors to the power grid designer much more quickly in the design process than we could with conventional static or dynamic analysis.

The sorts of errors we point out to the designer are things like missing vias (probably the most common error) or circuitous connections to the grid. The latter might be a case where the cells are in fact connected, but the path is very long and is not the intended path to the grid. In an extreme case there is a complete power disconnect. These issues can be detected by static IR drop and dynamic voltage drop analyses, but those don't necessarily find them all. The reason is that those techniques are all vector driven one way or another. If your simulation vectors or your stimuli don't exercise a particular portion of the chip or a particular instance, and that instance is poorly connected, you will never see it. In our approach, instead of relying on external stimuli, we have a patent-pending technique that checks every cell in terms of how it is connected to the grid. That way we are assured we are not going to miss any.
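
To make the idea concrete, here is a minimal sketch of a vectorless connectivity check of the general kind Frenkil describes (Sequence's actual patent-pending method is not public). It models the grid as a resistor network and computes, for every cell, the lowest-resistance series path back to a power pad; cells whose best path is already weak get flagged. All node names, topology, and resistance values below are illustrative assumptions.

```python
import heapq

def best_path_resistance(grid, pads):
    """Dijkstra over a resistor network. `grid` maps node -> [(nbr, ohms)].
    Returns, for every node, the resistance of its lowest-resistance series
    path back to any power pad. This is only an upper bound on the true
    effective resistance (parallel paths lower it further), but it is
    enough to flag cells whose *best* path to the grid is already weak."""
    dist = {n: float("inf") for n in grid}
    heap = [(0.0, p) for p in pads]
    for p in pads:
        dist[p] = 0.0
    while heap:
        r, node = heapq.heappop(heap)
        if r > dist[node]:
            continue
        for nbr, ohms in grid[node]:
            if r + ohms < dist[nbr]:
                dist[nbr] = r + ohms
                heapq.heappush(heap, (dist[nbr], nbr))
    return dist

# Toy network: cellA taps the grid directly; cellB's via is missing, so its
# only path runs the long way through a strap (the "circuitous route" case).
grid = {
    "pad":   [("via1", 0.5)],
    "via1":  [("pad", 0.5), ("cellA", 2.0), ("strap", 1.0)],
    "cellA": [("via1", 2.0)],
    "strap": [("via1", 1.0), ("cellB", 120.0)],
    "cellB": [("strap", 120.0)],
}
r = best_path_resistance(grid, ["pad"])
print({n: v for n, v in r.items() if n.startswith("cell") and v > 50.0})
# {'cellB': 121.5} -> poorly connected, flag for review
```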

A visual example:

Here we are showing symbolically a couple of rows of logic. On the uppermost row there's a power bus on either side in purple. The light blue is the power bus running through the cells. On the upper row the power bus is tapped on both sides, while on the lower row it is tapped on only one side. The highlighted cells are both in fact connected to the grid, but the one on the lower row is poorly connected: its path to the grid has to go all the way through the logic cells to the power strap on the other side. This occurs from time to time because a via is missing, or occasionally there is a single via where there should be what is called a via farm, an array of vias. In that case the electrical connections are made, but not made very strongly. This is a good example of the particular category of errors that CoolCheck captures.

We can show the frequency of the problem, the statistics, in the form of a histogram. The vertical axis is the number of instances and the horizontal axis is the resistance: we are looking at how many instances are connected to the grid through a particular value of resistance. In a typical case the vast majority are well connected, with resistance to the grid below 50 ohms. As one would expect, there aren't too many beyond 50 ohms. However, we might see a few around 120 ohms. These are things one does not wish to see in a well-designed grid. They usually do not result from deliberate design strategies; they are escapes, some kind of problem: a missing strap, a missing via, that sort of thing. The far tail of the distribution is the area designers want to look at. First they want to find out whether there are any, and second, if there are, where they are, so they can go look at them.

Does the CoolCheck software produce such diagrams?

The software does not produce this chart directly. However, we produce tables of this data; we have all the data. We list it in an ASCII-style report. Users can script it up and search for things.
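
As a rough illustration of that "script it up" workflow, the snippet below bins a per-instance resistance report into a text histogram and lists the tail. The report format shown (one "instance ohms" pair per line) is an assumption for illustration, not CoolCheck's actual layout.

```python
from collections import Counter

# Hypothetical report: "<instance> <ohms-to-grid>" per line. The real
# CoolCheck report layout isn't public; this just shows the workflow.
REPORT = """\
u_core/u0_reg 12.4
u_core/u1_reg 9.8
u_io/buf_3 121.7
u_core/u2_reg 14.1
"""

BIN, TAIL = 25, 50.0  # ohms per bucket; tail threshold worth inspecting

rows = [(name, float(ohms))
        for name, ohms in (line.split() for line in REPORT.splitlines())]

hist = Counter(int(r // BIN) * BIN for _, r in rows)
for lo in sorted(hist):
    print(f"{lo:4d}-{lo + BIN - 1:4d} ohms | {'#' * hist[lo]}")

print("tail:", [(n, r) for n, r in rows if r > TAIL])
# tail: [('u_io/buf_3', 121.7)] -> the escapes designers go look at
```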

We do generate what we call a thermal map not meaning temperature but rather it is color coded to indicate those areas that are problems. Red areas are those that designers would want to look at and see if there are particular design problems with regard to how the cells are connected to the grid. This enables designers to actually pinpoint where potential issues may lie.

We took a particular chip and ran it through CoolTime, which is our dynamic voltage drop analysis tool, as well as through CoolCheck, and plotted the results of both. There were some differences in these plots. The plot from the voltage drop analysis looks pretty good: there are some warm areas, but the center of the chip looks fine. However, the CoolCheck results indicate that there are areas in the center of the chip that are not well connected. The reason they didn't show up in the dynamic analysis is that that area of the chip was not really stimulated in that particular analysis. This is an indication that while dynamic analysis is certainly an important thing to do, it does not necessarily catch all the problems. What CoolCheck is intended to do here is take a certain set of the problems, namely the strength of the connections to the electrical grid, and look at them exhaustively.

What are the inputs and outputs of CoolCheck?

Fairly conventional inputs: the major physical design databases, LEF/DEF, GDSII, a Verilog netlist, Synopsys .lib. The outputs are the thermal maps I have been describing and the detailed ASCII reports. The former highlight the current paths: you can go in and poke around in the display and find out what the main current path through the grid is for a given cell.

Where does CoolCheck fit in the design flow?

The idea is really to perform a formal grid verification before you move into static or dynamic voltage drop analysis. Today designers typically do either a static IR drop analysis alone, or first a static IR drop and then a dynamic voltage drop analysis; some rely on dynamic voltage drop analysis alone. The way we see these working together is that CoolCheck would precede both static and dynamic analysis. Once a design passes the CoolCheck run and the designer is comfortable with the results, the grid itself is well designed. Any questions that then arise in static or, more probably, dynamic voltage drop analysis pertain to dynamic effects and are power related rather than grid related. For example, you could have a well-designed grid but still get dynamic voltage drop errors because you have too much ground bounce, too many cells switching at once, or too much inductance from the package pins. In that case there is not a whole lot you can do in the design of the grid itself to fix it; you probably need to change the design or add decoupling capacitors in the right places.

We are trying to break the problem of power design and verification into different pieces. CoolCheck is intended to verify that the grid is electrically and resistively robust. We offer other techniques in dynamic voltage drop analysis and optimization, which are not part of this announcement. CoolCheck is really intended as a verification of the connectivity and the strength of the connectivity.

What is the packaging and pricing of CoolCheck?

$80,000 for a one-year license. It is typically served from a license server as a conventional license-manager license.

Product availability?

It's in beta test right now. It will come out shortly with the 2005.3 release, due later this month.

How many beta sites?

Three active beta sites, several more in line to use it.

Any customer willing to go on the record?

There is one cited in the announcement.

At the risk of asking the obvious, why is power an issue, and why a growing one?

We see as a company a set of power problems in two broad categories: one that we call power grid integrity and one that we call power management. The latter category is all about reducing power. Almost all of us have come face-to-face with this with our cell phones, laptops or PDAs; we want our batteries to last longer. We also run into it with tethered machines that generate too much heat. One of the big techniques designers have used is to lower the voltage, and what we've seen for several years now is that the supply voltage has been dropping. A number of years ago it was 5 volts, then 3.3 volts, and so forth. Now we have a lot of designs at 1 volt, some a little above and some a little below. When the voltage drops that low, the integrity of the power grid becomes a big concern.

That leads to the second class of problems we see, namely integrity problems. Power grid integrity has always been an issue, but it has become highly exacerbated. If you look, for example, at the big high-performance microprocessors from Intel, you can have 100 amps running around these things: 100 watts on a 1 volt power supply, that's 100 amps, and that's not even peak current. The sheer size of the current is one reason why you have to pay attention to these grids. The second reason is that when the supply voltage drops so low, your noise margins become very small indeed. It requires careful design and careful verification to show that the chip is manufacturable and that it's going to work across a variety of operating conditions.

These situations have caused the IC design industry and the EDA industry to pay a lot of attention to how power is calculated, what its effects are on certain circuit parameters and, in particular, what happens on the power grid and how that affects the rest of the chip. With this attention, designers need tools, techniques and methodologies. The first was really static IR drop, which is doing V = IR for all points on the power grid, where I is an average current. I think most everyone has known for a long time that this is an approximation, a fairly rough one, but it was the best we had for quite a while. Then we and our competitors came along with dynamic voltage drop. This catches more problems than static does because we are using dynamic waveforms for the current. Using static IR drop (V = IR), you can't tell what effect the packaging has on the voltage waveforms on your power grid, and you can't tell how effective decoupling capacitors will be. The dynamic voltage drop analysis techniques were developed to look at power grid issues in much more detail and to enable more sophisticated design and mitigation techniques.

With CoolCheck we are talking about a different look at this altogether. We are not trying to say that there is something wrong with dynamic voltage drop analysis, but being time based, or vector driven, whatever you want to call it, by definition it can't check all the connections. So we have come up with an adjunct capability in CoolCheck that checks all the connections in the power grid resistance space. We're trying to arm designers and verification engineers with a variety of tools to check the various issues they are concerned with. A few years ago there were only a couple of power issues that people worried about; now there are all sorts, and we're trying to help with each of the perspectives they are focused on.
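
The arithmetic behind those figures is worth restating. The 100 W / 1 V / 100 A case is from the interview; the other supply points below are illustrative, showing how the same 100 mV of IR drop eats a growing fraction of the supply as Vdd scales down:

```python
# P = V * I, so a 100 W chip on a 1 V rail draws 100 A (the figure cited
# above). The 10 W and 30 W rows are illustrative, not from the interview.
for vdd, watts in [(5.0, 10.0), (3.3, 30.0), (1.0, 100.0)]:
    amps = watts / vdd
    print(f"Vdd={vdd:.1f} V  P={watts:.0f} W  I={amps:.0f} A  "
          f"100 mV drop = {0.1 / vdd:.0%} of supply")
# Vdd=5.0 V  ... 100 mV drop = 2% of supply
# Vdd=1.0 V  ... 100 mV drop = 10% of supply  <- the noise-margin squeeze
```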

What are the typical solutions to the problems other than those that CoolCheck finds? Add decoupling capacitors, create voltage islands ..?

There are several different approaches. The most time-honored one is to make the grid bigger and fatter. In the past, most teams took the attitude that, being so concerned about these issues and not really knowing how to analyze them, they would simply be overly conservative and design a big, honking power grid so they wouldn't have to worry about it. As time has gone on, they began to realize that's problematic because it can take up too many wiring resources. In scaling the grid back, they have begun to worry about just how much voltage drop they have. When they find a spot with too much voltage drop, they typically go in and size up locally rather than everywhere, or add an additional strap in certain spots; it's an ad hoc approach. That works in the IR drop domain; it really attacks R. If they find problems like circuitous routes or missing vias, they fix them with specific techniques. A missing via is pretty straightforward: you simply go and add the via. The hard part is finding it. For a circuitous path, maybe they change the placement a little, or add some additional power straps.

Once the static IR drop has been verified to be reasonable, they begin looking at dynamic effects. Here there are a couple of techniques, and the choice depends on the issue at hand. If the issue is broad based, with dynamic peaks that are too large, they look to see if it's an inductance problem. If it's a package inductance problem, i.e. the L of the package interacting with the di/dt of the circuit, they will probably try to use a better package with less inductance. That tends to be fairly expensive; packages with better electrical characteristics can be quite costly. If they can't bear that cost, or the dynamic voltage drop effect is more local, they look to decoupling capacitors. In this case capacitors are added between the power supply and ground to serve as local charge reservoirs that smooth out the current peaks. Until recently a number of design teams would do what we call blind decap insertion, putting decaps in a prophylactic manner wherever they had open space, filler cells, or area that wasn't utilized, on the assumption that more decoupling capacitors can't hurt and can only help.

What is now occurring, due to some voltage drop optimization technology we recently introduced, is that we can put decoupling capacitors exactly where they are needed, so that we don't put in too many. We don't want too many, for a number of reasons. One is power and one is manufacturability. Most decoupling capacitors are built using gate oxide, basically transistors that are tied off so we are using the capacitance between the body and the gate. Beginning around 90 nm and into 65 nm, these capacitors leak; this is the so-called gate leakage. The more of these thin-gate-oxide capacitors you use, the more you leak, so if you are in a power-sensitive application, it behooves you to use the smallest number you need. The second issue is that gate oxides are defect sensitive. You don't want any defects in these areas, and the more of these you use, the more likely you are to have yield issues and manufacturability concerns. Designers today are certainly using decoupling capacitors, but they are trying to do it in a much smarter way: to use the smallest number possible and to place them exactly where they are needed. That's part of our technology offering.
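
A first-order way to see why count and placement matter: a decap acting as a local charge reservoir must supply the switching charge I·Δt while holding the local droop under ΔV, so C ≥ I·Δt/ΔV. A quick sketch with illustrative numbers (none of these values are from the interview):

```python
# First-order decap sizing: the local capacitor must deliver charge
# Q = I * dt during a switching burst while the rail droops no more
# than dV, hence C >= I * dt / dV. All numbers are illustrative.
i_burst = 0.5       # A, local switching current
dt      = 100e-12   # s, burst duration
dv      = 0.05      # V, allowed droop (5% of a 1 V rail)

c_min = i_burst * dt / dv
print(f"need >= {c_min * 1e12:.0f} pF of nearby decoupling")  # 1000 pF
```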

How does one determine where to insert decoupling capacitors?

In decap insertion there are two different approaches: low cell movement and high cell movement. The first is to insert the decaps into available space without moving any other cells. When we do this, we try to pick the best positions for the decoupling capacitors. Position is really important, because the further a decap is from the aggressor, the less effective it becomes. Putting in additional decaps far away really doesn't help; the blind insertion method I mentioned earlier is really not that good. That's the low cell movement approach.

The second method is to move cells around in order to create space to place the decap immediately adjacent to the aggressor. Because decaps are placement sensitive, putting one right next to the aggressor is not only the most effective position but also reduces the total number of decaps needed. Moving the logic cells around is no big deal; moving them around in a timing-closed fashion is another story. That's the high cell movement approach. When we move logic cells to create space for a decap, we check the timing graph; we have all this data stored in a single database within our tool. When we see an aggressor, we look at the placement data to understand which cells surround it and whether they are on the critical path. If so, we are not going to move them. If not, we understand to what extent we can move them, based on the timing slack on those cells, and we move them, but not so far that there is a timing problem. The result at the end of the day, using the high-effort cell movement, is that we will have inserted the minimum number of decaps needed to meet the user's voltage drop constraint, we will have moved some of the logic cells around, but we will not have broken timing. The user has timing closure and voltage drop closure.
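
In outline, that high-effort policy might look like the sketch below. Every object and method here (placement, timing, and all their calls) is a hypothetical stand-in for a placement/timing database; this illustrates the described policy, not Sequence's actual algorithm.

```python
def insert_decap_near(aggressor, placement, timing, decap_width):
    """Sketch of the slack-aware, high-effort insertion policy described
    above. `placement` and `timing` and all their methods are hypothetical
    stand-ins for a unified placement/timing database."""
    for cell in placement.neighbors(aggressor):
        if timing.on_critical_path(cell):
            continue                      # never disturb critical-path cells
        slack = timing.slack(cell)        # delay budget we may spend
        shift = placement.max_shift_within(cell, slack)
        if shift >= decap_width:          # enough room opens up next door
            placement.shift(cell, shift)  # move it; timing stays closed
            return placement.add_decap(next_to=aggressor, width=decap_width)
    # otherwise fall back to low-effort insertion in existing free space
    return placement.add_decap_in_free_space(near=aggressor)
```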

Are you aware of any competitors trying to do the same thing as CoolCheck?

No, I'm not. I think that this is the first of its kind.

Sequence has products in the static and dynamic analysis arena. What customer feedback led you to develop CoolCheck?

It was really the case of the missing via. Our CoolTime tool has been used to find problems, and a couple of questions would come up in its usage. A designer would use it and say, “I see that I have a voltage drop problem here, but what's really causing it?” Consider the voltage “thermal” map from dynamic analysis. If they see a red spot, they say, “I have a hot spot here; the voltage drop is a little larger than I might want. What's causing it? Is this a real problem, and if so, what do I do about it?” That was one question that would come up. The second was that designers would sometimes find issues like missing vias, go back to the dynamic voltage drop analysis results, and ask why they didn't show up there. We would go back through it and show them that the stimuli they used, and the way they used the tool, didn't exercise that part of the design. They could see that the cells with missing vias weren't stimulated; they weren't drawing any power, and hence the dynamic analysis was missing them. They said, “Your dynamic analysis is great. We like it, but we need some help finding these issues too.” With those two things we went off, scratched our heads (quite a bit, by the way), and had this cooking for a while. It took us some time to get good runtime performance, because it's fairly complex stuff. So to answer your question directly: customers had some problems that they needed help solving.

It has been said that if there were an infinite number of monkeys at typewriters, one would eventually type out the works of Shakespeare. Given enough vectors, would conventional simulation find all these problems?

Probably! Given enough vectors, they would find the problems that CoolCheck finds. But they would then be faced with another issue: determining whether it is a problem in the grid domain. If you think about the equation for voltage drop, it is V = IR + L(di/dt) + C(dV/dt). To root out a problem you have to look at all these factors. What CoolCheck enables you to do is pull one of them, R, out all by itself and look at it comprehensively. We think that is pretty significant.
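
Evaluated term by term with illustrative numbers, the decomposition makes his point: CoolCheck's exhaustive view covers the resistive term, leaving the inductive and capacitive terms to dynamic analysis.

```python
# Voltage drop contributors from V = IR + L(di/dt) + C(dV/dt), term by term.
# All values below are illustrative; CoolCheck's exhaustive check bounds R.
i, r     = 2.0, 0.05    # 2 A through 50 mOhm of grid resistance
l, di_dt = 0.5e-9, 1e9  # 0.5 nH of package inductance, 1 A/ns current swing

print(f"IR term      = {i * r:.2f} V")      # 0.10 V -- resistive (CoolCheck)
print(f"L*di/dt term = {l * di_dt:.2f} V")  # 0.50 V -- package/dynamic domain
# The C(dV/dt) term is the decap current that offsets these transients.
```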

You said it took a while to run. In the typical case, how long would that be?

A few hours. We worked on it quite a bit to get there.

Would a typical customer purchase one copy of CoolCheck, one copy for every designer, ..?

It depends on how many designers you've got running power grid analysis and doing power grid design. I think you would want one copy of this for each one of those guys.

Are there any applications which would be considered a sweet spot for CoolCheck?

I think these days it is pretty much universally applicable: anyone who is concerned about a power grid should be looking at this. At one extreme, look at the very high performance microprocessor consuming dozens of watts. Clearly you have some very large currents running around that processor, so you want to be very careful there. At the other extreme, the low-power devices, it's a different situation. There the supply voltage is dropped as low as it can go to minimize power. At the same time, these are often consumer-oriented devices, so the designers are going to be very, very cost sensitive. They are going to want to squeeze the power grid so that it doesn't use too much in the way of routing resources. The combination of these two things means that, while the currents won't be as large as in the high-performance microprocessor case, they are squeezing things in a different way, so they are going to have power grid concerns there too. The short answer, in my mind, is that if for whatever reason you are concerned about your power grid, you are going to want to use this kind of tool.

What other tools are available for power grid design?

Typically two other kinds of tools: you've got a router and analysis tools. The router will do the power grid routing for you, and then you analyze the result. Basically, today this is a trial-and-error exercise. Designers take their floorplan for the power grid and feed it to the router, and the router creates the grid. The designer then runs it through analysis tools, first static IR drop analysis and then dynamic, to see if it performs well. If it does, great, they are done. If not, which is usually the case initially, they begin to iterate. CoolCheck would be added to this mix, running before static IR drop analysis and again on each iteration.

What other areas is Sequence Design working on?

I mentioned earlier that there are power problems in two categories: power integrity, which focuses on the power grid and the delivery of power to the actual circuitry, and power management, which is focused on reducing power consumption. Pretty much everything we have talked about here has been in the power integrity area. That's part of what we do. We also focus a lot on power management, which is the analysis of power consumption and the reduction of power. We have done a lot of work, particularly on leakage reduction. Early this year we announced our tool CoolPower, which is used for power gating insertion and optimization. We will have more coming out on that.

Is there any other topic that my readers might be interested in?

Feature size. These power problems become exacerbated as we go forward with technology, for a couple of reasons. One is that the line width naturally becomes smaller; not only does it become narrower, it also tends to become thinner, because there is vertical scaling going on. That means the resistance of these routes goes up, which is just going to make it harder for power grid designers. Secondly, leakage current is going through the roof. I mentioned the gate leakage at 90 nm and 65 nm; that's just one of the components. There are several other leakage components growing dramatically.

What we are seeing is that it is becoming very difficult to pull out one single perspective or one parameter and fix that alone. Take decoupling capacitors: you can use gate-oxide capacitors, but they are going to leak like crazy, and even more so as you go to smaller line widths. You can avoid that by using metal-to-metal decoupling capacitors, but they are not as effective, and you need better automation in terms of placement and sizing. That potentially affects your initial placement and how well you can close timing. What we are seeing as line widths decrease is that the whole design process is becoming more complicated. There's no news there; we've all known that. What is eye-opening for some people is the extent to which the various power issues are becoming intimately intertwined with all the other issues. That's where we play: the confluence of power, timing and signal integrity. CoolCheck is a new capability in that regard. CoolTime voltage drop optimization draws on our strength in power, timing and SI, and rolls them all together to come up with something that other people just can't do.

The top five articles over the last two weeks as determined by the number of readers were:

Cadence Announces Best Overall Paper Presented at CDNLive! Silicon Valley Conference; Six Additional Papers Receive Special Mention for Excellence at Tech-Heavy User Conference The best overall conference presentation -- chosen by the Cadence Designer Network steering committee -- was "IBIS Generation and Validation Methodology Using Spectre MDL". More than 200 abstracts were submitted, and 83 were selected for presentation at the conference, which drew more than 550 attendees.

Matlab as a Development Environment for FPGA Design - Technical Paper The paper discusses an efficient design flow from Matlab to FPGA. Employing Matlab for algorithm research and as system level language allows efficient transition from algorithm development to implementation.

FLEXBUS: A High-Performance System-on-Chip Communication Architecture with a Dynamically Configurable Topology - Technical Paper The paper describes the FLEXBUS architecture in detail and presents techniques for its run-time configuration based on the characteristics of the on-chip communication traffic.

Cadence Announces Third Quarter 2005 Financial Results Webcast

Floorplan-Aware Automated Synthesis of Bus-based Communication Architectures - Technical Paper The paper describes an automated approach for synthesizing cost-effective, bus-based communication architectures that satisfy the performance constraints in a design. The synthesis flow also incorporates a high-level floorplanning and wire delay estimation engine to evaluate the feasibility of the synthesized bus architecture and detect timing violations early in the design flow. Case studies of network communication SoC subsystems are presented.

EDA Industry Reports Flat Revenue in 2nd Quarter of 2005 EDA Consortium's Market Statistics Service announced that EDA industry revenue for Q2 of 2005 was $1,091 million, versus $1,094 million in Q2 2004. Total product revenues, without services, were $1,028 million in Q2 of 2005 vs. $1,024 million in the same quarter of 2004.

Other EDA News

Accellera Co-Sponsors GSPx, Offers Session on How PSL and SystemVerilog Improve IP Delivery and Verification

Mentor Graphics to Release Q3 Financial Results on October 20

Kawasaki Microelectronics Adopts Apache's SoC Power Closure Design Flow

How Accurately Can We Model Timing In A Placement Engine? - Technical Paper

Asynchronous Circuits Transient Faults Sensitivity Evaluation - Technical Paper

Accellera Elects Shrenik Mehta as Chairman, Dennis Brophy as Vice-Chairman

Altium Releases Service Pack 3 for P-CAD® 2004

Matlab as a Development Environment for FPGA Design - Technical Paper

Phil Moorby Selected to Receive EDA Industry's Kaufman Award; Inventor of the Verilog Hardware Description Language Recognized as One of the Catalysts Behind Evolution & Growth of Electronic Design Automation Industry

Cadence Announces Best Overall Paper Presented at CDNLive! Silicon Valley Conference; Six Additional Papers Receive Special Mention for Excellence at Tech-Heavy User Conference

Marvell Adopts Synopsys' Galaxy Platform for High-Performance Communications Products

Other IP & SoC News

Samsung Electronics Announces 2005 Third-Quarter Earnings; Record Number of Handset Sales in the Third Quarter

EZchip and TeraChip Offer New Interoperable Processing and Switching Solution; New Solution Well Suited for Delivery of IP Video Services

The Demand for Increased Functionality, Improved Signal Integrity and Interconnect Density is Driving the Requirement for Flexible and Flex-Rigid Substrates

AMD Opens New 300mm Fab 36 in Dresden, Germany, Continuing Its Track Record of Flawless Manufacturing Strategy Execution

ISM4803 Technosoft Universal Drive for Distributed Motor Control

TI Announces Rail-to-Rail, 2A Operational Amplifier in Tiny Power Package

Marvell Introduces First WLAN Chipset Based on EWC Specification

Tundra TSI564A(TM) Serial RapidIO(R) Switch Now Sampling

picoChip Launches Most Complete Family of WiMAX Reference Designs, Including Industry First for 16e; New Designs are First to Cover Fixed WiMAX (802.16d), Mobile WiMAX (802.16e) and Korean WiBRO

Smaller, Slimmer, and Smarter - New Silicon Sensors from STMicroelectronics

Spansion Announces World's First Single-chip 1 Gigabit NOR Flash Memory; First Samples Based on 90-Nanometer MirrorBit(TM) Technology

Chips Complying with Bluetooth 2.0 + Enhanced Data Rate (EDR) Hit the Market in 2005

Extending Boundary-Scan to Measure Analog Voltages

AMD Reports Third Quarter Results; EPS of $0.18 Driven by All-Time Record Sales of $1.523 Billion

Lattice Semiconductor Confirms Revenue Guidance for Q3 2005

Cypress's New High-Speed USB 2.0 Microcontroller Has World's Smallest Package And Lowest Power For Mobile Phones, PMPs, PDAs and Webcams

SMC Ships First in New Line of Ultra-Efficient Single Chip Gigabit Ethernet Switches Powered by Agere Systems Ethernet Chips

MOSAID Acquires Virtual Silicon

Analog Devices Introduces Advanced Circuits for Industrial and Instrumentation Applications

SigmaTel Updates Guidance for Third Quarter 2005; Expects Revenue to be up 55% Year over Year

Adaptec Strengthens SATA Product Line With New SATA II 4-Port and 8-Port RAID Controllers for Cost-Effective, High-Performance Storage

Intel Ships Multi-Core Server Platforms

AccelChip's New IP-Explorer Technology Takes DSP Algorithm Optimization to New Heights

Inphi(R) Corporation's IN581AMB FBDIMM Features Industry's Smallest Size, Latency and Power Consumption; Allows OEMs to Deliver Higher Performance

Supermicro Delivers Industry's Most Complete Multi-Core Intel(R) Xeon(R) Processor Server Product Line

Cypress Introduces Next Generation of 2.4-GHz WirelessUSB(TM) Radio-on-a-Chip With Low Power Consumption

Alereon Introduces AL4000 WiMedia Ultrawideband Product Family; First Commercially Available 480Mbps Certified Wireless USB/WiMedia PHY

Xilinx Revises September Quarter Sales Guidance

Signal Integrity Puts Altera's Stratix GX FPGAs Into Ceterus Networks' New Cross Connect

Texas Instruments Strengthens its Wireless Infrastructure Portfolio and Stakes its Claim on the Emerging Pico Base Station Market

TI Demonstrates Ultra-Low Power MCU Advancements During 4th Annual MSP430 Advanced Technical Conference

UMC Reports Sales for September 2005



-- Jack Horgan, EDACafe.com Contributing Editor.