November 03, 2008
Computational Lithography

by Jack Horgan - Contributing Editor
Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us!


Lithography tools have long since been pushed past the point where the minimum feature size of a circuit is smaller than the wavelength of light projected through the mask to create it. In other words, feature-size scaling has advanced faster than wavelength scaling.

Lithographers have come up with several ingenious techniques to address this challenge, and the challenge only grows as the industry moves to smaller process nodes.

On September 17th Mentor Graphics announced an agreement with IBM to jointly develop and distribute next-generation computational lithography (CL) software solutions to enhance the imaging capability of lithographic systems used in the manufacturing of integrated circuits at the 22nm node and beyond. The agreement is part of IBM's computational scaling initiative to create the industry's first computationally-based process for production of 22nm semiconductors.

I had an opportunity recently to discuss this subject with John Sturdevant, Mentor’s Director of RET Support.

Would you provide us with a brief biography?

I manage the technical support organization for our RET products, which means we work both with our leading-edge customers to align our product development roadmap with their RET needs and with our engineering groups to specify those new solutions. Our team is heavily involved with 32 nm and 22 nm development around the globe. I've been doing that for 5 years. My background prior to that was in lithography R&D, which is the background most of our team of 20 or so people share. It serves us well in this sort of endeavor.

How and when did you arrive at Mentor Graphics?

I started out at IBM for several years in their lithography R&D. I spent 7 or 8 years at Motorola doing the same thing. Then I was manager of lithography R&D at Integrated Device Technology for about 4 years and then came over to Mentor in 2003.

Before we get into the recently announced relationship with IBM, would you describe the general problem as people go from process node to process node as it relates to lithography?

Given my background I probably have a bias, as do many fellow lithographers, to see lithography at the center of the semiconductor development universe. It may be only partly true. We have grown accustomed to lithography being the gating item for each new technology node. It is certainly true when you look at the cost of development, particularly the cost of semiconductor equipment associated with lithography. Lithography has typically been gating for each technology node going back well beyond 180 nm. I started at 500 nm. Progress has typically been gated by the ability of the industry to get new resolution, either from a lower exposure wavelength or a higher numerical aperture, or some combination of those, or some other enabling technology to lower the K1. You can chart that all the way back to even the one micron days. The interesting thing now, as we look at 22nm development, is that it is the first time in the industry's history where we don't have access to a lower wavelength or a higher numerical aperture. Certainly great work continues to go on at 13.5nm for EUV (Extreme Ultraviolet, a politically correct way of saying x-ray lithography). But it is pretty clear that EUV will not be a manufacturable, cost effective solution that will be ready for 22nm. While the numerical aperture has increased up to 1.35 today, which is quite remarkable given what we thought 4 or 5 years ago, it is clear that the fundamental materials limits on the resist, the immersion fluid and the optics will cap it there. I believe that all three major scanner suppliers have officially jettisoned their plans for research into higher NA. That leaves us going from 32nm to 22nm with exactly the same wavelength and exactly the same NA. So it poses real challenges, and that serves as an entrée for IBM and Mentor to collaborate on new approaches to eke out incremental process window at 22nm.

You referenced K1. Would you explain what K1 is?

It is really a sort of fudge factor for the degree of difficulty of the patterning process. It relates to an old equation from Lord Rayleigh, which was actually related to astronomy and imaging stars with basic optics. It basically states that the minimum feature size that is achievable is equal to this fudge factor times the wavelength divided by the numerical aperture. If we consider the wavelength and the NA to be fixed, then to resolve smaller dimensions we have to lower that K1. Historically it has been a figure of merit, or of difficulty. In the old days, we would say that a process would only be manufacturable, could only yield in semiconductor manufacturing, if the K1 was above say 0.7. Well, many, many enablers throughout the value chain in lithography have now put it such that the industry believes we can achieve it even at a K1 of 0.3 or 0.35. The lower the number, the harder it is.
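
Editor: To put rough numbers on the Rayleigh relation (CD = K1 × wavelength / NA), here is a quick illustrative calculation. The K1 values below are simply the ones mentioned in the discussion; the arithmetic is this editor's, not Mentor's.

```python
# Rayleigh criterion: minimum printable dimension CD = K1 * wavelength / NA.
# Illustrative numbers for 193nm immersion lithography with NA = 1.35.
wavelength_nm = 193.0
na = 1.35

for k1 in (0.7, 0.35, 0.30):
    cd_nm = k1 * wavelength_nm / na
    print(f"K1 = {k1:.2f}  ->  minimum dimension ~ {cd_nm:.0f} nm")

# K1 = 0.70  ->  ~100 nm (the comfortable 'old days' regime)
# K1 = 0.35  ->  ~50 nm
# K1 = 0.30  ->  ~43 nm (aggressive RET territory)
```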

You said that EUV lithography would not be applicable at 22nm.

I believe that is the industry consensus. There was an EUV symposium last week where industry researchers and experts got together. There is an awful lot of money and momentum behind EUV research, but given some of the fundamental challenges with throughput (the number of wafers per hour an EUV scanner could do in manufacturing is down around 2 or 3), and given price tags of upwards of $80 million for an EUV scanner, I think the industry consensus is that 22 nm will have to be manufactured with the existing 193nm wavelength and that EUV will probably be delayed to the next generation, maybe 16nm.

Would you give us an overview of some of the RET techniques that have been in use for some time?

On the whole these RET technologies, which I will walk you through, have been things that effectively enable the fudge factor K1 to go down. One of the first ones used in the industry is something called off-axis illumination. It is just engineering the illumination source shape, in a pretty rudimentary way at first, so that instead of the light impinging upon the mask in a straight linear fashion, light is brought in off-axis. This goes all the way back to the 250 nm days. It was found that by doing this you could engineer the way light is diffracted off the mask pattern and collected by the imaging optics, and that you could get enhanced process latitude. Since then, that has become more and more complex. I will talk a bit later about the technology we are working on with IBM, which is a much further extension of the same off-axis illumination approach.

The second technique was phase shift masking (PSM). There are several different types or flavors of PSM. One that has been ubiquitous in the industry at least going back to 90 nm is attenuated PSM, which has become quite commonplace and mainstream for many layers. It is referred to as a sort of weak phase shifting. It is fairly easy to implement in manufacturing. There is a stronger version of PSM called alternating PSM. That is quite a bit more difficult, both on the mask production side and on the design data side. As such, alternating PSM has had spurts and stops by various research groups and a little bit of manufacturing implementation around the world. It has certainly not become mainstream.

Then there is the whole suite of optical proximity corrections (OPC), which have been around probably since 180nm. Certainly our Calibre RET product is involved with supporting all those things. One of the simplest approaches is something called the subresolution assist feature, commonly known as SRAF. This is a way to add shapes to the design that do not print on the wafer, yet engineer the diffracted pattern of light to improve the process window. This SRAF technique is very commonplace. OPC itself is correcting or biasing the mask, predistorting it so that what ends up on the wafer is what the designer wanted. That becomes more and more important as that K1 factor goes down. Overall, then, the main RET techniques have been off-axis illumination, phase shift masking and optical proximity correction (OPC).

With this array of hardware/software RET approaches, how does manufacturing ensure that what ends up on the wafer is what the designers intended?

Good question. Going back to 130nm and maybe a little bit of 90nm, that was a pretty big unknown, because we were doing things on the hardware side, on the mask side and on the scanner side, but we were, I wouldn't say flying blind, but using rudimentary techniques that were like design rule checks, originally and collectively called ORC, for optical rule check. Our software and other software enabled simple and crude checks of the edge placement error, the anticipated printed location of a feature versus what the target was. They were like DRC decks. They were pretty simple and crude. They would check some very basic things. Since about 3 or 4 years ago, the industry has evolved. We have a product, and competitors have products in this space, for complete simulation-based contour generation, meaning that we can now check not just isolated locations on the design, where we double-check what the printing is on the wafer versus the target, but an entire contour, and compare that to the target. We can get much more sophisticated and thorough in the way we interrogate the intersection of design and process variability. For instance, we can anticipate how the predicted contour compares to the design if process variables in the fab, like dose or focus, change, and we can highlight hot spots, or potential areas where the design might be weakened. In short, these full chip contour simulation checks have evolved and are available to enable exactly that: a check of the design target versus the predicted wafer image. In fact, when you compare the actual wafer images with the predicted images, it is pretty remarkable how consistent the two are with one another.
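
Editor: For readers unfamiliar with edge placement error (EPE) checking, the sketch below shows the basic idea in a few lines of Python. It is purely illustrative; the tolerance, sample data and function are hypothetical and are not Calibre's implementation, which works on full-chip simulated contours rather than hand-picked points.

```python
# Illustrative edge-placement-error (EPE) hot-spot check.
# In a real flow the edge positions come from full-chip contour simulation;
# here they are hand-typed sample points (nm) along one feature edge.
EPE_LIMIT_NM = 3.0  # hypothetical tolerance

def find_hotspots(target_edges, simulated_edges, limit=EPE_LIMIT_NM):
    """Return (index, error) pairs where |simulated - target| exceeds the limit."""
    hotspots = []
    for i, (t, s) in enumerate(zip(target_edges, simulated_edges)):
        error = s - t
        if abs(error) > limit:
            hotspots.append((i, error))
    return hotspots

target    = [45.0, 45.0, 45.0, 45.0]
nominal   = [45.5, 44.8, 45.2, 44.9]   # simulated at nominal dose/focus
perturbed = [46.1, 43.9, 48.7, 44.5]   # simulated at a perturbed process condition

print(find_hotspots(target, nominal))    # []  -- clean at nominal conditions
print(find_hotspots(target, perturbed))  # flags index 2 (EPE ~3.7 nm) off-nominal
```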

What is the design flow? If a design goes to these RET techniques, does it ever go back? Is there any feedback loop?

There are feedback loops on two levels. The furthest upstream feedback loop in design is something we call litho-friendly design. The same engine that can do full chip contour generation can be used early in the process, maybe when basic standard libraries are being developed after the initial ground rules. Designers can have access to a sort of black-box representation of the anticipated manufacturing model, and they can literally see at their desktops, if they are working with some design library, what the anticipated wafer print will be. They can look at anticipated hotspots and make changes to the polygons to enhance the process window. By doing this, the designers do not have to be litho experts. They can have, almost like standard DRC checks, LFD checks that will tell them whether the design may not be manufacturable very early in the process. Designers can make manipulations with an awareness of that anticipated process.

The other feedback loop is at the time of manufacturing, when a design tapes out. Assuming it is design rule clean and the libraries have already gone through LFD, you would anticipate very few problems for the fully assembled chip. After DRC, the OPC is applied and then this full chip contour generation is done. If there are any hotspots, the OPC team and the design team can look at what happened and see whether something in the OPC model or the OPC recipe has to be redone. This is another opportunity, before committing to generating masks, to correct those problems.

How much computational horsepower is required to carry out the RET techniques? A laptop, a compute farm, …?

The LFD application, while it can be run on a full chip, typically can be done on a desktop system a designer has access to, with 4, 8 or 16 CPUs, which is pretty standard. It is pretty standard to use Linux farms for low cost computing. For the manufacturing tape out in production, quite a bit more hardware is typically used. The standard industry benchmark is to be able to apply OPC to a given layer and do it overnight, or within 24 hours. That is the typical turnaround expectation. The compute complexity for model-based OPC, and for post-OPC simulation of contours to verify those requirements, has been going up and up with each generation. For the first model-based OPC generation at 130nm or 90nm, the number of CPUs was typically 8. If you look at 32nm now, which is just starting to go into production, the number of CPUs needed to maintain 24 hour turnaround time can approach 1,000 or more. In order to contain the cost and the runtime, which impact time-to-market, we have developed new technology that goes beyond just farming out the compute to conventional Linux-based computers, albeit 500 or 1,000 of them, to sending specific portions of the compute to highly efficient Cell processors. We had a press release earlier in the year. This was another project where we worked with IBM. The complexity of the compute challenge is soaring geometrically. We are trying to contain that both by software improvements and by judicious use of a sort of hybrid compute platform that combines conventional Linux-based systems and these Cell processors.
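
Editor: Purely as a back-of-the-envelope illustration of the turnaround arithmetic being described (the workload figures below are invented for the example and are not Mentor data):

```python
# Toy OPC turnaround estimate: if a layer needs W core-hours of simulation,
# turnaround = W / (number of CPUs * acceleration factor). Numbers are invented.
def turnaround_hours(core_hours, cpus, accel=1.0):
    return core_hours / (cpus * accel)

print(turnaround_hours(150, 8))           # ~19 h: an early model-based OPC layer on 8 CPUs
print(turnaround_hours(20_000, 1_000))    # ~20 h: a heavy 32nm layer on ~1,000 CPUs
print(turnaround_hours(20_000, 1_000, 4)) # ~5 h: the same layer with ~4x hybrid acceleration
```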

You said 24 hours per layer. How long for a full chip?

This Cell technology, which we introduced with IBM, was first adopted at 45nm. At 45nm there are typically 25 to 30 layers that would use model-based OPC. Most logic designs now will have 7 or 8 metal layers getting OPC. It is quite a few. At 32nm there are over 40 layers that require model-based OPC. Many of those layers, especially with the Cell acceleration, can run in a matter of a couple of hours: one, two, three or four hours. Typically metal 1 and poly are the most computationally intense layers. Those will push the 24 hour limit in some cases.

If there is a design change, do you have to start all over again?

There are a couple of different kinds of design changes. If there is an engineering change order that comes from the designers, sometimes that cell can be redone, re-OPC'ed and merged back into the overall design. We call this re-OPC, and that way you do not have to redo the entire chip. The same thing can happen if a problem is found in manufacturing when you do this full chip verification. If there is one location or one cell with a problem, the OPC on that location can be redone and then merged back into the overall design without having to redo the OPC everywhere.

Under what circumstances would one have to redo the entire chip?

That varies quite a bit. We have customers that will do it both ways. In some cases, customers will find, just for bookkeeping purposes, since turnaround time is less than 24 hours, that they have to keep it clean, and if there is any engineering change they will redo the whole OPC.

I am not sure I could give a strict guideline on when the customer should do it one way versus another way. I would say that the majority of the time to date it has been to redo the entire layer, if there is an engineering change.

Mentor recently announced a relationship with IBM. Is that a new relationship or a continuation of an existing relationship?

It is really a continuation of a long relationship we have had with IBM. We have been working with them on model-based OPC going all the way back to 130nm, so it is sort of a logical extension of that relationship. I also mentioned that last year our relationship took a new form with this Cell processor that IBM supports. So in many ways this announcement is a continuation of that relationship.

That’s the cell broadband engine (Cell/B.E.)?


Would you expand a little bit on that?

We looked at this compute challenge that was growing. In a typical OPC flow the full chip is simulated at multiple locations, and I mean billions of locations across the full chip, to predict where the printed pattern will be. Then we distort: we break up the layout polygons into much smaller fragments and start moving edges in or out, in a manner consistent with the predicted profile, according to what the simulation says. Then we go back and re-simulate. This process of iteration is done 4, 5, maybe up to 10 times in order to get the entire chip to converge, so that the final predicted wafer image and the target are within some tolerance. That is where you take billions of simulations and do that maybe 10 times. The thought was, let's find the most efficient compute platform to do that. So we looked at a wide variety of options. We looked at FPGAs and specialized GPUs, and quickly came to the conclusion that for the type of simulations that are done, which are typically fast Fourier transforms, the Cell processor was uniquely positioned to be the most cost effective compute platform. We worked with IBM and Mercury Computer Systems to port the Calibre simulation engine onto the Cell. The result is that our customers can reuse the existing farm of conventional Linux computers that they already have, typically hundreds or thousands of machines. By adding literally only a few dozen of these Cell broadband blades, the customer can mix and match a variety of combinations of existing conventional CPUs plus the Cell. That way they can continue to recoup the investment they have made in the Linux systems. We looked at using FPGAs and realized that that would require hardware completely dedicated to the OPC job. We did not think that was the most cost effective solution. With our solution, with the addition of a small number of Cell blades, users can still do their DRC, their xRC and their LFD, all the different things that are needed to support the design and OPC. They can use their existing investment in hardware plus a small incremental investment beyond that. By doing that, we see that runtimes decrease by a factor of about four in typical configurations, versus not using the Cell hardware.
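
Editor: The iterative loop described above (simulate, fragment, move edges, re-simulate until the predicted image converges on the target) can be caricatured in a few lines of Python. The one-dimensional "process model" below is invented purely to show the structure of the iteration; a real OPC engine evaluates billions of sites with FFT-based optical and resist models.

```python
# Highly simplified, one-dimensional caricature of model-based OPC iteration.
# Each entry is one fragment edge position (nm); the toy model predicts where
# the printed edge lands, and edges are nudged against the error until converged.

def toy_process_model(mask_edge_nm):
    # Stand-in for optical + resist simulation: a slight compression plus a bias.
    return 0.9 * mask_edge_nm + 8.0

def run_opc(target_edges, max_iterations=10, tolerance_nm=0.5, damping=0.7):
    mask_edges = list(target_edges)               # start from the design target
    for _ in range(max_iterations):
        worst_error = 0.0
        for i, target in enumerate(target_edges):
            printed = toy_process_model(mask_edges[i])
            error = printed - target              # edge placement error
            mask_edges[i] -= damping * error      # pre-distort the mask edge
            worst_error = max(worst_error, abs(error))
        if worst_error < tolerance_nm:            # converged within tolerance
            break
    return mask_edges

print([round(e, 1) for e in run_opc([45.0, 65.0, 90.0])])  # pre-distorted mask edges
```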

Does anyone other than IBM have a similar offering?

I didn't explain that very well. Our partner in delivering this solution is Mercury Computer Systems. IBM also manufactures the Cell processors themselves. So we have a third party in addition to IBM, namely Mercury Computer Systems. Several customers have adopted this technology in manufacturing.

Editor: Mercury provides a pre-integrated coprocessor acceleration (CPA) cluster comprising Cell/B.E. blades in an IBM BladeCenter H system. Customers simply connect the CPA cluster to their existing standard compute cluster using a standard Ethernet connection.

Would you expand on Mentor’s recent announcement regarding 22nm?

Going back to our discussion on lowering K1, and the fact that in going from 32nm to 22nm the industry does not have access to a lower wavelength or a higher NA, we are looking for ways to enable this generation of technology without new exposure hardware. What we looked at with IBM was how we could do much more sophisticated engineering in the illumination space. I mentioned that the first RET method used in the industry was off-axis illumination. In that mode the light source is shaped before it impinges on the reticle: for instance, instead of a flashlight that is completely on, we obscure the central region so we get an annulus, or ring of light, around the outside edge. That is a very simple example. We realized that we can do much more complex interrogations of the intersection of the design layout with the illuminator to engineer that. The result is much more sophisticated illumination sources. The joint product that we are developing with IBM and will bring to market takes a given design, together with a model of the process that will be used to pattern the wafers, and highly optimizes both the OPC pattern on the mask and the shape of the light in the illuminator, far beyond something like a simple annulus.

We have pictures of sample outputs. They are highly non-intuitive distributions of light in the illumination source. The net result is that you get simultaneous source and mask optimization on a per-design basis. That does a couple of things. We have already seen it deliver incrementally better process latitude in manufacturing for a given target design, for instance improved exposure latitude or depth of focus. In some cases at 32nm and 22nm, the industry has started looking at decomposition of layouts into two distinct mask patterns to better enable resolution, but that of course comes at a severe cost. We have seen examples at 22nm, in metal for instance, where a layout that without this technology would require two separate mask layers (say, one favoring features oriented in x and the other in y) can instead, by using source mask optimization to engineer both the mask and the source, be delivered with equivalent or better CD control on a single masking layer. That has huge cost implications.

Then there is a final benefit we have started to interrogate. For several generations designers have been required to work with certain design restrictions imposed by the patterning process, and these have become more and more severe; for instance, restrictive design rules that limit certain spacings or pitches between lines because the lithography process window is not sufficient. We have seen examples where, with SMO technology, those restrictions can be eliminated or narrowed in design space. So it gives designers access to more degrees of freedom in how they lay out the design.

Is the output in the form of instructions to lithography equipment?

Good question. It is exactly that. It is actually two things. Number one, it is a modified GDS that goes to the fracture tools that will write a mask pattern, consistent with how we do OPC today. The second thing is exactly as you describe: it is a sort of map of intensity versus x and y location that the scanner tool will deliver. So this technology is designed to work in conjunction with evolving capabilities that scanner providers have to deliver a highly pixelized intensity map to the reticle. It is highly sophisticated software that outputs the exact map the tool will deliver.

You said that there were three leading manufacturers of lithography equipment. Does Mentor support all three?

Yes. Our current and future software will work in conjunction with all three of these suppliers.

Who are these suppliers?

Nikon, Canon and ASML.

Does your software accept input from various design flows such as Cadence and Synopsys?

Our software can accept input from any of a variety of upstream design tools. They work both with the litho-friendly design offering that I mentioned and with the standard full chip flow. Any of these design tools is supported. All we need is a target design and the OPC model. The OPC tool merges that target with the model for the process and outputs the post-OPC GDS.

In what form does the user supply the target design?

Either in GDS or more commonly today in OASIS, a more efficient format.

Are there any foundry restrictions?

No, none whatsoever. In fact this enabling technology for 22nm is really targeted at all foundries and memory manufacturers.

Would the output of your product be any different, if the user knew the target foundry and lithographic equipment to be used?

No. The 193nm market is dominated by two of the three players. I do not know exactly where Canon is in terms of their market penetration. I do not know of any restrictions or any reason why the output of our software would not work fine with any of these suppliers.

It gets a little complex to look at the degrees of freedom in our output, given the way we are constructing our software today; I don't know how deeply to go into this. For a completely free-form illuminator, at least Nikon and ASML, we understand, will be delivering hardware modifications to equipment, or new models, that will fully utilize that. In the short term, and absent the ability to have a free-form illuminator, the diffractive optical elements used in equipment today can be engineered in such a way that you can still get a huge variety of source shapes. The net result is that we believe nothing will limit the ability of our instructions to be used by the scanner equipment for 22nm.

Concerning the relationship with IBM, are they providing the test cases and are they involved in testing or are they working on algorithms and software development or both? What is the division of labor?

Sure. This is unique in our relationship with IBM. It is truly a joint development effort. We have a combined team of mathematicians and software developers from both sides that are working together in our San Jose location, at IBM's Yorktown Heights T. J. Watson Research Center, and at their East Fishkill, New York development center. We have people going back and forth, working on the core algorithms. We have a team of really remarkable mathematicians that have developed some of the core optimization algorithms used in this technology. Beyond that we are certainly accessing their development environment to do just what you describe: develop test cases and early validation of results.

Is there a target time frame for releasing this as a product?

Our target for full manufacturing release is sometime in early 2010. We will have beta software available in June 2009, and our first internal alpha software is coming out in the next month or two. So: beta capability in June '09 and full production support in early 2010.

We want to emphasize that IBM is not just a beta site for our software. It is truly a joint development effort where both companies will put IP into the product.

Where do you see 22nm adoption today?

The timing we have for our software, we feel, is consistent with leading-edge customer needs. Certainly IBM is one of the first-tier, leading-edge customers. There is an increasingly small handful of others that are pushing the envelope on technology development. Right now a very early set of design rules is being formulated, and that will be revised over the next several months at leading foundries and at leading microprocessor manufacturers: IBM and other places. One thing that is very interesting is that typically lithography technology has gated next-generation process development R&D, and in the past it has always meant that you are waiting. You pay your money and get in line for the next generation of scanners from Nikon and Canon. Everyone is sort of waiting on that. Frequently select customers will send development wafers off to the scanner companies to get on a prototype tool, to get early exposure. The nice thing (I guess it is a nice thing) is that with 22 nm the equipment is fixed. Lithography R&D groups have access to the patterning equipment. In fact work has been going on for nine months or so on 22nm patterning development. It is still very early on. There is certainly no 22nm production. Our understanding is that 22nm production won't start until late 2010 or early 2011.

Where does the competition stand versus Mentor’s current and future products?

I don't really know. I have not heard much about competitive products in this realm. We believe that we have an early jump by virtue of our collaboration with IBM. Both IBM and we have been working independently on the core technology, in our case for at least three years and in IBM's case for longer than that. So we believe that we will be uniquely positioned through this joint development agreement. Beyond that I actually don't know. We will probably hear more in the coming months about what our competitors are thinking in this space.

Editor: On October 8th, Cadence announced the availability of software that optimizes custom lithographic source illumination, a new capability in its integrated source mask optimization (SMO) technology family for IC manufacturing at 22 nanometers and beyond. Cadence collaborated with Tessera Technologies, Inc. to incorporate the custom source illumination manufacturing awareness into its SMO software technology family. The new capability is integrated into the Cadence resolution enhancement technology (RET) flow for both single- and double-patterning lithography. The collaboration between Cadence and Tessera focuses on Tessera’s DigitalOptics technologies, which
provide conventional, gray-tone, and free-form litho source illumination. Effective source mask optimization requires that the full degrees of freedom and actual constraints of the illumination design are incorporated into the design algorithms. Incorporating these more advanced models into the Cadence SMO software provides powerful new capabilities to the entire user community.

The top articles over the last two weeks as determined by the number of readers were:

Cadence Board of Directors Creates Interim Office of the Chief Executive; Michael Fister Resigns  Cadence announced that its BOD has formed an Interim Office of the Chief Executive to oversee the day-to-day running of the company's operations, effective immediately. The Interim Office of the Chief Executive includes: John B. Shoven, Ph.D., Chairman of the Board of Directors of Cadence, who has been appointed to the position of Interim Executive Chairman, Lip-Bu Tan, a director of Cadence since 2004, who has been appointed Interim Vice Chairman of Cadence's Board, and Kevin S. Palatnik,
Senior Vice President and Chief Financial Officer. Charlie Huang, Senior Vice President - Business Development, has been named Chief of Staff of the Interim Office of the Chief Executive.

The formation of the Interim Office of the Chief Executive followed Michael Fister's resignation as President, Chief Executive Officer and a director of the company, by mutual agreement between Mr. Fister and the Board. 

Apache Design Solutions Achieves Record Sales for the Twenty-Third Consecutive Quarter The Q3 growth came from increasing investments by existing customers that represent the top tier semiconductor companies and adoption by new customers facing power and noise challenges as they move towards 45/32nm technologies.

Berkeley Design Automation Delivers Industry's First Fractional-N PLL Transistor-Level Noise Analysis Berkeley announced the industry's first closed-loop noise analysis of fractional-N phase-locked loops (PLLs) at the transistor level. Combining transient noise and periodic noise analysis in the company's Noise Analysis Option™ device noise analyzer, designers can now optimize and characterize all fractional-N and integer-N PLLs for phase noise and jitter prior to silicon fabrication. The result is improved performance, lower power, and faster time-to-market.

Gemini Unveils Industry's Fastest SPICE-Accurate Analog Simulation Technology Gemini Design Automation, a start-up company focused on the challenges of verifying complex analog and mixed-signal designs, unveiled the industry’s fastest SPICE-accurate simulation technology specifically developed to leverage the throughput advantages of multi-core computing. The company’s native multi-threaded technology has demonstrated run times and capacity of up to 30x that of earlier generation analog simulators, and up to 10x improvements over first-generation multi-threaded approaches.

The MathWorks Enables Deployment of Parallel MATLAB Applications and Extends Parallel Programming Language With this new release, MATLAB users can convert parallel MATLAB applications into executables or shared libraries and provide them to their own end-users royalty-free. This is possible by running applications developed with Parallel Computing Toolbox through MATLAB Compiler. The resulting executables and libraries can take advantage of additional computational power offered by MATLAB Distributed Computing Server running on a computer cluster. As a result, a broad class of professionals
who do not work with MATLAB directly are able to benefit from parallel MATLAB capabilities.

46th Design Automation Conference Names Executive Committee DAC announced the Executive Committee for the 46th DAC, which will take place July 26-31, 2009 at the Moscone Center in San Francisco. The Executive Committee is charged with overseeing the exhibition, planning the technical program, establishing new initiatives, and managing the conference’s operations and publicity. Andrew B. Kahng of the University of California at San Diego will serve as General Chair and lead the 46th Executive Committee.

Other EDA News

DS Reports 2008 Third Quarter Financial Results Well in Line With its Objectives (Revenue up 12%)

Moai Electronics Accelerates Flash Memory Controller Tapeout With Cadence Logic Synthesis and DFT Solutions

Reminder - ISQED'09 Extends Call for Papers Deadline

Silicon Image Reports Third Quarter 2008 Financial Results (Revenue up 11%)

Technology Luncheon and Panel Discussion to Focus on Low Power Design During GSA Semiconductor Leaders Forum Taiwan

Teradyne and Teseda Providing Time-to-Market and Yield Enhancement Solutions for the UltraFLEX and J750 Test Platforms

Avnera Standardizes On Magma Implementation Flow -- Cites Fast Design Completion

Cadence Encounter Test Helps Hitachi Improve Product Quality and Lower Manufacturing Test Cost

ASSET joins Synopsys in-Sync program to advance embedded instrumentation tools

Grace Semiconductor Adopts Cadence Virtuoso 6.1 PDK Development System

Accellera Announces Election of Officers for 2008/9

ISQED'09 Extends Call for Papers Deadline

Synopsys DFT MAX Compression Achieves Mainstream Usage at 90 Nanometers and Below

Movidia Selects Virage Logic's Intelli(TM) LPDDR Interface IP Solution to Meet Stringent Mobile Video Application Requirements

Mentor Graphics Improves Customer Experience, Satisfaction with Brightidea.com

STDF Fail Data Standardization Group Releases Standard Specifications for Public Evaluation

Calypto's PowerPro CG Cuts Power Consumption in Pixim's Latest Video Image Processor

Presto Engineering Introduces New Thermal Solution for Analysis of High Power Devices

Mentor Graphics Announces Nucleus Platform Media Player for Rapid Delivery of Multimedia Applications

46th Design Automation Conference to Feature IC Design Chain in Exhibit Floorplan

MunEDA Extends Licensing Agreement with Altera For WiCkeD

AWR and UMS Announce "Try the Power" GaAs MMIC Design Incentive Program

DFI Technical Group Releases Low Power Features with New DDR PHY Interface Specification Version 2.1

Teseda and Mentor Graphics Partner to Speed Defect Diagnosis

Arteris Delivers Major Productivity Features for Its Network-on-Chip Interconnect IP and Toolset

Web-Based Electronic Design Automation Tools for LUXEON Power LEDs Simplify and Speed Development of SSL General Lighting Solutions

Si2 Announces Sponsorship of 3-D Architectures for Semiconductor Integration and Packaging Conference

46th Design Automation Conference Names Executive Committee

SEMATECH Announces Speaker Line-Up for 3D IC Design and Test Workshop

Innovative Floorplanning EDA Company boosts sales and support in USA and Europe

The MathWorks Delivers Latest Versions of MATLAB and Simulink Product Families

EMA TimingDesigner 9.1 Adds SDC Support and Integration with Altera's Quartus II Software

Tensilica Presents "Everything You Wanted to Know About Video Processing."

Accellera Announces Call for Nominations for 2009 Annual Technical Excellence Award Honoring Contributions to Electronic Design Automation Standards

IEEE Council on EDA (CEDA) Hosts Opening ICCAD Reception, Talk on New Frontiers for EDA, Meeting to Promote EDA Blogging

Calypto Strengthens PowerPro CG with New Power Optimizations, VHDL Support

Artisan announces Early Availability Program for Artisan GSN Modeler

LogicVision Reports Third Quarter Financial Results

Cadence Helps Staccato Launch Ripcord2(TM) Single-Chip, Ultra-Wideband IC Family

Solido Design Automation Expands Support to Address Process Variation Challenges Facing the European Analog/Mixed-Signal Market

Gemini Unveils Industry's Fastest SPICE-Accurate Analog Simulation Technology

Virage Logic to Report Fourth Quarter and 2008 Fiscal Year Financial Results on Monday, November 3, 2008

Virtutech(R) Presents at the International Conference on Hardware-Software Codesign and System Synthesis

Other IP & SoC News

Amkor Reports Third Quarter 2008 Results (Revenue up 4%)

Techwell Reports Third Quarter 2008 Financial Results (Revenue up 22%)

Atmel Board of Directors Rejects Unsolicited Proposal From Microchip Technology and ON Semiconductor

MIPS Technologies Reports First Quarter Fiscal 2009 Financial Results (Revenue up 18%)

Pericom Semiconductor Reports Fiscal Q1 2009 Financial Results (Revenue up 14%)

Atmel Reports Third Quarter 2008 Financial Results (Revenue down 4%)

TranSwitch Corporation Announces the First Licensing of Its DisplayPort Transceiver IP by One of the World's Leading Semiconductor Companies

TI expands 1394 portfolio with industry's most secure content protection

SMIC Reports 2008 Third Quarter Results (Revenue up 10%)

Broadcom Chips Bring Full 1080p DivX(R) Video to Digital Televisions

Powerchip Semiconductor Settles Litigation With MOSAID

Mitrionics Announces Complete PCI Express Plug-In Processor

Altera's Stratix III FPGAs Enable Accverinos to Accelerate High-Performance ASIC Prototypes

Xilinx Announces Development Platform for Building Dual Processor Embedded Systems Using Virtex-5 FXT FPGAs

UMC Reports 2008 Third Quarter Results: Weak Global Economic Conditions Impact Business Performance (Revenue down 2%)

Amkor Announces Interim Ruling in Tessera Arbitration and Reports Favorable Results in Alcatel and Motorola Proceedings

STMicroelectronics Reports 2008 Third Quarter and Nine-Month Revenues and Earnings (Revenue up 11%)

HVVi Semiconductors Announces First HVVFET Power Transistors for DME Applications at European Microwave Week

Cypress Introduces CyFi(TM) Low-Power RF: World's Most Reliable 2.4-GHz Solution for Embedded Control Applications

UMC Announces Foundry Industry's First 28nm SRAMs

IDT Reports Fiscal Second Quarter 2009 Results (Revenue down 2%)

Rambus Reports Third Quarter Financial Results (Revenue down 30%)

QuickLogic Announces Third Quarter Fiscal 2008 Results - Operational Realignment Improves Financial Results (Revenue down 31%)

Microchip Technology Posts Record Net Sales for Fiscal Second Quarter 2009 (Revenue up 4%)

Monolithic Power Systems Announces Record Third Quarter Revenue and Net Income Results (Revenue up 22%)

Hittite Microwave Corporation Reports Financial Results for the Third Quarter of 2008 (Revenue up 14%)

MagnaChip Semiconductor Reports Third Quarter Results (Revenue down 12%)

Ikanos Communications Reports Results for Third Quarter 2008 (Revenue down 13%)

Leadis Technology Reports Third Quarter 2008 Results (Revenue down 52%)

Lattice Semiconductor Reports Third Quarter Financial Results (Revenue down 1%)

Actel Announces Third Quarter 2008 Financial Results

Toshiba to Launch 43nm SLC NAND Flash Memory

Diodes Incorporated Introduces Industry Smallest Bridge Rectifier for Power-Over-Ethernet Applications

WIN Announces Desktop Networking Platforms with Intel(R) EP80579 System-on-Chip (SoC)

Broadcom Completes Acquisition of Digital TV Business from AMD

MoSys (R) 1T-SRAM(R) Embedded Memory Technology Meets TSMC 90 Nanometer eDRAM Process Standards

Freescale Microcontrollers to Power GM's Next-Generation Electronic Engine Control Systems

STMicroelectronics Adds New Library for STM32 MCU, Opening Up New Options for DSP Application Developers

AnalogicTech's 4-Channel LED Drivers Reduce Component Count by 25%, Save Space in Entry-level Cell Phones

Micrel Reports Third Quarter Financial Results

Zoran Corporation Reports Third Quarter 2008 Results

Altera and Arrow Electronics Custom Logic Solutions Group Issue Invitation to Technical Workshop

TranSwitch Corporation Completes Acquisition of Centillium Communications, Inc.

DFI Technical Group Releases Low Power Features with New DDR PHY Interface Specification Version 2.1

Atmel and Adeneo Announce Windows Embedded CE 6.0 Training on AT91SAM ARM9 Embedded MPUs

Actel Breaks New Ground With nanoPower and nanoSize FPGAs

Atmel's picoPower 8/16-bit XMEGA MCU Cuts Power and Processing Requirement By 98%

Video: TI Introduces the Lowest Power 1.8-V Audio Codec With Integrated miniDSP in Production Today

ANADIGICS Announces First in New Family of Power Amplifiers Engineered Specifically for 3G and 4G Femtocell Markets

Sonics OCP Library for Verification Now Shipping

Cabot Microelectronics Corporation Reports Results for Fourth Quarter and Full Fiscal Year 2008

Innovative Integration Inc, and R-Interface announce a partnership to co-market & develop IP products & hardware for wireless applications

K-Will Corporation Licenses HiveFlex VSP 2200 Processor From Silicon Hive

ViASIC’s Place and Route Technology Chosen by Achronix Semiconductor

Imagination Technologies -- Further Licence Agreement Signed with Samsung Electronics

STMicroelectronics Expands (U)SIM Card IC Offering with Two New 32-bit ARM-Based Families

SiliconBlue Secures $24 Million in Series-B Funding

SMSC's MediaLB(R) Interface Implemented in Freescale's i.MX35 Applications Processors

NVIDIA GPUs Deliver 'Graphics Plus' Features to New ASUS Gaming PC

National Semiconductor Introduces Industry's First Differential Wideband Amplifier With Programmable Output Limiting Clamp

Tundra Semiconductor Continues RapidIO Leadership with Addition of Cost and Space Sensitive Switch

New Bluetooth(R) + FM Solution from Broadcom Delivers an Enhanced Audio Experience and Extended Battery Life

Achronix Completes First Close Of $52 Million Series B Preferred Stock Financing

Movidia secures funding to enable budding “Spielbergs on the move”

QLogic Reports Second Quarter Results for Fiscal Year 2009

Broadcom Reports Third Quarter 2008 Results

Altera's Stratix III FPGAs Deliver Advanced Computing Power and Performance in Quasonix's Latest Telemetry Products

NetLogic Microsystems Announces Production Shipments of the Industry's First Knowledge-based Processor Capable of Achieving 1.5 Billion Decisions per Second

IPextreme(R) Delivers Free ColdFire Processor for Altera Cyclone III FPGA

NXP Appoints Experienced Semiconductor Sales Leader

Diodes Incorporated Introduces Synchronous Controller Optimized for External Power Adapters

IDT Acquires Silicon Optix Assets and Technology

Atmel's Proven GPS Single-chip Receiver ICs with Highest Navigation Accuracy Now Also Automotive Qualified

ANADIGICS' New Fully Integrated 1GHz Digital Tuner Delivers Exceptional Performance for Space Sensitive Video Applications

Sigma Designs Leverages MIPS Technologies' IP Cores in New High-performance Media Processor

Atmel Launches Unique Single Development Platform Development Kit for Automotive Applications

ARM Profiler for Symbian OS: ARM and Symbian Collaborate to Enable Feature-Rich, Low-Power Advanced Mobile Applications

TI reduces car audio system cost by up to 50 percent with highly-efficient Class-D audio amplifiers

New High-Performance LED Drivers From Toshiba Produce Superior Illumination on Consumer Electronics Displays

-- Jack Horgan, EDACafe.com Contributing Editor.