TSMC's Reference Flow 7.0 and DFM Initiative
August 07, 2006

by Jack Horgan - Contributing Editor

Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us!


Introduction

On July 17th TSMC (Taiwan Semiconductor Manufacturing Company) introduced Reference Flow 7.0, which features powerful statistical static timing analysis (SSTA), a set of new power management techniques and an array of DFM enhancements. It also adds a Magma Design Automation implementation track to the existing Synopsys and Cadence design tracks for easy adoption of TSMC's 65nm process technology. On the same day TSMC announced that multiple Design Service Ecosystem partners have achieved DFM compliance for their 65nm tools.

I had a chance before DAC to interview Ed Wan, TSMC's Director of Design Services Marketing.

From a marketing perspective my team handles the reference flow and the DFM areas which are two of our announcements coming up at DAC. At every DAC we announce a new version of our reference flow. With respect to DFM we delivered on a promise that we made during our symposium in San Jose in May. At that time we said we were working with eight EDA companies in qualifying their DFM tools to make them TSMC DFM compliant. We said we would make them compliant by July.

Would you give us an overview of Reference Flow 7.0?
TSMC leads the industry in addressing the major design challenges in advanced technology. We have been messaging out for about a year and a half now that process technology leads EDA tools, not the other way round. Advancements in process technology necessitate the creation of new EDA tool capabilities. That's why we work with leading and promising upstart EDA companies, show them what new process techniques and technologies we are coming up with, and make sure that there are tools and automation techniques available for our customers to use to take advantage of these new process technologies. Reference Flow 7.0 focuses on three things: power management, DFM and statistical timing analysis. The first two topics are enhancements of what we have already done. In 6.0 last year we introduced power management and DFM techniques. The third item, statistical timing, is new.
The whole idea of the reference flow is that it is central to TSMC's design ecosystem, helping our customers lower their design barriers for advanced technology and increasing the adoption rate of our advanced technology. Reference Flow 7.0 has expanded this EDA ecosystem because we've added an additional track, a third implementation track, which is Magma. For several years now we have had a dual track system with Cadence and Synopsys. The whole idea of creating tracks is that we want to help customers preserve their EDA tool investment. Most of our customers have either Synopsys or Cadence tools in the majority of their flows. We do not want to drastically change tools. If they have Cadence, they should not have to switch to Synopsys tools. So we have developed a reference flow for those two major tracks for the last several years. Now our customers have told us that Magma has become very popular. This year we are announcing a third track, which is Magma. That's a major announcement. That's a lot of work on our part. We don't take that lightly. I do not want to comment on how others do their reference flow, but in order for us to add a third track, we have to fully validate that the tools in that track meet all of our requirements for the flow. It's a major investment for us to add a third track.

How does this version differ from previous versions?
In terms of design challenges facing our customers, this is the second generation reference flow addressing 65nm design. We started this last year with Reference Flow 6.0 and continue with 7.0. At 65nm there are divergent requirements. It is more difficult to do timing closure because margins are a lot tighter. Power management requirements are critical not only for the mobile segment but also for the stationary segment, because even though you plug a system into the wall, the extra power requires a fan or cooling system that adds to the cost. There is always an opportunity to enhance yield with some DFM technique.


In Reference Flow 7.0, under the topic of logic and system design, we have added statistical timing and low power synthesis. Under the area of physical implementation we have added techniques for cell insertion, power routing and DFM techniques for critical area analysis. Outside of this area we have another DFM technique called VCMP (Virtual Chemical Mechanical Polishing) and dummy metal insertion. The reason dummy insertion sits outside physical implementation is that it comes after full place and route activity. Timing, of course, appears several times in the design flow, and there are various feedback loops.
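
Editor: Dummy metal insertion exists to even out metal density so that CMP polishes uniformly. Below is a minimal sketch of the windowed density check that drives it; the window size and 25%-70% target range are my own illustrative numbers, not TSMC's 65nm rules.

# Windowed metal-density check behind dummy fill (illustrative only:
# the 100um window and 25%-70% range are not TSMC's actual rules).
def density_windows(metal_area, window=100.0, target=(0.25, 0.70)):
    # metal_area maps (x_tile, y_tile) -> drawn metal area in that tile
    lo, hi = target
    tile_area = window * window
    violations = []
    for tile, area in sorted(metal_area.items()):
        density = area / tile_area
        if density < lo:
            violations.append((tile, density, "add dummy fill"))
        elif density > hi:
            violations.append((tile, density, "slot or rebalance"))
    return violations
# Example: one sparse tile and one overly dense tile on the grid.
layout = {(0, 0): 1200.0, (0, 1): 7500.0, (1, 0): 3000.0}
for tile, d, action in density_windows(layout):
    print(f"tile {tile}: density {d:.2f} -> {action}")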

Would you expand on how Reference Flow 7.0 addresses low power?
I want to reiterate the point that designing a circuit for low power takes integration of technology on both the process and the design side. You cannot do it with process technology alone, nor with design technology alone. In areas where process technology has created a need for innovation, we have created innovations, for instance transistor architectures and new dielectric and gate oxide materials. I am not saying that at 65nm we have introduced anything new since last year. At 65nm we intend to lead with the lp, or low power, node in advance of the g, or generic, node, in contrast with all previous technologies, e.g. 130nm and 90nm, where we announced the g node first. This is due to our recognition that at 65nm our customer adoption rate has shifted from the high performance guys to the consumer guys. Design technology innovation includes areas of system design, low power libraries and IP, design flow and EDA tools to automate all of this. The new items in 7.0 in terms of low power techniques are multi-corner multi-mode timing closure and coarse grain MTCMOS.
The timing closure task is much more complicated now because you have to simulate and verify under a lot of conditions, each one creating its own separate corner. For example, for process you have three separate scenarios you need to work with: worst case, typical case and best case. For voltage it used to be best case, worst case and maybe nominal. But now with the low power techniques, you can have multiple voltage islands that could span 4, 6 or 7 different voltages. The number of corners has increased. For temperature there are four standard corners, sometimes three and sometimes more. In the extraction area we take into consideration the RC delay, which is Cmax and Cmin. The permutation of these four axes is a lot of corners. Add to that the different modes the chip can operate in: normal mode, low power mode, power transition mode. The separate modes can actually be viewed as additional corners. Taking these into account, there are even more permutations. The delay estimation and static timing analysis of all these is a big task. Designers have been taking care of this by writing their own scripts, trying to identify which corner is the worst case and ranking the order of corners to run the analysis. They try to make sure that they don't run into what is called the ping pong effect, where you fix one corner and create problems in another corner.
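
Editor: It is easy to see how the corner count explodes. A quick enumeration, using illustrative counts per axis rather than any specific TSMC sign-off recipe:

# Corner-count arithmetic for multi-corner multi-mode timing closure;
# per-axis counts are illustrative, not a TSMC sign-off recipe.
from itertools import product
process     = ["worst", "typical", "best"]
voltage     = ["0.8V", "0.9V", "1.0V", "1.1V"]       # voltage islands
temperature = ["-40C", "0C", "25C", "125C"]
extraction  = ["Cmax", "Cmin"]                        # RC corners
modes       = ["normal", "low_power", "power_transition"]
corners = list(product(process, voltage, temperature, extraction, modes))
print(len(corners), "analysis scenarios")   # 3*4*4*2*3 = 288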

We've worked with our EDA partners. We have come up with a methodology and integrated their tools into the flow. EDA tools have advanced to the point that you can take care of multi-corner multi-mode timing closure automatically with the tools, based on a specific methodology. For example, DVFS (dynamic voltage and frequency scaling) is a low power technique for which the flow lets us do timing closure across multiple RC corners and multiple operational and power modes.
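
Editor: DVFS pays off because dynamic power follows the familiar CMOS relation P ≈ αCV²f. A quick check with made-up operating points (not the ARM/TSMC test chip's actual ones) shows how halving frequency while scaling voltage down with it yields the kind of reduction quoted later in this article:

# Dynamic CMOS power scales roughly as P ~ alpha * C * V^2 * f.
# Operating points below are invented, not the ARM/TSMC test chip's.
def dynamic_power(v, f, c_eff=1.0, alpha=1.0):
    return alpha * c_eff * v * v * f
p_full   = dynamic_power(v=1.0, f=500e6)
p_scaled = dynamic_power(v=0.6, f=250e6)   # halve f, drop V with it
print(f"dynamic power reduction: {1 - p_scaled / p_full:.0%}")  # 82%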

The problem with multi-corner timing is due to voltage scaling. Designers can use low Vt transistors and high Vt transistors. In any path a designer can use any combination of the two, depending on the speed that they need. However, these two types of transistors scale differently as you scale the voltage. For the high Vt transistor the speed decreases faster with decreasing voltage compared to low Vt transistors. So we can see another reason why we have this ping pong effect. You tend to fix a critical path at 1 volt and get another problem at 0.8 volts. Having the ability to do simultaneous timing closure at multiple corners is very important.
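
Editor: A toy alpha-power-law delay model makes the crossover concrete. The Vt values, alpha and stage counts below are invented for illustration:

# Toy alpha-power-law delay model: high-Vt cells slow down faster as
# Vdd drops, so path criticality can flip between voltage corners.
def stage_delay(vdd, vt, alpha=1.3):
    return vdd / (vdd - vt) ** alpha          # arbitrary time units
def path_delay(vdd, stages):                  # stages: per-stage Vt list
    return sum(stage_delay(vdd, vt) for vt in stages)
path_a = [0.25] * 10   # 10 low-Vt stages
path_b = [0.45] * 6    # 6 high-Vt stages
for vdd in (1.0, 0.8):
    da, db = path_delay(vdd, path_a), path_delay(vdd, path_b)
    crit = "A" if da > db else "B"
    print(f"Vdd={vdd}V: A={da:.1f} B={db:.1f} critical={crit}")
# At 1.0V path A is critical; at 0.8V path B is -- the ping pong effect.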

The second low power technique we introduced in 7.0 is coarse grain MTCMOS. We talked about it last year but this is the first implementation in the reference flow. We implemented fine grain MTCMOS in 6.0. The ideas behind fine and coarse grain are similar but implemented slightly differently. The idea is to insert a power gating switching cell in series with a standard cell element so that it shuts off the current path when that portion of the logic is not being used or is in sleep mode, and so reduces the leakage current tremendously. The implementation difference is that in fine grain we insert a power gating switching cell into every standard cell. In coarse grain the power switching cell is tied in series to a block of standard cells. There are tradeoffs, pros and cons, with both approaches. The pro with fine grain is that it is easier to implement; there is one set of standard libraries. The disadvantage of fine grain is that it takes up more area. The advantage of coarse grain is that it takes up less area, sharing a single switching cell with a block of standard cells. It also provides better leakage control. The disadvantage is that it is a much more difficult methodology to figure out how big that switching cell is and how big a block one switching cell can take on. There are ground drop differences between standard cells on one edge of the block versus another edge. There are timing closure issues related to this. The methodologies of both have been worked out for 7.0 and implemented in the flow.
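
Editor: The sizing question Wan describes reduces, to first order, to keeping the virtual-ground bounce under budget given the block's peak current. A back-of-the-envelope check with invented numbers:

# First-order coarse-grain MTCMOS footer sizing: enough parallel
# switch fingers that the block's peak current keeps virtual-ground
# bounce under budget. All numbers are invented for illustration.
import math
def fingers_needed(peak_current_a, r_on_per_finger_ohm, max_bounce_v):
    # N parallel fingers divide the effective on-resistance by N.
    return math.ceil(peak_current_a * r_on_per_finger_ohm / max_bounce_v)
n = fingers_needed(peak_current_a=0.050,      # 50 mA peak block current
                   r_on_per_finger_ohm=20.0,  # Ron of a single finger
                   max_bounce_v=0.010)        # 10 mV ground-drop budget
print(n, "switch fingers")                    # -> 100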

Although power management is not a methodology, the library plays an important role as a low power enabler. We have introduced some new cells in the TSMC Nexsys library at 65nm, for example a new level shifter cell for multi-Vdd design. These are 50% smaller in area. They are more flexible in terms of place and route; we allow abutment on all sides of the level shifter. We have optimized switching cells for coarse grain MTCMOS, optimized for leakage and area tradeoffs. We guard against electromigration, because during wakeup a large current surge flows through the switching cell. We also minimize the ground drop across the switching cell.

We still have the existing standard cell library with low power features: footer cell for fine grain MTCMOS, multi-Vt, retention flip flops, isolation cells and back bias.

We've also announced a collaboration between TSMC and ARM on the industry's first 65nm low power test chip, based on the ARM926EJ-S. We had this at the time of the Symposium in May but decided to wait until DAC to announce it. The test chip encompasses DVFS, power gating and data retention. It was implemented on TSMC's 65nm lp technology. We achieved up to 80% dynamic power reduction from DVFS and 95% leakage reduction from light sleep.

What about DFM?
At the symposium in May we announced a dramatic enhancement of our previous strategy on DFM. As background, in 2005 we announced internal DFM services whereby we could take a customer's GDS database, run DFM analysis, identify hot spots for the customer to fix and make recommendations. At the Symposium in May we put DFM capabilities on customers' desktops. We also announced TSMC's DFM Compliance Initiative. There are 18 initial members. This encompasses not only EDA companies but also library companies, IP firms and design services companies, because in order to have fully DFM compliant design we need a more complete ecosystem. These 18 companies agreed with us that DFM is important and committed to work with us to make their portion of the ecosystem DFM compliant. For different players this means different things. For EDA companies, for example, it means getting their tools qualified and DFM compliant. For IP firms there are specific checks.

We announced 8 EDA companies that we are working with to qualify their DFM tools. Their tools fall into three major categories:
LPC - lithography process check
CMP - chemical mechanical polishing
CAA - critical area analysis
The larger companies have tools in all categories. The smaller firms have tools in one or two categories.
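
Editor: Of the three categories, critical area analysis is the most classical: it estimates how much of the layout is vulnerable to random defects and feeds a yield model. A minimal sketch using the common Poisson yield approximation, with invented critical areas and defect densities:

# Critical area analysis feeding the common Poisson yield model
# Y = exp(-sum(A_crit * D0)); areas and defect densities are invented.
import math
def caa_yield(layers):
    # layers: list of (critical_area_cm2, defect_density_per_cm2)
    return math.exp(-sum(a * d0 for a, d0 in layers))
layers = [
    (0.12, 0.4),   # metal shorts
    (0.08, 0.3),   # metal opens
    (0.05, 0.2),   # via/contact failures
]
print(f"random-defect yield: {caa_yield(layers):.1%}")   # ~92.1%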

Editor: The eight EDA firms are Anchor Semiconductor, Cadence, Clear Shape Technologies, Magma, Mentor, Ponte, Predictions Software, and Synopsys.

At the symposium we announced the DFM Data Kit, which encompasses a unified data format. For over a year we have worked with the 8 EDA companies on a unified data format that transmits TSMC DFM data in a secure way. It is a single file that can work with all three tool categories from all these vendors. This is a great productivity enhancement for us, for our EDA partners and also for our customers. They do not have to deal with different text files for different tools from different vendors. We call the DFM Unified Data Format DUF and the DFM Data Kit DDK.

How do these DFM tools fit into the reference flow?
We have added critical area analysis and fixes, VCMP and dummy insertion to the flow. You do not see LPC because there are two separate programs: tool qualification and the reference flow. They are separate but connected. In the DFM tool qualification we want to qualify more EDA vendor tools; we work with a lot of EDA vendors. In the reference flow we have three major tracks: Cadence, Synopsys and now Magma. Only the tools from these 3 vendors are included in the flow. We don't include DFM tools from other vendors in the flow. The reason is that we use the flow not to demonstrate how many tools we can qualify but to demonstrate to customers, with their existing tool investment, how to go from spec to tapeout on a proven path.

What about statistical timing analysis?
We see a paradigm shift in terms of timing closure. Statistical timing analysis is a probabilistic approach to analyzing the timing of a circuit by taking into account process variations modeled with statistical distributions. It is an intelligent way to do timing closure. It deviates from the current practice of worst-case timing analysis.

We are not just looking at worst-case corner delay. We are looking at the whole probability distribution. Given the distributions for two paths, you may find that the one with the lower worst case has a greater probability of negative slack, and hence of failure, so it should be the one to focus on. Otherwise you may encounter the ping pong effect. If you have a hundred such paths, the decision of which path to fix is considerably more difficult.
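
Editor: The point is worth a worked example. Treat each path delay as a Gaussian and compare the probability of missing the clock period rather than the 3-sigma worst case; the means and sigmas below are invented for illustration:

# SSTA in miniature: model each path delay as a Gaussian and compare
# the probability of missing the clock period, not just the 3-sigma
# worst case. Means and sigmas are invented for illustration.
import math
def fail_prob(mean, sigma, period):
    z = (period - mean) / sigma            # standard-normal tail point
    return 0.5 * math.erfc(z / math.sqrt(2))
T = 10.0                                   # clock period, ns
paths = {"A": (8.2, 0.5),                  # lower worst case, wide spread
         "B": (9.6, 0.1)}                  # higher worst case, tight spread
for name, (mu, sig) in paths.items():
    wc = mu + 3 * sig
    print(f"path {name}: worst case {wc:.1f} ns, "
          f"P(miss timing) = {fail_prob(mu, sig, T):.1e}")
# A's worst case (9.7 ns) beats B's (9.9 ns), yet A is ~5x more
# likely to miss timing -- exactly the case SSTA is meant to catch.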

TSMC's statistical timing solution is not just a tool; it involves a statistical SPICE model, libraries supporting statistical timing, statistical timing delay calculation and EDA enhancements for SSTA.

Which tools are included in the reference flow?
Synopsys Time-Vx
Magma Quartz-SSTA
Cadence SignalStorm NDC

All the major tracks have SSTA capability.

How were the initial 8 EDA firms chosen? Did you approach them? Did they approach you?
We get a lot of requests, more than these 8, to evaluate capabilities. Our intent is not to limit ourselves to 8. The 8 were our first group. We intend to look at the next group, which we have not announced yet. We had three major criteria when we selected the first group to start the qualifying procedure; they had to be promising in all three areas. The first one is accuracy. First and foremost they have to be correlated to our own internal results. The second one is runtime. It can't take forever. The third one is user friendliness, the user experience. These initial 8 firms passed muster. We spent upwards of 6 months doing the qualification. Prior to those 6 months we were evaluating and so forth, and then we actually did the qualification. This month (July) the qualification was completed and we announced that these tools have passed and are now DFM compliant.

Was there any consideration given to the financial stability or the size of these companies?
It is always good for a company to have longevity. We also looked at innovation. This is very key, as is forward thinking. These factored into the decision.

What was the qualification process? How did you work with the candidates?
For two years now we have had our own internal capability for running DFM checks. As I mentioned earlier, we have internal services that we have been providing to customers. The key point is that these tools have to be accurate; they have to correlate with our results and pass the test cases we exercise them through. We have guidelines in terms of what the thresholds are. We run those tests. This is fairly involved. It involves not only our design people but our product engineering people. We have a lot of manufacturing knowledge and expertise tied into the qualifying process.

The qualification process was run by TSMC internal people. Did you consult with the customers, the end users of these tools?
We do work with early engagement customers. We have done that. I don't think we have announced any yet. But throughout this selection and qualification process, we did involve some of our early collaboration customers in the DFM process. We want to spend our effort wisely. When we spend effort to qualify a tool, we want to make sure that there are users out there for it. It is not just a pure technical exercise. Hey, this is nifty, very accurate, but nobody wants to use it. So what!

Did you work closely or independently with the vendors during the qualification process?
We worked closely with them. We engaged them tightly. Many of them traveled to Taiwan to work with our folks.

Is the process of determining the reference flow any different?
The process is slightly different because in the DFM qualification the key is to correlate our results with these tools on test cases. The reference flow's goal is to provide a proven path for designing a circuit in the advanced technology node of 65nm. The requirements are different. For every tool there may not be a standard circuit to compare the results against. For example, for SSTA we do have a requirement to correlate with our internal simulation results. In areas like synthesis, which is an important part of the flow, we don't do any correlation because the tools are well known and accepted in the industry.

The reference flow is based on offerings from three vendors. If I am a small startup that has a point tool that is integrated with one or more of the major EDA tool flows, how do I get endorsed, qualified, whatever, by TSMC?
We do have smaller tool companies in our reference flow. There are some pieces of the methodology in the flow that the three major vendors do not have tools for. For example, as part of our flow the interaction between the die and the package is important. We need to do die-package pin connection checks to make sure the pin is connected to the pad correctly. We also need the capability to do dynamic timing analysis across the die-package boundary. The big EDA guys do not have those capabilities, so we pulled in a small EDA firm to fill that gap. Another example last year was that dynamic power analysis was not available. It is important for 65nm. We worked with Apache using a product called RedHawk. That was announced as part of our flow. Since then Synopsys has introduced their power analysis tool, so we included it in the flow. Apache is still there because we think that it is fairly good. That's how small companies get in and basically get endorsed by TSMC, by virtue of being in the reference flow.
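
Editor: A die-package pin connection check of the kind Wan mentions is conceptually simple: every bonded pad must exist, carry the same net on both sides, and no pad may be left dangling. A toy sketch (the data model is mine, not any vendor's):

# Toy die-package pin connection check: every bonded pad must exist,
# carry the same net on both sides, and no pad may be left dangling.
# The data model is invented for illustration.
die_pads = {"PAD_VDD": "VDD", "PAD_GND": "VSS", "PAD_TX0": "TX0"}
pkg_pins = {"A1": ("PAD_VDD", "VDD"),
            "A2": ("PAD_GND", "VSS"),
            "B1": ("PAD_TX0", "TX_OUT")}   # net-name mismatch on purpose
def check_die_package(die_pads, pkg_pins):
    errors, bonded = [], set()
    for pin, (pad, net) in pkg_pins.items():
        if pad not in die_pads:
            errors.append(f"{pin}: bonded to unknown pad {pad}")
        elif die_pads[pad] != net:
            errors.append(f"{pin}: package net {net} != die net {die_pads[pad]}")
        bonded.add(pad)
    errors += [f"unbonded die pad: {p}" for p in die_pads if p not in bonded]
    return errors
for e in check_die_package(die_pads, pkg_pins):
    print(e)   # -> B1: package net TX_OUT != die net TX0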

Does the reference flow call out specific tools at each point?
The short answer is yes. But I want to make a distinction. There is a design flow which is tool independent. In other words, we have to do dynamic timing analysis regardless of which tool we use. Then there are three tracks that tell you which particular tool fits into each particular piece of the flow. The reference flow is sort of a generic methodology in which these specific steps are taken. The tools fit that flow in one of the three tracks, including some point tools as well. It is worthwhile to add that you do not absolutely have to use any one of these three flows. It is just that these tools have been exercised and they do make up an entire methodology. The design community uses the majority of these tools anyway. Some of them do a mix and match. They do not have to use all of the tools in any one track.

Is there a document detailing the flow?
Absolutely! Our customers download them from our website. Each year at DAC we turn on the enable button. Customers start downloading application notes, scripts and the description of the flow.

Is this available to non-customers?
You have to be under NDA. The flow itself, the methodology, is available. The particular tools within the tracks are more for the customers.

I understood you to say that the goal of the reference flow is not to maximize the number of qualified tools but rather to show three parallel flows which, if followed, will result in successful yield.
First, I do not use the word yield. Second, we do use the word qualify when we talk about the reference flow. These tools are qualified. Consider two examples. In one case, SSTA, we do correlation passes to make sure these new methods are correlated. For some of the older methods like synthesis we do not need correlation. The methodology is the key. It is the methodology of how to take a design at 65nm from spec to tapeout. It addresses issues, design challenges at 65nm such as low power, DFM and statistical timing closure. The tools are the way we do it, but the methodology is the key. We provide the methodology and show that there are tools out there that fit the methodology. Doing this exercise is useful for us and the whole industry because we begin to see what the emerging critical issues are. Now it's statistical timing analysis, now it's DFM, and a generation ago it was low power. We are still addressing low power. We are still coming up with new techniques and strategies for low power. But every time we go down a new technology node, we are finding the need for new methodologies to be pulled together.

Other foundries with whom you compete have their own reference flows. At a high level, are their objectives the same as TSMC's?
We do not comment on what other companies do. We did spend a lot of time studying the effects of these new process innovations and how we can come up with new methodologies to help our customers take advantage of them. We do lead the industry in reference flow; I would say that. We know how much effort we put in. As you can see, we run test chips, such as the low power work in 7.0, and show the silicon results. We are in the process of defining 8.0. We work not only with big EDA companies but also with small ones. It takes time to identify the holes. We know we need to create a methodology, but if the tool doesn't exist in a major track we need to find other vendors who may have a tool to fill that methodology need.

You say you are the leader in reference flow. How do you conclude this?
Every year there is big anticipation at DAC. We have made it an almost yearly event that TSMC will announce the next generation reference flow and the new issues we are trying to address. Being a leader in the foundry industry in terms of advanced process technology, we are typically the first to see the new requirements that call for new methodologies.

Editor:
Perhaps the biggest beneficiary of TSMC's Reference Flow 7.0 is Magma, which was added as a third implementation track. Clearly Magma thinks so. During DAC I attended a “free” dinner hosted by Magma. Prior to the meal Magma and TSMC gave presentations. So much for the “free” dinner. During his presentation Rajeev Madhavan, Magma Chairman and CEO, said:
“We worked with TSMC for a long time. Achieving this Reference Flow for Magma is a major, major significant achievement. For the last few years we have had customers talk about that we are not in the TSMC reference flow and that you are not compliant with TSMC. Even though Magma has a lot of customers, this was a major stumbling block. ... I really want to thank TSMC for the excellent support they provided us in making this happen.”
TSMC is not the only foundry with reference flows. For example, from the UMC (United Microelectronics Corporation) website:
Given the deep sub-micron design challenges that circuit designers are facing, UMC Reference Design Flows provide customers with silicon-proven design methodologies that reduce time-to-market by enabling manufacturability. The UMC Reference Design Flows incorporate 3rd-party EDA vendor's baseline design flows to address issues such as timing closure, signal integrity, leakage power and design for manufacturability and adopt a hierarchical design approach built upon silicon validated process libraries. The UMC Reference Design Flows cover from schematic/RTL coding all the way to GDS-II generation and support Cadence, Magma, Mentor and Synopsys EDA tools. All of these tools have been correlated to UMC silicon and can be interchanged for added flexibility.
Chartered Semiconductor Manufacturing also has reference flows with Cadence, Synopsys and Magma. ARM has the ARM Implementation Reference Methodology.



The top articles over the last two weeks as determined by the number of readers were:

More than 11,000 Attend 43rd Design Automation Conference for Highest Turnout in Five Years Preliminary DAC attendance numbers broken out by category are: 3,231 registered conference attendees; 3,421 registered exhibit attendees and 4,700 total exhibitors, visitors and guests.

Agilent Technologies Signs Agreement to Acquire Xpedion Design Systems, a Leading Provider of RFIC Simulation and Verification Software Agilent Technologies and Xpedion Design Systems, Inc. announced they have signed a definitive agreement for Agilent to acquire Xpedion, a privately held company that provides software for wireless and high-speed digital circuit and systems design in the communications industry. The transaction is subject to standard closing conditions. Financial details were not disclosed. Xpedion's GoldenGate product enables RFIC designers to analyze their designs at the transistor level faster and more accurately.

Pinebush Technologies Expands EDA Product Lines Pinebush Technologies, Inc., a leader in printing and plotting software, has expanded its EDA product lines with the introduction of HyperStudio, a suite of viewing, transformation, printing, and plotting tools for the physical design engineer.

Cadence Reports Q2 Revenue Up 12% Over Q2 2005 Cadence reported second quarter 2006 revenue of $359 million, an increase of 12 percent over the $321 million reported for the same period in 2005. On a GAAP basis, Cadence recognized net income of $30 million, or $0.10 per share on a diluted basis, in the second quarter of 2006, compared to $0.5 million, or $0.00 per share on a diluted basis, in the same period in 2005.

Intel Unveils World's Best Processor; New Product Line Delivers Record Breaking Performance While Consuming Less Power Intel unveiled 10 Intel Core 2 Duo and Intel Core 2 Extreme processors for consumer and business desktop and laptop PCs and workstations. The desktop PC versions of the processors also provide up to a 40 percent increase in performance and are more than 40 percent more energy efficient versus Intel's previous best processor. The Intel Core 2 Duo processor family consists of five desktop PC processors tailored for business, home, and enthusiast users, such as high-end gamers, and five mobile PC processors designed to fit the needs of a mobile lifestyle.

STARC Adds Magma's Quartz SSTA Statistical Analysis and Optimization Solution to Variation-Aware 65- and 45-nm Design Methodology Magma announced that the Semiconductor Technology Academic Research Center (STARC) of Japan has chosen Magma's Quartz SSTA statistical static timing analysis and optimization software for its process-friendly design flow methodology project for 65- and 45-nm and smaller geometries. Quartz SSTA is part of the variation-aware design methodology that's currently under development and slated for early 2007 release.



Other EDA News

Agere Systems Adopts Sigrity's Unified Package Designer for Single and Multi-die Package Design

Intel, AMD, Wind River and Atmel to Showcase Latest Technologies at Global Sources' Embedded Systems, EDA & Test Conferences

MoSys, Inc. Reports Second Quarter 2006 Financial Results

ADVISORY: Synopsys Announces Earnings Release Date and Conference Call Information for the Third Quarter Fiscal Year 2006

Cre8Ventures Houses Start-up, Mirics Semiconductor, at Fleet Offices

Fujitsu Microelectronics Solutions Selects Mentor Graphics Catapult Synthesis Based on Quality of Results in Wireless Communication Applications

Synopsys Extends Liberty Modeling Standard to Enable Variation-Aware Design

Altera Granted Conditional NASDAQ Listing

Ansoft Releases ePhysics v2

Faraday Collaborates With Novas to Accelerate Debugging of Designs Containing Memory IPs

Mentor Graphics Delivers First Hardware/Software Processor Support Package for the ARM Cortex-M3 Processor

Amalfi Semiconductor Turns to Cadence Kit to Speed Up Product Development for Cell Phones

Siemens Expands Its Use of Cadence Incisive Formal Verifier to Improve Time to Market


Other IP & SoC News

ANADIGICS Announces Production Shipments of Dual-Band Power Amplifiers for LG Electronics EV-DO Chocolate Phone

AMCC to Acquire Quake Technologies

TI Introduces Complete Current-Shunt Monitor with Integrated Comparators and Voltage Reference

RF Transmitter from Semtech Offers Low System Cost Implementation for Wireless Security, Home Automation Applications

Atmel Adopts Configurable ARC(TM) Video Subsystem for Multimedia Handheld Devices

EMCORE Corporation, Group4 Labs, and Air Force Research Laboratory Announce World's First GaN-on-Diamond Transistor

MOSAID Brings Motion to Dismiss Micron Complaint

Texas Instruments Releases to Production PCI Express x1 Physical Layer Device

Avago Technologies Announces New Gate Drive Optocouplers in Stretched Small Outline Packages for AC and DC Brushless Motors

Fujitsu and Tokyo Institute of Technology Announce the Development of New Material for 256Mbit FeRAM Using 65-nanometer Technology

Toshiba Develops LBA-NAND(TM) Flash Memory With Logical Block Addressing for Embedded Applications

TI's Receive Signal Chain Demonstration Kit Simplifies Ultrasound Designs

MagnaChip Launches New TFT One Chip Solution for Mobile Phones and MP3 Players

Leadis Technology Announces 262K QQVGA Color STN Driver IC

Tundra Semiconductor Host Bridge for PowerPC(R) Selected for Industry's First PrAMC Module for High-Performance Telecom Applications

Wisair Introduces Multi Mode Chip Based on Certified Wireless USB

MoSys Launches 65nm Macro Program

Micrel Rolls out Latest in Family of 2-Port Ethernet Switch Solutions for Networking and Industrial Applications

UMC Reports 2006 Second Quarter Results

ARC International plc Announces Unaudited Preliminary Results for the Six Months Ended 30 June 2006

WJ Communications, Inc. Reports Second Quarter 2006 Financial Results

Tessera Technologies Announces Second Quarter 2006 Financial Results

NEC Electronics and NEC Unveil Innovative System-in-Package Technology

IBM Delivers Breakthrough "Energy-Smart" Business Computing Systems With AMD Opteron Processors to Extend IBM's Advantage in the Worldwide Server Industry

Integration Announces Silicon DAA Chipset With Analog Interface for IP Telephony, Modem and Line Monitoring Applications

Atmel Completes Sale of Grenoble, France Subsidiary

ModViz Announces Support for New NVIDIA Quadro Plex VCS

FSA Adds Three New Directors to Its Board of Directors

Avago Technologies Introduces Long-Life, High-Brightness Surface Mount White LED for Automotive, Electronic Sign, Signal Applications

MOSAID Creates Chief Technology Officer Role and Appoints Silicon Valley Veteran to Head Semiconductor IP Group

Semtech Debuts High-Speed Pin Electronics Driver for Next-Generation Memory Testers

Marvell Breaks Ground for New Regional Headquarters in Singapore

AMI Semiconductor, Inc. to Acquire Select Businesses of NanoAmp Solutions

AnalogicTech Reports Record Revenues for the Second Quarter 2006

TechnoConcepts Appoints Chun Lee as VP of ASIC Marketing

Actel Broadens Support for CoreMP7-Based Designs With New SoftConsole Tool

National Semiconductor's PWM Controller Utilizes Unique, Low-Noise, Emulated-Current-Mode Architecture for Design Flexibility

RFMD(R) Extends Leadership in EDGE Power Amplifiers with Mass Production Orders from Samsung

Zilker Labs Introduces Industry's Most Integrated 3 Amp Power Management and Conversion IC

California Micro Devices Reports June Quarter Financial Results

AMIS Holdings, Inc. Reports Second Quarter 2006 Financial Results

ON Semiconductor Reports Second Quarter 2006 Results

Intel Unveils World's Best Processor




-- Jack Horgan, EDACafe.com Contributing Editor.