What Will My Chip Cost? - August 08, 2005

by Jack Horgan - Contributing Editor
Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us! Questions? Feedback? Click here. Thank you!


Introduction

“What will my chip cost?” is a question whose answer should be of great interest. An accurate estimate early in the design flow is quite valuable, when changes are still easily made and before considerable time, money and resources have been invested. Giga Scale IC is a firm that offers a tool called InCyte that attempts to answer this and related questions. Moreover, a version of this product is available for free download from the web.

I had an opportunity to interview Adam Traidman, President and VP of Business Development at Giga Scale IC. Prior to joining Giga Scale, he ran North America West sales at Hier Design. Previously, Traidman served in various management and technical roles at Adaptec, Monterey Design Systems, Texas Instruments, and the NASA Jet Propulsion Laboratory.

What attracted you to Giga Scale?
It was really the uniqueness of the technology. This company happens to be funded by the same lead venture capital firm that funded the last company that I worked for, Hier Design, which was acquired by Xilinx in June of last year. But what attracted me to Giga Scale, irrespective of that other relationship, was the unique technology. Having a background in ASIC design, specifically ASIC physical design, I was well aware of the frustration that the typical designer and design manager goes through when they are trying to accomplish a number of things that InCyte helps with. First of all, the main thing is the early estimation of die size and power consumption. Very early in the design flow it is often very difficult to quantify those things and to use that information to your advantage to help choose between different process variants, different IP libraries, and different IP options. It was something I personally struggled with for months in wrestling with all the technical and economic tradeoffs between different options. When I saw that the tool was in this solution space, a tool that could help you make these decisions accurately very early in the flow, it appealed to me greatly from a technology perspective. In terms of market perspective, the InCyte product is also very unique. There really hasn't been any mainstream EDA company in this space, what we call the early architectural exploration and estimation space. It just seems like an area underserved by EDA so far. So it was a unique product, which I happened to believe in based upon my technical background, and in the context of the market it was a unique space. It felt like a good opportunity.

On the company website George Janack is listed as the founder and executive chairman. You are the president. Who is in charge of day-to-day operations?
I'm in charge of operations. George is also the CEO at Silicon Navigator. I fill that responsibility here.

Any challenges with the founder still actively in the picture?
It's a sign of a strong leader when you realize that you've grown a company or a concept to the point where, from a day-to-day perspective, it can be run by someone who can execute on the vision you initially laid out. There are all sorts of entrepreneurs. There are the technical visionaries, the technologists who come up with fantastic new product ideas, like to found companies and move on to the next one. There are those who take it all the way to exiting or going public. George fits somewhere in between. He is a fantastic technical visionary, literally a genius. I can honestly say that he is a brilliant engineer. His passion is the technology, and the capacity in which he serves at Giga Scale is precisely where he can be the technical and business visionary. He is the sounding board for ideas, a strategist and a sort of high-level system architect. One of his strengths is that he recognizes when the technology has been developed to the point where it makes more sense to bring in a team of people around him to execute the initial vision he laid out, and then do what he does best, which is to focus on long-term vision and long-term strategy.

What is Giga Scale IC's mission?
Giga Scale's mission as a design automation company is to be best in class at producing what we call IC estimation and architectural analysis tools. As I mentioned before, this is a very unique space, one in which you won't find any larger or even smaller EDA companies playing. The goal, of course, as with most design automation tools, is to lower chip costs while still meeting functional and performance goals. The company is just over two years old and based in Cupertino, CA. We are venture capital backed. As of today we have more than 2,000 seats of the InCyte tool.

Does this number include paid and unpaid licenses?
The current number of users is about 2,300. That figure includes folks who have downloaded the free version of the tool as well as those who have upgraded to the subscription version and those who are enterprise customers. The figure includes both free and paid customers.

In December 2003 there was a press announcement of a $1.2 million investment round led by ITU Ventures. Was this the total investment in the company to date?
ITU was the lead investor. There are also a number of angel investors. The total that has been invested is a little higher than that.

Would you give me a high level overview of the product?
There are two main components. The first is fast and accurate chip estimation. A user will typically use the tool by defining their chip specification in the software. This spec is a high level description of their chip. From that InCyte estimates the die size, yield and chip cost as well as power consumption and leakage. When we talk about chip cost, we are talking about comprehensive economic analysis. We are utilizing defect density data, wafer pricing data as well as package pricing data. The InCyte tool can essentially give you a complete budgetary quote, which means in addition to the technical estimate we can give you a complete cost estimation of the final packaged chip. This economic analysis capability was just released prior to DAC.

Once the chip specification is in InCyte, for some users what is more important than even the initial estimation is what we call architectural exploration. Users have the ability to trade off what their design would look like in the context of different manufacturing and IP options: something macro like switching from 130 nm to 90 nm, switching a process variant such as moving to LV-LowK, or switching between IP libraries like Artisan high performance versus high density standard cells. The important thing when you make these tradeoffs is that you can quantify the impact they have on the metrics that InCyte calculates, such as die size and power consumption. The goal here is to let design teams explore which of the different options will allow them to meet whichever goals are important, as well as expose the interaction between all these metrics.

Why this product now?
One of the reasons we think that the market is ripe for this type of functionality and this type of tool at the moment is what we are calling the paradigm of design for cost. In the high volume, low margin world such as cell phones, price pressure on the final products is effectively pushing down the cost of all the components within those products, including the main ICs, which contribute heavily to the cost. That cost sensitivity is trickling from the end user buying the cell phone all the way down the semiconductor supply chain, literally to the desk of the IC designer. The design teams are being asked very early in the flow to calculate how much the chip will cost and to use that cost number as an input to the design flow. The trouble is that the design teams are struggling to figure out what the costs will be and how their technical decisions will impact those costs because of a lack of architectural analysis tools. There are literally dozens of manufacturing options at 90 nm and 130 nm and hundreds of IP libraries. It is unclear which combinations are optimal.

If you look at the architectural tradeoffs, you may be surprised to see that we are not talking about 2% to 3% deltas for some of these metrics. In some cases we are talking about a 40% delta in power or 150% in leakage when you move between process variants, such as LV-LowK versus the generic technology node. What we expose here is that there are both economic and technical ramifications to exploring some of these tradeoffs. As a design manager, if you don't explore these tradeoffs, you may be leaving a lot of money on the table.

How accurate is the InCyte estimate?
What our customers tell us is that InCyte is accurate to within 5% to 10% of final silicon; 90% to 95% correlation to silicon. Keep in mind that this is at the architectural, pre-RTL level in the design flow. Contrast that with the existing design flow, where you really don't get a good idea of the physical die size and power consumption of the chip until you are down in the physical design stages, which is often six months into the design project.

What is the input to InCyte? What does the design team need to use InCyte?
The answer is basically a high level spec. For instance: How many blocks are there in the design? How fast do you want to run those blocks (clock speed)? What is the number of clock domains? What type of I/Os?

For most digital ASICs these days 50% to 60% of the die area is memories or other IP blocks, if not a higher percentage. We actually bundle the tool with an extensive IP catalog. Our IP catalog is populated by our IP partners. It contains several thousand hard and soft digital, mixed signal and analog IPs.
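
As a rough illustration of the kind of high level spec described above (block counts, clock speed, clock domains, I/O types, and memory or IP content), here is a minimal sketch in Python. The field names and structure are hypothetical, invented purely for illustration; they are not InCyte's actual input format.

    from dataclasses import dataclass, field

    @dataclass
    class ChipSpec:
        """Hypothetical high-level chip specification (illustrative only)."""
        process_node_nm: int                 # e.g. 130 or 90
        std_cell_library: str                # e.g. "artisan_high_density"
        logic_gates: int                     # rough gate count of the logic blocks
        clock_domains: int
        max_clock_mhz: float
        sram_bits: int                       # total on-chip memory
        io_pads: dict = field(default_factory=dict)    # e.g. {"LVDS": 16, "GPIO": 64}
        hard_ip: list = field(default_factory=list)    # e.g. ["ARM9", "USB2_PHY"]

    # The kind of spec a designer might sketch out before any RTL exists.
    spec = ChipSpec(
        process_node_nm=90,
        std_cell_library="artisan_high_density",
        logic_gates=2_000_000,
        clock_domains=3,
        max_clock_mhz=400.0,
        sram_bits=8 * 1024 * 1024,
        io_pads={"LVDS": 16, "GPIO": 64},
        hard_ip=["ARM9", "USB2_PHY"],
    )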

Is VCX your partner?
We have a relationship with VCX. We have direct relationships with many IP partners who provide us with data directly, and we also offer an integration with VCX. VCX, as you know, has been acquired by Beach Solutions. I believe this happened last year. It is unclear where their business is migrating but we still do work with them. This IP catalog gives customers access to the data sheets of the IP. More important than that, they can literally drag and drop from the IP catalog into their design. They can quantify the impact the IP will have on their design in terms of the metrics they care about, like die size and power. They can literally drag in an ARM9 or a MIPS 4000 and see what the deltas are. Likewise with memories. We model them so that users can select from vendors like Virage Logic, Artisan, ARM, Virtual Silicon and Faraday. We model all of the memories such that users can choose memories as needed for their designs.

What are the outputs?
A size estimate, a cost estimate and a power estimate. All of this data is in the form of data sheets which are available graphically within the tool. You can also print them out in report form. That report looks very much like a semiconductor data sheet or a report you would get back from an ASIC vendor, including all the technical content plus an estimated floorplan. This floorplan can be exported and sent to downstream floorplanning tools like Cadence First Encounter and Synopsys Jupiter. That provides a level of continuity between the initial spec and estimate and the actual implementation in the world of physical design, which helps design teams mitigate the risk that the final implementation will look different from the initial spec, something that is very common if you don't have this conduit between them.

How long does InCyte take to generate an estimate?
InCyte is actually very quick. You may be surprised to hear that it takes 2-3 seconds to perform an estimate. The reason we focus so much on the time is that we want designers to be able to make quick tradeoffs and do what-if analysis. The reason it runs quickly is that we have done all the hard work ahead of time. We build a set of models with a tool we call the technology macro modeler. These models are built from the same design kits that the place and route tools read in to understand the IP and process data. The point is that we get our accuracy because we build models off the same design and layout data that P&R tools use. Number one, this enables us to estimate very quickly by doing the modeling beforehand, and number two, it enables us to be accurate because we are using the same data as the implementation.
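
To illustrate the "hard work ahead of time" idea in the answer above, here is a toy sketch: per-node, per-library coefficients are characterized once, and an estimate is then just a few table lookups and multiplications. The coefficient values and the simple linear model are invented placeholders, not data or formulas from any real design kit or from InCyte.

    # Hypothetical precharacterized coefficients, keyed by (node nm, library).
    # Values are made up for illustration.
    MODELS = {
        (90, "artisan_high_density"): {"mm2_per_gate": 4.0e-6, "mw_per_gate_mhz": 1.0e-6},
        (90, "artisan_high_perf"):    {"mm2_per_gate": 5.5e-6, "mw_per_gate_mhz": 1.4e-6},
    }

    def quick_estimate(node_nm, library, logic_gates, sram_bits, clock_mhz):
        """Return (die_area_mm2, dynamic_power_mw) from precomputed coefficients."""
        m = MODELS[(node_nm, library)]
        logic_area = logic_gates * m["mm2_per_gate"]
        sram_area = sram_bits * 1.0e-6             # crude placeholder: mm^2 per bit
        die_area = (logic_area + sram_area) * 1.2  # 20% overhead for routing, pads, etc.
        power = logic_gates * m["mw_per_gate_mhz"] * clock_mhz
        return die_area, power

    # What-if analysis: same design, two library choices, answered instantly.
    for lib in ("artisan_high_density", "artisan_high_perf"):
        area, power = quick_estimate(90, lib, 2_000_000, 8 * 1024 * 1024, 400.0)
        print(f"{lib}: {area:.1f} mm^2, {power:.0f} mW dynamic")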

What do you see as the total available market for chip estimation?
This is kind of a new space for EDA, so it helps to get an understanding of what we think this market looks like. In order to do that we went out and talked to our customers. A few gave us feedback, such as Intel, who said that for each PDA design they do a half dozen architectural revisions. HP told us that for each ASIC they tape out, they send RFQs to over a dozen ASIC vendors who each do their own estimate. HP does its own estimation as well. The bottom line is that if you agree, and most analysts agree based upon data coming from companies like Gartner, you're looking at 7,000 to 8,000 tapeouts per year. We are estimating that the number of actual design estimates is about 20x the number of tapeouts. We see that number growing over time as design teams move from 130 nm to 90 nm and beyond. The number of different process variants and IP libraries that are available has been growing tremendously. That forces the design teams to spend more time early in the flow exploring these different options. If you go out today and ask design teams what tools they are using to perform estimates, it is usually literally pencil and paper or a custom Excel spreadsheet. What we are banking on is that custom Excel spreadsheets can't scale and can't keep up with the rapid pace of technology change.
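
The market-sizing arithmetic implied above, using the figures quoted in the answer (the 20x multiplier is Giga Scale's own estimate, not measured data):

    tapeouts_per_year = (7_000, 8_000)      # analyst range cited above
    estimates_per_tapeout = 20              # Giga Scale's 20x estimate
    low, high = (t * estimates_per_tapeout for t in tapeouts_per_year)
    print(f"roughly {low:,} to {high:,} chip estimates per year")   # 140,000 to 160,000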

Do you see anyone competing or overlapping InCyte?
I'll be completely honest with you. The biggest competition we face is internal home-grown Excel spreadsheets. Several services companies use our tools. It is actually helpful for them; they usually have spreadsheets as well and are happy to replace them. There was a tool from ISynergy, different from InCyte, that operated a bit further down the design flow and could be misconstrued as a potential competitor. But that tool has all but gone for the moment.

There really is no tool out there, not even Excel spreadsheets, for a design team to make rapid tradeoffs between different technology nodes and different IP options. Even the spreadsheets don't do that. They typically choose one technology node.

What is your market strategy?
In terms of our market strategy with the InCyte product, it is to replace Excel with a tool for chip estimation. We divide our target market into three major segments. The first is IDMs and captive foundries: firms like Intel, and large IDMs like HP, who have an internal need for chip estimation, often to assess the technical and business viability of new chips or for architectural analysis. The second group of customers is ASIC, IP and service vendors. This is an interesting group of folks. These are folks who typically use InCyte as part of the selling process to help quote a customer. An ASIC or IP vendor will use our software in an effort to showcase the ROI benefit of their IP versus a different piece of IP. The third market is the mainstream, sort of COT design community. This is the largest market by far, but there are some challenges. Most of these firms are doing between one and a dozen ASICs per year. They have more of a temporary need for this chip estimation and architectural analysis, meaning typically the first 30% of their design flow. If they are only doing one or two ASICs per year, they don't need the tool for a long time. They have a low ASP and a temporary need. A direct sales approach is not very scalable. Also, in the mainstream market there are so many foundries that you need to amass a myriad of data to really be of value.

For the mainstream market we launched chipestimation.com last February. The goal with this website was to serve the mainstream design community, literally to flood the market with the InCyte product, create a barrier to entry and give the designer an alternative to their own internal spreadsheet at little or no cost. Anyone can download the free version. There are over 2,300 people who have done so. They can perform industry average estimates. The estimates are not tied to a specific foundry or an IP vendor library. When they want to get access to IP vendor and foundry data in order to make their estimates more accurate and to compare and contrast, or if they want to gain access to the economic analysis engine to get pricing, they go back to the website or give us a call and we sell them an upgrade subscription. These range from $1,500 to $3,500 per month. Users can subscribe for as short as 30 days and as long as three years. The response has been just overwhelming. We are also happy with the validation that this is an area that is very important to the design community.

If someone signs up for a subscription is it limited to a single designer?
Yes. One user per license.

Does that user have access to all the various libraries?
The difference between $1,500 and $3,500 is how much IP and foundry data you have access to and whether you have access to the economic analysis engine. A typical user in the mainstream may say: I would like to purchase 3 months of access to data from TSMC with the ARM foundation libraries. They will get access to that data for, say, $2,500 per month and they will get access to the economic analysis engine. There are over 2,300 users who have run over 9,000 designs through the tool just in the last 5 months. We are literally seeing dozens of users signing up every day. It's been overwhelming. This is about 5 times higher than we had initially forecast.

What is Giga Scale's business model?
It is a little different from a traditional EDA vendor strategy. We primarily sell the InCyte tool. Our customers break down into two major segments. One is enterprise deployments. These are run just like a traditional sales process at Synopsys or Cadence: direct sales, multiyear deals, 6 figure yearly pricing, and so forth. Prospects are firms like Intel, Lucent, and LSI Logic. These folks have a lot of users and a lot of need for estimation every week if not every day. They are much more like a traditional EDA deal. This is where we derive the bulk of our revenue at the moment. Our enterprise customers are our bread and butter.

The mainstream ASIC design community is the other segment, which we address with chipestimation.com: mostly fabless ASIC companies, but there are also foundries, design service companies, IP vendors, all sorts of folks. We also derive some additional revenue from chipestimation.com: in order to subsidize the free version of our tool we do share sales lead data with IP vendors, and we make that known to our users. But we do not collect any data or any designs. Everything is 100% confidential. All the actual estimation runs on the user's machine. There is no tracking.

What is this economic analysis engine?
It is literally the only product on the market that has any foothold in this space. The idea is to give design teams an early estimate of the final packaged chip cost and to help them understand, as they make changes to their designs, how they are impacting cost.

For example: If I am the design manager, I have certain performance goals to hit for my chip. If I'm using this Artisan standard cell library and architecturally speaking I don't think that I will be able to meet timing, I call Artisan and say I need faster cells. Have you got anything? They say sure, we have these overdrive cells. You are going to use a different process variant and your wafers will cost a little more, but you are going to get a 30% performance improvement. Fantastic. I go through the whole system implementation for 12 months, get it all done, get ready for tapeout, and then in the physical design process, like normal, I run a power analysis. This tells me that I'm over my power budget. More important than that, now I am going to have to use a thermally enhanced ceramic package. The package is going to cost three times the silicon. It will price my product right out of the market. The reason I didn't know is that I didn't understand the interrelationship between things like performance, power and cost. That's precisely what the economic analysis engine within InCyte exposes. What the tool does is build upon the technical estimation results and recommend packaging, with volume-based pricing for that package. We work with package consolidators, who work with the package vendors, to build databases of package pricing data. With that we also have wafer pricing data from a variety of sources around the industry. We get industry average defect data, so that we can realistically assess lifecycle yield, assembly costs, mask costs and NRE costs. All of that data comes together in the form of a budgetary quote which is presented to the user. Once you have that quote you can do lifecycle and ROI analysis. We can forecast for a design team what the cost of a chip will be over a 3 or 5 year span given quarterly changes in wafer pricing and defect density. You can see the cost of the chip going down over time. With that data you can estimate how long it will take to reach profitability or amortize NRE costs. It is more on the financial side of the analysis, but tied to the technical side.
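
To make the interplay between die size, yield, packaging and NRE concrete, here is a minimal budgetary-quote sketch using textbook formulas (a gross-die-per-wafer approximation and a Poisson yield model). All prices and defect densities are invented placeholders; this is only an illustration of the bookkeeping, not InCyte's actual economic analysis engine.

    import math

    def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
        """Classic gross die per wafer approximation, including edge loss."""
        r = wafer_diameter_mm / 2.0
        return int(math.pi * r ** 2 / die_area_mm2
                   - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

    def die_yield(die_area_mm2, defect_density_per_cm2):
        """Poisson yield model: Y = exp(-A * D0)."""
        return math.exp(-(die_area_mm2 / 100.0) * defect_density_per_cm2)

    def packaged_unit_cost(die_area_mm2, wafer_cost, defect_density_per_cm2,
                           package_cost, assembly_test_cost):
        """Silicon cost per good die plus package, assembly and test."""
        good_dies = (gross_dies_per_wafer(die_area_mm2)
                     * die_yield(die_area_mm2, defect_density_per_cm2))
        return wafer_cost / good_dies + package_cost + assembly_test_cost

    def amortized_cost(unit_cost, nre_cost, volume):
        """Spread NRE (masks, etc.) over the production volume."""
        return unit_cost + nre_cost / volume

    # Placeholder numbers, purely illustrative.
    unit = packaged_unit_cost(die_area_mm2=80.0, wafer_cost=3_000.0,
                              defect_density_per_cm2=0.5,
                              package_cost=1.50, assembly_test_cost=0.75)
    print(f"packaged unit cost: ~${unit:.2f}")
    print(f"with $750k NRE over 1M units: ~${amortized_cost(unit, 750_000.0, 1_000_000):.2f}")

Swapping in a larger die, a worse defect density or a pricier thermally enhanced package in this sketch immediately shows up in the unit cost, which is the kind of interaction the answer above describes.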

Why important now? Why are people so interested in this?
Increased sensitivity to cost. In the late 90s when I was a designer it was all about features and performance. Companies like Cisco were willing to pay top dollar for EDA tools to hit line rates and speed requirements. Today's designers are focused on cost and yield. The so-called DFM bubble is EDA's response to the cost and yield issue. But silicon cost and yield is only a piece of the equation, only one aspect of the economic viability of a chip. Design teams need to look at how all of their technical decisions impact final chip costs. They need tools to help analyze the economics of their designs very early in the process. That movement is what we call design for cost.

The second reason why we think there is so much importance at the moment is the explosion in IP and manufacturing options. Literally, at 90 nm TSMC has over a half dozen process variants. For each of these process variants you can find over a half dozen standard cell libraries from major vendors. How can you realistically decide which one to use? You would have to go through place and route for dozens of options. It could take a year to make the decision.

You also see design teams moving up to higher levels of abstraction. They are sitting at 30,000 feet writing ESL code. They don't know what is physically possible in silicon, not to mention what the different implementation options are from an IP and library perspective. The issue is that the designers are working at a very high level, but way down in the back end there is an explosion in the different options that are available to them. They need something to bridge this gap and help tie what the designers are doing to what can really be implemented. We call that concept-to-design. Design teams that we are working with are telling us that they need something to take these high level block diagrams that they have, quantify them in terms of real IP and manufacturing choices, and tell them what is realistic and what is not.

Is the free industry average tool useful in itself or only as a way to understand the capabilities of the subscription product?
We have tried to make it useful but not something that can be relied on, because obviously we need an incentive for folks to upgrade. Let me explain how we see this evolving; we see this in practice today. Designers have their own custom spreadsheets that are cumbersome and difficult to keep up-to-date. A designer ends up downloading InCyte and dumping the spreadsheet. The designer uses the free version to perform high level estimates, do early feasibility analysis and make first decisions about the chip. When the designer wants to make real technical or business decisions based upon the estimate, he cannot use industry average data. The industry average data is skewed pessimistically. In order to make a real decision they come to us and say that they want access to the real data for a period of one to six months. They make the key tradeoff decisions and then proceed with the rest of the design flow.

I'm curious about how you convinced people to supply the data on yield and defect density and so forth particularly when Giga Scale was starting out.
It is difficult to get people to cough that data up. The answer is that we haven't. If you walk into TSMC or UMC, they are unlikely to give that data out to their customers, let alone to an EDA vendor. We don't have data from them. We get the bulk of our data from key IP partners. Their data kits describe not only their IP within their cell library, they also describe the process that the IP is specific to. So from IP vendors we also get foundry and technology process data. When it comes to yield, wafer pricing and package pricing, you can't go to the foundries. They won't give it to you. But there are consolidators out there in the industry like Semico, research and analyst firms that basically have the reports and make that data available in an industry aggregate form. These are some of the sources we use to collect that data and consolidate it.

The breadth of data must be a challenge. How do you keep your data up to date?
Two points. Number one, the architecture of InCyte, technically speaking, lends itself to this issue very well. It is client-server based, like an anti-virus program. It is the client that does the estimating. Every time you launch the InCyte tool it goes out and downloads the latest IP data, like the latest virus definitions. You always have the latest and greatest IP and economic models. That keeps the user from being out of date. More important than that is how we keep the data up-to-date. There are two methods. First, we work with IP vendors. We have a highly automated IP modeling tool called the technology macro modeler. When we get a new IP library from a company like Artisan we run it through the modeling process and release it to our users. The cores, e.g. ARM processors, are much easier for us to model. Second, we make available to our IP partners a special part of the chipestimation.com website where they can log in and self-manage their IP catalog. Any IP company can go there, log in and register at no cost to them. They can add their IP to the catalog and give their IP exposure to all of our users. Within 24 hours all of our users will have a copy of that IP core. They can drag and drop that IP into their designs. That provides a more automated path where IP vendors push IP data to us, which reduces the burden on us, from a technical perspective, of keeping the IP data up-to-date.

Any plans to support FPGAs, DSPs, …?
Right now InCyte is primarily geared at cell based ASICs. In the future we will look at supporting structured ASICs as well as FPGAs. No development has begun yet. It is driven by customer demand.

How do you estimate the probability of re-spins, which impacts the NRE cost estimation?
We don't estimate the probability. It is difficult to quantify. My personal experience as an ASIC designer is that the mindset with respect to respins has changed over the last five to seven years. In the mid to late 90s it was all about single pass and no respin. Nowadays, as complexity grows, any shrewd design manager or VP of engineering is planning fiscally for respins. You have to assume that you can never get perfect functional coverage of a design. I wouldn't be comfortable with the degree of accuracy of any tool that estimated respins, only because it is so subjective based upon the quality of the design and the individual design team's testing methodology. There is not enough standardization in that area for you to effectively estimate those things. The only way you could estimate it is if you were looking at historical data from that same design team or similar designs. That's something we have looked at but haven't done quite yet. In terms of how InCyte does handle respins, they are factored in. Users can input how many full respins they are budgeting and how many partial respins. For partial respins they can state explicitly how many metal layers they anticipate respinning. From that data InCyte will estimate respin mask costs and add that to the NRE expenses.
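
A hedged sketch of how the respin budgeting described above could roll into an NRE estimate. The mask prices are placeholders and the equal per-layer proration is a simplification; this illustrates the bookkeeping, not InCyte's actual calculation.

    def nre_with_respins(base_mask_set_cost, total_mask_layers,
                         full_respins=0, partial_respins=0, metal_layers_per_partial=0):
        """Add budgeted respin mask costs to the base mask-set NRE.

        A full respin re-buys the whole mask set; a partial respin re-buys only the
        stated number of metal-layer masks (cost prorated evenly per layer here).
        """
        per_layer = base_mask_set_cost / total_mask_layers
        return (base_mask_set_cost
                + full_respins * base_mask_set_cost
                + partial_respins * metal_layers_per_partial * per_layer)

    # Placeholder example: a mask set budgeted at $750k with 30 layers,
    # one full respin and one 3-metal-layer partial respin planned.
    total = nre_with_respins(750_000, 30, full_respins=1,
                             partial_respins=1, metal_layers_per_partial=3)
    print(f"mask NRE including budgeted respins: ${total:,.0f}")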

The number of options from a mathematical perspective is huge. But in any practical case the options are far more limited. One doesn't realistically debate between 180 nm and 65 nm.
Some folks know that they are going to use 90 nm but it may be unclear which process variants within 90 nm. Other folks say no, I'm using a generic node because I need to get my cost down. Within 90 nm design architects want to make tradeoffs: How fast can I run the block given a power or leakage budget? How much memory can I afford to put on the chip economically and technically from a performance perspective? Which I/Os should I be considering? These types of chip architecture variations.

The number of combinations is growing dramatically. Is there any automated way, versus trial and error, to optimize, perhaps on a single variable?
There is no automated way to do that today. We have had that request from several customers. We do have comparative analysis. We take several different project files for variations of a design. The tool will compare them and produce a multidimensional graph showing delta cost, delta yield and so forth for several design alternatives. You can compare and choose which will work best.
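
A minimal sketch of the comparative analysis described above: tabulate the deltas of several design variants against a baseline. The per-variant numbers are invented, standing in for the results of separate estimation runs.

    # Hypothetical (die area mm^2, power mW, unit cost $) per variant; numbers invented.
    variants = {
        "90nm generic, high-density lib":  (82.0, 950.0, 7.80),
        "90nm LV-LowK, high-perf lib":     (78.0, 1300.0, 8.60),
        "130nm generic, high-density lib": (130.0, 1100.0, 7.10),
    }
    baseline = "90nm generic, high-density lib"
    b_area, b_power, b_cost = variants[baseline]

    print(f"{'variant':34s} {'d(area)':>8s} {'d(power)':>9s} {'d(cost)':>8s}")
    for name, (area, power, cost) in variants.items():
        print(f"{name:34s} {100 * (area / b_area - 1):+7.1f}% "
              f"{100 * (power / b_power - 1):+8.1f}% {100 * (cost / b_cost - 1):+7.1f}%")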



The top five articles over the last two weeks, as determined by the number of readers, were:

Mentor Graphics Announces 2005 and Preliminary 2006 Outlook For the full year 2005, Mentor expects revenue of about $700 million to $705 million. Note: Quarterly EDA industry review will appear after Synopsys reports its results.

Agilent Technologies Signs Agreement to Acquire the Business of Eagleware-Elanix, a Leading Provider of High-Frequency EDA Software Agilent's EEsof division is a leader in the high-frequency EDA market, especially in high-end tools. Eagleware-Elanix is noted for products that are easy to use and for its technological leadership in tools for high-frequency design synthesis.

Atmel Reports Results for the Second Quarter of 2005 Revenues for the second quarter of 2005 totaled $412.2 million, versus $419.8 million in the first quarter of 2005 and $420.8 million in the second quarter of 2004. Net loss for the second quarter of 2005 totaled $42.6 million or $0.09 per share. These results compare to a net loss of $43.0 million or $0.09 per share for the first quarter of 2005, as well as a net income of $11.7 million or $0.02 per share for the second quarter of 2004.

Mentor Graphics Enables PCB Design Companies to Comply with Hazardous Materials Regulations Customers are able to leverage the component library data field flexibility to implement filtered search and Bills of Material capabilities, enabling OEMs to design for Restriction of Hazardous Substances (RoHS) compliance and avoid expensive redesign time.

Sun Microsystems Reports Preliminary Results for Fiscal Year 2005 and Fourth Quarter Revenues for the fourth quarter were $2.975 billion, a decrease of 4.3 percent as compared with $3.110 billion for the fourth quarter of fiscal 2004. Net income for the quarter was $121 million or a net income of $0.04 per share as compared with a net income of $783 million or a net income of $0.23 per share for the fourth quarter of fiscal 2004. The fiscal 2004 fourth quarter results included $1.6 billion of other income related to a legal settlement with Microsoft.



Other EDA News

Low-Power Leader Sequence Design Sponsors 2005 ISLPED; Premier Tech Forum Focuses on Low-Power Design

ADVISORY/Synopsys Announces Earnings Release Date and Conference Call Information for the Third Quarter of Fiscal Year 2005

EMA Design Automation and AEi Systems Announce New Power IC Model Library for PSpice

SIGMA-C Names Peter Feist CEO; Growing Company Expands Focus to DFM Market

Fujitsu Introduces the LifeBook S2000 Thin and Light Notebook with Next-Generation AMD Turion 64 Mobile Technology

Fujitsu to Ship Initial Production Volumes of New Structured ASIC Built Using Cadence Encounter

Global UniChip Improves Quality of Silicon with Cadence Synthesis Technology; Encounter RTL Compiler Global Synthesis Reduces Die Area for ARM9 by 8 Percent

Sunplus Shrinks DVD Chip Size, Design Time and Cost with Cadence Encounter RTL Compiler; Patented Cadence Synthesis Technology Enables Smaller Die Size and Faster Time to Market for Consumer Electronics Chip



Other IP & SoC News

Aware, Inc. Reports 2005 Second Quarter Financial Results

Tripath Secures $6 Million Line of Credit with Bridge Bank and Updates Guidance for the Fourth Fiscal Quarter of 2005

ANADIGICS Expands Intellectual Property Portfolio

Potentia Semiconductor Introduces Digital Power Management Controller for Complex Power Systems

Altera Ships Industry's Largest, Low-Cost FPGA

New Visual's Chipset Moves Toward Commercialization; 100 Mbps Technology Reaches Development Milestone and Moves Into Next Phase

All American Semiconductor Reports Second Quarter Results; Second Quarter Sales At Highest Quarterly Level Since 2001 Representing 18% Sequential Increase and 4% Year Over Year

Advanced Semiconductor Engineering, Inc. Reports Consolidated Year 2005 Second-Quarter Financial Results

Elpida's 4 Gigabyte Fully Buffered DIMMs Deliver the Highest Performance, Highest Density and Thinner Module Design for Server Main Memory

Boston Circuits Names Gregory Recupero as Director of Hardware Development; Engineering veteran to lead design, verification and manufacturing team for next generation multi-core processors

SIA Selects Vanderbilt University to Conduct Chip Industry Worker Health Study; Industry-Funded Study Will Assess Cancer Risk to Semiconductor Workers

AMD Launches AMD Opteron 100 Series Processors With ECC Unbuffered Memory Support

New Programmable Mali200 IP Core From Falanx Microsystems Delivers PC-level Graphics Quality for Mobile Devices

TI Introduces Single-Chip Power Controller with Sequencing and Margining for 4.5-V to 18-V Systems

World's Fastest Analog-to-Digital Converter Integrating 1:4 Demultiplexed Outputs Targets High-Speed Data Acquisition and Test Equipment

LSI Logic Delivers Ultra-Sleek LSI403US DSP to Enable Growing Voice Over WLAN Handheld Market

Supertex Expands "Green" 3-Pin Switch-Mode LED Driver Family

Tower Semiconductor Announces Second Quarter and Six Months 2005 Results

MagnaChip Semiconductor Reports Second Quarter Results

National Semiconductor Introduces World's Smallest Advanced Lighting Management Unit for Handheld Devices

BOXX Unveils the First 4-processor, Dual-Core AMD Opteron(TM)-based Graphics Workstation




More EDA in the News and More IP & SoC News


Upcoming Events...



--Contributing Editors can be reached by clicking here.


You can find the full EDACafe.com event calendar here.

To read more news, click here.


-- Jack Horgan, EDACafe.com Contributing Editor.