Characterization – Altos Design Automation


Altos Design Automation is a startup that provides ultra-fast, fully-automated characterization technology for the creation of library views for timing, signal integrity and power analysis and optimization.  The San Jose company has two products, Liberate and Variety.  Liberate is a standard cell and I/O library creator. It generates electrical cell views for timing, power and signal integrity including current source models (CCS, CCSN and ECSM).    Variety is an ultra-fast standard cell characterizer of process variation aware timing models. It generates libraries that can be used with multiple statistical timing analyzers without requiring re-characterization for each unique format.

I recently had an opportunity to talk with Jim McCanny, CEO and co-founder.

Would you provide us with a brief biography?
I have been in EDA for my entire career.  I started my professional life as an EDA developer at Texas Instruments, working in the Bedford, UK facility back in 1980.  I spent 10 years developing all sorts of internal tools: logic simulators, timing analyzers, Pcell editors, a testability analyzer.  You name it and I probably put my fingers on it.  After about 10 years I left there.  I got an opportunity to move out here to the Valley and join a startup called EPIC Design.  I came out here in 1990 and was at EPIC through its IPO.  I stayed at EPIC for 6 years.  At EPIC I started out working as their engineering manager.  Then I kind of segued into being a developer on their transistor-level timing tool, PathMill.  From there I ended up working for the field organization on major accounts.  That came about because as we tried pushing PathMill we found that the tool was very much aligned with major accounts, firms like Intel, AMD and TI.  I spent a lot of my time working with these people.  The situation became such that I had to spend my time either with the accounts or with the code.  I decided to try something new.  I moved to the dark side, as they say, and have been there ever since.  After EPIC I went to Ultima.  I joined their marketing department.  After being at Ultima for about a month, the CEO left and I was given the job of acting president of the company while also running marketing.  I did that for about 18 months.  Then I teamed up again with some of my old EPIC colleagues at CadMOS.  At CadMOS I was VP of Marketing.  About 3 years later Cadence acquired us.  I spent 4 years at Cadence as Marketing Director for Timing and Signal Integrity.  Then some of the CadMOS team left to start up Altos.  Six months later they contacted me and said that they would like me to come on board as CEO.  They were great guys.  I knew them from a personal working relationship.  I was positive that they had the technical skills to do something distinctive.
Further the area they were working on was a problem that was not getting much attention from the big players.  It looked like a great opportunity, so here I am.  I have been at Altos since July 2005.

You said that you went over to the dark side. Which side was more interesting, more challenging, more satisfying?
They both have equal amounts of interesting stuff.  Having spent so many years on the technical side, it was interesting to try something new but still be able to leverage what I had learned there.  On the technical side you tend to be point focused on one customer, one user.  You can add a feature or make a change that helps one user and it is satisfying that you are having an impact.  But when you go into marketing, you can make pretty broad strokes that have impact across a whole market.  That was new, exciting and very fulfilling, especially at a small company like CadMOS where we had to evangelize signal integrity, which was a new issue.  People would rather ignore some of these issues as they come along.  You get some resistance.  Not only did we build a successful company, we transformed the digital design flow.  Designers had to have signal integrity as part of the flow.  We had a lot of influence on how signal integrity was measured and how it could be dealt with in the design flow.

The marketing side appeals because of those broad strokes.  Occasionally you miss the one-on-one connection with the end user, particularly at a place like Cadence where you are doing things on a broad scale: when you make contact with a customer, you never get to see them through an entire project.  At EPIC we were working a lot with Intel and AMD, going down and spending a couple of days a week working onsite with these guys, seeing their issues and the progress they were making on their big microprocessors.  That was very satisfying too.  I like both.  The good thing about being in a small startup is that you stay close to the technology without having to spend all the evenings and hours writing code.  Once you start writing code, it sort of drags you down.  You have to spend so much time thinking about it.  It can become very isolating after some time.  So I think the dark side has its benefits in the sense that it is a little more social and you get to interact with more people.

Large firms often acquire smaller firms for their technology.  This gives the founders and the investors an opportunity to cash in.  However, it would appear that a lot of employees eventually drift away from the larger employers because of the difference in environments.
Yes.  I think the big companies are nice in terms of the security, and the hours are better (not a lot, but a little).  There are some more benefits, but everybody likes to feel useful.  At times at a big company like Cadence, you may work with a customer and then the sales guy will do an all-you-can-eat deal.  You have no idea whether the little piece you worked on was important or not.  You can't relate what you do to the bottom line.  In small companies everybody is involved.  Everyone gets to experience the highs and lows.  They understand that what they do really matters.  Once you have had that kind of excitement, that drug, it is hard to give it up.  You can wear yourself out in a startup and then go spend a few years in a big company, make some good contacts and find your feet again.  Then you just feel like it is time to do something new again, to get out there.  At least that was it for me.  I felt that Cadence was very good to me.  I had no complaints.  They had great people there.  There were interesting things to work on, but I missed the excitement of being at a smaller company, really interacting with the customers and the developers, and working with a very focused team.  The reason EDA has so many startups is not only the potential to get acquired.  You can grow the company to a large size and make good money.  It is very exhilarating to solve real problems for customers and work very closely with the technical team.  You feel like you are changing the world a little bit.

Is Altos self-funded?
Initially we were self-funded. Last December we took a small amount of Series A funding ($1.5 million). Most of that was from a private investor. We got some money from Jim Hogan at Vista Ventures.

How big a firm is Altos?
It is still very small. We are 7 people. Now we are trying to add a few more.

You said that there was a problem that Altos was addressing that others were not. What is that problem?
The general problem of characterization.  People have made efforts to solve this problem before.  There are solutions in the marketplace.  It is not that there have not been products for characterization.  What has happened is that a lot of new factors have all come along at the same time and have put the existing characterization solutions under undue stress.  This is going to put a big hole in the whole ecosystem of people using the existing digital design flow.  Things like low power.  As you introduce low power you start to do new things.  You need to look at multiple voltages on a chip, which means you have to characterize libraries at multiple voltages.  You start seeing thermal effects such as temperature inversion, where worst-case corners no longer occur at the highest temperature.  You may get worst cases occurring at lower temperatures.  You see people using multiple threshold devices, which typically increases the size of the library by 3x, and doing power shut-off, things like state retention flops.  In addition there are new model formats for more accurate modeling like CCS and ECSM.  People are also starting to look at yields, trying to come up with an alternative set of libraries that would trade off performance for yield.  All these factors were exploding the number of potential library views that you are going to need.  The complexity of the models is going up too.  You have the kind of perfect storm of more complex cells, like some of these state retention flops, and more complex models, like current-based models.  Then looming on the horizon of course is statistical timing.  The complexity in generating statistical models is kind of like the hurricane.  The other factors are more like gale-force winds.  Together all these are like a double perfect storm.
This is the area where people are sort of making do with older technology and getting by, living with huge run times, large computer farms and dedicated teams.  A lot of people are doing it in-house with a lot of homegrown tools.  We just felt that it was time to take a fresh run at this.  I think we are starting to see that this was the right decision.

What do the acronyms CCS and ECSM stand for?
CCS stands for Composite Current Source and ECSM stands for Effective Current Source Model.  They are new delay models which use a current source.  These give you more accuracy than the table-lookup model that Synopsys introduced in the late 80s or early 90s and that has been the industry standard for 15 years.
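For context, the older table-lookup (NLDM-style) approach mentioned above characterizes delay on a grid of input slew and output load, and reads intermediate points back by interpolation. A minimal sketch follows; the grid values and axes are invented for illustration, not from any real library.

```python
# Illustrative sketch of an NLDM-style table-lookup delay model:
# cell delay is characterized on a grid of input slew x output load,
# and arbitrary points are read back by bilinear interpolation.
# All numbers here are made up for illustration.

import bisect

slews = [0.01, 0.05, 0.20]           # ns, table axis 1 (input transition)
loads = [0.001, 0.010, 0.050]        # pF, table axis 2 (output capacitance)
delay = [                            # ns, delay[i][j] at (slews[i], loads[j])
    [0.020, 0.060, 0.210],
    [0.030, 0.070, 0.220],
    [0.060, 0.100, 0.250],
]

def lookup_delay(slew, load):
    """Bilinear interpolation on the characterized grid."""
    i = min(max(bisect.bisect_right(slews, slew) - 1, 0), len(slews) - 2)
    j = min(max(bisect.bisect_right(loads, load) - 1, 0), len(loads) - 2)
    ts = (slew - slews[i]) / (slews[i + 1] - slews[i])
    tl = (load - loads[j]) / (loads[j + 1] - loads[j])
    d00, d01 = delay[i][j], delay[i][j + 1]
    d10, d11 = delay[i + 1][j], delay[i + 1][j + 1]
    return (d00 * (1 - ts) * (1 - tl) + d01 * (1 - ts) * tl
            + d10 * ts * (1 - tl) + d11 * ts * tl)

print(lookup_delay(0.05, 0.010))   # on-grid point -> 0.07
```

A current-source model instead captures the cell's output current waveform, which is why it tracks nonlinear loading and noise effects that a fixed delay grid like this cannot.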

CCSN is the extension of the Synopsys CCS model to address signal integrity.  Synopsys had an equivalent to the NLDM model called Liberty SI.  That has been deemed very hard to characterize, takes a very long time, and may not be as accurate as some people need at 65 nm and below.  CCS Noise is a much more accurate model and it takes less time to characterize.  However it is a more complex characterization task, as it requires a lot of internal details, not just the boundary information.  A lot of in-house tools are written for treating cells as black boxes.

Are CCS and ECSM simply generic terms or are they formal standards?
There are competing standards.  ECSM comes from Cadence plus some stuff from Magma.  CCS is the Synopsys equivalent.  CCS is part of the Liberty standard that gets blessed by the TAB at Si2.  The Si2 initiative formed a technical advisory board to facilitate the evolution of the Liberty library modeling standard.  ECSM is an Si2 standard.  They are both open to us.  They do essentially equivalent things, but they use different data.  ECSM derives the current model from voltage waveforms, while CCS requires you to actually capture current information during characterization.  That is the main difference.

Editor: Si2 is an organization of over 100 semiconductor, systems, EDA, and manufacturing companies focused on improving the way integrated circuits are designed and manufactured in order to speed time to market, reduce costs, and meet the challenges of sub-micron design. Si2 focuses on developing practical technology solutions to industry challenges.

The Open Modeling Coalition (OMC) was formed by Si2 in mid-2005 to address critical issues - such as accuracy, consistency, security, and process variations - in the characterization and modeling of libraries and IP blocks used for the design of integrated circuits.

The OMC technical objectives are to define a consistent modeling and characterization environment in support of both static and dynamic library representations for improved integration and adoption of advanced library features and capabilities, such as statistical timing.  The system will support delay modeling for library cells, macro-blocks and IP blocks, and provide increased accuracy to silicon for 90nm and 65nm technologies, while being extensible to future technology nodes.  Technology contributions from Cadence Design Systems, IBM, Magma Design Automation, Synopsys, and other companies are in support of these goals.

Tell us about the Altos products.
Since our inception we have built two products.  The first one we call Liberate which is a standard cell and IO library characterizer.  It builds Liberty models and plugs into existing digital implementation flows.  That product took us just one year to build.  We released Liberate in December 2005.  We were engaged with 3 beta customers at that time.  Early in 2006 we were able to turn one of those beta sites into a paying customer.  They were able to put it into their production flow.  The second product, Variety, which obviously leverages a lot of the technology from the first product, was released in September 2006.  Before the end of the fourth quarter we were able to get the first deal for that product.  We have been able to bring these products to market and to get paying customers within the last year.

What is the main differentiation of your product?
The main differentiation of our products is that we do a lot of things to make characterization go faster.  Basically, characterization was a bottleneck with all the different views and models that people were starting to require.  It was becoming self-evident that it was so costly to do that people would start cutting corners and would not do certain things.  Statistical timing would not become a reality unless models were readily available.  That's how we can play a role and add value.  It is very easy to use.  A lot of characterization tools require the user to tell them "I want to characterize it this way."  There is a lot of manual intervention, a lot of setting up vectors and conditions.  We automate all of that.  We determine the optimal set of vectors that you need to fully characterize the cell.  We can filter out duplicate vectors exercising the same path.  Because we are automated, we have found that we do better than a lot of other people do with a more manual approach.  Things may be missing with the manual approach.  With about 90% of the libraries we get from other people, we are able to pinpoint some holes, some areas they have missed.

We support the latest models like CCSN.   CCS Noise is a model which is very familiar to us because it is similar to what we were doing when we did CeltIC at CadMOS.  We are very familiar with signal integrity models.  We are probably ahead of everyone else in the market, certainly in supporting these models in an automated way.

What is the vision of the company?
Statistical timing is very useful for people at 65 nm, but it is essential at 45 nm.  The reason is that the worst-case corner method that people use today is way too pessimistic.  This has two very bad side effects.  One is that you spend time trying to meet a target that you have already met.  You are wasting your design resources doing timing closure when you are easily meeting the marketing target.  There is a very large on-chip variation (OCV) factor in use today.  What OCV says is that for every slow path, I am going to add 10% to 15% to every delay and subtract 10% to 15% from the clock path delays and still try to meet my setup time.  What on-chip variation is trying to do is account for all the different delay effects that could possibly happen.  It's sort of like Murphy's Law.  You could have lithography effects, you could have dishing, random particle effects, non-uniform doping, and so on and so forth.  They kind of lump everything into some worst-case number.  What happens, even if you can meet timing under these extreme conditions, is that you end up blowing your power budget because you are making your gates much bigger than they need to be.  You have a lot of lower-Vth devices that increase your leakage.  With SSTA you are getting more realistic models.  If you model statistically the things that do change in the process, then you get a much better idea of where you are with respect to the yield you are after.  You have the potential of catching some of the corner cases that sit outside those worst-case guard bands.  You benefit on both sides of the coin.
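The OCV bookkeeping described above can be sketched in a few lines: slow (data) paths are derated up, the capture clock path is derated down, and setup slack is checked against the clock period. All delays and derate values below are invented for illustration.

```python
# Hedged sketch of worst-case OCV derating in a setup check.
# Numbers are invented; a real timer applies derates per path stage.

def setup_slack_ocv(data_delay, launch_clk, capture_clk,
                    period, t_setup, derate=0.10):
    # Pessimistic arrival: launch clock + data path, both made slower.
    arrival = (launch_clk + data_delay) * (1.0 + derate)
    # Pessimistic required time: capture clock made faster.
    required = capture_clk * (1.0 - derate) + period - t_setup
    return required - arrival

# 10% OCV derate on a hypothetical 1 ns clock:
slack = setup_slack_ocv(data_delay=0.70, launch_clk=0.10,
                        capture_clk=0.10, period=1.00,
                        t_setup=0.05, derate=0.10)
print(round(slack, 3))
```

With these made-up numbers the slack shrinks from 0.25 ns (no derate) to 0.16 ns under a 10% derate, showing how the blanket guard band eats timing margin that statistical analysis might show is not really at risk.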

As an example, Renesas saw approximately a 6% improvement in clock frequency and a 35% reduction in the number of paths they had to fix once they got through timing closure.  As you get to bigger blocks, you will see these numbers approaching a 15% to 20% potential increase in clock frequency, and the number of hold violations will drop considerably, which means a lot fewer ECOs required at the end of the design cycle.

The promise of statistical timing has been talked about by many companies, by all the big EDA companies and by the foundries.  However there is a piece missing.  It is like everyone talking about electric cars while no one is building the fuel cell to make it all work.  We see our role as providing accurate libraries quickly so that people can make the transition from corner-based models to statistical models.  You are starting to see a lot of people pushing statistical signoff.  There are SSTA tools from the big guys at Cadence, Synopsys and Magma.  There are also startups such as Extreme DA pushing statistical signoff.  Once that gains acceptance, I think you will see SSTA as part of the implementation flow, probably at 45 nm.  One of the great things about statistical analysis is that it will give you a better optimized design if you are trying to optimize across multiple metrics such as power or yield as well as timing.

Today our product supports standard cells for statistical characterization.  We are planning to go to I/O, memories and cores because on any given chip you will have all these different components.  You need these models for everything.  As a byproduct of doing characterization, we characterize each cell's sensitivity to variations, even down to each transistor within a cell, i.e. how sensitive it is to minor perturbations of the process like a change in channel length or a change in Vth.  We can feed that back to standard cell designers.  You can tell them that they can make this channel a little wider; it won't impact delay and it will improve leakage and power consumption.


What type of variations do you cover?
In terms of cell characterization we look at two types of variations.  Some people call them global and local.  Most people refer to them as systematic and random.  For systematic inter-cell variation, the process varies in the same direction by the same amount for each transistor in the cell.  Everything kind of moves in one direction.  For example, your length may vary by 5 nm.  You characterize the cell at nominal conditions, then vary the length by 5 nm, characterize the cell again and capture the sensitivity to that parameter.  You do that for any parameter the user thinks is significant.  Random variation is more challenging because it is similar to mismatch in the analog world, where you are actually looking at the sensitivity of each transistor within the cell to a particular type of variation, things like Vth variation.  We have to vary each transistor.  We characterize and capture the sensitivity.  That would be very, very slow if you ran a full-blown Monte Carlo simulation on the cell, a few thousand trials for each point in your table for every slew and load combination.  That would be prohibitive.  This is the area where we have some very significant breakthroughs and are able to speed up the characterization of random variations.  It takes 3x to 4x longer than the traditional characterization for nominal or systematic variations.
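The two characterization styles described above can be illustrated with a toy model. In a real flow the "delay function" is a SPICE simulation of the cell; here a made-up linear delay model of per-transistor Vth stands in for the simulator, purely to show the bookkeeping.

```python
# Toy sketch of systematic vs. random sensitivity characterization.
# cell_delay() is an invented stand-in for a SPICE run on the cell.

import math

def cell_delay(vth):               # vth: per-transistor thresholds (V)
    return sum(0.01 + 0.05 * v for v in vth)   # fake delay model, ns

nominal = [0.30, 0.30, 0.30, 0.30]              # a 4-transistor cell
d0 = cell_delay(nominal)

# Systematic: shift every transistor together; one extra run per parameter.
dv = 0.01
d_sys = cell_delay([v + dv for v in nominal])
sens_systematic = (d_sys - d0) / dv             # ns per volt

# Random (mismatch): perturb one transistor at a time; one run each.
sens_random = []
for i in range(len(nominal)):
    p = list(nominal)
    p[i] += dv
    sens_random.append((cell_delay(p) - d0) / dv)

# 1-sigma delay spread from uncorrelated per-transistor Vth sigma:
sigma_vth = 0.015
sigma_delay = math.sqrt(sum((s * sigma_vth) ** 2 for s in sens_random))
print(sens_systematic, sigma_delay)
```

The brute-force cost is visible here: the random loop needs one run per transistor, which is why a 25-transistor cell would naively cost ~25x, while the systematic case needs only one run per process parameter.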

The only difference between statistical and regular characterization is the input: basically the variations and sigmas for the parameters you want to look at.  IDMs have that information.  You type it in as a series of commands and we run the characterization.  Other people, like the large foundries, have been collecting this information in the past, mostly for use in analog and statistical SPICE models, where they will actually measure some parameters and their 1σ variation.  Sometimes those parameters are captured as principal components to ensure that each of the parameters can be modeled as an independent effect.  If you know the correlation between different parameters, you can give us that information and we will account for it during characterization.  Mostly what we have seen so far is that people treat parameters as being either fully correlated or fully uncorrelated.  We support both.
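The two correlation extremes mentioned above combine very differently: fully correlated contributions add linearly, while fully uncorrelated ones add in quadrature (root-sum-square). A short sketch, with invented per-parameter 1σ delay contributions:

```python
import math

# Sketch of the two correlation extremes: per-parameter 1-sigma delay
# contributions (ns, invented numbers) combine linearly when the
# parameters are fully correlated, in quadrature when uncorrelated.

sens_sigma = [0.010, 0.006, 0.008]   # made-up 1-sigma contributions, ns

fully_correlated = sum(sens_sigma)
fully_uncorrelated = math.sqrt(sum(s * s for s in sens_sigma))
print(fully_correlated, round(fully_uncorrelated, 4))
```

The correlated sum is the pessimistic bound (0.024 ns here); the RSS value is noticeably smaller (about 0.0141 ns), which is one reason statistical models recover margin that corner-based analysis gives away.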

We generate our own internal database.  Essentially we characterize once and store the characterization data in our own internal generic format.  In a post-processing step we can spit it out in different formats for all of the vendors.  Right now there is no standard in this area.  Once again, each SSTA tool is doing its own thing.  Luckily, from what we have seen so far, people are using some variation of Liberty and its extensions.  The type of information we are supporting is very similar, but maybe in a slightly different form or format, maybe stored in a different place in the file, but pretty consistent in how it is done.  Cadence has talked with us, and also with Extreme and Magma, to provide input to Cadence's ECSM format, which they have donated to Si2.  Si2 is going to review the format and decide whether they will put it out as a standard.  We are involved in that effort.  Synopsys made an extension to the Liberty format called VX to support variation.  I think what will end up in the industry will be two competing standards, unless those two giants of EDA get together.

How do you validate your models?
Generally we have people trying to validate SSTA by generating long chains of gates of the same cell or a mixture of cells, running a Monte Carlo analysis on that critical path, and comparing the results against what they get from SSTA.  We are seeing very good accuracy looking at both systematic parameters and random parameters.
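That validation flow can be sketched with a toy chain: Monte Carlo a path of identical gates with uncorrelated random delay variation and compare against the SSTA-style analytic prediction (means add, sigmas add in quadrature, so the chain sigma grows like √N). Gate count and delay numbers are invented.

```python
import random, math, statistics

# Sketch of validating SSTA against Monte Carlo on a gate chain.
# 50 invented gates at 100 ps +/- 10 ps each, uncorrelated variation.

random.seed(1)
N, mu, sigma = 50, 0.100, 0.010

trials = [sum(random.gauss(mu, sigma) for _ in range(N))
          for _ in range(20000)]
mc_mean = statistics.fmean(trials)
mc_sigma = statistics.stdev(trials)

analytic_mean = N * mu                   # means add: 5.0 ns
analytic_sigma = sigma * math.sqrt(N)    # sigmas RSS: ~0.0707 ns

print(mc_mean, mc_sigma, analytic_mean, analytic_sigma)
```

Note the chain sigma is about 0.07 ns, not the 0.5 ns a fully correlated worst case would imply: uncorrelated random variation partially averages out along a path, which is exactly the pessimism SSTA removes.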

What is unique about what you do?
As I mentioned, with random variation you are looking at the sensitivity of delay, leakage or any parameter in your library to variation on each transistor.  On average, in the libraries we have seen, there are about 25 transistors per cell.  We have some small ones, inverters that have 2 or 4.  Some of the large cells have 100 or even up to 200 transistors.  If you did it with a brute-force method, even assuming linear sensitivity, you end up with a 25x average run-time increase.  We do not get that.  We get a 3x to 4x run-time increase.  The reason is that we do something we call the inside view.  We understand the cell being characterized, the paths through that cell, and which transistors are sensitive to which type of variation; for delay it can be a different set of transistors than for capacitance or leakage.  We have a number of techniques to avoid having to do ~25 simulations.  We have validated what we are doing against Monte Carlo simulation.  We can run Monte Carlo on everything we do, if you are willing to wait long enough to get a result you can compare against.  We have done that for a number of cells and got excellent agreement with the Monte Carlo results.

What kind of performance do you get?
As shown in the table below, for the first library of almost 400 cells it took 6 hours on 8 CPUs (48 CPU-hours) to characterize three systematic parameters and 1 random parameter.  If you just do the nominal characterization, we are able to run this in 1 hour on 8 CPUs (8 CPU-hours), which is a phenomenal improvement in run time over what people are running today, which is on the order of 30 CPU-days to characterize a typical library.  If you look at the 6 hours versus 1 hour, there is 1 hour for nominal, 3 hours for systematic and 2 hours for random.  In the second example we modeled 9 systematic parameters and 1 random parameter.  The run time was 2 hours for nominal, 18 hours for systematic and 7 hours for random.  That is a 3.5x runtime overhead for the random parameter.


Library     CPUs   Variety Runtime   Systematic Parameters   Random Parameters   Liberate Runtime
387 cells     8         6 hrs                  3                     1                 1 hr
504 cells    16        27 hrs                  9                     1                 2 hrs


This is the whole difference between what we do and what others do.  In the past people have used shell scripts, just big wrappers around running SPICE.  We do not do that.  We actually read in the SPICE circuits, read in the models, analyze the circuits and figure out how to optimize them for characterization.  We have our own built-in SPICE engine to do this analysis.  You can use it for characterization, or we can do a kind of final characterization using the golden SPICE simulators HSPICE, Spectre or ELDO.  We have interfaces to all of those.  This piece is what gives us a big performance boost.

Basically, when we characterize, you can specify the variation for anything that is named in the SPICE model.  It can be 1σ or 2σ.  That is completely up to you.  If you combine parameters in the same command, they are assumed to be 100% correlated.  If you keep them distinct, they are assumed to be 100% uncorrelated.  We can also account for the fact that random variation is related to the size of the transistor, basically proportional to one over the square root of the area of the transistor.  We will account for the fact that larger transistors have less random variation than smaller transistors.  It is up to you how many parameters you want to specify; what we are typically seeing is L, W, Tox (oxide thickness) and Vth as the major contributors to process variation.
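The area dependence described above is the classic Pelgrom mismatch relation, σ(Vth) = A_vt / √(W·L). A minimal sketch, with an invented matching coefficient and device sizes (not from any real process):

```python
import math

# Pelgrom-style mismatch scaling: random Vth sigma falls off as
# 1 / sqrt(transistor area). A_VT and sizes below are hypothetical.

A_VT = 3.0e-3        # V * um, invented matching coefficient

def sigma_vth(w_um, l_um):
    return A_VT / math.sqrt(w_um * l_um)

small = sigma_vth(0.2, 0.06)    # minimum-size device
large = sigma_vth(0.8, 0.06)    # 4x wider, same length
print(small, large, small / large)
```

Quadrupling the width (hence the area) halves the Vth sigma, which is why a characterizer that knows each transistor's geometry can scale the random variation per device instead of applying one global number.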

When we do sensitivity analysis, we can do linear or non-linear.  It is linear if, when I increase or decrease the parameter by 5%, the sensitivity is the same.  In a lot of cases it is different.  Right now we can do multiple points, typically at least 2, and run some simulation to determine whether it is linear or non-linear.  The formats right now still support only linear, so there is work to be done in the formats to support full-blown non-linear sensitivity.  That is coming.  We have seen some proposals for that.
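That two-point linearity check can be sketched directly: measure the delay slope at +5% and at -5% of a parameter, and compare the two one-sided slopes. The delay function and numbers below are invented for illustration.

```python
# Sketch of a two-point linearity check on a delay sensitivity.
# delay() is a made-up, mildly nonlinear delay vs. channel length.

def delay(length_nm):
    return 0.05 + 0.002 * length_nm + 1e-5 * length_nm ** 2

nominal = 60.0
d0 = delay(nominal)
step = 0.05 * nominal            # the +/- 5% perturbation

slope_up = (delay(nominal + step) - d0) / step
slope_down = (d0 - delay(nominal - step)) / step

# If the one-sided slopes differ, a linear model is not enough.
nonlinearity = abs(slope_up - slope_down) / abs(slope_up)
print(slope_up, slope_down, round(nonlinearity, 4))
```

For this toy function the slopes differ by a couple of percent, so a single linear sensitivity would carry a small but measurable error; a real characterizer would decide from such a comparison whether extra points are worth storing once the formats allow it.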

We felt it was important to establish a base with our Liberate product that shows we can do all the regular non-SSTA characterization, because our belief is that in SSTA, if you set variation to zero, you should get the same results as regular STA.  If you do not have the same library characterization system with the same assumptions and the same mindset, it is very difficult to ensure that consistency.  We support multiple vendors right now.  You will see people with mixed flows, where one vendor's tool is used for signoff and another vendor's for implementation.  That situation has been quite common in the past.  I think it will happen with SSTA.  Again, our ability to do random variation very quickly remains the main distinguishing factor of our tool.  People believe that random variation will become a larger component of the total variation as people understand more and more how to deal with systematic variation: basically design it away or remove it from the actual process.  People generally accept that there is always going to be some element of randomness, and that random contribution will probably increase over time.  To be able to model it very efficiently is very important.

Who are you going after in terms of market?
We see the market in three different segments.  There is obviously the IP and foundries, people who deliver libraries.  They are very interested in this stuff.  One of our first customers was Virage Logic who use our Liberate product for standard cell characterization.  We are also working with IDMs.  Renesas was the first customer for our statistical characterization tool.  We also see quite a bit of interest in the COT market.  If you look at the top 10 COT companies, they all either develop their own libraries or they re-characterize somebody else’s.  There are probably good reasons for that.  With the volume of business they are doing, it makes sense to reduce margins, to improve quality of the cell whether it is actually on the layout itself or in the model.

What is the pricing and availability?
Variety is available now; U.S. pricing starts at $95,000 for a one-year license. Liberate is available now, also starting at $95,000 for a one-year license.  Altos' products are sold directly in North America and via a distributor, Marubeni Solutions Corp., in Japan.

The top articles over the last two weeks as determined by the number of readers were:

Magma and Synopsys Agreement Narrows Delaware Case
The companies jointly stipulated to: Synopsys withdrawing infringement claims against Magma with regard to two of the three Synopsys patents at issue; Magma withdrawing infringement claims against Synopsys with regard to one Magma patent at issue; and Magma withdrawing claims of antitrust violation by Synopsys.

Synopsys Posts Financial Results for First Quarter Fiscal Year 2007 
For the first quarter, Synopsys reported revenue of $300.2 million, a 15 percent increase compared to $260.2 million for the first quarter of fiscal 2006. Net income for the first quarter of fiscal 2007 was $23.4 million, or $0.16 per share, compared to $1.7 million, or $0.01 per share, for the first quarter of fiscal 2006.

HP Reports First Quarter 2007 Results 
HP announced financial results for its first fiscal quarter ended Jan. 31, 2007, with net revenue of $25.1 billion, representing growth of 11% year-over-year from $22.7 billion. Net earnings were $1.5 billion, up 26% from $1.2 billion in the year-ago quarter.

Magma Agrees to Drop All Anti-Trust Claims Against Synopsys 
Synopsys announced that Magma Design Automation has requested that the Court dismiss all antitrust claims against Synopsys. In return, Synopsys agrees not to pursue Magma for malicious prosecution or any other claims related to making these anti-competitive accusations against Synopsys. The Court has been asked to dismiss all of these claims 'with prejudice,' meaning they cannot be revived.

Analog Devices Announces Financial Results for the First Quarter of Fiscal Year 2007
Total revenue for the first quarter of fiscal 2007 was $692 million, which included $657 million of product revenue and $35 million of revenue from a one-time technology license. Product revenue for the first quarter of fiscal year 2007 increased approximately 6% compared to the same period one year ago and increased approximately 2% compared to the immediately prior quarter.

Net income for the first quarter of fiscal 2007, under generally accepted accounting principles (GAAP), was $153 million, or 22% of total revenue, compared to $121 million for the same period one year ago and $138 million for the immediately prior quarter.

Other EDA News

  • EDA Consortium Chooses Market2Lead Automated Marketing System  
  • Next Inning Technology Updates Outlooks for Harmonic, Packeteer, Altera, and Xilinx  
  • Synopsys CEO Aart de Geus to Speak at the 2007 Morgan Stanley Technology Conference March 5, 2007
  • Cadence Logic Design Technologies Give Asia-Pacific Chip Designers a Competitive Edge  
  • Magma Prices $47.4 Million Convertible Note Exchange 
  • VaST Promotes Colin Lythall to VP of Corporate Engineering
  • SAMSUNG Releases Multilayer Ceramic Chip Capacitor Library for Nexxim and Ansoft Designer  
  • Synopsys Proteus OPC Delivers Superior Cost of Ownership on Intel(R) Core(TM) Microarchitecture  
  • MathStar, Inc. and Mentor Graphics Announce Partnership for Field Programmable Object Array Design Tools  
  • Magma's QuickCap NX selected by IBM as Golden Extraction Standard  
  • Cadence Hosts 2007 Investor and Analyst Conference Web Cast  
  • Dai Nippon Printing, Takumi Technology and a Major Semiconductor Maker Jointly Developed Automated Criticality Aware Photo-Mask Inspection System  
  • Appro Unleashes Four Processor Workstation with Industry Leading 128GB Memory for High Performance Computing
  • MOSAID Announces Conference Call to Present Patent Opportunity; Provides Corrections to Yesterday's Releases
  • Sandwork Design Now Shipping SpiceCheck  
  • MOSAID Declares Quarterly Dividend of $0.25 per Share
  • Magma's Roy E. Jewell to Speak at D.A. Davidson & Co. Technology Conference
  • IPextreme to Sell and Support IP Portfolio from National Semiconductor including AMBA Peripheral Library and CR16 Processor [23 Feb 2007]
  • EDA Consortium, IEEE Council on EDA to Jointly Sponsor Prestigious Kaufman Award  
  • Infolytica Corporation Announces Special Pre-Conference Seminar at the 2007 Magnetics Conference
  • Jasper Design Automation Advances Verification Planning With GamePlan(TM) Verification Planner v1.1
  • Atrenta Gains Key Patents for Chip Design Analysis Technologies  
  • Dataram Reports Fiscal 2007 Third Quarter Financial Results  
  • Bell Helicopter Standardizes on Mentor Graphics CHS to Cut Costs and Reduce Schedules for Helicopter Electrical System Design
  • Averant Announces Sales Channel Expansion
  • ACM Names Bluespec Founder Arvind 2006 ACM Fellow
  • Embedded Systems Veteran Isao Yumoto Named Corporate Vice President and General Manager, VaST Japan
  • Magma and Synopsys Agreement Narrows Delaware Case  
  • Magma Agrees to Drop All Anti-Trust Claims Against Synopsys  
  • Synopsys Posts Financial Results for First Quarter Fiscal Year 2007  
  • Blaze DFM and Aprio Technologies Announce Agreement to Merge  
  • LogicVision Strengthens Presence in Europe With the Addition of the ISS Group as European Distributor  
  • ArchPro Boosts Capacity, Performance of Multi-Volt Tools  
  • Jasper Design Automation Announces JasperGold(R) Verification System 4.3 With Major Advances in Performance, Modeling and Ease-of-Use  
  • Silicon Image Announces Upcoming Investor Event Webcast
  • ARC Announces Return of Successful ConfigCon(TM) Conference Series With New Locations Added for 2007

Other IP & SoC News
  • Avago Technologies Announces High-Brightness 3-Watt Power LED Emitter for Use in Solid-State Lighting Applications
  • Xilinx Delivers Industry's First 90nm Non-Volatile FPGA Solution With New Spartan-3AN Platform
  • Discera and Vectron Ship MEMS-based Oscillators to Customers  
  • National Semiconductor Introduces Lowest-Power, Easy-to-Use Cable Extender Chipset  
  • Brion Introduces Tachyon 2.0 Computational Lithography System
  • Transmeta Reports Fourth Quarter and Fiscal 2006 Results
  • Denali's Chief Technical Officer to Speak at the Fabless Semiconductor Association Event
  • New 4GB Fully Buffered DIMM from Legacy Electronics Receives CMTL Intel(R) Validation & Certification
  • Three New Advanced Packaging Products from Shin-Etsu MicroSi, Inc. Address Demands of Flip Chip and Wafer-Level Packaging Markets 
  • Sipex Bolsters Broad RS-485 Portfolio With New Family of Advanced, High Performance Transceivers
  • TI Simplifies Audio Design with a New Ultra Low-Jitter S/PDIF Receiver
  • FuturePlus Systems and DFT Microsystems collaborate on next generation High Speed Serial IO Jitter Tools
  • Axiom Microdevices' CMOS Power Amplifier Achieves Cellular Handset Full Type Approval  
  • Xilinx Features PCI Express Technology Solutions on National Semiconductor's Analog by Design Show
  • Micrel's 4A MIC68400 Extends LDO Family Aimed At FPGAs, DSPs and Microcontrollers 
  • ARM7-based Motor Control Development Kit from STMicroelectronics Simplifies Vector Drive Design for Cost-Sensitive Applications
  • Xilinx Announces Production Volume Shipments of Its Low-Cost Spartan-3A Platform
  • Samsung Now Offers Comprehensive NOR Flash Portfolio  
  • Marvell Technology Group Ltd. Reports Preliminary Revenue for Fourth Quarter and Full Fiscal 2007  
  • Discera Ushers in Next-Generation Device for $3.5B Timing Market  
  • Tundra Semiconductor Low Power PCI Bridge Now in Production  
  • Toshiba Announces High-Efficiency Switching MOSFETs for Synchronous DC-DC Converters
  • Chartered Extends Technology Development Agreement with IBM to 32nm  
  • Cypress Adds CapSense Support To PSoC Express(TM) Design Tool
  • Tensilica's New Energy Estimator Tool Guides Designers to Energy-Efficient SOC Architectures  
  • Semiconductor Lithography Guru Joins Forces With Patterning Synthesis Provider; Dr. Chris Mack Accepts Role as Advisory Scientist for Invarium  
  • Texas Instruments Launches New Development Platform for DaVinci(TM) Technology to Speed Development of Digital Video Products  
  • PMBus(TM) Consortium Unveils Version 1.1 Specification Simplifying System-Wide Power Management
  • LucidPort Introduces Certified Wireless USB Peripheral Controller With Both Wired and Wireless USB Connectivity
  • Marvell Enters Serial Attached SCSI (SAS) Storage Controller Market  
  • Atmel Launches RF Chipset with Industry's Highest Integration Level for 5.8 GHz Cordless Phone Applications  
  • National Semiconductor Introduces Industry's First 100V Current-Mode Buck Controller
  • Semtech Debuts Industry's First Dual-Channel Step-Down Controller for DrMOS Power Devices
  • NetLogic Microsystems Leverages TSMC 80nm-GC Process for Highest Performance Knowledge-Based Processor 
  • Micrel Joins Z-Alliance With A Full Line Of Z-One(R) Compatible LDO Regulators  
  • IBM Supercomputing Simulations Support Chip Breakthrough  
  • NXP Delivers Industry's Smallest ULPI Hi-Speed USB Transceivers for Mobile Phones  
  • Sidense Strengthens Market Position With ViXS License Deal  
  • STMicroelectronics Introduces First Auto-Shutdown LED Driver to Address Energy-Saving Programs
  • Samsung Speeds Up World's Fastest Graphics Memory
  • MOSAID Wins Competitive Bid for Essential WiFi and WiMAX Patents from Agere Systems
  • MOSAID Announces Third Quarter Fiscal 2007 Results  
  • MOSAID to Sell Memory Test Assets to Teradyne - Focused on Intellectual Property  
  • Toshiba Small Signal MOSFETs for High-Speed Switching in Portable Electronics Achieve Low ON-Resistance and Capacitance  
  • MagnaChip Disputes Pixelplus Allegations  
  • Arasan Chip Systems Licenses the Compact Flash IP Core to Samsung Electronics
  • MagnaChip Launches Production of 0.18um EEPROM Process Featuring High Reliability With Small Cell Size and Low Cost Solution
  • Pixelplus Files Defamation and Tortious Business Interference Lawsuit and Preliminary Injunction against MagnaChip in Seoul Central District Court  
  • Renesas Introduces 324MHz, 583-MIPS SuperH Microprocessor for Multimedia and Office Automation Applications
  • Virage Logic to Present Technical Webinar on Next-Generation Embedded Non-Volatile Memory  
  • STMicroelectronics Enhances Cable TV Demodulator to Deliver Increased Performance and Lower BOM Cost
  • InsideChips President and Senior Analyst, Steve Szirom, to Moderate Panel on Start-ups at the Multicore Expo
  • Chipidea Opens New Radio Frequency Design Center in France  
  • WJ Communications Announces Fourth Quarter and Fiscal Year 2006 Financial Results  
  • Analog Devices Announces Financial Results for the First Quarter of Fiscal Year 2007  
  • Alliance Memory, Inc Announces New Asynchronous CMOS Low Power SRAM Product Line  
  • SEMATECH to Demonstrate EUV Leadership, Technical Breakthroughs at SPIE
  • Zarlink's Innovative ZLynx Optical Cable Products Simplify Data Center and Computer Cluster Interconnects
  • Xilinx and NSA Deliver New Design Flow and Verification Process for High Assurance Industry  
  • Elron Announces Acquisition by ChipX of Oki's U.S. ASIC Business Assets
