November 02, 2009
What is Different at Magma?
Please note that contributed articles, blog entries, and comments posted on this site are the views and opinions of the author and do not necessarily represent the views and opinions of the management and staff of Internet Business Systems and its subsidiary websites.
Jack Horgan - Contributing Editor

Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us!

When I was at Magma before, we had the data model, which is kind of central to what we do. What we have today is the full suite, all built on the same data model; everything at the front end, from how you characterize libraries, which is the SiliconSmart product. How do you do a 100-million-gate chip? You need to be able to floorplan, budget, and partition. There is a product called Hydra that does that. All the digital implementation tools: place and route. All the analog analysis and implementation tools, custom editors and so forth. Then we have a suite of analysis tools. And this is really important: chip finishing. You have a team working on these blocks. Now you need to put them together as a chip. It turns out what we need to do at the top level is not necessarily something that a standard place & route system is going to solve for you. For example, there is a shape-based router, tools that help with pin assignment and so forth. Then at the back end we have a full-fledged DRC/LVS system that is extremely competitive. The biggest change there from a year ago was that when it first rolled out it did not read Calibre decks. Well, Calibre is the industry leader. That's all changed. It reads Calibre decks. We actually have a lot of customers now that are adopting that product. It is really fast. Why would somebody care about this? Because in the end designers need to get a chip done. I would venture to say 90% of the chips being done have stuff on them that is not just pure digital logic. They have memories, they have an ARM core, they have SerDes, and they have other analog blocks. This is the first time, at least for Magma, that we have really brought it all together in this one unified model. We have all the pieces to tie it together.

This image kind of illustrates what I said about a moving target. If 20 million gates were standard at 65nm, we expect to see 400 million at 22nm. Now, as I just mentioned, we have just seen a 500-million-gate design that somebody pulled off at 40nm. That's huge, but it is more of an outlier. We just had a press release out on eSilicon. They did a 400-million-gate design at 65nm. That is probably more the exception than the rule. This is what I was talking about with PVT corners, which we call scenarios. We think you are going to see a lot more of them.

Power is very important. Of course, as these things get bigger, the big worry that most of these companies have, especially in this economy, is: "We can't hire a lot more engineers, so we have to figure out how to get more of whatever your metric is, gates or cells designed per day per engineer." There continue to be a lot of challenges. Based on this, the focus at Magma for 2009, with the release we came out with on the digital side, was really on turnaround time, cutting time out of the flow, going back to that productivity thing. How can we do bigger designs, even faster, get timing closure and get good quality of results? At the same time, everybody knows low power is really important, and that it is not just consumer battery-operated devices. It is everything. All the server guys are looking at this. There is a very strong push to go green. Then there is the ECO flow. Why? Because every chip that has ever been designed gets close to tapeout and somebody says, "Oh yeah, we need to add this," or the spec changed so you need to do this. There are always last-minute changes. You need to be able to keep them from derailing the whole schedule. The result of all that is what we call Talus 1.1, the new release we started shipping in June.

I picked six recent experiences we had, four of them at 40nm. They range from a very large design, like that 500-million-gate design, to one where the customer was going nuts. They had a fairly simple ECO, but every time they tried to put it in, they got a bunch of new DRC errors. It was taking them a week just to cycle the chip through the flow trying to get rid of those, and after a week they still would not have it right. We finally got them to bring in Talus 1.1, and they got through this ECO in 30 minutes. It was dramatic: from a week-long cycle to less than an hour. There are some others in area and power.

I always have a tough time explaining the concept of the data model. If you look at most EDA tools, routers and so forth, most of them have their own data model that comes with them. If you want to capture the output of one or interface it to other tools, there is usually an API or some format that gets exchanged. In the case of Magma, from day one, the engineering team came up with the concept of having just one data model, and all the tools read and write directly from that model. In that sense there are no APIs; the API is that you read and write the data model. All the software is built on it. In other words, it's a single executable, which sounds kind of odd, but it is one big executable. So where does this really pay off? In the first incarnation of Magma, when I was there the first time around, the thing that really put Magma on the map was that we figured out how to do optimization during placement. That was Blast Fusion. The reason Blast Fusion ran so quickly and was able to get timing closure was that during the actual placement of the gates, we were concurrently optimizing. As we placed a gate, we looked at timing, and if timing was wrong, we moved it, resized it or buffered it. If you look at competing flows, even today, optimization is usually a separate step: I do a placement and then I optimize it. If it is still not working, I go back and re-place and re-optimize. The benefit of our approach is that it cuts out a lot of time. The real breakthrough with this release, by focusing hard on the routing algorithms, the timer and how they work on the data model, is that we were actually able to bring concurrent optimization to routing. There are really two key benefits here. One is that it cuts a lot of time out of the flow, so you can push out designs more quickly.
The second, which is less obvious, is that you get timing predictability from the point where the gates are placed all the way through to the end, because these are all tied together in the same optimization. We are seeing a +/- 5% shift in timing both ways; it might be a little bit worse or a little bit better. But if you are close at placement, the message is that you will be very close when you pop out. If you look at competing flows, this discrepancy from placement to final routed GDSII can sometimes be 30%. It can be huge. If you are going to have a timing problem, you would rather know about it early, because the more time you spend implementing the design, the longer it takes to roll that back and start all over again. This is a pretty big deal. Customers are very excited about that.
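The place-and-optimize idea described above can be sketched very roughly in code. Everything here is hypothetical and greatly simplified (the class names, the toy delay model, the drive-strength units are all invented for illustration, not Magma's actual implementation); the point is only that one shared in-memory model is read and written by every step, and that each gate is optimized at the moment it is placed rather than in a separate pass:

```python
# A minimal sketch, assuming a toy design representation: one shared
# in-memory model that a placer reads and writes directly, upsizing
# each gate as it is placed instead of in a post-placement pass.

class Gate:
    def __init__(self, name, drive=1):
        self.name = name
        self.drive = drive      # drive strength (arbitrary units)
        self.xy = None          # placement location, filled in later

class DesignModel:
    """Single data model all 'tools' read and write -- no file exchange."""
    def __init__(self, gates):
        self.gates = {g.name: g for g in gates}

def delay(gate, load=4.0):
    # Toy delay model: delay shrinks as drive strength grows.
    return load / gate.drive

def place_and_optimize(model, slots, max_delay=2.0):
    # Concurrent place-and-optimize: as each gate lands in a slot,
    # check its timing and resize it in place if it misses the target.
    for gate, slot in zip(model.gates.values(), slots):
        gate.xy = slot
        while delay(gate) > max_delay:
            gate.drive += 1     # in-place optimization on the same model

model = DesignModel([Gate("u1"), Gate("u2", drive=4)])
place_and_optimize(model, [(0, 0), (0, 1)])
print([(g.name, g.xy, g.drive) for g in model.gates.values()])
```

In a competing flow the `place` and `optimize` loops would be separate tools exchanging files; here the resize happens inside the placement loop, which is why the timing seen at placement tracks the final result so closely.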

This gets to the idea that I have a chip that I have to model, and I want to make sure it works in all the different scenarios. Two years ago maybe there were two or three scenarios. Today there might be twenty. In a couple of years there might be 100. These are different combinations of process, voltage and temperature, and different chip operating modes. So you might have self-test mode, idle mode, power-down mode, full-power mode, low-speed mode. Now the question is, "How do I make sure the design actually works under all these scenarios?" What engineers have had to do until this new technology came along is either run analysis for each one of these scenarios, for example run timing analysis and see if it is still working, or try to guess and pick a subset: I know that this case here, PVT1, is always going to be harder to achieve than that one, so I don't have to worry about that one. But it is very heuristic. What our developers came up with, and it works very well, is what they call acceleration. It is a built-in piece of the system now. You give it the corners and the operating modes. It uses (and I do not claim to know algorithmically how they figure this out) a way to go in and very quickly figure out which subsets really need to be analyzed. It is all automated. It goes and runs all the analysis routines in parallel and comes back and says here is where you are across the scenarios that represent the whole space. It takes a process that used to be very sequential (I've got eight runs to do, one after another), automates it and basically runs it all in parallel, so you get to the final answer a lot quicker.
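One simple way to picture the subset-picking step is as dominance pruning: if one corner is at least as hard as another on every axis, the easier one never needs its own run. The sketch below is an invented illustration of that idea (the scenario names, the voltage/temperature values, the "harder" rule and the toy slack formula are all assumptions, not how Magma's acceleration actually works), followed by analyzing the survivors in parallel:

```python
# Hypothetical sketch of corner "acceleration": prune PVT scenarios
# that are dominated by a strictly harder one, then analyze only the
# survivors in parallel instead of running every corner sequentially.
from concurrent.futures import ThreadPoolExecutor

# (voltage, temperature) per scenario; lower voltage and higher
# temperature are assumed to be harder for timing in this toy setup.
scenarios = {
    "pvt1": (0.9, 125), "pvt2": (1.0, 125),
    "pvt3": (0.9, 25),  "pvt4": (1.0, 25),
}

def dominated(a, b):
    # Scenario a is dominated if some other b is at least as hard on
    # every axis, so a can never be the worst corner.
    (va, ta), (vb, tb) = scenarios[a], scenarios[b]
    return a != b and vb <= va and tb >= ta

def prune(names):
    # Keep only scenarios that no other scenario dominates.
    return [a for a in names if not any(dominated(a, b) for b in names)]

def analyze(name):
    # Stand-in for a real timing-analysis run on one scenario.
    v, t = scenarios[name]
    return name, round(t / (v * 100), 3)   # toy "criticality" figure

survivors = prune(list(scenarios))
with ThreadPoolExecutor() as pool:      # run the survivors in parallel
    results = dict(pool.map(analyze, survivors))
print(survivors, results)
```

Here only `pvt1` (low voltage, high temperature) survives the pruning, so one run covers all four corners; with twenty or a hundred scenarios, that kind of reduction plus parallel execution is where the sequential eight-runs-in-a-row process collapses into one quick pass.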


-- Jack Horgan, Contributing Editor.

