October 12, 2009
Roundtable: Virtualization & Simulation

by Peggy Aycinena - Contributing Editor

We also need cycle-accurate models, for ourselves and for customers, to determine application timing, often before silicon is available. We've worked with Virtutech to develop a hybrid model: a single model that can be switched between a high level of abstraction and cycle accuracy, so you can zoom in on an area where you need more timing fidelity to gather performance data.
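The hybrid-model approach described above, switching one model between a fast functional mode and a cycle-accurate mode, can be sketched in miniature. This is a toy Python illustration of the concept only; the class, its modes, and its timing numbers are invented here, not Virtutech's actual API.

```python
class HybridMemoryModel:
    """Toy device model with two interchangeable timing modes."""

    def __init__(self):
        self.mode = "functional"   # start fast and loosely timed
        self.cycles = 0

    def zoom_in(self):
        """Switch to cycle accuracy where timing fidelity matters."""
        self.mode = "cycle_accurate"

    def zoom_out(self):
        self.mode = "functional"

    def read(self, addr):
        if self.mode == "functional":
            self.cycles += 1                        # coarse, fast approximation
        else:
            # Invented detail for the sketch: odd 4 KB pages pay a longer latency
            self.cycles += 12 if (addr >> 12) % 2 else 4

mem = HybridMemoryModel()
for addr in range(0, 8 * 4096, 4096):   # fast phase: just get software running
    mem.read(addr)
fast_cycles = mem.cycles                # 8 accesses, 1 cycle each

mem.zoom_in()                           # region of interest: gather real timing
for addr in range(0, 8 * 4096, 4096):
    mem.read(addr)
detailed_cycles = mem.cycles - fast_cycles
print(fast_cycles, detailed_cycles)     # prints: 8 64
```

The point of the pattern is that the same model object serves both phases, so the software stack running on it never has to change when you zoom in.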

- Do you have an internal methodology that guides simulation and virtualization decisions within your own organizations?

Michel Genard (Virtutech) 
- I’m going to play this like Mark. The question is wrong. The fact is, our customers hold different views. We all agree on mapping the level of accuracy to the use case, but this is not about methodology. From an industry point of view, there’s been good progress made in the simulation area, especially with TLM. There is now probably a good enough level of interface that some people don’t have to design their own interfaces, and can focus instead on the key pieces [of development].

Mark Burton (GreenSocs) 
- Michel’s hit at least one nail on the head. There is an awfully large number of use cases here, with different people using modeling in different ways. Everyone has different projects, different needs, and different use cases.

One good thing: now in industry, with SystemC and TLM 2.0 being recognized, [we see] that the language and the interconnecting glue [are in place]. [Today], it’s reuse, reuse, reuse, and as reuse becomes so critical, it’s good that we’ve got TLM 2.0 and SystemC as shining-light standards to enable it.

As far as methodology goes, we all provide part of the flow. And yes, Michel’s right: we all kind of dovetail together. It comes down to both cooperation and conflict, with most of the industry, indeed, in cooperation.
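Mark's reuse point rests on TLM 2.0's core idea: one generic transaction type and one blocking transport interface, so any initiator can drive any target without bespoke glue. The real standard is SystemC/C++; what follows is only a loose Python sketch of that shape, with the class names borrowed from the standard's vocabulary but everything else simplified.

```python
class GenericPayload:
    """Sketch of a TLM-2.0-style generic transaction."""
    def __init__(self, command, address, data):
        self.command = command      # "read" or "write"
        self.address = address
        self.data = data
        self.response = None

class MemoryTarget:
    """One reusable target; any initiator can call b_transport on it."""
    def __init__(self, size):
        self.store = bytearray(size)

    def b_transport(self, txn):
        end = txn.address + len(txn.data)
        if txn.command == "write":
            self.store[txn.address:end] = txn.data
        else:
            txn.data = bytes(self.store[txn.address:end])
        txn.response = "ok"

# Initiator code is written once against the generic interface:
target = MemoryTarget(64)
wr = GenericPayload("write", 8, b"\xde\xad")
target.b_transport(wr)
rd = GenericPayload("read", 8, b"\x00\x00")
target.b_transport(rd)
print(rd.data.hex())   # prints: dead
```

Because the transaction and the transport call are standardized, swapping in a different target model requires no change to the initiator, which is the reuse that TLM 2.0 enables.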

Tomas Evensen (Wind River) 
- I do see that education is needed. A lot of people are saying they can’t possibly develop models that are exact enough to develop products. It’s a bit of an uphill battle to make them realize that models can help a lot, but once they do, they say, ‘Hey, why use hardware if I don’t have to?’

But models and simulators are available at the various levels. Since we have three different types of simulators here, it’s fairly straightforward as to which one to choose; usually it’s the one they used last time.

Jason Andrews (Cadence) 
- We don’t develop software, but from our customers’ point of view, there’s a trend to simulate more and emulate less. They’ve trusted a lab [in the past], but now they’re shifting to simulation. It takes education [to sort through the options], mixing the software and hardware domains [in a way that] people don’t necessarily [have experience with].

We get a lot of customer feedback saying their simulation is great, but the hardware guys don’t accept it. Everybody wants to stick with whatever their current technique is. It’s true, there’s definitely a trend toward SystemC, but it will take more time to [make the change].

Kent Fisher (Freescale) 
- From Freescale’s point of view, we definitely have internal processes which we follow, as I mentioned previously. We use hardware emulation to do SOCs, and we use the hybrid simulators we developed for our own software and to help customers prior to obtaining silicon. That is essentially what simulators are supposed to do: buy back time, so the software is up and running before the hardware [is delivered].

- In a perfect world, isn’t programmable hardware the best solution?

Jason Andrews (Cadence) 
- That’s actually the worst of everything. Programmable systems are the worst because nobody pays attention to the quality and the details of making them right. And FPGAs? Management says, this is programmable. But getting to the right combination of programming can take anywhere from one minute to ten years. It’s all still very ad hoc; the procedures are not there.

It’s also very bad from the standpoint of visibility. Even if you could give me silicon immediately, I don’t know what my software is doing. I don’t have control, because the controllability and observability are poor. As a software engineer, I would much rather have a simulator than programmable hardware.

Mark Burton (GreenSocs) 
- Jason is absolutely spot on. The big issue for us for years has been supporting the software. Yeah, models might be useful for debugging hardware, but the single biggest value at this point is helping the software engineer. Hardware is not necessarily [the bottleneck].

Tomas Evensen (Wind River) 
- I agree. Hardware availability is just one of the reasons you use models, and of course we do that quite a bit. But it’s just one reason. With multicore, where you are debugging race conditions, you see crashes in different places each time. But an exact simulator will run the system the exact same way multiple times, so you can get insight into what’s really going on.
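Tomas's repeatability argument can be shown with a toy deterministic scheduler: generators stand in for cores, and a fixed round-robin interleaving makes a classic lost-update race reproduce identically on every run. This is a conceptual Python sketch, not any vendor's simulator.

```python
shared = {"counter": 0}

def core():
    """One simulated core doing three racy read-modify-write increments."""
    for _ in range(3):
        tmp = shared["counter"]        # read
        yield                          # scheduler switch point
        shared["counter"] = tmp + 1    # write back -- may lose other cores' updates
        yield

def run():
    shared["counter"] = 0
    alive = [core(), core()]
    # Deterministic round-robin: the interleaving is identical every run.
    while alive:
        for c in list(alive):
            try:
                next(c)
            except StopIteration:
                alive.remove(c)
    return shared["counter"]

# Two cores x three increments should give 6, but the fixed interleaving
# loses updates -- and loses them the same way on every single run.
print(run(), run(), run())   # prints: 3 3 3
```

On real multicore hardware the final count would bounce around between runs; under the deterministic schedule the wrong answer is stable, so you can single-step straight to the lost update.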

Kent Fisher (Freescale) 
- From our perspective, [programmable hardware] would be ideal, but it’s definitely more complex than that. I agree with what’s been said previously: doing hardware revisions with the current process technology is evil, between the out-of-pocket expense and the time to market. That’s actually one of the things we are using emulation and simulation for, to reduce mask spins and get products and silicon to market faster. We get paid for shipping silicon that’s successful.

Michel Genard (Virtutech) 
- We are not designing hardware at all, so obviously from the Virtutech point of view, we like the idea that hardware being late or buggy can be solved with more virtual platforms. What’s interesting here is the idea of the ‘perfect world’, but what’s missing in that picture is that there are still significant costs in assuming your design decisions are correct.

From a timing point of view, if you go [to hardware] after your flow, it’s extremely late in the game to be finding a problem. Early simulation, early in the design process, even before you do any RTL, with some validation, is the way to a much shorter design cycle where you already have [timing information].

And to Mark’s point: software content is not just an issue of debugging, but also an issue of affecting how your SOC behaves. Which means that if you really want, for example, to design an SOC that’s [functional], you cannot develop it from a hardware point of view alone; it’s the software exercising the system that you have to work hard on. The point is, in theory your [programmable hardware] flow looks good, but it actually makes more work, will delay the project, and will make people work harder.

- If not programmable hardware, then what about a vanilla platform that can be used across a range of applications?

Jason Andrews (Cadence) 
- Yes, there is some move in that direction, but it won’t succeed because of competition. Sure it would be useful, and from an engineer’s point of view, it might be better, but in the real world, everybody’s going to try to leapfrog and outdo the other guy. A combination of hardware and software [is still required], with engineers improving on it. There’s just no way to stop the [hardware-software] mess.

Mark Burton (GreenSocs) 
- We are an open source organization, so from our perspective, I want everyone to move to open source software and hardware platforms, but there are only a few that are really open. OMAP from TI, for instance, is very much not open. So yeah, I agree with Jason. It’s a nice dream, but certainly in the hardware space, it won’t happen. In the software space, the move to Linux is substantial. And now with the modeling from Intel, there’s a very interesting move that’s going to rapidly [change things].



-- Peggy Aycinena, EDACafe.com Contributing Editor.
