Posts Tagged ‘EDA’
Wednesday, August 25th, 2010
Liz Massingill interviews InPA’s Joe Gianelli
This month, a new company announced its entrance into the rapid prototyping space. It goes by the name of InPA Systems. I was lucky enough to grab a few minutes with its VP of Marketing and Business Development, Joe Gianelli, to learn a little bit about this new startup, its exciting new technology and how it could impact the future of rapid prototyping.
Liz: InPA….not an obvious name. What does it stand for?
Joe: Yeah, that’s an obvious question. It stands for integrated prototype automation, which describes the technology we bring to the market.
So what InPA Systems is integrating is the RTL simulation and FPGA prototyping environments and automating a critical portion of the “bring up” that verifies that the mapping of the RTL code into the multiple FPGAs correlates to the original RTL code.
Liz: So InPA is in the rapid prototyping area, a segment that’s been around for, what, 20 years? What do you bring to the market that’s new?
Joe: InPA’s mission is to more fully harness the power of today’s FPGA rapid prototyping systems. Our most noteworthy technological capability is bringing debug visibility to users who used to have to fly blind.
Basically, Tom (Huang) and Michael (Chang) saw the need for a more complete rapid prototype environment that integrated today’s RTL verification and rapid prototype environments with better visibility.
Liz: So technically, how does this work?
Joe: Without getting into a technical spiel, InPA Systems integrates the RTL code and FPGA prototype environment so that engineers can debug in their RTL code while accessing their captured faulty conditions with full visibility. The automation here is to cross-link the RTL code with the captured faulty condition and to expand full signal visibility around the faulty condition.
We’re also enabling full system debug. When engineers integrate the software and hardware design components in the FPGA prototype environment, they can catch issues more easily. The automation here enables full system debug with “active debug” technology to dynamically control HW and to cross-trigger between FPGAs.
And finally, we’re automating the full capture of faulty conditions across multiple FPGAs. Today, engineers must capture and debug one FPGA at a time.
Liz: That’s got to be key! Why is it important or noteworthy to integrate and automate this?
Joe: It’s extremely tedious and difficult to isolate a hardware problem when it spans RTL code over multiple FPGAs. Giving the engineer the ability to fully capture the faulty scenario leads to much quicker isolation of the actual problem.
Liz: What does this new technology offer to the user that he or she hasn’t been able to accomplish up until now?
Joe: Right now, engineers probe around in the dark looking for problems in the hardware, one FPGA at a time. We give them the tools to explore various scenarios without having to recompile FPGA place and route…this is a real pain for engineers today. And we give them full visibility around their problem, making it easier to detect and fix.
Liz: What does “active debug” mean?
Joe: It’s allowing the engineer to remain active in the debug process: forcing certain circuit states, capturing data at speed, analyzing the data, and essentially staying engaged as opposed to probing around in the dark and waiting for another FPGA P&R iteration. What we call Active Debug is a combination of technology and methodology that increases the productivity of engineers who are integrating hardware and software and validating in-system with a rapid prototype.
Liz: So it’s an answer to the old debug visibility problem, right?
Joe: You got it.
Liz: So I have to ask, how is it different from existing debug? Passive debug, is it?
Joe: Yes. Most current systems use the passive debug approach: they only probe the circuit looking for possible problems with limited visibility, which doesn’t allow the user to dynamically create different conditions in the circuit and test them while running in the FPGAs.
In contrast, active debug allows the user to force various conditions in the circuit, capture over multiple FPGAs, analyze in a user friendly simulation environment, while reducing the number of FPGA P&R iterations.
Liz: Why is it important to debug in your “active” mode?
Joe: One of the biggest challenges for an SoC design team is debugging problems when integrating SW and HW together. Today, most SoC design teams are integrating their SW and HW on FPGA prototype systems and using the debug tools from the FPGA vendors, which were not architected to debug large SoC designs across many FPGAs. Consequently, engineers are not very productive using these tools as they search in the dark, one FPGA at a time, with limited visibility. Allowing engineers to become more “active” in their debug process moves them closer to isolating the bug much faster. It’s really allowing them to do their jobs much more efficiently.
Liz: I’m trying to home in on the visibility function InPA brings to designers. What do you mean by “visibility” and how is that different from current prototyping methods again?
Joe: Visibility is really two things. First, it’s allowing engineers to capture their faulty conditions over multiple FPGAs as opposed to one FPGA at a time. This gives them much greater visibility into the potential problem. Secondly, our technology expands all the signals in the captured scenario giving engineers full signal visibility.
Part 2 of this interview will air on September 6.
Monday, July 12th, 2010
The pre-DAC acquisitions of Denali and Virage drastically realign the core of the EDA industry. When IP first came on the scene here in the US, (I think 3Soft was the first IP company I saw), many people figured that IP would become another form of delivery for chip designs – and that they would come from the semiconductor companies.
The EDA executives’ explicit remarks about how IP is key to their continued growth could turn EDA into an industry of IP haves and IP have-nots.
How does this EDA realignment affect customers? We asked Atrenta vice president of marketing and industry voice Mike Gianfagna, “What does the EDA industry realignment mean for customers?”
Here’s what he said:
Realignment can mean two things that are related, but a bit different.
One form of realignment we’re seeing is the IP market merging into the EDA market. This is definitely good for IP customers. Effective IP reuse requires a blend of quality, highly validated IP and a good reuse methodology. The methodology need is for both authoring IP to be reusable and implementing the reuse itself. EDA is a good place to bring all this together. Most larger EDA companies understand what it takes to deliver high quality, validated designs. They also understand what a reuse methodology should include. A lot of the smaller IP shops don’t have this perspective.
Another realignment is the “annexation” of embedded software into EDA. Synopsys is validating this trend with their buying spree, and Cadence is validating the trend with their EDA360 proposal and some buying, too. This is also good for the customer. If software development teams can help to drive the silicon creation process, we are going to see some new killer apps emerge as a result.
What do you think about the combination of IP and EDA? Let us know in the “comments” section.
– end –
Saturday, June 12th, 2010
We asked three EDA figures to comment on how the Synopsys purchase of Virage would impact the EDA and IP industries. Here’s what they said.
This acquisition puts Synopsys squarely in the front of the pack as far as IP suppliers go. This trend could be quite significant. Successful IP reuse is a combination of the right EDA tools, best practices methodology and well-designed IP. The EDA vendor is a pretty good place for all that to come together. ARM remains the exception to this rule, and several other rules for that matter.
Vice President, Marketing
I don’t see how this doesn’t make Synopsys a competitor with ARM in physical IP and ARC processors. ARM should start feeling like it is getting surrounded by Synopsys.
With EDA trying to expand its scope and grow beyond its traditional boundaries (see EDA360), and with small and medium-size IP vendors struggling to grow, basic economic forces are pushing this trend.
Synopsys has already been a formidable IP player, and Cadence has now entered the market with its recent acquisition of Denali.
There are still plenty of smaller IP players so we’ll see further consolidation playing out. The IP segment has been trying to define and position itself between EDA and semiconductors. We all wondered if IP would become an intrinsic part of the semiconductor industry, the EDA industry, or stand on its own. These days we clearly see that the IP pendulum has shifted toward EDA.
The outlier is of course ARM which is a different beast, in some ways closer to semiconductors: i.e., look at how ARM competes with Intel. With a market cap equivalent to Synopsys and Cadence put together, ARM is simply too big for that.
– end –
Monday, May 24th, 2010
Jim McCanny, co-founder and CEO of Altos Design Automation, Inc., is one of the most prominent voices on the use of characterization technology and what trends will be coming down the chip design pike.
I was able to catch Jim to talk about where EDA was heading and how characterization technology plays into those trends and chip design challenges.
Ed: I was at an event, recently, where the premier investor in EDA startups cited Altos as one of his startups that did it right. Altos also got mentioned in Paul McLellan’s book, EDAgraffiti, as a company that did it right. What did Altos do that was “right?”
Jim: The things we did right? Well, I’d say that we focused on a real need – characterization run-time was too long to support the electrical analysis needs of 90nm and below. We used an experienced team and got a product to market quickly. And finally, we took only a relatively small amount of funding and relied mostly on organic growth and kept control of the company.
This last item, I think, is the one that has resonated with private investors. It made us somewhat immune to the big economic downturn in early 2009, as we had always been operating in a very fiscally responsible way.
Ed: Good point.
Jim: Finally while it was nice to be mentioned as a company who did it right, I don’t think we can be the “model” for every EDA startup. We did it right for the particular market we were going after and the current economy. Other target markets at another time might require a different approach.
Ed: I’m still fuzzy on what characterization is. Can you give me the 30 second elevator explanation?
Jim: I’d be glad to lessen some of the mystery, Ed. Characterization elevates the behavior of a group of related analog transistors to a higher level of abstraction that is fundamental to digital design. For example, a simple NAND gate typically has four unique analog transistors. Characterization enables each NAND gate to be modeled as a cell with equivalent timing, power and noise characteristics. That is equivalent to a 4X reduction in the circuit size to be analyzed.
Ed: So how big are we talking about?
Jim: For complex cells and blocks, there can be hundreds or even thousands of transistors and for memory instances there are often millions of transistors so the abstraction dramatically reduces the number of distinct elements that the digital design tools have to work with. Without characterization, there would be no synthesis, place and route or static timing analysis. There would be no IP reuse, basically no SoC design flow.
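Jim’s NAND example can be made concrete with a toy sketch. The lookup table below is invented for illustration (real characterization data comes from many SPICE runs), but it shows the kind of abstraction he describes: downstream digital tools never see the gate’s four transistors, only a small table of delay versus input slew and output load, interpolated between characterized points.

```python
# Toy sketch of a characterized cell model (all values invented).
# Instead of simulating a NAND gate's transistors, tools look up delay
# from a table indexed by input slew and output load, interpolating
# between characterized points -- the idea behind .lib-style delay tables.

import bisect

# Hypothetical characterization grid for one timing arc of a NAND2 cell
SLEWS = [0.01, 0.05, 0.20]        # input transition times (ns)
LOADS = [0.001, 0.004, 0.016]     # output capacitances (pF)
DELAY = [                         # delay (ns): rows = slew, cols = load
    [0.020, 0.035, 0.070],
    [0.028, 0.045, 0.085],
    [0.050, 0.070, 0.120],
]

def interp(x, x0, x1, y0, y1):
    """Linear interpolation between two characterized points."""
    if x1 == x0:
        return y0
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def cell_delay(slew, load):
    """Bilinear lookup of arc delay from the characterized table."""
    i = max(1, min(bisect.bisect_left(SLEWS, slew), len(SLEWS) - 1))
    j = max(1, min(bisect.bisect_left(LOADS, load), len(LOADS) - 1))
    d0 = interp(load, LOADS[j-1], LOADS[j], DELAY[i-1][j-1], DELAY[i-1][j])
    d1 = interp(load, LOADS[j-1], LOADS[j], DELAY[i][j-1], DELAY[i][j])
    return interp(slew, SLEWS[i-1], SLEWS[i], d0, d1)

print(round(cell_delay(0.05, 0.004), 3))   # lookup at a characterized point
```

A static timing analyzer evaluates a lookup like this millions of times, which is only feasible because characterization has already collapsed each cell’s transistor-level behavior into the table.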
Ed: So characterization is obviously extremely significant to chip design. I recall that Altos started off back five or six years ago, touting the onset of statistical static timing analysis (SSTA) and how characterization would be a required element in SSTA-based design flows. Adoption hasn’t really been overwhelming, yet it appears that characterization helps with static timing analysis driven chip design as well as SSTA driven chip design. What’s the difference in productivity and value that characterization brings to static timing analysis and SSTA based chip design?
Jim: SSTA is one of the areas that we saw as driving the need for faster characterization.
Ed: Now, can you remind me what SSTA is again?
Jim: Sure. SSTA is a methodology for predicting the impact of process variation on the performance of your design. It requires an accurate library that captures the effect of variation on timing (delay, slew, constraints etc.). Creating accurate models in a reasonable time frame is a big challenge. For example, the most accurate method is to use Monte-Carlo simulation but that would take thousands of times longer than “nominal” characterization (which itself can take days or even weeks). Clearly this “brute-force” approach wasn’t going to work if SSTA was to be feasible. We are able to create an SSTA library hundreds of times faster than using Monte Carlo, but still with great accuracy. Without this capability, SSTA would not get anywhere.
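The Monte Carlo cost Jim describes can be illustrated with a toy model. Everything here is invented for illustration (the one-line function stands in for a full SPICE simulation, and the sensitivities are made up): brute-force sampling needs thousands of “runs” to estimate the sigma of a cell delay, while a sensitivity-style shortcut, one flavor of the faster approaches characterization tools use, gets a comparable answer from a handful of runs when the response is near-linear.

```python
# Toy sketch (invented numbers) of why brute-force Monte Carlo
# characterization is expensive: estimating delay sigma by sampling
# process parameters takes thousands of "simulations", while a
# sensitivity-based shortcut needs only a few runs.

import random
import statistics

def simulate_delay(dvth, dleff):
    """Stand-in for one SPICE run: delay (ns) vs. parameter deviations."""
    return 0.045 * (1 + 0.8 * dvth + 0.5 * dleff)   # invented sensitivities

# Brute force: thousands of Monte Carlo samples of the process parameters
random.seed(1)
samples = [simulate_delay(random.gauss(0, 0.03), random.gauss(0, 0.02))
           for _ in range(5000)]
mc_sigma = statistics.stdev(samples)

# Shortcut: perturb each parameter once and combine variances (3 "runs")
nom = simulate_delay(0, 0)
s_vth  = simulate_delay(0.03, 0) - nom    # response to a 1-sigma Vth shift
s_leff = simulate_delay(0, 0.02) - nom    # response to a 1-sigma Leff shift
fast_sigma = (s_vth**2 + s_leff**2) ** 0.5

print(f"Monte Carlo sigma (5000 runs): {mc_sigma:.5f} ns")
print(f"Sensitivity sigma (3 runs):    {fast_sigma:.5f} ns")
```

Because the toy delay model is linear in the parameters, the two estimates agree closely; the point is the run count, 5000 versus 3, multiplied across every cell, arc and corner in a library.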
Ed: So is the push to lower manufacturing processes a factor in the increasing use of SSTA?
Jim: Yes! We are now starting to see serious usage at 28nm. You actually bring up a good point. There are several methods for predicting process variation such as “corner” analysis or “advanced on-chip-variation” (AOCV). Both of these solutions require either more characterization or longer characterization run-time; so our “ultra-fast” characterization technology is still very relevant whether SSTA is used or not.
Ed: As we get down to finer processes, what problems will chip/SoC designers encounter?
Jim: For most of today’s designs, the key challenge is optimizing both power and timing. Variation can play havoc with this process which is why SSTA is starting to get some traction. If you add too much margin then you can kill your power budget. However if you don’t account for variation you can have a dead part on your hands or suffer from low yield.
Ed: What else will crop up?
Jim: Another key challenge is what to do with all the available silicon real estate. The most obvious thing is to integrate more and more components on-chip. To get to market quickly, this means using off-the-shelf IP. Making sure all the IP works together in a consistent way is tough. If you rely on pre-built models from the IP vendor, you may suffer from over-guard-banding, or the models may simply not be up to date with the version of the process you are using. The best way around this is to either re-characterize everything to a single, well-defined set of characterization criteria or run an independent validation of your IP before using it.
Ed: IP quality is definitely a challenge. Harking back to that EDA investor, he seems to be saying that the valued technology will be in the front end, going forward. What’s your take and how does characterization play into that supposed trend?
Jim: There has always been value at both ends in EDA. Layout verification, layout editing, place & route, post-layout simulation, static timing analysis are all back-end solutions and major EDA markets. Sure integrating systems and software has huge potential but so does any solution that can make sure your chip will work in silicon or can improve its yield.
Ed: So what’s ahead for EDA? Is it a stagnant, mature industry, as so many people were saying a year and two years ago? Or maturing but vital in the semiconductor supply chain?
Jim: I don’t think it’s mature. There is simply too much churn in customer needs. Current tools are continuously getting enhanced and new tools are always coming on the market. Just look at the Spice simulation market. Three years ago, I think everyone would have said it’s stagnant. But look at all the new players and new capabilities that have come out in the last few years.
Ed: What do you see here?
Jim: There have been big improvements in performance, capacity, new models, new integrations into other solutions and innovative use of distributed processing.
Ed: So what is the technology development/adoption cycle for EDA?
Jim: I think EDA has cycles of about 8-10 years from the leading-edge adopters to trailing-edge users. There were a lot of new solutions around 2000 that have served the industry well for the past decade, but are now aging. Obviously, sometimes the EDA industry gets ahead of itself and has to go through a few lean years like we have just done. The danger is that when the industry needs new tools and solutions they won’t be there, as the past year and a half has been pretty brutal and instead of investing in the future, many of the big EDA companies had to make cuts. Key areas such as analog automation, IP integration and verification and system and software design still need a lot of work.
Ed: What kinds of new solutions are needed?
Jim: Tools that truly enable IP integration and verification. By verification I don’t mean “will the IP work stand-alone” but “will it work as desired in the integrated system,” e.g. at the voltage levels being used, at the process corners being used, with the expected amount of process variation, etc.
Ed: And what issues will we see rise to crisis level in power? Timing? How will they get fixed?
Jim: Power is really dynamic but timing is usually analyzed statically. How do you really model dynamic, temporal effects such as IR drop, crosstalk and substrate noise using static methods without gross “worst-casing”? In addition, noise effects can cause very analog-like waveforms that break the assumptions of today’s delay models, which assume a linear or piecewise-linear ramp. There is room for better timing models, smarter ways to statistically model the impact of dynamic effects like noise and IR drop, and possibly hybrid static-dynamic analysis tools.
Ed: So what’s ahead for characterization technology? For Altos?
Jim: Our focus is in “enabling a world of IP.” By that, we mean that we want to make reuse of any form of IP highly productive, be it cells, complex I/Os, embedded memory or custom blocks. To do this we are working on bringing the same kind of automation and performance we have brought to complex cell characterization to IP block characterization. We also see characterization as more than model creation but also as a means to validate IP. A characterization tool tells you how the block will perform under a range of different conditions but doesn’t tell you if it performs as expected or how much margin you have to deal with the “unexpected”. We are on a path to change that.
Ed: Seems promising! I look forward to hearing more on this front down the road. Thanks, Jim, for taking time out of your busy day to share your viewpoints on these topics.
– end –
Monday, March 1st, 2010
Steve Leibson in Leibson’s Law did a comprehensive and insightful job of covering the Hogan/McLellan entrepreneurial workshop in his blog on Wednesday. Thank you, Steve! And thank you, Jim and Paul, for enlightening us on how to start up an EDA company.
Jim and Paul made some very hard-hitting points in this valuable how-to workshop (at DVCon Tuesday night), one of which was emphasized by Leibson: “Sizzle is the highest leverage marketing point” said Hogan.
Afterward, a couple of attendees shared with us that “there is no sizzle in EDA!” And “as we all know, many engineering driven startups (even some engineering driven mature companies) undervalue or don’t understand the importance of sizzle – a big mistake.”
What is sizzle? How do you define it?
Is there sizzle in EDA? Why or why not? Who has it, if there is sizzle in EDA?
Let us know what you think……
Monday, February 1st, 2010
(As we all know, Richard Goering is a longtime EDA editor who went to work for Cadence in March 2009, where he writes the Industry Insights blog and works on various writing projects. I recently had a chance to talk with Richard about his year on the corporate side of editorial writing and the state of EDA editorial: where it’s going and what it’ll look like, if it continues to exist. It will, but…BTW, something’s different about Richard’s photo…)
ED: It’s been about a year since you moved from editorial over to Cadence. What differences, if any, do you see?
RICHARD: First, there’s a difference between blogging and news reporting. A blog is shorter and more personal, and is written in a different style. After many years of conventional news reporting, blogging has taken some adjustment.
Also, writing a corporate-sponsored blog is different from writing for an independent publication that covers news from all vendors. With the Cadence Industry Insights blog, I’m writing about most of the same issues I would have covered for EE Times, but where appropriate I’ll include a Cadence perspective or product mention. I don’t generally write about developments from other companies, unless some sort of Cadence partnership is involved. I should note, however, that since I’m focusing on issues rather than products, I don’t often write blogs about new Cadence products.
ED: So it’s been a change to come over to the dark side…not that there’s much of a “light side” any more, huh? What did you perceive as the dark side and what does it look like now, to you?
RICHARD: I don’t really think of it in terms of a “dark side” and a “light side.” Independent publishers are not doing charity work – they’re in business to make money like everyone else, even if they don’t succeed!
For me, working for a major EDA company has certainly been an educational experience. I now have a much better idea of how EDA companies function. Before, EDA companies were mysterious, monolithic entities that spat out press releases and products. Now I see the “people” side of the industry: lots of creative and diverse people who have many different ideas, and somehow come together with a consistent message.
ED: You’ve covered EDA for over 20 years. Clearly the publication world has changed, is collapsing as we speak. What lies ahead for EDA publications and coverage?
RICHARD: A lot less coverage, as we’ve seen already. Still, publications like EE Times, EDN, Chip Design Magazine and Electronic Design do have some EDA coverage. But a lot of the coverage going forward will come from blogs, forums, and various social media outlets.
ED: Where EE Times is concerned, it seems that there has to be some connection with a chip design issue for there to be EDA coverage. Otherwise, it goes to EDA Design Line. I think that’s fine, but it sure says something about how that once-mighty publication has changed, huh? Well, don’t let me put words into your mouth. How is the change in EE Times emblematic of what’s happened to EDA editorial?
RICHARD: It’s not just EDA editorial – EE Times has a lot less editorial, period. There is still some EDA reporting once in a while, but there seems to be more of a semiconductor focus. That probably makes sense given the lack of EDA advertising and the greatly-reduced editorial resources.
ED: What role will the new era bloggers (indie, corporate, editorial, PR) play? How will those roles evolve?
RICHARD: Blogging provides a new information channel that’s hopefully written in an engaging style, by someone with expertise in a given area. Given that some EDA bloggers are chip designers or consultants, it can be a “peer to peer” communications channel. It can also be a two-way channel if a conversation develops.
Independent bloggers, I suppose, are those who are not paid by a company to blog, although many do have employers. While every blogger has her or his own biases and points of view – a point of view, after all, is what blogging is all about – independent bloggers have the potential to be on neutral ground with respect to EDA vendors.
Corporate bloggers will reflect the positioning of their companies, but they can also provide a good deal of useful, in-depth information that you won’t find elsewhere. With Industry Insights, I have been able to write some “inside look” kinds of blogs that it would have been difficult to write from the outside. For example, I wrote a series of blogs about what it takes to port EDA software to multicore platforms, drawing upon Cadence’s experiences in this area.
Due to the lack of editors, there are very few EDA editorial blogs. Those that exist are picking up some of the coverage that’s missing from the electronics trade press. An example is Ron Wilson’s Practical Chip Design. I haven’t seen much in the way of blogs from PR people, although yours is an exception.
ED: OK, since you bring it up, what role do EDA PR bloggers have in EDA blogging?
RICHARD: I think PR bloggers would do best to focus on issues like social media, PR, and advertising, as opposed to technology. With all the changes in the media, there’s plenty to write about.
ED: But blogging seems more opinionated than EDA editorial, which you covered for so long and so rigorously. I mean, clients were intimidated by the perceived “wrath of Goering” and would oftentimes minimize their hype when being interviewed by you. Thus, we got a comprehensive and objective overview of the technology area from you, even when you covered new products. Will we see objective reporting disappear?
RICHARD: No. As I noted, there is still some EDA reporting in the traditional media, and some bloggers do objective evaluations of major new products and announcements. But the days when every EDA announcement would receive coverage are long gone.
ED: So what role will traditional press play?
RICHARD: I think there will be some continuing coverage of really big announcements or developments. But there will be a lot less product coverage and new company coverage than there used to be. Unfortunately, there are a lot of press release rewrites in the press these days. That doesn’t provide much useful information for the readers.
ED: How possible is it that the EDA press disappears? Why?
RICHARD: Very simple – lack of advertising. It’s part of the meltdown we’re seeing across the publishing world. Also, EDA stories don’t get tens of thousands of readers. There’s a very small, specialized audience, although they have big wallets.
ED: What’s there to keep EDA honest if there’s no longer an “industry press?”
RICHARD: There is an industry press – there’s just less of it. There are also a growing number of bloggers watching EDA developments. But more and more it will be up to the users to help keep EDA vendors on the right track. With the ability to start a blog or comment on blogs, join on-line forums, speak at user group conferences, and participate in Twitter groups like #EDA, EDA users now have a voice – and they will hopefully use it for the betterment of the industry.
ED: What’s your sense of pay for play in editorial? Good, bad or necessary?
RICHARD: I’m not going to say it’s bad, but if a company pays to have an article written, I think that should be made clear to the reader.
ED: Well, EDA’s benefited from your historic participation in the industry. Witness your DAC award a few years back. It’s been, what, over 20 years, starting at Computer Design? I’m not sure anyone can see an EDA industry without Richard Goering in place. Thanks for taking the time to catch up.
RICHARD: And thank you for the opportunity! After interviewing your clients for years, it’s an interesting turn of events to have you interview me.
– end –
Thursday, December 3rd, 2009
(Liz Massingill concludes her conversation with blogger Dan Nenni.)
Liz: I know that bloggers don’t want press releases. They want to talk about trends.
Dan: Every blogger has an agenda. I blog about experiences, companies, and technologies that I know, positive and negative trends that I see. I do blogs on TSMC and the other foundries all the time. My agenda there is to let people know that if you are part of the semiconductor design enablement supply chain you need to be very close to the foundries. When bloggers are really product specific, like some corporate bloggers are, it just looks like something from a company–a public notice. But if they talk about market trends and put their personality and their experiences into it, then it becomes interesting.
Liz: How long will it take the industry to be more social media savvy?
Dan: I don’t know if it will be in my professional lifetime or not. But if you look at it, we’re raising the Social Media Generation: Facebook, MySpace, and Twitter.
I have 4 kids, and all of them are really into it. They’re prolific texters–they communicate with their thumbs. When those people get jobs and become our target market you’re going to have to market to them, right?
Unfortunately, most people our age aren’t that savvy. I picked it up early because I have kids. I’m involved with them and their social media habits. I have six cell phones on my plan, and I didn’t have texting until my kids started driving. My Verizon bill was thousands of minutes. They begged me for texting, so I got the unlimited plan. My calling minutes went from thousands to a few hundred. The thing is, they don’t communicate by phone; that’s just not the way their generation wants to communicate, period. I turned texting off on my own phone to eliminate yet another distraction.
My attitude was that if you want to talk to me, call or email me. And they don’t (laughs). So those are the people we are bringing up now, the thumb generation, and this is happening in America, China, Iran, everywhere.
If you don’t GET social media, you are going to be at a significant disadvantage in business and life in general. I think we’re coming close on the business side. Companies should start now or they won’t be competitive. That’s why I’m an evangelist for social media because it’s THE most cost effective demand creation vehicle.
In our business, the average shelf life of a marketing message is like a loaf of bread; things/specs change so quickly. You need to refresh your message in a cost-effective manner on a monthly basis, and that is Social Media.
Liz: There are always press releases (laughs)
Dan: People don’t care. No offense but traditional PR does not work the way it used to.
Liz: What about print media vs. online media? Aren’t there many people who would rather read a hard copy than have to remember to go read something online?
Dan: I don’t read the newspaper anymore because by the time I get it, it’s old news, so I use Google Reader. I’m on my laptop anyway doing email, watching videos, etc… How much time do people spend on their computers? 50% of your day? Some people even eat in front of their computers.
(Liz raises hand sheepishly.)
Dan: So where are you going to get your news? In the newspaper, the only thing I read is the comics, the Jumble, Dear Abby, Safeway ads (I do the shopping). Nothing else, and I hate getting news print ink all over the place. Seriously, smudge proof ink, how hard is that?
Liz: What is it you want or don’t want from PR people?
Dan: I want PR people to embrace social media and make it their own, simple as that. Bloggers are easy to work with. Bloggers want blog views, views are empowering and feed our massive egos. You have no idea what a burden it is to support a massive ego, so anything you can do to help get blog views is greatly appreciated. Invite us to functions, buy us lunch, integrate Social Media into your business model, just don’t send us press releases!
Liz: Jim Hogan threw down this gauntlet in his recent presentation at ICCAD….that EDA is complacent. We’ve talked a bit today about how there doesn’t seem to be much of an interest in EDA but a lot of interest in foundries. How do you think that relates? Do you agree with Jim’s assertion?
Dan: Yes EDA is complacent, I agree with Jim. My audience is definitely interested in the foundries, also semiconductor IP and design services. So why not EDA? One theory is that EDA does not share the risks and rewards of semiconductor design, so EDA is not invested in/with the customer. EDA software is licensed upfront and gets paid whether the customer is successful or not.
Foundries, IP companies, and design services are more success oriented and get paid on volume silicon shipments. Based on that, customers view EDA companies differently, especially when licenses expire and their design has not taped-out yet!
Liz: How do FPGAs figure into the picture?
Dan: FPGAs are a big factor in the decline of EDA, and everybody knows it. I think that is a relevant point if you are talking about the state of EDA. FPGA design starts are going up and ASIC/EDA design starts are going down. FPGAs are also success-based, with volume silicon shipments being the big payday for all. Sound familiar? 😉
Liz: What do you think the trend for EDA will be for the next 10 years?
Dan: EDA is going to be interesting the next few years, and I am happy to be a part of it. I would like to send a strong but positive message: Change is coming. If EDA does not embrace this change, it’s going to be a very costly experience. Success based business models are key, working closely with the foundries is key, being an accretive member of the semiconductor design enablement community is the cure for EDA complacency. Believe it.
– end –
Tuesday, November 3rd, 2009
Jim Hogan and Paul McLellan gave an ICCAD audience their take on what’s ahead (over the next decade) for EDA.
They ended the session with the gauntlet statement: “EDA is too complacent.” And curiously, not one person responded.
If you’re interested in what Jim and Paul presented (and what the responses have been from industry bloggers and reporters), click on the Lee PR link here: http://leepr.com/Home.html
Monday, October 26th, 2009
A quick note: Jim Hogan and Paul McLellan (no slouches in knowledge and expertise) will be talking, and talking with the audience, about the future of chip design and silicon platforms from now until 2020.
This event will be held during ICCAD, on Monday, November 2, from 3-4 pm in the Silicon Valley Room at the Double Tree Hotel, 2050 Gateway Place, San Jose 95110.
From what Liz Massingill and I hear, Jim and Paul will put several topics on the table for discussion:
__which silicon platform will become pre-dominant, ASIC or FPGA
__the role of software signoff in a traditionally-hardware world
__how these changes will affect the semiconductor supply chain (e.g., with EDA, semiconductor equipment)
This is a co-located event at ICCAD, so if you were not planning to attend ICCAD, there’s no need to register for this event.