In 2012, the industry discussed the qualities that reliable and reusable IP needs and the metrics to measure those qualities. We think 2013 will be the year that the value of IP becomes tangible.
We tapped Warren Savage, CEO of IPextreme, to give us his thoughts on how to value IP.
Ed: So Warren, how do we figure out IP’s value?
Warren: In the most tangible sense, I think the question ought to be “how do we monetize IP?”
IPextreme has been at the forefront of this since we founded the company back in 2004, and it really was “extreme” back in those days to discuss licensing those “crown jewels.” But now it is increasingly mainstream and certainly the topic for industry discussion.
So one consideration is that a great deal of licensing in the industry today is hidden, primarily around patents and process technology. The transactional IP licensing that we specialize in is really something that IPextreme invented.
Over the last couple of weeks we’ve been exploring the concept of stale IP – what it is and what to do about it. I’ve gotten insights from two industry experts in IP (Harrison Beasley of GSA and Manoj Bhatnagar of Atrenta). I will wrap up my series on this topic with one final view, from an IP provider: Warren Savage, founder and CEO of IPextreme. In this interview, he challenges the whole idea of stale IP.
Liz: Stale IP – what is it?
Warren: Frankly, I’ve been working in IP for seventeen years, with most of the world’s largest IP and chip companies, and I have never heard the term before. I think people who think about IP being “stale” may be confused about the difference between IP and code. IP is certainly code, but code is not necessarily IP. I have argued vociferously for years on this topic, particularly opposing those who would claim that IP is a service business (see an old blog post by me “Repeat after me: IP is Product Business…” http://blogs.ip-extreme.com/2009/07/test-page.html). I think this notion of “stale IP” is sort of a regurgitation of the idea that there are classes of IP. For me, IP is something that is reusable indefinitely and valuable as long as there is a market for it.
In my last blog, Harrison Beasley shared his views on stale IP. This week we hear from Manoj Bhatnagar, Senior Director, Field Delivery and Support at Atrenta.
Liz: Manoj, what is stale IP?
Manoj: An IP may become stale either because its specification has changed (e.g., USB 1.0 vs. 2.0 vs. 3.0) or because a better implementation is available (e.g., a graphics core now running at 800 MHz instead of 500 MHz). Typically, people will use the latest version, and the older versions are no longer used, so the stale IPs in this case die a natural death. What is more challenging, however, is an IP developed for one specific project that no other project ever picks up. Over time, that IP becomes stale. Most of my answers apply to this type of stale IP.
Liz: What’s so bad about it?
Manoj: The main issue with a stale IP is the fact that nobody really knows the details about it. If I were to use that IP, I would be putting my design at risk because I am now adding some logic to my design for which I don’t have all the information and can’t find anyone who can provide that information either.
Liz: How do we prevent it from being stale?
Manoj: One of the key things that can be done to prevent IP from going stale is to document it. I don’t know how many people still remember the TTL datasheets, but when you looked at one, you got complete visibility into what that component did. The same concept can be applied to present-day IPs, where you document the various characteristics of the IP. For a hard IP, this may be the timing characteristics, physical profile, etc., while for a soft IP it may be the timing constraints, clock domain information, testability profile and power profile.
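For illustration only, here is a minimal sketch of what such a machine-readable “datasheet” for a soft IP might look like, written as a small Python record. The field names and values are our own assumptions, not any industry standard or Atrenta format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SoftIPDatasheet:
    """Illustrative 'datasheet' record for a soft IP block (hypothetical fields)."""
    name: str
    version: str
    clock_domains: List[str]            # names of asynchronous clocks the IP uses
    sdc_constraints: str                # path to the timing constraints file
    max_frequency_mhz: float            # fastest clock the IP was signed off at
    testability_coverage_pct: float     # stuck-at fault coverage from DFT runs
    power_profile_mw: dict = field(default_factory=dict)  # mode -> typical power

# Example entry, analogous to looking up a TTL datasheet:
usb_ctrl = SoftIPDatasheet(
    name="usb_device_ctrl",
    version="2.1.0",
    clock_domains=["clk_sys", "clk_usb_48m"],
    sdc_constraints="constraints/usb_device_ctrl.sdc",
    max_frequency_mhz=200.0,
    testability_coverage_pct=98.7,
    power_profile_mw={"active": 12.5, "suspend": 0.3},
)
print(f"{usb_ctrl.name} v{usb_ctrl.version}: clocks={usb_ctrl.clock_domains}")
```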
Stale IP is beginning to rear its ugly head. It’s like having too many books on your bookshelf – always an issue in my house. Where do you put the new ones? Which ones do you keep? What do you do with the ones you don’t want to keep?
I (Liz Massingill) recently polled some experts in the industry to get their stance on stale IP. Over the next few weeks I’ll share their views with you.
I’ll start with Harrison Beasley, Manager of the Technical Working Groups at Global Semiconductor Alliance (GSA). Here’s what he had to say:
Liz: Stale IP – what is it?
Harrison: IP becomes stale when the underlying code is out of date. This could be due to changes in a specification, errors found in use, soft IP not being updated, etc. My assumption is that stale IP will not perform the task for which it was created.
Liz: What’s so bad about it?
Harrison: Using stale IP could lead to non-functional silicon, tape out delays, end product failures, etc.
Liz: How do we prevent it from being stale?
Harrison: For internal IP, code checks before layout, during timing analysis, during verification, and before final tape-out help ensure the latest IP version is used. For third-party IP, similar rules apply, but the user must coordinate with the IP supplier to ensure changes are propagated to the user.
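To make that concrete, below is a rough, purely hypothetical sketch of such a pre-tape-out check in Python. The catalog and version data are invented for illustration and do not come from any real flow.

```python
# Hypothetical pre-tape-out check: flag IP blocks whose instantiated version
# lags the latest release in an (assumed) IP catalog.

latest_release = {           # IP name -> latest released version
    "usb_device_ctrl": (2, 1, 0),
    "ddr_phy":         (3, 0, 2),
    "gfx_core":        (1, 4, 0),
}

used_in_design = {           # IP name -> version instantiated in the design
    "usb_device_ctrl": (2, 1, 0),
    "ddr_phy":         (2, 9, 0),   # stale: a newer release exists
    "gfx_core":        (1, 4, 0),
}

def find_stale_ip(used, latest):
    """Return (ip, used_version, latest_version) for every out-of-date IP."""
    return [(ip, v, latest[ip]) for ip, v in used.items()
            if ip in latest and v < latest[ip]]

for ip, used_v, latest_v in find_stale_ip(used_in_design, latest_release):
    print(f"WARNING: {ip} uses {used_v}, latest release is {latest_v}")
```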
What are the challenges in managing semiconductor IP?
How can we solve IP reuse integration?
If you’d like to know the answers to these questions and others, check out this presentation by Michael Johnson of Atrenta from the Constellations 2012 conference.
Johnson succinctly defines soft IP quality and proposes a way for the industry to get to a soft IP quality standard.
IPextreme’s Silicon Valley IP Users Conference 2012 edition has become a must-attend event for IP vendors and users, much more than a private tradeshow for IPextreme and its customers. I sat down with Warren Savage, IPextreme’s founder and CEO, and McKenzie Mortensen, the company’s marketing communications manager, to talk about the conference and its role in the chip design world.
Ed: So we’re talking about Constellations 2012…the program drew informative and opinionated speakers! Definitely more than a private tradeshow. When did Constellations begin? What were your goals?
Warren: I think it was a hit precisely because it was not intended to be just another private tradeshow. The world has changed a lot since the 1990s.
Ed: Hmmm…you mean for the chip design world? How has it changed?
Warren: Well, I think it’s time that companies start evolving to better understand how to serve their customers in a way that is not hitting them over the head with sales pitches.
Ed: And that customer service attribute is one that vendors to chip designers have been notoriously lax about. Back in the late 1990s or early 2000s, I remember an analyst (it could have been Jennifer Jordan) wagging her finger at the EDA world on this count, while taking us to task for doing a bad job of selling the industry’s value to the public markets.
So how does the conference and your Constellations program change this?
In a recent article written by EDA industry watcher Ann Steffora Mutschler, Atrenta’s VP of Product Marketing Piyush Sancheti pointed to the curse of the verification double whammy for engineers:
“For verification engineers and for designers, this is a double whammy,” noted Piyush Sancheti, vice president of product marketing at Atrenta. “If you ask a digital design or digital verification team, they will tell you that low-power design and the introduction of analog/mixed-signal components on what used to be a simple digital chip is a significant verification challenge. For verification engineers what this means is your finite state machines or your control logic just got that much more complicated. If you go from 2 domains to 20 domains, your verification complexity just increased an order of magnitude.”
We caught up with Piyush in the Atrenta hallway and asked him to elaborate on his statement. Here’s what he said:
Ed: So what is the double whammy and why should we care?
Piyush: With the onset of A/MS and low power requirements, digital design teams now have to contend with two foreign entries into their previously monolithic design environment.
Ed: And they are…?
Piyush: New logic blocks that are completely foreign to digital designers, and the implementation of power management techniques such as voltage and power domains. Voltage domains allow the timing-critical portions of the design to run at a higher voltage (overdrive) and the rest at a lower voltage (underdrive). Power domains, on the other hand, allow you to turn off the power to entire blocks of the design when they are not in use.
Ed: Haven’t digital designers always needed to be conscious and conscientious about power?
Piyush: Not to the extent they must be these days. Here’s the challenge – say you are designing a chip for a smart phone. When you are watching a YouTube video, you don’t need the phone function, so you want to make sure that the phone functions are off. What’s the result? You’re saving power, or in consumer terms, preserving battery life. But, if the smart phone gets a call, you have to be sure the phone function turns on instantly, without adversely impacting your video viewing experience. So designers have to make sure the domains turn off and on in perfect harmony, almost like conducting a symphony.
So what’s the problem? New power management logic that designers are not used to has been thrust on them rapidly and recently. They need to get up to speed fast. This is not an easy job. Not only that, but you now have very complex finite state machines that switch these functions on and off seamlessly.
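To give a flavor of the control logic Piyush is describing, here is a deliberately tiny Python model of a power-mode state machine for the smart phone example. The states, events and domain names are illustrative assumptions on our part, not any real controller or Atrenta tool output; real controllers also sequence isolation, retention and clocks, which we omit.

```python
# Toy model of a power-mode controller: which domains are powered in each state.

POWER_STATES = {
    "IDLE":           {"video": False, "phone": False},
    "VIDEO_ONLY":     {"video": True,  "phone": False},   # watching YouTube
    "CALL_ONLY":      {"video": False, "phone": True},
    "VIDEO_AND_CALL": {"video": True,  "phone": True},    # call arrives mid-video
}

def next_state(state, event):
    """Very small transition function for the illustrative FSM."""
    transitions = {
        ("IDLE", "play_video"):          "VIDEO_ONLY",
        ("VIDEO_ONLY", "stop_video"):    "IDLE",
        ("VIDEO_ONLY", "incoming_call"): "VIDEO_AND_CALL",
        ("VIDEO_AND_CALL", "call_ended"): "VIDEO_ONLY",
        ("IDLE", "incoming_call"):       "CALL_ONLY",
        ("CALL_ONLY", "call_ended"):     "IDLE",
    }
    return transitions.get((state, event), state)  # ignore events that don't apply

state = "VIDEO_ONLY"
for event in ["incoming_call", "call_ended", "stop_video"]:
    state = next_state(state, event)
    on_domains = [d for d, on in POWER_STATES[state].items() if on]
    print(f"after {event:14s} -> {state:15s} domains on: {on_domains}")
```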
Ed: So what’s the solution?
Piyush: A comprehensive methodology for functional and structural verification.
Ed: Can you elaborate?
Piyush: These complex finite state machines must be verified exhaustively for functional correctness. You need to make sure that the various functions on your smart phone wake up and shut off in a timely manner without adversely impacting the device behavior, and ultimately the user experience. With structural verification, you need to make sure that the perimeters of the voltage and power domains are properly secured. When signals cross from one voltage domain to another, you need voltage level shifters. Similarly, you need isolation logic between power domains to ensure that signals don’t float to unknown values when a domain is powered off.
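As a rough, purely illustrative sketch of that structural check (not how SpyGlass or any other tool works internally; the data model below is our own assumption), one can think of it as walking every signal that crosses a domain boundary and flagging missing level shifters or isolation cells:

```python
# Toy structural check: a crossing between different voltage domains needs a
# level shifter; a crossing out of a switchable power domain needs isolation.

domains = {
    "cpu":    {"voltage": 0.9, "switchable": True},
    "always": {"voltage": 0.9, "switchable": False},
    "io":     {"voltage": 1.2, "switchable": False},
}

# (signal, from_domain, to_domain, has_level_shifter, has_isolation)
crossings = [
    ("cpu_irq",   "cpu", "always", False, True),
    ("cpu_to_io", "cpu", "io",     False, False),   # should be flagged twice
]

def check_crossing(sig, src, dst, has_ls, has_iso):
    issues = []
    if domains[src]["voltage"] != domains[dst]["voltage"] and not has_ls:
        issues.append(f"{sig}: missing level shifter ({src} -> {dst})")
    if domains[src]["switchable"] and not has_iso:
        issues.append(f"{sig}: missing isolation leaving switchable domain {src}")
    return issues

for crossing in crossings:
    for issue in check_crossing(*crossing):
        print("VIOLATION:", issue)
```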
Ed: So what sort of tools and methodologies do you see out there to meet the double whammy challenge?
Piyush: Well, of course, I’m most familiar with the Atrenta platform. There are undoubtedly other ways to go about this job. But from what I see, SpyGlass Power is being used by many large chip and system companies for static signoff of power and voltage domains. SpyGlass Advanced Lint enables exhaustive finite state machine verification using formal techniques. And with our recent acquisition of NextOp Software, we now have BugScope to ensure dynamic verification (simulation) is covering all the corner cases that are now part of your design because of this increased complexity.
Ed: So your final words of wisdom?
Piyush: Verification of modern-day SoC designs is a daunting task. But as with any complex problem, a systematic approach using a combination of static and dynamic verification techniques will help you reach your goals faster.
On the heels of EE Times editor Brian Bailey naming their article “Understanding clock domain issues” the number one article on EDA Designline, we checked in with authors Saurabh Verma and Ashima Dabare about the developments and new challenges they have seen since writing the 2007 article. Here’s what they said.
Ed: It appears that your article got twice the number of views as the number two article. Congratulations on the EE Times recognition!
Obviously, CDC was an important design issue in 2007 and it certainly is today. What would you say to designers today?
Ashima: CDC design is evolving, and so are the synchronization techniques and verification tools. Since we wrote that article, we have seen new challenges posed to CDC verification tools.
One that comes to mind is evolving synchronization styles. In addition to clever variations of synchronization techniques introduced by designers trying to meet their design objectives or schedule, new architectures, such as those required for a network on a chip (NoC), have been introduced, which in turn require verification tools to reinvent themselves.
Recently, CDC tools have introduced generic synchronization verification techniques that do not rely on the structure of the synchronizer but instead analyze clock domain crossings at the protocol level, allowing them to better recognize synchronizers, reduce “noise” and improve root-cause analysis.
Saurabh: Also, global chip development means that blocks and IPs are designed in various geographical locations, and the person doing CDC verification is rarely the designer. CDC verification tools are now challenged with providing root-cause analysis of CDC problems to people who have little knowledge of the block.
Design sizes are also growing fast, and so are the number of clocks and clock domains. Combined with the move toward global chip development, this makes flat CDC verification of large SoCs a painful exercise in which bugs can easily slip through.
A divide-and-conquer approach seems to be the best option. To begin with, the lower-level blocks should be analyzed and any CDC issues fixed at the block level. Once all the individual blocks are CDC clean, their abstract models can be plugged in and the complete design can be analyzed for CDC issues at the interconnect level.
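Here is a minimal Python sketch of that divide-and-conquer flow; the “abstract” format and block data are made-up assumptions for illustration, not any tool’s actual model.

```python
# Toy divide-and-conquer CDC flow: block-level checks produce small "abstracts"
# (port -> clock domain), and the top level only checks inter-block connections.

blocks = {
    "dsp":  {"ports": {"out_data": "clk_dsp"},  "internal_crossings_clean": True},
    "uart": {"ports": {"in_data":  "clk_uart"}, "internal_crossings_clean": True},
}

# Top-level connections:
# (driver_block, driver_port, receiver_block, receiver_port, has_synchronizer)
connections = [
    ("dsp", "out_data", "uart", "in_data", False),   # async crossing, unsynchronized
]

def make_abstract(name, block):
    """Block-level step: only produce an abstract once the block itself is CDC clean."""
    assert block["internal_crossings_clean"], f"fix CDC inside {name} first"
    return block["ports"]                # the 'abstract': port -> clock domain

abstracts = {name: make_abstract(name, b) for name, b in blocks.items()}

# Top-level step: flag asynchronous block-to-block crossings with no synchronizer.
for drv, drv_port, rcv, rcv_port, has_sync in connections:
    if abstracts[drv][drv_port] != abstracts[rcv][rcv_port] and not has_sync:
        print(f"CDC VIOLATION: {drv}.{drv_port} ({abstracts[drv][drv_port]}) -> "
              f"{rcv}.{rcv_port} ({abstracts[rcv][rcv_port]}) has no synchronizer")
```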
Ed: So how would you sum up what CDC design needs in 2012?
Ashima: With the ever increasing complexity of design styles, robust CDC verification is indispensable to enable successful chips in the first silicon attempt!
Note: As near as we can tell, Atrenta is the only company to place two articles in Bailey’s top ten. Narayana Koduri’s “Power awareness in RTL design analysis” came in as the ninth most read. We’ll catch up with him next week, so stay tuned.
Why can 3rd party IP impede a design getting to tapeout? Why is IP reuse costing design projects more time and effort? And what can we do about it? Piyush Sancheti, VP of Product Marketing at Atrenta, explores these issues and answers some of these questions in the viewpoint below on the GSA IP Working Group blog:
Atrenta CEO Ajoy Bose and EDA visionary and investor Jim Hogan spoke at a recent National Institute of Technology (NIT) meeting on the momentous changes we are seeing in who controls chip design these days. Clearly, systems companies like Apple define – even dictate – what they want from their silicon vendors, and these systems customers certainly want a lot more than they did ten years ago.