March 10, 2008
Closing the Verification Gap.
People spend the majority of their overall design time in verification, so anything that shrinks that time or improves productivity should be of great interest. In addition, people are increasingly doing more simulation at the architectural level. Unfortunately, their investment in models and testbenches does not carry forward into the implementation phase; there is a gap between abstraction levels that needs to be bridged. On February 18th Mentor Graphics announced two new tools that do just that:
Questa Multi-view Verification Components, a product that can connect to any level of abstraction from system to gates, and
InFact, a new intelligent testbench automation technology. I had an opportunity to interview Robert Hum, VP and GM of the Design Verification and Test Division, before the press release; he gave me a mini tutorial on the topic.
Would you give us a brief biography?
I started life at Bell Northern Research (Northern Telecom), which would be the equivalent of Bell Labs in Canada. I started on the design side of the world. I did that for a number of years, and part way through that bit of my career I discovered that it was hard designing things because the tooling available to us was just not up to the task. So I gradually turned myself into an EDA person, back when it was fashionable to have EDA inside system houses. That was an era when IBM, Bell Labs, Nortel, Motorola and everybody had their own EDA groups. My career started in 1975, and in 1994 I decided to join Cadence to become a professional EDA person; there I was VP of Corporate Strategy. Then
in 1997 I joined IKOS Systems. They built the VStation emulator. We competed quite successfully, taking the company from 0% market share to something like 42% market share in emulation in three years. Then Mentor had the wonderful idea to acquire IKOS in 2002, so I arrived here and took over the verification and design-for-test portfolio. At IKOS I was COO and VP of Engineering, so eventually I ran marketing, manufacturing, engineering, IT and a bunch of other stuff internally.
At Mentor I am General Manager for the ModelSim, Questa and Design-for-Test tools, and I generally make a nuisance of myself by telling all the other people who play in verification what to do and how to do it. That's kind of where I come from. I find that having a design background helps a lot in having the perspective of what tools we should be developing and how they should be delivered in terms of GUI, interoperability and things like that. So I have different attitudes than people who grew up in the EDA world per se. At IKOS, because we designed and had to deliver real live hardware, we were the users of our own simulation and emulation products. So we
developed a deep understanding of what it takes to design complex equipment, make it work and deal with real situations. It is an interesting bunch of experiences.
What are you announcing?
Mentor Graphics' verification tools live in the overall design cycle. What we have includes testbenches, assertions, verification components, the so-called VIP (Verification IP), and coverage analysis, which gives users feedback on how well their testbenches are performing against the things they want to understand about their design.
We have a few specific announcements. One, in the testbench area, is a product called InFact, a new way of generating verification test sequences inside testbenches for complex designs. It is not a replacement for constrained random or directed test; it is an adjunct to those techniques. In talking to editors in the verification field, the question came up: "Isn't there going to be an ultimate solution once and for all, where we solve verification and can get on with our lives?" I made the observation that in verification we tend to add things and very rarely drop things. There does not seem to be a way of arriving at a definitive "we have finally solved the problem forever and our lives will be changed" solution. In the verification area we add things. We have added assertions, constrained random, coverage-based and assertion-based verification, formal equivalence checking, clock domain crossing and metastability analysis, and so on. The field tends to get bigger and more complicated. What the industry tries to do is offer techniques that help users with productivity, that is, help them get more verification done in less time. We take a lot of the technology and techniques that are there and augment and extend them. Sometimes we are lucky enough to be able to invent new technologies,
like static clock domain crossing tools.
The other thing we are announcing is what we call Multi-view Verification Components (MVCs). MVCs are unique because they let designers bridge abstraction levels through a synthesis kind of technology.
What is new?
The way we think about InFact testbench automation technology is that you write the model once. In other words, you write the generator for your testbench one time, and through the technology we have put together, that testbench adapts itself to various levels of automation.
On the Questa MVC side of the world, you write the model one time and the model adapts itself to various levels of abstraction. Together, these two technologies are key in bridging the architectural level and the implementation level, which is generally RTL and gate level. Those two abstraction levels have historically been very difficult to bridge; where it has been bridged, it has been by manually writing models and manually translating testbenches to work in both environments. Today we are announcing technology that bridges that gap.
Why is this important?
The predominant design methodology in use today has evolved gracefully over time. More often than not, especially in larger designs, there is a fair degree of reuse. It is something people concentrate on because it gives them productivity, and it lets them design things that are more correct, because the blocks have been in silicon before; it is less error prone. But if you are using blocks with complicated interfaces, the verification problem now includes the issue of verifying that the blocks you put down on your design actually communicate properly. You now have this issue of interface verification.
In the past, interface verification was there, but it wasn't one of the predominant areas of focus; you focused more on functionality. Today you still want to verify the functionality, but you have the added burden of having to look at the boundaries of blocks, how blocks communicate with each other. This is the essence of block-based design. With block-based design you also have a verification methodology that says, "Look, I have models of my blocks for sure at the RTL level, because that is where synthesis happens, and probably at the gate level, because I can synthesize and get a full gate-level netlist. I might even have some architectural or ESL-level models. I notice that my ESL models execute much more quickly than my RTL models. It would be very convenient if I could keep most of my blocks at the architectural level and swap in some at the RTL level, so that first I get a faster simulation, and second I can verify that my RTL implementation block has the same functionality and properties at its boundaries as the behavioral block it replaced." You want this mix-and-match kind of approach. This means the testbench has to be able to deal with that mixed level of abstraction: some stuff at the architectural level and some stuff at the RTL level. How do I make
sure that the data I put into the testbench and the data I look at coming out bridge that amount of abstraction? Your models, the verification IP models, also have to handle this heterogeneous abstraction, where you have blocks at more than one level of abstraction. That's why it is important: because of block-based design, because of design reuse and because of productivity. We believe being able to bridge these abstraction levels will be critical.
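The idea of one stimulus description serving two abstraction levels can be sketched in a few lines of Python. This is purely an illustrative sketch, not Mentor's implementation: the function names and the 4-byte bus width are assumptions, and a real transactor would drive actual HDL signals rather than return Python lists.

```python
# One stimulus, two views: a transaction-level view hands the whole packet
# to an architectural (ESL) model in one call, while a pin-level "driver"
# expands the same packet into per-cycle bus beats for an RTL block.

def to_transaction(packet: bytes) -> dict:
    # Transaction-level view: the ESL model consumes this in one call.
    return {"kind": "write", "data": packet}

def to_bus_beats(packet: bytes, bus_width_bytes: int = 4) -> list:
    # Pin-level view: pad to a whole number of beats, then split the
    # packet into word-sized chunks driven cycle by cycle into RTL.
    pad = (-len(packet)) % bus_width_bytes
    padded = packet + b"\x00" * pad
    return [padded[i:i + bus_width_bytes]
            for i in range(0, len(padded), bus_width_bytes)]

pkt = bytes(range(10))
txn = to_transaction(pkt)   # consumed whole by an architectural model
beats = to_bus_beats(pkt)   # 3 beats of 4 bytes each for an RTL block
```

Because both views are derived from the same packet, checking that the RTL block and the behavioral block agree at their boundaries reduces to comparing responses to one shared stimulus.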
We survey our customers once a year to understand where they are succeeding and where they are having issues. It helps us direct our development activities so that we can provide products that people actually want to buy. We own about 35% of the market in verification, so these survey numbers represent about one third of the marketplace. We tend to have more customers on the system house side than on the silicon house side of the world, so the perspectives we get are from system design houses that are using silicon in the course of designing a system: handset manufacturers, set-top box people and so forth. Not just the merchant semiconductor people, but people who are assembling systems that include software and hardware.

What we found in the survey is how much time people put into verification. More than half of the people put more than 60% of their overall time into verification in one way or another: writing testbenches, running simulation, understanding the results of simulation, running regression tests, and so on. Lots and lots of time goes into verification. People get excited about being able to improve productivity, getting more verification done per engineer-hour, per dollar, per simulation cycle; there are lots of specific metrics. About 78% of the people we surveyed write directed test sequences. Directed test sequences are the most expensive ones to write, because it takes a human being to sit down and figure out what has to go into the testbench: I have to emit some packets to fill up my FIFO while, at the same time, things in my design are trying to drain the FIFO by pulling packets out of it. You have to sit there and figure out exactly what kind of test you are going to apply to your design. It is pretty tedious, it is expensive, and it tends to focus you on what works.
When you write directed tests, your attitude is to make sure that this thing works; you rarely get at the corner cases. In other words, you don't sit there and ask how you can break your design, because there are so many ways it might break that you just get overwhelmed. Another technique, constrained random, was pioneered in tools like Vera and e and is now available in SystemVerilog. Constrained random is a technique for generating test vectors. A designer might say: I want Ethernet packets. I want destination ports between hex 01 and hex 07. I want a normal distribution of these things. I want payloads with 20% CRC errors in them. And so forth. You set up a series of constraints and the testbench then generates packets that meet those constraints. The constraints are applied with a very light touch; in other words, you don't have very good control over the sequence of these things. Is destination port 3 always followed by destination port 4? The control mechanism is somewhat loose. You can generate lots and lots of vectors. Simulation vendors generally like constrained random because it generates a lot of traffic; when you have lots of traffic, you have to run a lot of simulators, and when you run a lot of simulators, you buy a lot of licenses. For simulation people, constrained random was really great. The problem is that the coverage you get tends to plateau over time: you get a lot of coverage up front and then you have this asymptotic curve that takes forever to get you more coverage.
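A toy version of the constrained-random generator Hum describes, written in Python rather than SystemVerilog purely for illustration, might look like the following. The specific distribution (a clipped Gaussian over the port range) and the way a CRC error is injected are assumptions made for the sketch, not anything from Mentor's tools.

```python
import random
import zlib

def gen_packet(rng: random.Random) -> dict:
    # Constraint: destination port between 0x01 and 0x07, drawn from a
    # (rounded, clipped) normal distribution centered on the range.
    port = min(0x07, max(0x01, round(rng.gauss(4, 1.5))))
    # Constraint: legal Ethernet payload size, 46..1500 bytes, random data.
    payload = bytes(rng.randrange(256) for _ in range(rng.randrange(46, 1501)))
    crc = zlib.crc32(payload)
    # Constraint: roughly 20% of payloads carry a CRC error.
    if rng.random() < 0.20:
        crc ^= 0x1          # flip a bit so the checksum no longer matches
    return {"dst_port": port, "payload": payload, "crc": crc}

rng = random.Random(42)     # seeded for reproducible regressions
packets = [gen_packet(rng) for _ in range(1000)]
```

Note what the sketch illustrates about the "light touch" Hum mentions: every packet individually satisfies the constraints, but nothing controls the ordering, so a question like "is port 3 always followed by port 4?" is simply outside the constraint model.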
For a user, constrained random helps with specifying the testbench, but it leaves a back-end problem: buying thousands of workstations running simulators for a long time. The other thing we found is that 67% of the people we surveyed simulate at the functional level and about 36% at the architectural level.
-- Jack Horgan, EDACafe.com Contributing Editor.