June 06, 2005
Mentor's Questa Verification Products
| by Jack Horgan - Contributing Editor
If you look at verification, it can be divided into three main pieces: ABV, CDV and AVM.
ABV - Assertion-Based Verification. The aim is to reduce the time it takes to find bugs. ABV depends on designers embedding assertions in their designs.
CDV - Coverage-Driven Verification. The aim is to reduce the time it takes to achieve coverage. CDV depends on being able to search and measure coverage. Coverage is the metric you want to use to figure out whether you have done enough verification. You might say, "I have run all my tests and haven't found any bugs," but that doesn't mean you have covered enough of the circuit.
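To make the "coverage as a metric" idea concrete, here is a toy functional-coverage model in Python. The bin names, the stimulus format, and the `FunctionalCoverage` class are all invented for illustration; this is a sketch of the concept, not Questa's actual coverage machinery.

```python
class FunctionalCoverage:
    """Toy coverage model: named bins, each a predicate over a stimulus."""

    def __init__(self, bins):
        self.bins = bins                          # {name: predicate}
        self.hits = {name: 0 for name in bins}    # how often each bin fired

    def sample(self, stimulus):
        """Record which bins this stimulus falls into."""
        for name, pred in self.bins.items():
            if pred(stimulus):
                self.hits[name] += 1

    def coverage(self):
        """Fraction of bins hit at least once."""
        covered = sum(1 for name in self.hits if self.hits[name] > 0)
        return covered / len(self.hits)


cov = FunctionalCoverage({
    "small_write": lambda s: s["op"] == "write" and s["size"] <= 4,
    "large_write": lambda s: s["op"] == "write" and s["size"] > 4,
    "read":        lambda s: s["op"] == "read",
})

# Two passing tests, no bugs found -- yet one bin is still unhit.
for stimulus in [{"op": "write", "size": 2}, {"op": "read", "size": 8}]:
    cov.sample(stimulus)

print(cov.coverage())   # -> 0.666...: "no failures" is not "enough coverage"
```

The point of the sketch is the last line: a test suite can run clean while the metric shows verification is incomplete.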
AVM - Advanced Verification Methods. The aim is to improve design quality. There are some things you would do in simulation, but there are other things you could do in simulation that are better off done in another tool. An example is clock domain crossing. The issue there is that you have many clock domains in your design; we have seen designs with 90 clock domains in them. One of the problems is making sure that your signals are handed off properly from one domain to the other. You have to use FIFOs, arbiters and things like that. To make sure your design works, you would write a testbench to simulate the data transfer and handoff and verify that it all works. It's very time consuming to write a testbench that specifically sets things up that way. But with the technology in Questa that came out of the 0-In acquisition we did last year, we can now do that kind of clock domain analysis statically. You don't need a testbench. That's an example of an advanced verification method. We can also do things like metastability analysis and a few other things that get people off the hook of having to write testbenches. Rather than sitting there writing testbenches, why not try another way of verifying what you're trying to do and not rely only on the simulator?
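A static clock-domain-crossing check of the kind described here can be sketched as a graph analysis: walk the connections between flops and flag any edge that crosses domains without landing on a recognized synchronizer. The netlist representation below (flop names, domain map, the `synchronizers` set) is entirely made up for illustration; real CDC tools infer synchronizer structures from the netlist rather than being told.

```python
# Toy netlist: each flop belongs to a clock domain; edges are
# (driver, receiver) connections between flops.
domain = {
    "tx_data": "clkA",
    "sync1":   "clkB",
    "sync2":   "clkB",
    "rx_data": "clkB",
}
edges = [
    ("tx_data", "sync1"),    # clkA -> clkB, but lands on a synchronizer
    ("sync1",   "sync2"),    # within clkB
    ("sync2",   "rx_data"),  # within clkB
    ("tx_data", "rx_data"),  # clkA -> clkB with NO synchronizer: unsafe
]
synchronizers = {"sync1", "sync2"}   # flops forming a 2-flop synchronizer


def unsafe_crossings(domain, edges, synchronizers):
    """Flag edges that cross clock domains without hitting a synchronizer."""
    return [(src, dst) for src, dst in edges
            if domain[src] != domain[dst] and dst not in synchronizers]


print(unsafe_crossings(domain, edges, synchronizers))
# -> [('tx_data', 'rx_data')]: found statically, with no testbench at all
```

This is the appeal of the static approach: the unsafe crossing falls out of the structure of the design, without simulating any data transfer.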
Probably the best known example of an AVM is logic equivalence checking. Mentor has been selling that for years, but now with new technology we have some other AVMs.
By the way what is your definition of an assertion?
Basically, an assertion is a statement of design intent. Assertions can be either embedded in your design file or put in another file and linked in through another method. An assertion is a statement of intent like: if signal A rises and signal B falls, then within 3 clock cycles C has to be zero. So you put that in, and as you run your simulation, the assertion sits there; whenever signals A and B behave in the way the assertion triggers on, it verifies that within some period of time the other signal will be zero. Assertions can be much more complicated than that. You can do things like data checks. You can say: I have this register that fans out to four places; within 3 clock cycles of setting up the register, that data has to be latched into one of those four places. There is a library of these assertions. Part of it is standardized, part of it is stuff we have in our system, and then there are some you write yourself. These assertions are integrated with our kernel execution engine and tied into the metric system.
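The quoted assertion ("if A rises and B falls, then within 3 clock cycles C has to be zero") can be modeled as a checker over a recorded trace. The Python below is a toy stand-in for what a PSL or SystemVerilog assertion does inside the simulator; the trace format, one dict of signal values per cycle, is invented.

```python
def check_assertion(trace, window=3):
    """Return the cycles at which the assertion fails.

    Trigger: A rises and B falls in the same cycle.
    Obligation: C must be 0 within `window` cycles of the trigger.
    """
    failures = []
    for t in range(1, len(trace)):
        a_rose = trace[t]["A"] == 1 and trace[t - 1]["A"] == 0
        b_fell = trace[t]["B"] == 0 and trace[t - 1]["B"] == 1
        if a_rose and b_fell:
            end = min(t + window + 1, len(trace))
            if 0 not in [trace[u]["C"] for u in range(t, end)]:
                failures.append(t)
    return failures


trace = [
    {"A": 0, "B": 1, "C": 1},
    {"A": 1, "B": 0, "C": 1},   # trigger: A rises, B falls
    {"A": 1, "B": 0, "C": 1},
    {"A": 1, "B": 0, "C": 0},   # C reaches 0 within 3 cycles -> holds
]
print(check_assertion(trace))   # -> [] (no failures)
```

In a real flow the assertion is evaluated on the fly by the simulation kernel rather than on a stored trace, but the temporal logic is the same: a trigger condition arms a timed obligation.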
Assertions have been around since the mid nineties. Kantrowitz and Noack reported in 1996 that 34% of all bugs found on the DEC Alpha 21164 project were identified by assertions. Foster and Coelho reported in 1998 that 85% of all bugs found on an HP project were identified by assertions.
Nine years ago people were complaining about assertions and questioning whether they were a viable way of doing verification. Here we are nine years later, and finally there is a standard language out there that everybody can jump on the bandwagon with. People who were at the leading edge in 1996 already saw the value of assertions, and now many people at the leading edge think this kind of design methodology will be the one that dominates.
It's not something totally new that the world doesn't understand; it's been around for years. The assertion language elements of SystemVerilog have been well thought out. I think things are going to happen now.
Once you have assertions, you move into coverage. You begin to ask questions like: Are there pieces of my circuit that I haven't adequately verified? What are those pieces? You start asking questions like that to understand when you have done enough. When is enough, enough? The philosophy a lot of people have is: if it isn't verified, you have to assume it is broken.
In today's world you have to come at coverage from two points of view: top down and bottom up.
In the top-down approach you say: whatever I design has to meet the specification. You're not worried about the implementation per se; you're worried about whether the functionality you have included in your design meets the spec.
In the bottom-up approach, given that the design includes the functionality that is in the spec, you worry about whether you have implemented it properly. Things like clock domain crossing issues and metastability issues are all about implementation. Did you implement the circuit in a way that is robust under all process conditions and all manufacturing rules?
As a verification vendor we need to cover both the top-down specification verification and the bottom-up implementation verification.
Languages like e and Vera have come at things either exclusively from the specification side or exclusively from the implementation side. It has really only been since SystemVerilog showed up that you could address both the top-down and the bottom-up ways of working.
To answer the question "How do I know I have done enough simulation?", you need some notion of what you are measuring. Structural coverage is things like: did I execute enough of the statements in my model? For sure, if you haven't executed a statement in your model, then you haven't verified anything about it. But the issue with that kind of coverage is that even if you cover all the statements in your model, you can still have functional bugs in there. What happens when your FIFO overflows or underflows? Does the circuit handle the error condition properly? So structural coverage is easy to do but doesn't tell you a lot. We have to move to things like transaction coverage, which is about protocol, and functional coverage, which is about how well the design matches the spec and how well it is implemented. Code coverage is whether all the lines in your model have actually been executed. All of these metrics have to be asking the question: Have I simulated enough? Have I verified enough? Inside this Questa release all these metrics work.
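The FIFO-overflow point can be made with a few lines of Python: a buggy design in which every statement executes (so structural coverage looks perfect) yet the error condition is mishandled. The `BuggyFifo` class is invented for illustration.

```python
class BuggyFifo:
    """Toy FIFO whose overflow branch executes but is wrong."""

    def __init__(self, depth):
        self.depth = depth
        self.items = []

    def push(self, x):
        if len(self.items) == self.depth:
            return True            # BUG: reports success but drops the item
        self.items.append(x)
        return True


fifo = BuggyFifo(depth=2)
results = [fifo.push(n) for n in (1, 2, 3)]   # third push overflows

print(results)       # -> [True, True, True]: every statement was executed...
print(fifo.items)    # -> [1, 2]: ...but item 3 silently vanished
```

A functional check of the protocol, e.g. "push on a full FIFO must report failure," catches this immediately; counting executed statements never will. Executing a line is not the same as verifying it.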
The model will have embedded assertions that instrument certain areas inside the circuit and provide feedback about coverage, and that feedback is sent back to the testbench, which decides what the next set of tests ought to be. This is how you get into the coverage-driven, directed-random testbench approach pioneered by Verisity. Inside you now have everything that PSL can do and everything that SystemVerilog can do. You have coverage directives and coverage metrics, and we have created a GUI and database where we collect together everything that is possible to know about what is covered and what is not covered in your circuit, all color coded: red is bad and green is good. It's kind of a cockpit that tells you whether or not you have holes in your verification and test plan. A very powerful thing. It enables designers and design groups to answer the question of whether they are done verifying. The way we have done this, you can actually see where you haven't got coverage. Not only can you answer the question "Have I verified enough?", but on the green ones you're done verifying, on the red ones you've got more work to put in, and it tells you why that hasn't been verified.
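The feedback loop described here, coverage results steering the next round of stimulus, can be sketched in a dozen lines. The bin names and the `run_test` stand-in are invented; a real testbench would constrain a random stimulus generator rather than pick an operation directly.

```python
import random

# Coverage bins the testbench wants to hit at least once.
bins = {"read": 0, "write": 0, "burst": 0, "error": 0}

def run_test(op):
    """Stand-in for simulating one test and sampling coverage."""
    bins[op] += 1

random.seed(0)
for _ in range(20):
    # Feedback step: look at the coverage holes and steer toward them.
    unhit = [name for name, hits in bins.items() if hits == 0]
    op = random.choice(unhit or list(bins))
    run_test(op)

print(all(hits > 0 for hits in bins.values()))   # -> True: every bin was hit
```

Because each iteration prefers unhit bins, the holes close quickly; pure undirected random stimulus can take far longer to reach the same corners, which is the whole argument for coverage-driven verification.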
-- Jack Horgan, EDACafe.com Contributing Editor.