March 10, 2008
Closing the Verification Gap.
If you are building systems that do not have a lot of reuse, you will find that these tools give you tremendous productivity, because otherwise you would have to write all these models from scratch. If you have a system with a lot of reuse and you already have these models, then your productivity improvement is going to be a lot less because you have already made the investment. It just makes life smoother. You have to look at productivity improvement in light of where people are in terms of overall maturity and experience.
In the case of MVC, the model itself separates the notions of functionality and timing, where timing may be a transaction, a bunch of clock edges, or a bunch of waveforms down at the gate level. We can use a synthesis technology to make sure that the ports on that model adapt themselves to the rest of the environment you are verifying. It gives you reusable RTL and TLM components, reduces your testbench development and refinement time, and gets you abstraction adaptation between levels of the modeling hierarchy.
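The abstraction adaptation being described can be sketched roughly as follows. This is a minimal illustration in Python (standing in for SystemVerilog), with hypothetical names throughout; it is not the MVC API, only the idea of the same functional intent living at two timing abstractions with an adapter between them.

```python
# Hedged sketch: one functional intent -- "write data D to address A" --
# expressed as a single transaction at the TLM level, and as a sequence of
# per-clock-edge port values at the RTL level. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class WriteTxn:
    """Transaction-level view: one object, no notion of clocks."""
    addr: int
    data: int

def txn_to_cycles(txn):
    """Adapter: expand one transaction into cycle-by-cycle pin activity."""
    return [
        {"we": 1, "addr": txn.addr, "wdata": txn.data},  # drive write strobe
        {"we": 0, "addr": 0,        "wdata": 0},         # return to idle
    ]

cycles = txn_to_cycles(WriteTxn(addr=0x10, data=0xAB))
print(len(cycles))  # one transaction becomes 2 clock cycles at the pin level
```

A testbench written against `WriteTxn` objects can then be reused at the RTL level by passing its output through the adapter, which is the flavor of reuse the adaptation technology is aiming at.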
The InFact technology is quite interesting. It takes a very different approach to writing testbenches. The way InFact thinks about it is as follows: Imagine that you have a block with ports on it. Typically there are some control signals that go here and some data signals that go there. Then take a memory, which is fairly simple. The ports on a memory speak a language that can read, write and modify a memory location. To read, you have to provide an address. The memory will look up the data at that address and give it back to you. You can have a write cycle. You give the memory an address and some data. It puts that data into the memory. You can have a read-modify-write cycle. You give the memory an address. It reads the location, you change the data and it rewrites the data into the same location. You give the address once. You do an output, then an input into the same memory location. You might also have an idle cycle that lets the memory recharge certain things or whatever. The memory might also test itself. It might have certain built-in self-test commands that go in and reconfigure the memory. You might have ECC capability: some commands and controls to do error correction. If you think about it this way, this memory block has a language that it speaks. Its vocabulary is things like read, write, read-modify-write, start BIST, stop BIST and so on. These are the words that it speaks. Send it a command to read and it will read. Send it a command to write and it will write. There are sentences that you can construct. A sentence is a string of words that follows a legal grammar. Our particular memory may have the quirk that when you do a read-modify-write cycle, you have to follow it with an idle cycle because the memory has to recharge. So there is a grammatical rule. If you do not follow it, you have violated the grammar and bad things are going to happen.
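The vocabulary-and-grammar idea above can be made concrete with a small sketch. This is a hypothetical encoding in Python, not InFact's input format: a set of legal words plus the one grammar rule from the example, that a read-modify-write must be followed by an idle cycle.

```python
# Minimal sketch of the memory's "language": a command vocabulary plus one
# grammar rule (RMW must be followed by idle so the memory can recharge).
# The encoding is illustrative, not a real tool format.

VOCABULARY = {"read", "write", "rmw", "idle", "start_bist", "stop_bist"}

def sentence_is_legal(words):
    """Check a command sequence against the vocabulary and the RMW rule."""
    for i, w in enumerate(words):
        if w not in VOCABULARY:
            return False                      # unknown word
        if w == "rmw":
            if i + 1 >= len(words) or words[i + 1] != "idle":
                return False                  # grammar violation: no recharge
    return True

print(sentence_is_legal(["write", "rmw", "idle", "read"]))  # True
print(sentence_is_legal(["write", "rmw", "read"]))          # False
```

A stimulus generator that only emits sequences passing such a check never drives the block with an illegal "sentence", which is the intuition behind capturing the grammar declaratively.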
With InFact you have a declarative way of capturing the vocabulary and the grammar rules by which you can form sentences. This comes out of compiler theory, where you capture the syntax of a language in BNF, or Backus-Naur Form. InFact lets you capture the language of your blocks in a kind of modified BNF. It is very compact and very easy to do. Languages like C have a very compact BNF. It is not very big. XML is another language that lets you capture the essence of syntactical rules. Once you capture the grammar and vocabulary for your blocks, you come to the problem that you would like to create sentences that verify certain properties of the blocks. In InFact we have a way of letting you generate sentences based upon state transition graphs.
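To make the BNF idea tangible, here is a toy grammar for the memory example, written as a Python data structure rather than real BNF or InFact's modified BNF. Note that the read-modify-write rule is encoded directly in the grammar: the only production containing `rmw` forces an `idle` after it.

```python
# Illustrative sketch: a tiny BNF-like grammar for the memory's language,
# plus a bounded-depth enumerator of the legal "sentences" it derives.
# Encoding and names are hypothetical, not a real tool's input format.

import itertools

GRAMMAR = {
    "sentence": [["op"], ["op", "sentence"]],
    "op":       [["read"], ["write"], ["rmw", "idle"]],  # rmw requires idle
}

def expand(symbol, depth=3):
    """Enumerate terminal word strings derivable from symbol (bounded)."""
    if symbol not in GRAMMAR:
        return [[symbol]]                 # terminal word, e.g. "read"
    if depth == 0:
        return []
    results = []
    for production in GRAMMAR[symbol]:
        parts = [expand(s, depth - 1) for s in production]
        for combo in itertools.product(*parts):
            results.append([w for part in combo for w in part])
    return results

sentences = expand("sentence")
print(["rmw", "idle"] in sentences)  # True: the grammar builds in the rule
```

Because illegal sequences like `rmw` followed by `read` simply cannot be derived, every generated sentence is legal stimulus by construction.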
For your blocks you can specify certain state nodes you want to visit and certain arcs, which are transition paths you are going to take through your design in order to verify certain properties. This is more powerful than constrained random, which tends to be verbose when it comes to generating vectors, because the constraints are not about the design; constraints are about the vector space. In InFact the constraints are derived from your design, not strictly from the vector space. Therefore you are going to get a set of vectors that is much more highly tuned to what your circuit does and how it does it. If you were to plot a graph with the number of simulation cycles on the horizontal axis and functional coverage on the vertical axis (of course we would have to argue about what functional coverage means, but assume we had agreement on this point), then you would find that the constrained random curve is asymptotic: it eventually gets you lots of functional coverage, but it requires an exponentially large number of test vectors to accomplish that. The InFact technology, because it understands more of your design, can generate a much more focused vector set and can give you that coverage in many cases with 20x to 50x fewer simulation cycles. It is able to traverse your coverage space, the things you want to know about your circuit, much more effectively. If you have fewer simulation cycles, you have fewer simulation output files to look at. You get your results much more quickly, and you can use this technology at the architectural level to look at the things that are interesting there, and then reuse the same thing down at the RTL when you are at the implementation stage, so that you can in fact show that your RTL does correctly implement what you wanted at the architectural level. That's what InFact does.
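The state-graph traversal idea can be sketched as a shortest-path search: given an arc you want to cover, find the shortest command sequence that exercises it, rather than waiting for constrained-random stimulus to hit it by chance. The graph and algorithm below are a hypothetical illustration, not InFact's actual algorithm.

```python
# Hedged sketch of graph-directed stimulus: BFS over a toy state-transition
# graph finds the shortest command sequence covering a target arc.
# States, commands, and the graph itself are hypothetical.

from collections import deque

GRAPH = {
    "idle":     [("read", "idle"), ("write", "idle"),
                 ("rmw", "recharge"), ("start_bist", "bist")],
    "recharge": [("idle_cycle", "idle")],
    "bist":     [("stop_bist", "idle")],
}

def cover_arc(start, target_arc):
    """Return the shortest command sequence whose last step takes target_arc,
    where target_arc is (source_state, command)."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        for cmd, nxt in GRAPH.get(state, []):
            if (state, cmd) == target_arc:
                return path + [cmd]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [cmd]))
    return None

print(cover_arc("idle", ("bist", "stop_bist")))  # ['start_bist', 'stop_bist']
```

Covering each arc with a direct, minimal path is the intuition behind needing far fewer simulation cycles than random generation to reach the same coverage points.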
What is the availability of these two products?
InFact is available immediately. Questa Multi-view Components will be available in 2Q 2008.
What is the pricing?
Pricing starts at $25K.
For each or for both?
One year TBL or perpetual?
One year TBL.
Who are the major competitors in this verification area and do they offer anything like what you have been describing?
There are three companies that compete in the verification area: Mentor Graphics, Cadence and Synopsys. Gary Smith issued his market share numbers after he spun out of the Gartner Group. Those numbers show that Mentor and Synopsys are neck and neck, with Cadence 2 or 3 market share points behind. It is about 34% or 35% each for Synopsys and Mentor, while Cadence is at 32% market share.
Synopsys has a set of verification IP out of their DesignWare group. That IP is not retargetable across abstraction levels. They target their VIP at the implementation level. Synopsys is not known for its architectural-level tools. They have very little there. They had some tools a few years ago that were not much of a market success.
On the Cadence side of the world, they typically do much better at the architectural level, but most of their VIP came from the acquisition of Verisity. That VIP is proprietary. It is well respected in the industry, but it is not portable; it only runs with the Verisity tools. People are shying away from those things because it tends to lock them in. Synopsys has a similar problem in that their IP components are proprietary, and when you use them you are kind of stuck with their tool set. Neither company has the ability to do this automatic abstraction adaptation. That is unique to Mentor Graphics.
We have this view of creating an open and portable verification methodology. We announced AVM, the Advanced Verification Methodology, last year. This year Cadence and Mentor got together on OVM, the Open Verification Methodology. There was a press release a few weeks ago. OVM has been very well received in the industry.
Multi-view Components and InFact are compatible with that technology. The point of it is to encourage the industry to create open standards. A testbench for a large design has hundreds of thousands of lines of code. Users want that code to be portable across toolsets. They do not want to be tied to one vendor. What we are encouraging is the creation of open standards so that user-created artifacts are protected from obsolescence. Anybody who has a testbench today is worried when there is only one vendor in the world that supports it. What if there is a falling out between them and that vendor? So, for example, on the Cadence side they were quite late entering SystemVerilog. Users were sitting out there thinking, "SystemVerilog is good and I want to go there, but I have all these models and I am stuck." OVM is trying to solve that problem. Cadence is working with us to get model interoperability working. The emphasis is to make sure that customers are not disadvantaged by all these EDA squabbles.
-- Jack Horgan, EDACafe.com Contributing Editor.