March 10, 2008
Closing the Verification Gap.

by Jack Horgan - Contributing Editor
Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances, and new developments. Frequently, feature articles on selected public or private EDA companies are presented.

Architectural-level simulation numbers were much lower about five years ago, so it is very encouraging to see that people are actually starting to simulate at the architectural level. Unfortunately, most of these models and testbenches are discarded as soon as people start implementing, because there has not been an effective way to bridge the gap.

At the architectural level you are looking at things like latency, throughput, bus occupancy, and access time on the timing side, and on the functional side you are trying to make sure that the system as designed is doing the right things. If you are doing a video processor, you want to make sure your MPEG block is actually doing MPEG, your artifact remover removes artifacts, and your motion tracker is actually tracking frames. There is a lot of activity, so you are not worried about the details of how your clock distribution works, because you are not there yet. You are at the architectural level.

At that level you are going to need a testbench that emits frames, packets, and control signals that let you verify properties about your system. Your models will probably be at the transaction level, not at the RTL level. Your verification IP, which models the ports into and out of the chip and into and out of certain blocks on the chip, is going to be modeled at a very high level, likely at the TLM level. You are going to be writing directed tests that have relevance at the architectural level.

Then what happens is that you decide you like your architecture and you start implementing your design, but very little of your investment at the architectural level in testbenches and models makes it across that boundary. With functional verification you reinvent the wheel. You write a whole other testbench and create an infrastructure for it. That testbench has a lot more timing in it: clock edges, synchronization. There is a huge amount of machinery that you bring to bear in order to write a testbench down at the RTL level. By the way, you may have made a mistake somewhere along the line in translating your architecture into RTL. You may have violated your latency requirement or your bus occupancy requirement, and you may not even know it.
Usually the testbench at the architectural level will not run at the RTL level. This is a problem. You may be designing something that is functionally incorrect but has been well implemented: you designed the wrong thing right. What we are trying to do with these products (InFact and MVC) is bridge the gap between architectural validation and functional validation, so that you can use your architectural environment to make sure that what you are implementing at the RTL level is actually what you want to implement. You have to design the right thing right.
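The kind of architectural-level directed test described above can be sketched in plain C++. All names and latency numbers here are invented for illustration; the point is that the test deals in whole frames and latency budgets, never in clock edges:

```cpp
#include <cstdint>
#include <vector>

// A frame as the architectural testbench sees it: no pins, no clocks,
// just abstract time units.
struct Frame {
    uint64_t id;
    uint64_t sent_at;  // abstract time units, not clock cycles
};

// Stand-in for an MPEG block modeled at the transaction level: it consumes
// a frame and reports when the processed frame becomes available.
struct TlmMpegBlock {
    uint64_t fixed_latency = 40;  // assumed per-frame processing latency
    uint64_t process(const Frame& f) const { return f.sent_at + fixed_latency; }
};

// Directed architectural test: push the frames through and verify a latency
// requirement (every frame done within `budget` time units) without any
// RTL-level detail.
inline bool latency_requirement_met(const TlmMpegBlock& dut,
                                    const std::vector<Frame>& frames,
                                    uint64_t budget) {
    for (const Frame& f : frames) {
        uint64_t done_at = dut.process(f);
        if (done_at - f.sent_at > budget) return false;
    }
    return true;
}
```

A testbench like this checks the property the architect cares about (latency) directly; nothing in it survives a naive move to RTL, which is exactly the gap being described.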

At the architectural level you are going to have a set of directed tests that probably operate at the transaction level, and today there is not an automated way to bridge the gap and move a particular testbench down to the functional level. There is now a breakthrough in that area.

For verification IP there is also this gap. You are going to have IP that you wrote at the TLM level for the architecture, and that IP does not have an equivalent representation at the RTL level. You have to write another one. Then you face the question of whether the model you write at the RTL level really has the same properties as the TLM model.

Do the models in your design have the ability to bridge the gap? Usually not. Probably the biggest reason those gaps exist is that it requires a significant investment in manpower to build models that bridge the abstraction gap between the architectural level and the RTL level. It is not trivial to write a model that automatically adapts itself. The announcement we are making with MVC is in some ways a breakthrough, albeit contained in this one area, in that we have been able to harness synthesis technology to enable designers to write models such that you can synthesize these wrappers, or abstraction adaptation layers, so that the functionality in the model is automatically accessible from RTL, from architectural TLM, and from gate level. When you instantiate the model, it gives you the opportunity to specify which port should appear at which abstraction level. Because you do not have to invest any extra time in doing this, it gives the designer the possibility of having that gap bridged without having to invest another 6, 8, or 12 weeks in writing the model and trying to make sure the model actually bridges the gap.
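One way to picture the abstraction adaptation layer idea is a single core model wrapped by thin ports at different abstraction levels. This is a hand-written conceptual sketch with invented names, not the synthesis technology itself; in the approach described above, the RTL-style wrapper is what a synthesis step would generate automatically:

```cpp
#include <cstdint>
#include <vector>

// Core functionality, written once: a toy checksum "model".
struct CoreModel {
    uint32_t compute(const std::vector<uint8_t>& payload) const {
        uint32_t sum = 0;
        for (uint8_t b : payload) sum += b;
        return sum;
    }
};

// TLM-style port: the whole payload arrives as one transaction, one call.
struct TlmPort {
    const CoreModel& core;
    uint32_t transact(const std::vector<uint8_t>& payload) {
        return core.compute(payload);
    }
};

// RTL-style port: an adapter collects one byte per clock edge and only
// invokes the core once the frame is complete. Same functionality, reached
// through a cycle-accurate interface.
struct RtlPort {
    const CoreModel& core;
    std::vector<uint8_t> shift_reg;
    bool done = false;
    uint32_t result = 0;
    void clock_edge(uint8_t data_in, bool last) {
        shift_reg.push_back(data_in);
        if (last) {
            result = core.compute(shift_reg);
            done = true;
        }
    }
};
```

The essential point is that `CoreModel` is written once; the wrappers differ only in how data crosses the boundary, which is exactly the part that is tedious and error-prone to write by hand at each abstraction level.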

When you write system models today, you write them in whatever language you want. People use C and C++. Some people choose SystemC, which gives you a convenient way of annotating time, sequencing, and synchronization. If you write things in C, you are on your own in figuring out how to represent time. It is possible to adhere to standards like SPIRIT, which give these models a way of intercommunicating. The industry has matured a great deal in the last 18 months in coming up with a specification of what models at the architectural level look like and how to make models interoperable. A lot of editors have asked if there is an agreement in the industry that everybody is going to be able to use. The answer is that within the next 6 to 9 months I think we are going to see some stability in this area. Standards bodies seem to be converging on a set of specifications that are interoperable and can bridge the abstraction gaps that are there. That's good news, because we believe there is convergence happening in that area. This gives us the ability to create a set of tools that effectively bridge those gaps through a set of established standards.

The InFact technology has the property that it generates the sequence of vectors, or tests, in your testbench such that the content of the vectors is separate from the timing or synchronization aspects of the vectors. In other words, there is information generated by the testbench, and the problem of how you get that information into your design is separated out. If your design is at the TLM level, a packet is translated into a TLM representation so that it can traverse into a model of your design at that level; if you have a design at the RTL level, the packet of data generated by the testbench is effectively translated and manipulated to be accessible at the RTL level, where you have explicit clocks with positive and negative edges and so on. What is interesting about InFact is that it lets you write one testbench that you can plug into various levels of abstraction. I think this is the first time something like this has been available to the design community.
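That separation of stimulus content from delivery mechanics can be sketched as one generator feeding two different drivers. All names and per-packet cycle costs here are assumptions for illustration, not InFact internals:

```cpp
#include <cstdint>
#include <vector>

struct Packet {
    uint32_t addr;
    uint32_t data;
};

// Abstraction-independent stimulus: a directed sequence of packets.
// This is the part that should be written once and reused.
inline std::vector<Packet> generate_stimulus(int n) {
    std::vector<Packet> seq;
    for (int i = 0; i < n; ++i) {
        seq.push_back({0x1000u + 4u * static_cast<uint32_t>(i),
                       static_cast<uint32_t>(i * i)});
    }
    return seq;
}

// TLM driver: one function call (one "transaction") per packet.
inline int drive_tlm(const std::vector<Packet>& seq) {
    return static_cast<int>(seq.size());
}

// RTL driver: the same packets, expanded into per-cycle pin activity.
// Assumed here: one address phase plus one data phase per packet, so each
// packet costs two clock edges of driving.
inline int drive_rtl(const std::vector<Packet>& seq) {
    int cycles = 0;
    for (const Packet& p : seq) {
        (void)p;       // in a real driver: wiggle addr/data/valid pins
        cycles += 2;   // address cycle + data cycle
    }
    return cycles;
}
```

The same `generate_stimulus` output plugs into either driver; only the driver knows about clock edges, which is the property that lets one testbench follow the design down through abstraction levels.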

Multi-view Verification Components also have the property of more cleanly separating functionality, meaning what the verification IP model does and how it processes data, from how the data gets into and out of the model. Getting into and out of the model has a lot to do with the abstraction level at which your data lives. At the TLM level your data shows up in a packet or data structure. At the RTL level your data shows up in terms of control signals and clocks with rising and falling edges, and you have to worry about waveforms. At the gate level you are down at setup and hold times and things like that. What we have been able to do is create a modeling paradigm. (I got to work the word paradigm into this. That's really great!) We have created a paradigm that lets you capture the essence of a model and then annotate onto this model how control and timing work.

Then there is a synthesis technology that takes both of these and generates ports on the model that are TLM compatible, RTL compatible, or gate compatible. You can mix and match.
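The per-port mix and match might look conceptually like this at instantiation time; the names and the configuration style are purely illustrative, not the product's actual interface:

```cpp
#include <map>
#include <string>

// The three abstraction levels a port can be requested at.
enum class Abstraction { TLM, RTL, Gate };

// Sketch of an instantiated multi-view component: each named port is
// independently assigned an abstraction level, so e.g. the host side can
// stay at TLM while the physical side is exercised at RTL.
struct MvcInstance {
    std::map<std::string, Abstraction> ports;
    void set_port(const std::string& name, Abstraction a) { ports[name] = a; }
};
```

The value of this arrangement is incremental refinement: as one block of the design drops from TLM to RTL, only that instance's ports are reconfigured, and the rest of the environment is untouched.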

Intelligent Testbenches and Multi-View Verification Components

How much productivity improvement is there?

10X. I was just remarking that back when I was a designer at Nortel, if I added up all the productivity gains that tool makers were promising me, my designs would have been done before I started, because everybody was saving me 6 months here, 3 months there, and so on. If you added it all up, a design took only 24 months but you saved 36 months using these tools, so the design had to be finished before it was started. Productivity is always a relative number. It depends on where you are today, on how mature the design group is, and where it is on the CMM maturity hierarchy.

These numbers of 10X improvement come from two places. When we develop tools like this, we work hand in hand with a set of lead customers who are interested in these technologies. We also create our own models. One of the things we offer with our simulators is a kit of Multi-view Verification Components that model things like the AMBA bus, PCI Express, and OCP. We have to generate the models ourselves. It turns out that when we use this technology, and we use InFact to generate the testbenches to make sure our models are correct, we can generate a lot more models and run a lot more verification than we were able to in the past. When I do the calculation of how much functionality we can get out there versus how many people we have generating it, I get about 6x to 10x leverage over where I used to be inside my internal group. These numbers are real; we did not just make them up in some marketing conference room somewhere. In working with our customers in getting these products through alpha and beta, we get feedback on their own experiences using these modeling strategies that let you automatically adapt abstraction layers. They report back similar results of about 6x to 10x improvement for that phase of their design cycle. It does not mean that your overall design cycle is 10x better. It just means that when you are writing models and generating testbenches, you get quite a multiplication in productivity in that area. Different companies of course dedicate different portions of their time to these activities, so they will get a different sense of how valuable these things are.




-- Jack Horgan, Contributing Editor.

Review Article
  • Closing the Verification Gap. March 10, 2008
    Reviewed by 'Tao Chen'

    It is a great article about verification. Finally, I can really say executives at big EDA companies understand the issues. The article addresses reusability across different designs and within one design across different representations. Like Java's "write once, run everywhere," that is the key factor that increases productivity.
    However, are the technologies really new? The answer is no. We ( have been addressing those issues since 2004.



