Formal: From Spec to Sign-Off - Real Intent
Why hasn't formal analysis methodology been more widely adopted?
Let me characterize formal verification as a very powerful technology, but one with capacity limitations. That is one of the barriers to very wide adoption of this technology. There are other issues in the development of ABV. One is the lack of a standard methodology. Another is the lack of a standard language. And the adoption of a methodology requires changes in user behavior. When we started, we saw tremendous obstacles that we had to deal with. Since then, the industry has continued to build capabilities for ABV. Today we have language standardization, such as the Accellera Property Specification Language (PSL) and SystemVerilog Assertions (SVA), and we have simulators supporting these standard assertion languages. We have seen a very significant change in user behavior, with users employing assertions in their verification flow. That has created a very strong opportunity for us. But the biggest hurdle that remains is to very significantly increase the reach of the technology. That brings me to our Convergence Engine. We have been working on technology improvements for quite a while, and today very interesting technical developments have happened in the field which really should be cultivated. We believe that formal ABV will see significant application in user flows in the near future, on the basis of standard use of the technology.
Would you give us a quick overview of your product line?
Rich: If you think of your design flow from the very beginning, from your most raw RTL when your design isn't even finished, to where you are actually implementing it in a chip, the order in which our tools apply is Implied Intent Verification (IIV), Expressed Intent Verification (EIV), Clock Intent Verification (CIV) and PureTime exception verification. The first tool you use in the flow is Implied Intent. What it does is find bugs in your design before you are ready to simulate. It quickly sprinkles checks throughout your design and tests them, to see if there are problems like dead code, floating nets, contention and things like that. It is really quick and really easy. It is automated and requires no vectors.
The second tool is Expressed Intent. Now you are getting to the point where your design is actually in blocks and you are putting them together into the full chip. You want to check the functionality of these blocks. EIV is an ABV tool where you make a statement about your design, an assertion in either PSL or SVA, and we will tell you whether that assertion is correct or incorrect. An example would be: I never want to see a grant signal unless a request signal came at least 2 clocks before it. The formal engine will go off and either say that yes, this logic as designed will always satisfy that statement, or it will give you a counterexample or trace telling you: here's an example where you have a bug, where the design doesn't meet your specification. No testbenches are needed and the results are exhaustive, i.e., they do not depend on the quality of the vectors.
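To make the grant/request property concrete, here is a minimal sketch in Python rather than the PSL or SVA the tool actually consumes. The function name and the trace encoding are illustrative assumptions; a formal engine proves the property over all possible traces, while this sketch only checks one recorded trace, the way a counterexample would be replayed.

```python
# Illustrative check of the property from the text: "grant must never be
# asserted unless request was asserted at least 2 clocks earlier".
# A formal tool proves this for every trace; this checks a single trace.

def check_grant_follows_request(trace, delay=2):
    """trace: list of (request, grant) booleans, one entry per clock.
    Returns None if the property holds on this trace, or the index of
    the failing clock (a counterexample) if it does not."""
    for t, (_, grant) in enumerate(trace):
        if grant:
            # request must have been high at some cycle <= t - delay
            earlier = trace[: max(0, t - delay + 1)]
            if not any(req for req, _ in earlier):
                return t  # grant with no sufficiently early request
    return None

# Request at t=0, grant at t=2: the property holds on this trace.
ok_trace = [(1, 0), (0, 0), (0, 1)]
# Request only 1 clock before grant: a counterexample at t=2.
bad_trace = [(0, 0), (1, 0), (0, 1)]
```

The formal engine's advantage is exactly that it does not need `ok_trace` or `bad_trace` to be supplied: it explores every reachable behavior of the logic and either proves the property or synthesizes the failing trace itself.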
Clock Intent is a full chip tool that goes through your entire design, automatically extracts the clocks and checks to see that the correct guidelines have been met across clock domains.
How does that work?
There are several phases in our clock checking. The first phase is a structural phase. We go through the design and understand what the crossings are and which ones are control and which ones are data. On the signals that need to be synchronized, we look to see that a proper synchronizer is in place. That's what I call structural checking. While it is a solution to, say, 95% of the problem, there are design assumptions built into even those checkers, about data setup and hold, things like that. To check these assumptions, the SimPortal will write out a simulation model that you can run through your existing testbench. It will verify that the last 5% that you are asserting obeys the proper design guidelines.
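The structural phase described above can be sketched very roughly in Python. Everything here is an illustrative assumption, not Real Intent's algorithm: the netlist is reduced to flops with a clock domain and a fan-out list, and "proper synchronizer" is simplified to the common two-flop rule on the receiving side.

```python
# Hypothetical sketch of a structural CDC check: a signal that crosses
# clock domains should land in a 2-flop synchronizer in the destination
# domain. The data model and the 2-flop rule are simplifying assumptions.

def find_unsynchronized_crossings(flops):
    """flops: {name: {"clk": domain, "drives": [destination flop names]}}.
    Flags a crossing unless the receiving flop feeds exactly one flop in
    its own domain (the second stage of a 2-flop synchronizer)."""
    violations = []
    for src, info in flops.items():
        for dst in info["drives"]:
            if flops[dst]["clk"] != info["clk"]:  # a clock-domain crossing
                stages = flops[dst]["drives"]
                two_flop = (len(stages) == 1
                            and flops[stages[0]]["clk"] == flops[dst]["clk"])
                if not two_flop:
                    violations.append((src, dst))
    return violations

design = {
    "tx":    {"clk": "clkA", "drives": ["sync1"]},
    "sync1": {"clk": "clkB", "drives": ["sync2"]},  # properly synchronized
    "sync2": {"clk": "clkB", "drives": []},
    "bad":   {"clk": "clkA", "drives": ["rx"]},
    "rx":    {"clk": "clkB", "drives": []},         # no second stage
}
```

As the interview notes, a purely structural pass like this cannot check the dynamic assumptions (data setup and hold across the crossing), which is why the remaining cases are pushed back into simulation via the generated model.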
The last tool is PureTime, an SDC timing exception verifier. This is a really interesting area because it exhaustively checks whether your SDC false_path and multicycle_path constraints are correct throughout your design. That's important because if you write a wrong false_path constraint, you can build a bad chip. We believe that there are no other types of verification that can check for these things. Functional verification will not. Some people argue that gate-level simulation can, but nobody does enough fully back-annotated gate-level simulation. Manual design review won't solve the problem either. People need to understand that if you have 1,000 exceptions, you not only have to check that each exception is valid by itself, you have to check that the interaction between that exception and each of the other 999 exceptions is valid. That's really difficult, and I believe impossible to do manually. We have a lot of interest in this product. It at least saves a lot of reviewing time, and may well help you avoid respins due to timing bugs.
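The scale of that review problem is easy to make concrete. The 1,000-exception figure comes from the passage; treating interactions as pairwise is an illustrative simplification (real interactions need not be limited to pairs), but even under that simplification the count is far beyond manual review:

```python
from math import comb

# 1,000 SDC exceptions: each must be valid on its own, and every pair
# must also be checked for interaction. Pairwise interactions alone
# number C(1000, 2), i.e. nearly half a million checks.
n = 1000
individual_checks = n          # 1,000 exceptions checked in isolation
pairwise_checks = comb(n, 2)   # n * (n - 1) / 2 pairwise interactions
```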
That's the spectrum from the very beginning of the design flow through functional checking with EIV, through clock domain checking. When you are running synthesis and static timing, that's where PureTime comes in.
What are the outputs of these products?
When you pick up our tools, you are going to see something with a standard EDA look and feel. Since we've partnered with Novas, we are able to resell their interface. When you look at RTL, you can look at it in a source browser. You can click on that and pop up a schematic. If a waveform is available, for example in Expressed Intent when you have a counterexample, you can pop up the waveform. Some of the tools do not generate waveforms, but all of our tools support schematic and RTL viewing. It varies. Everything is cross-linked and everything follows standards.
When someone sees this output, is the correction obvious as with a spell checker?
What the tool does is try to bring you to the right point. It really depends on the tool. With the clock domain tool, if we say there is something wrong with a particular crossing, you can click on that crossing and see, in a popup window, a schematic for that area of the logic. It will not tell you what to do to fix your design; people's design styles are different. But it is going to take you to the exact problem area. With PureTime, if you said a path was multicycle with a count of 3 and it was actually a count of 2, we will report the 2. You can then simply edit your file and go on. It will tell you what the exact answer is supposed to be.
All of our user interfaces have a high level of automation. You do a minimum amount of work up front. Like our clock tool automatically extracts clocks with almost no user interaction and then very much automates as the process goes along. We try to guide the user as much as can be done to exactly where the problem area is.
The four tools are different. With the ABV tool you are actually writing a program, a property. It is much more interactive and you have to work a lot more closely with it. The IIV tool is completely automatic. We take your design, spit out a report file and take you right to the problem.
Would you talk about your Convergence Engine?
If we back up to the purpose of all this technology: people have a million ways that they can verify their design. In a lot of ways, it is a dynamic world; people are relying on simulation a lot. However, we believe that any time a static tool can take over a niche and do a good job, it will quickly take over that space, because it's easy and automatic. Until now, the technology has limited adoption because of the size of the designs that can be run and because of the run times involved in processing these circuits. The engine is of critical importance in enabling the realistic use of these tools in people's shops. The reason the Convergence Engine is such a breakthrough is that it takes a big design and effectively makes it much smaller. That's what we call 2D compression. If you think of your whole design and you make a statement about one output pin - this pin should only go high three clocks after this other signal goes low - instead of processing the entire design to prove that property, we will only process a small portion of the design, the portion that is absolutely necessary. Therefore we are able to run much faster and get the job done much more quickly by focusing on the relevant parts of the design. Think of a big circuit, say one million gates. In a formal problem you can't just look at that one-million-gate circuit; you have to make multiple copies. You have to think about how the circuit changes over time, so you start making multiple copies of the design to store the state changes over time. This is called state space explosion. It is one of the challenges of formal technology; it's one of the things that makes it so hard. Our unique 2D compression exactly offsets state space explosion. It makes the design look smaller. So even though we are keeping multiple copies, the key is that we are copying relatively tiny things.
All of a sudden things that were not possible to do are now possible in a reasonable amount of time.
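One classic ingredient of the kind of reduction described above is cone-of-influence extraction: to prove a property about one output, keep only the logic the output transitively depends on. The sketch below illustrates just that textbook idea in Python; the netlist layout is an assumption, and the proprietary 2D compression in the Convergence Engine presumably goes well beyond it.

```python
# Illustrative cone-of-influence extraction: given a netlist mapping each
# signal to its driver signals, collect everything the target output
# transitively depends on. Logic outside the cone can be discarded before
# the formal engine starts unrolling the design over time.

def fanin_cone(netlist, output):
    """netlist: {signal: [driver signals]}; primary inputs are absent
    or map to []. Returns the set of signals in the output's fanin cone,
    including the output itself."""
    cone, stack = set(), [output]
    while stack:
        sig = stack.pop()
        if sig not in cone:
            cone.add(sig)
            stack.extend(netlist.get(sig, []))  # inputs have no drivers
    return cone

# A property about "out" only needs 4 of the 6 signals; "unrelated"
# and its fanin never enter the formal problem.
netlist = {"out": ["g1"], "g1": ["a", "b"], "unrelated": ["c", "d"]}
```

On a million-gate design, if the property depends on only a few thousand gates, the copies made when unrolling the design over time are copies of the small cone, not the whole chip, which is exactly the effect the interview attributes to 2D compression.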