Decoding Formal

Verification Management from a Formal Perspective

 
February 18th, 2014 by Dr. Jin Zhang

Recently, Gabe Moretti, contributing editor to Chip Design, wrote a lengthy article for Systems Design Engineering addressing an important topic, “Verification Management.” It included comments from Atrenta, Breker Verification Systems, Jasper Design Automation, Mentor Graphics, OneSpin Solutions, Oski Technology and Sonics in response to a series of questions from Gabe on how to manage today’s complex and time-consuming verification process.

As a formal verification services provider, Oski Technology has a deep understanding of the verification process. Because our formal verification projects typically last three months or longer, managing and tracking the formal verification process is as complex as managing simulation projects. As a result, Gabe Moretti’s questions are equally relevant for our formal verification projects, and we thank him for raising them. To offer a complete formal perspective, here are our answers to questions raised in the article.

1. How does a verification group manage the verification process and assess risk?

Effective management requires measurements to ensure that progress is being made. At Oski Technology, we have developed a system to track the formal verification process. The process includes several steps: defining the verification goals, quantifying the number of checkers needed for formal sign-off, implementing the checkers, measuring progress through a weighted state transition graph, creating a formal regression suite, measuring coverage, and finally achieving formal verification closure, or sign-off. With this systematic approach, we can demonstrate that when we say we are done with a project, we really are done.
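As a rough illustration, here is a minimal sketch of how per-checker status can be rolled up into a single progress number. The stages, weights and checker names below are illustrative assumptions, not Oski's actual weighted state transition graph:

```python
# Hypothetical sketch: roll up per-checker status into one progress number.
# Stages, weights and checker names are invented for illustration.

# Fraction of "done" credit a checker earns on reaching each stage.
STAGE_WEIGHTS = {
    "planned": 0.0,        # checker identified in the verification plan
    "implemented": 0.25,   # assertion written, compiling and running
    "bounded_proof": 0.6,  # proven to a bounded depth
    "full_proof": 1.0,     # full proof, or a bound deemed sufficient
}

def progress(checker_stages):
    """Weighted-average completion across all checkers."""
    if not checker_stages:
        return 0.0
    total = sum(STAGE_WEIGHTS[s] for s in checker_stages.values())
    return total / len(checker_stages)

# Example status snapshot for a small block.
status = {
    "fifo_no_overflow": "full_proof",
    "arb_fairness": "bounded_proof",
    "pkt_no_drop": "implemented",
    "credit_conservation": "planned",
}

print(f"Formal sign-off progress: {progress(status):.0%}")  # 46%
```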

Risks associated with formal verification projects usually come from two fronts: whether we have enough checkers, and whether we have achieved sufficient proof depths. With this process, we can manage and reduce both risks.
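The second risk can be tracked mechanically as well. Here is an equally hypothetical sketch that flags checkers whose achieved bounded proof depth falls short of the required depth (which might come from architectural analysis, such as the deepest end-to-end latency through the block); all names and numbers are invented:

```python
# Hypothetical sketch: flag checkers whose bounded proof depth falls short
# of the required depth. Required depths would come from architectural
# analysis; achieved depths from the formal tool. All values are invented.

checkers = [
    {"name": "fifo_no_overflow",    "required": 40,  "achieved": 60},
    {"name": "pkt_no_drop",         "required": 128, "achieved": 75},
    {"name": "credit_conservation", "required": 32,  "achieved": 32},
]

at_risk = [c["name"] for c in checkers if c["achieved"] < c["required"]]
print("Checkers with insufficient proof depth:", at_risk)  # ['pkt_no_drop']
```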

2. What is the role of verification coverage in providing metrics toward verification closure, and is this proving useful?

Verification coverage is an important metric to enable verification closure. Often, verification teams measure code coverage and/or functional coverage results and need to reach a certain target to achieve sign-off.

When formal is used as part of the verification strategy, formal coverage results also play an important role in formal, as well as overall, verification sign-off. In recent years, commercial formal tools have added coverage features that measure familiar metrics such as line or toggle coverage. When design blocks are verified with formal, these metrics can determine the completeness of constraints and validate bounded proof depths. Formal coverage can also be merged with simulation coverage to measure the overall verification effort and support sign-off.
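As a simple illustration of that last point, merged coverage can be computed as a set union over the items each engine reports as covered; the files, lines and numbers here are invented:

```python
# Hypothetical sketch of merging formal and simulation line coverage.
# Each engine reports the set of line IDs it covered; the union measures
# the combined verification effort. All data below is invented.

all_lines = {"dec.sv:12", "dec.sv:13", "dec.sv:14", "alu.sv:40",
             "alu.sv:41", "arb.sv:7", "arb.sv:8"}

sim_covered    = {"dec.sv:12", "dec.sv:13", "alu.sv:40", "alu.sv:41"}
formal_covered = {"dec.sv:13", "dec.sv:14", "arb.sv:7"}

merged = sim_covered | formal_covered

print(f"Simulation alone: {len(sim_covered) / len(all_lines):.0%}")    # 57%
print(f"Formal alone:     {len(formal_covered) / len(all_lines):.0%}") # 43%
print(f"Merged:           {len(merged) / len(all_lines):.0%}")         # 86%
```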

3. How has design evolution affected verification management? Examples include IP usage and SoC trends.

Most SoC designs today are built by integrating many IP blocks. Because teams do not have to design everything from scratch, this methodology accelerates time to market.

However, this development trend introduces new verification issues. Blocks designed from scratch or inherited from past projects are thoroughly verified in house. For an IP block bought from a third party, designers often assume the block has been thoroughly verified by the IP provider, and in-house verification is reduced accordingly. The problem is that most IP is highly configurable, with some blocks supporting more than one million configurations.
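To see how quickly the configuration space explodes, consider this back-of-the-envelope sketch; the parameter names and value counts are made up for illustration:

```python
# Back-of-the-envelope sketch of configuration-space explosion. Parameter
# names and value counts are invented; the point is that the total grows
# multiplicatively, so a handful of parameters easily exceeds one million.

from math import prod

param_choices = {
    "data_width": 4,       # e.g., 8/16/32/64 bits
    "fifo_depth": 6,
    "num_ports": 8,
    "ecc_mode": 3,
    "arbitration": 4,
    "clock_ratio": 5,
    "endianness": 2,
    "power_modes": 3,
    "prefetch_depth": 4,
    "virtual_channels": 4,
}

total = prod(param_choices.values())
print(f"Legal configurations: {total:,}")  # 1,105,920
```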

The benefit is that design teams can configure the IP to their specific needs. The downside is that IP providers may not have verified the particular configuration in use for a given application. An unverified configuration can cause issues when the IP is integrated into the SoC. The ideal solution relies on IP providers not only using formal technology to verify their IP across all possible configurations, but also delivering a formal verification IP (FVIP) together with the design IP. That way, the design team can use the FVIP to verify the design IP in its specific SoC environment.

IP consumers should recognize the risks associated with using third-party IP and take measures to ensure the quality of the IP in the specific configuration in use. They can do that either by performing more verification of the IP in house or by asking IP providers to supply an FVIP as part of the IP purchase.

4. What should be the first step in preparing a verification plan?

Formal property verification should be part of the overall verification strategy. Some design blocks, such as control and data-transport designs, are better suited to formal verification than to simulation. The first step in preparing a verification plan is to look at the architecture and high-level block diagrams of the design and determine which blocks should be verified with formal and which with simulation. This partitioning improves verification efficiency by using the right tool for the right task. Today, the default verification strategy is simulation, with formal used only to augment it. That balance should be readjusted by considering formal at the beginning of the verification planning stage in order to achieve optimal results.
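As a rough illustration of that partitioning step, the sketch below routes blocks to an engine based on coarse attributes; the criteria and block names are assumptions for illustration, not a definitive rule:

```python
# Illustrative partitioning heuristic: concurrent control- and transport-style
# blocks go to formal; data-transformation or deep sequential blocks go to
# simulation. Attributes and criteria are invented for illustration.

def suggest_engine(block):
    """Suggest 'formal' or 'simulation' from coarse block attributes."""
    if block["kind"] in ("control", "transport") and block["concurrency"] == "high":
        return "formal"      # many interacting corner cases: formal's strength
    return "simulation"      # data transformation, deep sequential behavior

blocks = [
    {"name": "arbiter",      "kind": "control",   "concurrency": "high"},
    {"name": "noc_router",   "kind": "transport", "concurrency": "high"},
    {"name": "video_scaler", "kind": "transform", "concurrency": "low"},
]

for b in blocks:
    print(f"{b['name']:<12} -> {suggest_engine(b)}")
```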

5. Are criteria available to determine which tools should be considered for various project phases? Which tools are proving effective? Is budget a consideration?

Different tools are best suited for different project development phases. For example, lint checking should be done as the first cleanse of the RTL code. Formal verification should be applied throughout RTL development. Simulation is best used for subsystem/chip verification and emulation for system-level verification, including hardware and software co-verification.

Each technology has its strengths and weaknesses. Relying on a single technology throughout project development is not effective. For example, even though simulation has been used as a “universal” technology in the verification flow, it is not the best technology for designs with many complex corner-case scenarios. While investing in formal technology may be costly, the savings come from design quality and first-silicon success, along with reduced time and effort debugging at the subsystem, chip and system levels. The criteria for deciding what to use where should be based on an understanding of the types of design involved and the strengths and weaknesses of each technology.

DVCon 2014 Starts Monday, March 3
Stop by DVCon to see us. We’ll be talking about the Oski Formal Sign-off Methodology in Booth #305, outlining the benefits of applying custom Abstraction Models during formal analysis to reach deeper search depths and achieve faster proofs. Exhibit hours are Monday, March 3, from 5 p.m. to 7 p.m., and Tuesday and Wednesday, March 4-5, from 2:30 p.m. to 6 p.m., at the DoubleTree Hotel in San Jose.
