Verification Test Plan: The Book - October 09, 2006



by Peggy Aycinena - Contributing Editor




Missed the movie? Here's the book!

I don't know about you, but I had a time conflict on the morning of Thursday, July 27th, and hence couldn't attend one of the key panel discussions that took place that day in San Francisco at the 2006 Design Automation Conference in Moscone Center.

This not-to-be-missed panel showcased several of the leading gods of verification, and their friends, as they discussed issues related to the building of a verification test plan.

If you missed the event, as I did - or even if you were there - this is your lucky day, because herein you will find an improved version of that panel discussion. It's improved because:

A) You can read it at your leisure, hopefully with a good cup of coffee in hand;

B) There are several additional participants here who did not appear on the original DAC panel in July, but who have contributed substantive comments to the discussion below;

C) The book is always way better than the movie - more detailed, more complex, and with greater nuance.


So keeping these thoughts in mind, please read on. We're delving into a host of topics here, among them being formal verification, emulation and simulation, the division of labor between the design engineer and the verification engineer, and how to knit the whole darn thing together. Pretty interesting stuff!

*********************

Building a Verification Test Plan: The Book

*********************

Principal Characters:

Alan Hu - Department of Computer Science, The University of British Columbia
Andy Piziali - Verification Application Specialist, Cadence Design Systems
Craig Cochran - Vice President of Marketing, Jasper Design Automation
Catherine Ahlschlager - Hardware Manager, Formal Technologies Group, Sun
Doron Stein - Hardware & CAD Engineer, Cisco Systems
Harry Foster - Principal Engineer, Mentor Graphics
Janick Bergeron - Scientist, Verification Group, Synopsys
Rajeev Ranjan - CTO, Jasper Design Automation
Rich Faris - Vice President of Marketing & Business Development, Real Intent

*********************

Chapter I - Some Background

1) Please define formal verification.

Alan Hu - Formal verification means proving a property about a model of a system. "Proving" is in the sense of mathematical proof - the highest level of confidence of anything known to humanity. In other words, you have 100% coverage of all possible behaviors of your (model of your) system.

"Property" means you have to specify what you're proving, and this is the same problem whether you're working with formal or informal verification. If you don't know what you want (you don't specify to look for a particular case), you won't know to verify it. And "model" means you're always verifying something other than the actual product in the customer's hands. The model could be the layout, it could be gates, it could be a very abstract protocol, or an algorithm.

2) Please distinguish between an assertion and a property.

Alan Hu - I don't. Keep in mind that I'm speaking with the luxury of being an academic. In different contexts, some communities may draw a distinction.

For example, some people will say that "assertions" are embedded in the design, whereas "properties" are external to it. Other people will have other definitions. I don't think any usage has become the de facto standard, so if I'm talking to someone, I'll just keep listening until I figure out what distinction they're really trying to make.
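One way to picture the distinction some communities draw: an assertion embedded in the design model itself versus a property stated and checked externally by a harness. The FIFO below is a hypothetical example, not tied to any tool or methodology discussed here.

```python
# An "assertion" embedded in the design model versus a "property" checked
# from outside it. The FIFO model is invented for illustration.
class Fifo:
    def __init__(self, depth):
        self.depth, self.items = depth, []

    def push(self, x):
        assert len(self.items) < self.depth, "overflow"  # embedded assertion
        self.items.append(x)

    def pop(self):
        assert self.items, "underflow"                   # embedded assertion
        return self.items.pop(0)

def occupancy_in_range(fifo):
    """External property: stated and checked by a harness, not the design."""
    return 0 <= len(fifo.items) <= fifo.depth

f = Fifo(depth=2)
f.push(1)
f.push(2)
assert occupancy_in_range(f)
```

Functionally the two express the same kind of claim, which is Hu's point: the difference is where the claim lives, not what it means.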

3) At what point did formal verification become more than an "academic curiosity"?

Alan Hu - What we're seeing is a gradual process, and it's hard to cite a defining moment. Even thirty years ago, for certain niche markets, formal verification was already being used. Twenty years ago, companies like IBM had started doing formal equivalence verification in-house. Ten years ago, formal equivalence checking was taking over the RTL-to-gate market, and the big semiconductor and EDA companies (Intel, IBM, Cadence, Synopsys, etc.) and several start-ups were all ramping up on model checking.

Today, the model-checking (or property-checking) tools are gaining market traction, and RTL-to-gate equivalence checking is so completely owned by formal tools that most people don't think of it as formal verification anymore. Ten years from now, even more formal techniques will be mainstream parts of the design flow, but some formal verification problems will still be in the realm of academic research. It's important not to keep redefining formal verification to mean whatever we can't quite do yet.

4) Is it possible to verify a large SOC without using some form of formal verification?

Alan Hu - Sure. You can carefully hand-pick exceptionally good design and verification engineers, choose an extremely conservative (i.e., low performance) architecture and design that's "obviously correct," buy a ton of emulation boxes and server farms, simulate for a really, really long time, and do some extra spins of silicon until you get most of the bugs out.

The real question is whether it's cost-effective to do a large SOC without using some form of formal verification. And I'd have to say the answer is no. People vote with their wallets. Everyone these days is using formal for verifying RTL-to-gate equivalence, for example. People have decided that's the most cost-effective way to do that step.

For property checking, the value proposition is less clear. All of the people doing the most complex chips - the microprocessors and GPUs and very complex SOCs - are all either buying or developing in-house formal property checking tools. In some cases, they are developing extremely sophisticated formal methodologies, as well.

So, on the leading edge, the value proposition is definitely there. For the trailing-edge designs, though, people apparently can get by with older technology for a bit longer.

*********************

Chapter II - The Panel Discussion

1) What is a verification test plan (a spreadsheet, a GUI, a document?) and how does it differ from traditional verification strategies?

Harry Foster - Verification planning is not a thing - it is a critical process of design. Some might view this process as a means to an end (that is, a written document or verification feature checklist), but it is so much more than that. It is a fundamental component of the design process, and its analysis component often uncovers design and architectural bugs prior to implementation or application of any form of verification!

Janick Bergeron - A verification plan is a list of features of the design that need to be verified. A verification plan could come in many forms, such as a spreadsheet, a document in natural language, or it could simply reside in the engineer's head. The industry hasn't yet settled on the best approach to define a verification plan.

Doron Stein - A verification test plan should be a combination of a database (hence a dynamic infrastructure) with the ability to produce a "traditional" test plan in document or HTML form. By establishing such a structure, the plan can reflect correct, current status, partially closing the verification loop (plan, execute, evaluate your planning, execute, etc.).

Catherine Ahlschlager - Most of the test plans that I've seen are documents that describe the overall verification strategy, and features to be tested and how to test them. They are often translated into spreadsheets, so that coverage/tests can be written to monitor how the project is doing against the schedule. In addition, milestones are usually included to outline deliverables on each verification phase.

Andy Piziali - A "verification test plan" is an oxymoron because the name blends one verification method - test - with the broader design intent preservation process - verification. As a verification engineer, I believe the intent of the panel was to discuss the verification plan, so I will address your question in that context.

A verification plan is a natural language document that defines the scope of the verification problem and its solution. The scope is quantified by a structured set of design-under-verification (DUV) features and their respective coverage models. The solution is captured as the functional specification for a verification environment that employs dynamic and static verification techniques (simulation, formal analysis, assertion-based verification, acceleration, etc.).

If this document is machine readable, we refer to it as an executable verification plan - or vPlan - because a verification management tool can annotate the plan with live progress metrics as regressions and formal proofs are run. This transforms the plan from a verification process artifact into an application-specific document user interface.

Regarding how this plan differs from traditional verification strategies: in the past we wrote a test plan that enumerated each of the functional test cases or scenarios that needed to be exercised. A test (or set of tests) was associated with each scenario. Once each test was written, run, and passed, the scenario in the test plan was checked off. When all of the scenarios were checked off, verification was deemed complete.

Using a verification plan, verification is considered complete when all verification goals defined in the plan have been reached, usually 100% coverage of each coverage model. However, directed tests are still employed in some situations, so each test, run and passed, is also considered a goal to be achieved.

Craig Cochran & Rajeev Ranjan - A verification test plan can be in any of those formats. However, the plan should be well structured, flexible, dynamically updated, and prioritized. For these reasons, Jasper developed GamePlan Verification Planner, which produces a customizable dynamic verification plan, which is stored in XML format and generates reports in HTML for easy sharing of status data.

A verification test plan does not differ from traditional verification strategies. Rather, it organizes them so that the verification team can make the most appropriate use of each strategy in the overall verification effort.
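The executable plan that Piziali and the Jasper team describe - a structured document that a management tool annotates with live metrics - might be sketched as follows. The field names and structure are invented for illustration; they are not GamePlan's XML schema or any vendor's actual format.

```python
# A minimal sketch of a machine-readable verification plan that a tool
# could annotate with live coverage metrics. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    method: str            # e.g. "formal", "constrained-random", "directed"
    coverage_goal: float   # fraction of the coverage model to fill
    coverage_now: float = 0.0

    def done(self):
        return self.coverage_now >= self.coverage_goal

@dataclass
class VerificationPlan:
    title: str
    features: list = field(default_factory=list)

    def annotate(self, name, coverage):
        """Fold regression/proof results back into the plan."""
        for f in self.features:
            if f.name == name:
                f.coverage_now = max(f.coverage_now, coverage)

    def status(self):
        closed = sum(f.done() for f in self.features)
        return f"{closed}/{len(self.features)} features closed"

plan = VerificationPlan("SOC bus bridge", [
    Feature("arbiter fairness", "formal", 1.0),
    Feature("burst transfers", "constrained-random", 0.95),
])
plan.annotate("arbiter fairness", 1.0)   # full proof completed
plan.annotate("burst transfers", 0.80)   # regression still filling bins
print(plan.status())  # -> 1/2 features closed
```

The essential idea is the feedback edge: results flow back into the plan automatically, so the plan stays a live status instrument rather than a stale artifact.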

2) The DAC panel description referred to "trading brute force for finesse" when putting a verification plan into place. What does this mean?

Andy Piziali - I interpret this to mean that simulation cycles alone are insufficient to achieve functional verification closure. Finesse is required to carefully analyze all sources of DUV design intent - functional specifications, design specifications, whiteboard diagrams, etc. - capture the DUV requirements as named features with concise descriptions, design a coverage model for each feature that quantifies the behavioral space, implement each coverage model (using code coverage for some) and measure progress against each model as simulation and formal analysis proceed.

Associated generation constraints for a constrained random simulation environment and properties for formal analysis must be designed to achieve full coverage.

Harry Foster - Actually, I didn't like this description - trading brute force for finesse. I'm certainly not saying that I am for brute force over finesse. However, this subtitle moves the discussion down to debating verification infrastructure and tools too soon. And in my mind, the real importance of verification planning is the thought process.

My son recently graduated from high school, and I felt it was my fatherly obligation to offer him some words of wisdom that will shape the future he builds. So I told him: "Always remember, in this incredible world of automation - there is no substitute for thinking."

The same holds true for our industry. Automation can help us by providing solutions to the tedious bookkeeping aspect of the verification planning process. However, getting architects, designers, and verification engineers to all think about the problem space, and share their thoughts with each other, is really fundamental to verification success - and this cannot be automated.

With this understanding, I'll now answer the rest of your first question. The results of the verification planning process are generally described in a document referred to as the Verification Plan. This is a living document that captures the conclusions and decisions derived during the verification planning process - such as resource allocation, verification infrastructure, verification metrics objectives, completion criteria, tracking mechanisms, risk analysis, and feature (and functionality) sets that must be verified.

Craig Cochran & Rajeev Ranjan - “Brute force” in verification usually refers to massive amounts of constrained random simulation. Often, teams will determine that the verification effort is complete when they stop finding bugs using this method.

In our view, the “finesse” method to verification planning involves prioritizing the most critical functionality, determining the coverage thresholds and most appropriate verification approach for each feature, and systematically verifying each feature to the required coverage threshold. This ensures correctness where it matters most in the design, and doesn't waste additional time in brute force simulation.

Catherine Ahlschlager - As verification tools become more sophisticated, it's important for us to effectively use the right tools to address the different verification challenges. As an example, formal verification won't be able to tell us if a microprocessor can execute an assembly program correctly. But on the other hand, formal verification can easily tell us if thread starvation will ever occur in a multi-threaded processor design, which is hard to verify otherwise.

Janick Bergeron - The "finesse" refers to using modern technologies with modern methodologies. For example, Synopsys' VCS functional verification solution supports Native Testbench (NTB) technology, which allows engineers to use the built-in constrained-random stimulus generator and create powerful SystemVerilog testbenches to generate corner-case scenarios. Such scenarios are impossible to conceive manually.

Also, VCS supports native SystemVerilog Assertions (SVA), making it easier to track and find design bugs. A carefully developed verification plan should leverage the power of these advanced technologies to find more design bugs in a given time. The plan should factor use of high-quality verification IP (VIP) for standard protocols. Synopsys has a rich portfolio of VIPs in its VCS Verification Library.

New verification techniques require deployment of modern verification methodologies. For example, Synopsys partnered with ARM to develop the widely used Verification Methodology Manual (VMM) for SystemVerilog. VMM documents best practices for setting up a robust and efficient verification environment leveraging coverage-driven constrained-random techniques, assertions and formal technologies. So, it's all about working smarter and not just working harder.
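The coverage-driven constrained-random flow Bergeron describes can be illustrated in miniature: stimulus drawn at random under declarative constraints, with functional-coverage bins recording which corner cases were actually exercised. The packet fields, constraint, and bins below are invented; a real flow would use SystemVerilog's constraint solver and covergroups, not Python's `random`.

```python
# A minimal sketch of constrained-random stimulus with coverage bins.
# All fields, constraints, and bins are invented for illustration.
import random

def gen_packet(rng):
    """Draw a random packet subject to a simple declarative constraint."""
    while True:
        pkt = {"length": rng.randint(1, 64),
               "kind": rng.choice(["read", "write"])}
        # constraint: writes never exceed 32 bytes
        if not (pkt["kind"] == "write" and pkt["length"] > 32):
            return pkt

def run_regression(seed, n):
    rng = random.Random(seed)
    bins = {"short": 0, "long": 0, "max_write": 0}   # functional coverage
    for _ in range(n):
        pkt = gen_packet(rng)
        if pkt["length"] <= 8:
            bins["short"] += 1
        if pkt["length"] >= 48:
            bins["long"] += 1
        if pkt["kind"] == "write" and pkt["length"] == 32:
            bins["max_write"] += 1  # a corner a directed test might miss
    return bins

bins = run_regression(seed=1, n=2000)
assert all(hits > 0 for hits in bins.values())  # every bin eventually hit
```

The point Bergeron makes is visible even at this scale: nobody hand-wrote a "write of exactly 32 bytes" test, yet the random generator hits that boundary case repeatedly, and the bins prove it did.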

3) Isn't having to build a verification test plan just another layer of "structural obligation" that adds to the complexity of verification?

Catherine Ahlschlager - Quite the contrary; a verification test plan offers a quantitative way to measure the progress. It's the document that one references in review meetings. It forces verification engineers to think of ways to not only prove that the design works the way it should, but also how to break the design and possible corner cases relating to the same tests.

Doron Stein - If the verification test plan is regarded as a "layer," indeed this makes it a structural obligation. Yet, if the verification plan (coverage-driven) is a live, dynamic combination of a database that gets its input from a coverage matrix, as well as updated results being fed back (hopefully, automatically) from the regression, then this "verification plan" becomes the overall axis upon which the progress of the design project moves forward.

Janick Bergeron - It's an investment that will ultimately result in higher quality design. Just like it's not wise to build a chip without clear specifications, it's not wise to perform ad-hoc verification. The better the verification plan, the higher the chances of realizing a high quality design.

It used to be that the only thing you couldn't avoid was death and taxes. Now, we're getting to the point where we might have to add verification planning to the list.

Andy Piziali - No. When the verification plan was an artifact that became obsolete the day File->Save and File->Exit were selected, it could have been interpreted as pure overhead. But even at that time, substantial value was derived from simply creating the plan because the necessary specification analysis exposed bugs during the process.

Craig Cochran & Rajeev Ranjan - In fact, building a verification test plan is the most important step in verification. Stating that a verification test plan is another layer of structural obligation is like stating that producing blueprints is another layer of structural obligation in building a house. If you don't have a plan, you will never know what you are trying to achieve, or when your effort can be considered complete.

Harry Foster - As the old adage goes - those who fail to plan, plan to fail. Would you start out on a cross country tour without a map, list of key sites you want to visit, and an estimate of how long it is going to take you and how much it is going to cost?

4) Alternatively, does a verification test plan help to clarify, or does it just document which parts of the verification process should consist of formal verification, simulation, hardware acceleration, and emulation?

Janick Bergeron - A well-developed verification plan will do both. The plan is developed with inputs from all the stakeholders. Writing a verification plan up front forces the DV (design/verification) engineer to think deeply about how each feature of the design should best be tested. A verification plan helps efficiency by defining which technology will be used to verify which feature, thus, avoiding redundant verification.

Doron Stein - A verification plan can be regarded as a hierarchical list of design specifications and features that gives a different point of view onto the actual specification of the design. The mapping of those features onto verification techniques produces the actual way to verify and cover the design.

Harry Foster - As I previously stated, the analysis process identifies and clarifies the problem space and allows us to map the best solution for solving a particular problem.

Craig Cochran & Rajeev Ranjan - A verification test plan does much more than document or clarify - it directs the verification effort. It's a dynamic document which instructs the verification team where and how to apply verification technologies, and in what order, to achieve the required coverage criterion before considering the design verified.

Andy Piziali - The verification plan clarifies what needs to be verified by removing ambiguity from interpretation of the specification(s). As I described earlier, the quantification of the problem through defined metrics transforms an otherwise soft quality assurance process into a measurable process with specified milestones and goals.

One section of the verification plan (ref. attached file "Piziali BVTP 060720.ppt," slide 5), "Verification Environment Design," specifies how each feature will be verified. Note that I avoid the word "document" because it connotes recording a past event, whereas a verification plan is a specification, outlining what is to be done in the future.

5) How does the verification team evaluate the design to draw the appropriate conclusions about the appropriate mix of techniques? What are the "handles" in a design that point to the appropriate mix?

Andy Piziali - The answer to this question could really be addressed by any good book on functional verification because the answer is dependent upon the characteristics of the DUV. However, my brief response would be: A design typically has control intensive or data flow elements or both. The control intensive logic, more often than not sequential in nature, is better suited to formal analysis than the data-oriented logic. The data flow logic is better suited to verification using constrained random simulation.

Catherine Ahlschlager - This goes back to the verification test plan. After the feature list to be tested is determined, it is much easier to decide what technique is best equipped to address every verification need.

Janick Bergeron - Verification planning remains an art. These handles are based on the experience of the engineers contributing to the verification plan. Based on their experience of what worked before, what has failed before, what is new, what is old, what is risky, they identify what must be verified and how thoroughly.

Harry Foster - You begin by analyzing the problem space and understanding which solution is best suited for a particular problem. For example, components within a design that are highly concurrent are better candidates for formal verification (that is, arbiters, bus interfaces, etc.) than blocks that are sequential in nature and involve data transformations (that is, MPEG decoders, floating point units, etc.). However, it doesn't end there.

You must also consider the resources you can apply to the problem and the return-on-effort. For example, given infinite time, you could formally prove practically every good candidate block in your design. However, I've always claimed that the designer's most precious resource is “time.” Hence, you have to make tradeoffs and apply alternative strategies (for example, bug hunting using formal techniques versus full proofs).
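Foster's trade-off between bug hunting and full proofs can be made concrete as a depth-bounded search versus exhaustive reachability. The 8-state machine and the "bad" state below are invented for illustration; real bounded model checkers unroll the design symbolically rather than enumerating states.

```python
# Bug hunting (depth-bounded search) versus a full proof (exhaustive
# reachability) over an invented 8-state machine.
from collections import deque

def successors(s):
    return {(s + 1) % 8, (s + 3) % 8}

def hunt(initial, bad, k):
    """Bounded search: finds bugs within k steps, proves nothing beyond."""
    frontier, seen = {initial}, {initial}
    for depth in range(1, k + 1):
        frontier = {n for s in frontier for n in successors(s)} - seen
        if bad & frontier:
            return depth       # bug found at this depth
        seen |= frontier
    return None                # no bug *within the bound* - not a proof

def full_reach(initial):
    """Unbounded reachability: exhaustive, hence a genuine proof."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        for n in successors(s):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen

assert hunt(0, {5}, 2) is None   # shallow hunt misses the bug...
assert hunt(0, {5}, 3) == 3      # ...a deeper bound finds it
assert 5 in full_reach(0)        # the full proof settles it either way
```

The bounded run is cheaper per iteration, which is why Foster frames the choice as a return-on-effort decision when the engineer's time budget won't cover full proofs for every block.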

Doron Stein - Teams usually go through the evolution process themselves, from "traditional" simulation-based verification into more thorough verification techniques such as formal verification and emulation. This means that usually the starting point is "everything goes to simulation," and then pieces of the design are carved out for other techniques. For instance, arbiters, control blocks, etc., go to formal verification, while heavy data-path and traffic-dependent parts (or system simulation) go to emulation, etc. These carving steps are usually driven by previous in-house experience, as well as past crises and the current wish to avoid those situations.

Craig Cochran & Rajeev Ranjan - Different block types lend themselves to different verification technologies. Lawrence Loh from Jasper Design Automation wrote an excellent paper in EE Times which discusses this topic ( http://www.eetimes.com/news/design/showArticle.jhtml;jsessionid=ET0UL31NVUGACQSNDLPCKHSCJUNN2JVN?articleID=190301228).

In addition, verification teams may decide to employ more rigorous verification technologies, such as JasperGold full formal verification, on components of a design where correctness is critical, such as for newly added features, late-stage spec changes, and highly concurrent logic.

6) Does a verification plan only work in larger companies where there is a separate verification team?

Andy Piziali - No. Although the size and level of detail of a verification plan may necessarily be abbreviated in a smaller company, the plan should be used by both small and large companies. If you don't plan your verification destination and approach, you will be surprised when you arrive. We must begin with the end in mind.

Harry Foster - Verification planning is effective regardless of the company size or organization structure.

Janick Bergeron - There is always a verification plan, even in a one-engineer development team. Whether he/she writes that plan down and formalizes it, is a different story. Having a formalized verification plan is especially important for a large team as it helps keep everyone on the same page, and communicates the status to all of the stakeholders in the project.

Craig Cochran & Rajeev Ranjan - Absolutely not. All teams responsible for verification benefit from a verification plan. Jasper's customers are of all sizes, from start-ups to large enterprises, and they all use structured verification planning.

Doron Stein - It is easier to deploy a verification plan in larger companies. Yet the understanding that holding a verification plan is a better, healthier methodology is moving into smaller design groups (and companies), as well.

Catherine Ahlschlager - I believe every project, big or small, can benefit from having a sound and up-to-date verification plan.

7) If a verification plan is useful for smaller, integrated teams, who has the responsibility to design the verification plan? The system architect? The designers? The design/verification engineer?

Janick Bergeron - It's all of the above. Any stakeholder who has an interest in the correct functionality of the design has a responsibility to contribute - if only to review - the verification plan.

Harry Foster - Everyone should be involved in the process. The verification team might own the deliverable (the actual verification plan document), but everyone is involved in developing it.

Catherine Ahlschlager - My experience has been that the verification manager is responsible for the overall verification strategy and methodology. Verification engineers tend to write tests to verify micro-architecture features after the verification methodology has been identified and agreed upon.

Doron Stein - A verification plan is needed regardless of the team size. The bigger the team, the more important an accurate plan becomes.

Craig Cochran & Rajeev Ranjan - Verification plans are useful and important at all levels of an organization responsible for verification. Verification plans must support hierarchy, and enable system integrators to assemble subsystem plans into an overall system verification plan.

Andy Piziali - Although the verification plan should be owned by a lead design/verification engineer, all stakeholders in the verification outcome must contribute to its creation. They participate in early brainstorming sessions that expose the feature set of the design, creating the baseline structure from which the remainder of the plan is elaborated.

8) As designs today are quite huge, isn't it a given that the project will suffer from "inadequate" verification - even if a verification plan is in place? Can you quantify the improvements that can be made if a verification plan is in effect - in either time to completion, or numbers of errors that slip through?

Catherine Ahlschlager - Your first question rings true; verification is never done. We can try to be as thorough as we can when it comes to a verification plan, but we are only human, aren't we? That's why it is so very important to have the verification plan revisited every so often, to review it with fresh eyes.

More often than not, we might think of a new corner case inspired by recent bugs or low-coverage items. It's hard to quantify improvement made from having a verification plan, because I have yet to be involved in a project that has not had one. Nor can I imagine taping out a chip without a verification plan.

Janick Bergeron - A combination of verification plan, verification methodology, and verification technology addresses this problem. One could have the best plan, but without the right technology or the methodology the plan cannot be achieved.

In addition to planning and methodology, a high-performance solution will help. For example, one of our customers was able to cut down the regression time from 5 days to 1 day using VCS NTB. That gave the customer more time to use different verification techniques, run more verification cycles, find more design bugs in RTL, and achieve the objectives of the verification plan.

Harry Foster - My dad used to always tell me, “If you don't ask the question, you won't get the answer.” If you don't analyze the problem space (as part of the verification planning process), you won't know what questions to ask (or features to verify).

Andy Piziali - There are always trade-offs between the thoroughness of verification and delivering the product to market on time. In order to intelligently manage these trade-offs, this thoroughness is captured in what we refer to as the fidelity of the coverage models of the verification plan. A low-fidelity model is a coarse approximation to a behavioral space of the DUV and, although requiring less effort to design, requires much more simulation to fill because of its size. On the other hand, a high-fidelity model is a more precise description of the DUV behavior that results in a smaller coverage space but requires more effort to design.

By starting with low-fidelity models and incrementally refining them through the design process, the most important, most complex, and highest risk features are verified more thoroughly than the others.
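Piziali's fidelity trade-off can be illustrated with a toy cross-coverage model: the low-fidelity version is the naive full cross product of field values (cheap to define, large to fill), while the high-fidelity version prunes it to the combinations the specification actually allows (smaller, but requiring analysis effort to design). The fields and the legality rule below are invented, not taken from any real DUV.

```python
# Coverage-model fidelity as cross-product pruning. All values invented.
from itertools import product

opcodes = ["read", "write", "atomic"]
lengths = [1, 4, 8, 16, 32, 64]

# Low fidelity: cross every opcode with every length. Trivial to define,
# but large, and it demands simulation time on combinations that may
# not even be legal in the design.
low_fidelity = set(product(opcodes, lengths))

def legal(op, length):
    # High-fidelity refinement from a hypothetical spec:
    # atomics are single-beat, writes are capped at 32 bytes.
    if op == "atomic":
        return length == 1
    if op == "write":
        return length <= 32
    return True

# High fidelity: smaller space, but each exclusion required someone to
# read and understand the specification.
high_fidelity = {(op, ln) for op, ln in low_fidelity if legal(op, ln)}

print(len(low_fidelity), len(high_fidelity))  # 18 vs 12
```

Refining the model shrinks the space to fill while concentrating simulation effort on behaviors that can actually occur, which is the incremental-refinement process Piziali describes.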

Insofar as quantifying the improvements seen with the application of a verification plan, the number one improvement is in functional closure predictability. Our (Cadence) customers tell us that the uncertainty in their schedules is reduced by more than 90% by planning their functional verification and tracking progress against it. At the same time, the number of bug escapes in their products is dramatically reduced because they have rigorously quantified their verification problem, boxing in hidden bugs in fine-grained coverage models.

Your question also highlights the need for a robust verification management tool linked to the verification plan, so that engineers can easily correlate failures and progress metrics back to their plan, and management can reallocate resources - human or machine (e.g., simulations) - by analyzing the metrics with respect to the verification plan and milestones.

Craig Cochran & Rajeev Ranjan - Any design can suffer from inadequate verification - remember that the level of verification is a trade-off between confidence in correctness and tapeout schedule. Using a verification plan dramatically reduces the possibility of inadequate verification. It enables verification teams to ensure correctness where it matters most in the design - critical functionality, new chip features, late-stage spec changes, etc. - using methods such as full formal verification. Less critical functionality can be verified using less rigorous methods such as constrained-random simulation.

Doron Stein - If the verification is planned as it should be - by coverage-driven targets - then it becomes the gate to tapeout and the focus of the project. In order to achieve this end, there should be time-based planning and tracking of the verification goals and an automatic (!) reflection of the current status.

The planning of the verification, and its timeline, should be agreed upon by the entire team - architects, designers, and verification engineers. It should not be solely the verification engineers' problem. This strategy also ensures a tighter integration between the design and its verification, and hence a higher quality level at tapeout.

9) How are late-stage spec changes handled more efficiently if a verification plan is in place? Has the verification plan been laid out against considerations of the original design?

Craig Cochran & Rajeev Ranjan - Most definitely. A verification plan is updated dynamically during the verification process, tracking the level of verification performed for each chip feature. A late-stage spec change can then be evaluated against the verification plan to determine what parts of the chip should be re-verified and what methods to employ. Late-stage spec changes are usually the riskiest part of a design, and are therefore often addressed using full formal verification.

Doron Stein - If the verification plan is a function of a dynamic database, then it becomes easier to update the specification and quickly reflect the gap between the past verification goals and the new ones.

Janick Bergeron - A verification plan needs to be flexible and allow for new features, cover groups, tests, descriptions, etc., to be easily altered to accommodate late-stage spec changes. In addition to the plan, a well thought out methodology can help. For example, the VMM methodology outlines an object-oriented approach that can help create a "modular" verification environment. The object-oriented model allows for local changes without impacting the overall verification environment, or vice-versa.

Andy Piziali - If the verification plan is derived from the DUV functional specification, cross references to the spec are often inserted into the plan with each feature section. When the specification changes - as it always does - the corresponding sections of the verification plan are updated also.

When the verification plan section is updated, a verification management tool can show you what particular simulations need to be rerun in order to re-verify the logic changed because of the specification change.
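The "what must be rerun" idea reduces to a mapping from spec sections to the tests that cover them, so a spec diff selects the regressions. The sketch below is a toy illustration; the section names and test names are invented, and a real verification management tool would maintain these links automatically.

```python
# Toy sketch: the plan links spec sections to the tests that cover
# them, so a list of changed sections selects what to rerun.
# All section and test names below are hypothetical.
plan = {
    "spec 3.1 arbitration": ["arb_smoke", "arb_starvation"],
    "spec 3.2 ecc":         ["ecc_inject", "ecc_scrub"],
    "spec 4.0 power":       ["pwr_gate"],
}

changed_sections = ["spec 3.2 ecc"]   # output of a spec diff

# Collect every test linked to a changed section
rerun = sorted(t for s in changed_sections for t in plan[s])
print(rerun)  # ['ecc_inject', 'ecc_scrub']
```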

I'm not sure I understand the second question, in particular the phrase "against considerations." If you are asking whether or not the verification plan shares the same structure as the specification, sometimes yes - but usually no. The reason is that the specification is more often organized along the lines of the physical partitioning of the hardware or software components, whereas the verification plan is more useful when organized according to functional requirements.

Catherine Ahlschlager - Having a verification plan allows us to quickly identify which tests become obsolete and which need to be written. What is important is to sync up the test plan with the change; otherwise, it's hard to quantify risk and schedule with respect to late-stage spec changes.

Harry Foster - The verification plan is developed concurrently with the design. Keep in mind that the design development is different from the implementation development (that is, RTL). Also, the way we define and build our verification infrastructure has more impact on effectively verifying late-stage spec changes than the verification plan itself.

For example, modern simulation environments are modular object oriented class-based components, versus large monolithic (tightly coupled) software programs. These modern simulation environments simplify support for late-stage changes through localization. Changes can be isolated by extending base-class library verification components to accommodate changes without affecting the entire simulation ecosystem.
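The "localize the change by extending a base class" idea can be shown in miniature. The sketch below uses Python rather than a hardware verification language, and the driver, transaction fields, and parity change are all invented for illustration.

```python
# Sketch of localizing a late-stage spec change: a base verification
# component is extended, not edited, so the rest of the environment
# is untouched.  All names and fields here are hypothetical.
class Driver:
    """Base driver: turns a transaction dict into bus-level fields."""
    def drive(self, txn):
        return {"addr": txn["addr"], "data": txn["data"],
                "parity": self.parity(txn)}

    def parity(self, txn):
        # Original spec: even parity over the data bits
        return bin(txn["data"]).count("1") % 2

class OddParityDriver(Driver):
    """Late spec change: the bus now uses odd parity.  Only this
    subclass changes; environments pick it up by substituting it for
    Driver, with no edits to the surrounding ecosystem."""
    def parity(self, txn):
        return 1 - super().parity(txn)

d = OddParityDriver()
print(d.drive({"addr": 4, "data": 3}))  # parity is now 1 where the base driver gives 0
```

This substitution step is what class-factory mechanisms in modern verification methodologies automate.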

Formal verification can also assist in rapidly verifying late-stage changes for blocks that are good candidates for formal and ones we have previously chosen to prove (thus having existing formal environments we can modify).

10) Does your company endorse the idea of a verification plan? Do they sell tools to help customers put in place such a plan?

Catherine Ahlschlager - Yes, we definitely endorse the idea of a verification plan at Sun.

Doron Stein - Cisco endorses verification plans, and is constantly searching for ways to improve the verification process.

Craig Cochran & Rajeev Ranjan - Jasper Design Automation absolutely endorses the idea of a verification plan. We have developed GamePlan Verification Planner - a free tool - to assist verification teams with the creation of structured, flexible verification plans. We believe that teams that use a structured approach will benefit more from Jasper's brand of systematic formal verification.

Harry Foster - I guess this means I have to take my "Harry Foster" hat off and put my "Mentor" hat on. Yes, of course my company endorses planning for success. Mentor Graphics provides tools to assist in the bookkeeping aspect of the verification planning process - such as the Unified Coverage DataBase (UCDB) with its open API for third-party tool integration.

Andy Piziali - Cadence Design Systems emphatically endorses the use of a verification plan, in particular an executable verification plan: the vPlan. Our verification management product, Incisive Management, is bundled with vPlan examples and templates for a variety of word processors. The vPlan is read by Incisive Management and displayed in a vPlan window with up-to-date verification progress metrics and roll-ups back-annotated into the user's plan. The vPlan may be configured and instantiated along with verification IP and design IP for an integrated reuse strategy.

Each verification IP (VIP) that Cadence sells is equipped with a complete protocol compliance vPlan linked to the functional coverage models of the VIP. This allows our customers to instantiate the VIP vPlan into their master vPlan rather than creating it from scratch for the specific protocol. vPlans are critical to achieving true design, VIP, and verification plan reuse, since they allow an IP integrator to clearly understand exactly what the IP developer intended to verify.

Janick Bergeron - Verification planning is an important part of the verification process, in addition to methodology and technology. But a verification plan cannot be achieved without the right technology or the methodology. Synopsys provides the broadly adopted VMM methodology for developing robust environments and VCS NTB technology delivering up to 5 times faster verification to enable DV engineers to predictably achieve their verification plan. Stay tuned for more.

*********************

Chapter III: Closing Commentary

Rich Faris - At a high level, verification planning hasn't changed. Before the chip is built, the functions should be defined clearly in specifications at the system, chip, and module levels. The flow through the tool chain should be defined so that the right information is generated at each stage in the process, and the right tools are available.

Each major function in the specification needs to be exercised by one tool or another, until it is deemed to have high enough verification coverage. But that doesn't really tell the whole story. That's like saying that going into battle is the same now as it was when the weapons were bows and arrows, even though today's weapons are precision smart bombs. In today's wars, the concept is totally different; lots of damage can be done without even setting foot on the battlefield.

In the same way, the tools of verification have matured drastically. While the system and chip designers are still required to capture their thoughts and plans in English specifications, this isn't enough. Both for static and dynamic verification, designers should capture their expectations for the different scenarios that need to be tested, and use constraints to define these modes or scenarios.

By using PSL (Property Specification Language) or SVA (SystemVerilog Assertions) to capture the constraints that define a mode, either the design or verification engineer can write properties - assertions that the design must conform to. Capturing the constraints and properties along with the design, as part of the design and verification process, removes some of the ambiguity of relying totally on English language specifications.
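The flavor of such a property can be mimicked in plain Python over a recorded trace. The signal names and the three-cycle window below are invented for illustration; in SVA the same intent would be written directly as a temporal property such as `req |-> ##[1:3] gnt`.

```python
def check_req_gnt(trace, max_latency=3):
    """Software analogue of an assertion like 'every req must be
    followed by gnt within max_latency cycles'.  trace is a list of
    (req, gnt) samples, one pair per clock cycle."""
    failures = []
    for i, (req, _) in enumerate(trace):
        if req:
            # Look at the next max_latency cycles for a grant
            window = trace[i + 1 : i + 1 + max_latency]
            if not any(gnt for _, gnt in window):
                failures.append(i)   # cycle where the property failed
    return failures

good = [(1, 0), (0, 0), (0, 1), (0, 0)]  # gnt arrives 2 cycles after req
bad  = [(1, 0), (0, 0), (0, 0), (0, 0)]  # req is never granted

print(check_req_gnt(good))  # []
print(check_req_gnt(bad))   # [0]
```

An assertion language evaluates such properties concurrently on every clock edge, in simulation or in a formal tool, rather than post-hoc over a trace as here.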

The other part of planning that is changing is the definition of the quality of test, and deciding when enough testing is enough. For simulation there always was the concept of code coverage. Simulation-based code coverage gave the engineers some idea of how well the lines and states in their code were exercised. Now, dynamic ABV (assertion-based verification) or static formal ABV tools purport to report numbers that define some kind of quality of coverage of the properties on the given design.

Having an idea of how the coverage of the various tools in the chain will be overlaid, and how much weight to put on the different tools' numbers, is still going to be more art than science. Someday perhaps there will be a common database for the various tools in the chain to write their reports to, and this will ease the engineer's burden of merging and understanding the disparate coverage data.
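The merge the author hopes for amounts, at its simplest, to taking the union of the bins each tool has covered and reporting the remaining holes. The sketch below is purely illustrative - the tool names, bin names, and the idea of a shared bin namespace are all assumptions, since (as noted above) no such common database existed at the time.

```python
# Sketch of overlaying coverage reports from different tools.
# All bin and tool names below are invented for illustration.
sim_coverage    = {"fifo_full", "fifo_empty", "retry"}     # from simulation
formal_coverage = {"fifo_full", "overflow_checked"}        # from a formal tool

# The plan's total coverage space (hypothetical)
all_bins = {"fifo_full", "fifo_empty", "retry",
            "overflow_checked", "ecc_error"}

merged = sim_coverage | formal_coverage   # union across tools
holes  = all_bins - merged                # still-uncovered bins

print(f"covered {len(merged)}/{len(all_bins)} bins")  # covered 4/5 bins
print(sorted(holes))                                   # ['ecc_error']
```

The hard part the author identifies - how much weight a formally checked bin deserves relative to a simulated one - is precisely what this naive union glosses over.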

*********************

Chapter IV: Additional Books

Applied Formal Verification
by Harry Foster and Douglas Perry

Functional Verification Coverage Measurement and Analysis
by Andy Piziali

Writing Testbenches: Functional Verification of HDL Models
by Janick Bergeron

Hardware Verification with C++
by Mike Mintz and Robert Ekendahl

Verification Methodology Manual for SystemVerilog
by Janick Bergeron, Eduard Cerny, Alan Hunter, and Andy Nightingale

SystemVerilog for Design: A Guide to Using SystemVerilog for Hardware Design and Modeling
by Phil Moorby, Stuart Sutherland, Simon Davidmann, and Peter Flake

*********************
Chapter V: A Note of Thanks

My thanks to Francine Bacchini for her help with this article. Francine organized the original panel at DAC, chaired by Sharad Malik from Princeton University, and in recent weeks has provided additional, invaluable help in encouraging the panelists to submit their written responses to the questions.

In addition, I am grateful to everyone involved here for their contributions to this discussion.

*********************

Peggy Aycinena is Editor of EDA Confidential and a Contributing Editor to EDA Weekly.




-- Peggy Aycinena, EDACafe.com Contributing Editor.