The Breker Trekker

If You’re Not Measuring System Coverage, Your SoC Is at Risk
August 19th, 2013 by Tom Anderson, VP of Marketing

Tom Anderson is vice president of Marketing for Breker Verification Systems. He previously served as Product Management Group Director for Advanced Verification Solutions at Cadence, Technical Marketing Director in the Verification Group at Synopsys, and Vice President of Applications Engineering at …
No SoC verification engineer worthy of the title would argue that coverage is unimportant. Even back in the 1980s, before commercial coverage tools and industry standards were available, leading ASIC teams manually added coverage code into their testbenches. They checked that key state machines visited all legal states or made all legal transitions, or that a processor executed all opcodes in its instruction set, over the course of a simulation test. Verification teams who ignored coverage in those days were at risk of letting bugs slip through to silicon. The old maxim “if you don’t verify it, it’s broken” summed the situation up well. Today, leading SoC teams have adopted system coverage. Those who are ignoring this aspect of coverage are at risk of letting serious system-level bugs slip through. Let’s talk about system coverage and why it’s different from other metrics in use today.
First of all, let’s give credit to verification engineers in general for understanding the importance of coverage. Even their most basic approach, a manual test plan in which features are checked off as they are run in simulation, qualifies as a form of coverage. The next step is usually code coverage, in which the simulator or an attached tool tracks which lines, blocks, expressions, and so on have been exercised. Software engineers pioneered the concept of code coverage, but many hardware teams have embraced it as well. Most verification engineers would argue that 100% code coverage is no guarantee of correctness, but that incomplete code coverage points to missing tests and therefore to risk. Once again: if you don’t verify it, it’s broken.

The advent of constrained-random stimulus generation demanded a new form of coverage metric, since there was no longer a direct correlation between a test case and the features being verified. Dedicated verification languages such as e and Vera introduced coverage points and coverage groups, constructs placed in the testbench by the verification team expressly to gather functional coverage. In some ways this was a return to the 1980s-era manual coverage methods, although with much more sophisticated language support available.
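As a rough illustration, here is a minimal Python sketch of the idea behind a coverage point. The CoverPoint class, its bin values, and the burst-length example are invented for this post and are not the actual e or Vera constructs; the point is simply that we declare the values we care about up front, sample whatever the constrained-random stimulus happens to generate, and report the holes.

```python
import random

class CoverPoint:
    """Toy coverage point: a named set of bins with hit counts."""
    def __init__(self, name, bins):
        self.name = name
        self.hits = {b: 0 for b in bins}   # bin label -> hit count

    def sample(self, value):
        if value in self.hits:
            self.hits[value] += 1

    def report(self):
        covered = sum(1 for n in self.hits.values() if n > 0)
        print(f"{self.name}: {covered}/{len(self.hits)} bins covered")
        for label, n in self.hits.items():
            if n == 0:
                print(f"  HOLE: bin {label} never hit")

# Constrained-random stimulus: burst lengths are chosen at random, and the
# coverage point tells us whether every length of interest was exercised.
burst_len = CoverPoint("burst_length", bins=[1, 2, 4, 8, 16])
for _ in range(20):
    burst_len.sample(random.choice([1, 2, 4, 8]))  # 16 is never generated
burst_len.report()  # flags the 16-beat burst as a coverage hole
```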
Functional coverage has been the state of the art for more than fifteen years now, and it has served us well. But it’s important to recognize its limitations. It’s used extensively at the IP block and SoC subsystem level, especially by adherents to the Universal Verification Methodology (UVM) standard. In Breker terms, we would say that functional coverage can be effective at verifying driver scenarios, for example whether an IP block can read data from memory, transform it somehow, and write it back. We have seen at customers time and again that functional coverage is neither effective for, nor actually used to measure, application scenarios. We often cite the example of a digital camera, in which one application scenario might entail:

- Capturing an image from the camera’s sensor
- Encoding the captured image
- Saving the encoded image to an SD card

Application scenarios are realistic, system-level sequences of functionality that reflect how an end user would actually run applications on the SoC. The act of acquiring an image, encoding it, and saving it to an SD card can be understood by anyone using a digital camera. The act of reading an image back off the SD card, decoding it, and displaying it on the camera’s screen is another example. These scenarios must be verified in order to reduce the risk of bugs that would compromise the user’s experience.

True system coverage scales beyond functional coverage to measure whether all application scenarios, and all important variations on those scenarios, are exercised using self-verifying test cases. Even more important, any possible concurrent application scenarios must be verified. If the digital camera is embedded in a smart phone, will an image be captured correctly if a new email message arrives and the phone rings all at the same time? It turns out that Breker’s graph-based scenario models are ideal for establishing, measuring, and refining system coverage, including concurrent applications. Upcoming blog posts will provide details on how and why this works so well.
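To make the idea more concrete, here is a toy Python sketch of what a graph-based scenario model might look like. The graph, node names, and exercised-path set below are purely hypothetical and are not how TrekSoC actually represents scenarios; they just illustrate how a scenario graph defines the full space of application scenarios, so that coverage holes can be reported against that space rather than against hand-written coverage points.

```python
# Hypothetical camera-SoC scenario graph: nodes are operations, edges are
# legal orderings, and every start-to-done path is one application scenario.
GRAPH = {
    "start":   ["capture", "read_sd"],
    "capture": ["encode"],
    "encode":  ["save_sd"],
    "read_sd": ["decode"],
    "decode":  ["display"],
    "save_sd": ["done"],
    "display": ["done"],
}

def all_scenarios(graph, node="start", path=None):
    """Enumerate every start-to-done path: the full scenario space."""
    path = (path or []) + [node]
    if node == "done":
        return [tuple(path)]
    scenarios = []
    for nxt in graph[node]:
        scenarios.extend(all_scenarios(graph, nxt, path))
    return scenarios

# Pretend these are the paths our self-verifying test cases actually ran.
exercised = {("start", "capture", "encode", "save_sd", "done")}

space = set(all_scenarios(GRAPH))
holes = space - exercised
print(f"system coverage: {len(space - holes)}/{len(space)} scenarios")
for path in sorted(holes):
    print("  MISSING:", " -> ".join(path))  # the read/decode/display path
```

Concurrent scenarios would extend this by treating interleavings of two such paths, the camera capture running while an email arrives and the phone rings, as points in the coverage space, which is where the scenario count, and the value of automation, grows quickly.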
In the meantime, please comment on what value you see (or don’t see) in system coverage as we’ve defined it.

Tom A.

The truth is out there … sometimes it’s in a blog.

Tags: Breker, code coverage, concurrency, coverage, functional coverage, functional verification, graph, scenario model, SoC verification, system coverage, TrekSoC

2 Responses to “If You’re Not Measuring System Coverage, Your SoC Is at Risk”

Tom,
I agree with you that coverage is mandatory.

But how do we still make sure nothing is broken, even with 100% coverage, due to missing checkers, incomplete checkers, or improper checkers?
Regards
Jebin
Jebin,
You’ve hit on a very important point: coverage without checks means nothing. Coverage is only valid if the test case that generates the coverage also checks for correct behavior. One of the coolest things about our graph-based scenario models is that they generate stimulus, checking, and coverage all at the same time, and all tied together. We’ll be covering this in much more detail in future blog posts, but for now let me assure you that we would never take credit for coverage that did not result from self-verifying test cases.
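To hint at how that fits together, here is a continuation of the earlier toy Python sketch. The step table and check strings are invented for illustration, not taken from our tools; the point is only that one walk of the scenario graph produces the stimulus, the expected check, and the coverage credit as a single unit.

```python
# Hypothetical per-step behavior: (stimulus to apply, expected check).
STEPS = {
    "capture": ("trigger sensor DMA", "frame buffer valid"),
    "encode":  ("start JPEG engine",  "bitstream CRC matches"),
    "save_sd": ("issue SD write",     "readback compares equal"),
}

def run_scenario(path, coverage):
    """Walk one path: apply stimulus, check the result, then credit coverage."""
    for step in path:
        if step in ("start", "done"):
            continue
        stimulus, check = STEPS[step]
        print(f"apply: {stimulus}; check: {check}")
        # Coverage is credited only together with its check, never on its own.
        coverage.add(step)

covered = set()
run_scenario(("start", "capture", "encode", "save_sd", "done"), covered)
print("covered steps:", sorted(covered))
```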
Tom A.