 Real Talk

Archive for April, 2011

Getting You Closer to Verification Closure

Tuesday, April 26th, 2011

Today’s leading-edge designs are verified by sophisticated and diverse verification environments, the complexity of which often rivals or exceeds that of the design itself.  Despite advancements in the area of stimulus generation and coverage, existing tools provide no comprehensive, objective measurement of the quality of your verification environment.  They do not tell you how good your testbench is at propagating the effects of bugs to observable outputs or at detecting the presence of bugs.  The result is that decisions about when you are done verifying are often based on partial data or “gut feel” assessments.  Clearly, verification environments need some verifying of their own in order to measure and improve the quality of verification. This is why SpringSoft has developed the Certitude™ Functional Qualification System.

Certitude is the only solution to provide an objective measure of the quality of your verification environment and guidance on how to improve it. Certitude injects potential bugs into your design and evaluates the ability of your verification environment to catch them. It completely analyzes whether the potential bugs are activated, propagated to observable outputs, and detected by your environment, thus identifying whether you need to improve your tests, assertions or checkers.  The result is higher confidence in your verification results and improved design quality.  Certitude provides both guidance and a means of measuring progress throughout the functional verification closure process, and critical data points to support your signoff decisions.

Learn More!

On May 5th, SpringSoft and Real Intent will be co-hosting a seminar on The Latest Advances in System-on-Chip Functional Verification Sign-off.  Join us there, where we will demonstrate how the Certitude system integrates easily with your existing simulation environment and applies these patented techniques to provide comprehensive, objective feedback on the quality of your verification environment and how to improve it.  We will show how recent technical advances, such as an improved fault prioritization algorithm and enhanced fault-dropping techniques, enable Certitude to quickly find the most serious deficiencies in your environment with a minimum of simulation resources.  We will also demonstrate how the tight integration with Verdi™ and the new Fault Impact Ranking engine minimize the analysis and debug effort required to understand and fix these problems.

X-verification: Conquering the “Unknown”

Monday, April 11th, 2011

As Craig Cochran so eloquently put it in the previous blog article, “SoCs today are highly integrated, employing many disparate types of IP, running at different clock rates with different power requirements. Understanding the new failure modes that arise from confluences of all these complications, as well as how to prevent them and achieve sign-off, is important.” As an example, Clock Domain Crossing issues are becoming a very big concern with all of this integration, but comprehensive tools like Meridian CDC enable sign-off with confidence.  However, another issue bubbling up as a problem desperately in need of a solution is “X-verification”. While the issue of handling “X’s” in verification has always existed, it has been exacerbated by low-power applications that routinely turn off sections of chips, generating “unknowns”!

The “unknown” as it is called in digital design, is represented as an “X” logic level.  This means that the signal might actually take on a value of “1”, “0”, or “Z” in 4-state logic.  X values have existed in logic design forever, and are commonly used to represent the state of uninitialized signals, such as nets that are not driven, or storage elements that have no reset. “X-propagation” occurs when one of these X values feeds downstream logic, causing additional unknowns.  For example, as shown below, when signal ‘a’ is an unknown value, that unknown value is sometimes, but not always, propagated to the output.

 

assign y = a && b;

a   b   y
0   0   0
0   1   0
1   0   0
1   1   1
x   0   0
x   1   x
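The resolution shown in the table can be reproduced with a small Python model of 4-state logical AND. This is an illustrative sketch of the simulation semantics only (the function name `and4` is ours, not any simulator's API); 'z' is omitted because logical operators treat it like 'x':

```python
# 4-state values are modeled as the strings '0', '1', and 'x'.
def and4(a, b):
    """Model Verilog's logical AND over 4-state values."""
    if a == '0' or b == '0':
        return '0'          # a controlling 0 forces the result to 0
    if a == '1' and b == '1':
        return '1'
    return 'x'              # any remaining unknown makes the result unknown

# Reproduce the table for y = a && b:
for a, b in [('0', '0'), ('0', '1'), ('1', '0'),
             ('1', '1'), ('x', '0'), ('x', '1')]:
    print(a, b, and4(a, b))
```

Note the key asymmetry: an X on one input is harmless when the other input is a controlling 0, but propagates to the output otherwise.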

X’s also take on a beneficial role in both synthesis and verification. Explicit assignments to an X value can signify a “don’t care” condition that grants synthesis tools greater flexibility to optimize the generated logic. The X value is also used in verification to flag illegal states, created by problems such as bus contention. Automatic formal checking tools like Ascent IIV can use these assignments to check that the illegal state cannot be reached.
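The "don't care" idea can be illustrated in a few lines of Python (the names `SPEC`, `impl_or`, and `matches_spec` are hypothetical, chosen only for this sketch): a don't-care row in the specification leaves synthesis free to pick either output, so more than one implementation is legal.

```python
# Spec for a 2-input function: None marks a row where the designer wrote
# an explicit X ("don't care"), leaving synthesis free to choose 0 or 1.
SPEC = {
    (0, 0): 0,
    (0, 1): 1,
    (1, 0): 1,
    (1, 1): None,   # don't care: either implementation below is legal
}

def impl_or(a, b):      # one legal implementation: a | b
    return a | b

def impl_xor(a, b):     # another legal implementation: a ^ b
    return a ^ b

def matches_spec(impl):
    """An implementation is legal if it agrees on every care entry."""
    return all(impl(a, b) == out
               for (a, b), out in SPEC.items() if out is not None)

print(matches_spec(impl_or), matches_spec(impl_xor))  # True True
```

Both implementations satisfy the spec because they agree wherever the output is defined; the don't-care row is exactly the freedom a synthesis tool exploits to minimize logic.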

Unfortunately, X’s can also mask functional bugs in the RTL due to an X-propagation interpretation hazard known as “X-optimism”.  X-optimism is a simulation behavior that incorrectly transforms an unknown value into a known value.  “If-else” statements and “case” statements can be X-optimistic when the condition evaluates to an X value. Simulation semantics do not propagate the X value but rather translate it to a known value.  The fact that the condition was unknown is no longer visible; it is hidden in a way that makes the X-propagation elusive. Here is an example:

// if-else conditionals

reg out_1;

always @(*)
begin
  if (condition)
    out_1 = 1'b1;
  else
    out_1 = 1'b0;
end

 

condition | out_1
=================
    1     |   1
    0     |   0
    x     |   0

When condition is 1'b1, the output is 1'b1, and when condition is 1'b0, the output is 1'b0.  But notice what happens when condition is an X value.  Here the X value is an “unknown”, but the output is translated to a 1'b0, and the unknown X is now masquerading as though it were definitively a 1'b0, when in fact it could have been a 1'b1 or a 1'b0, depending on how it is synthesized into gates.
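This optimistic translation can be mimicked in Python (a sketch of the semantics, with function names of our own choosing): in simulation, only a known '1' takes the true branch, so '0' and 'x' alike fall through to the else branch.

```python
def if_else_optimistic(condition):
    """Model the X-optimistic if-else: only a known '1' takes the true
    branch; '0', 'x', and 'z' all fall through to the else branch."""
    if condition == '1':
        return '1'
    return '0'   # the unknown is silently translated to a known 0

def if_else_accurate(condition):
    """What an X-accurate interpretation would report instead."""
    if condition == '1':
        return '1'
    if condition == '0':
        return '0'
    return 'x'   # preserve the unknown

print(if_else_optimistic('x'))  # '0' -- the X is masked
print(if_else_accurate('x'))    # 'x' -- the X stays visible
```

The gap between the two functions on an 'x' input is precisely the interpretation hazard: the optimistic result is indistinguishable from a genuine logic 0 downstream.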

While X-optimism bugs can be detected in gate-level simulation, it is slow and painful to debug there. X-optimism may also be innocuous, but still lead to differences between RTL and gate-level simulation that must be painstakingly resolved in order to achieve sign-off.

There are capabilities of existing tools that can help with X-verification. For example, RTL analysis tools like Ascent Lint will identify X assignments. Automatic formal tools such as Ascent IIV take it a step further and can verify that designated “illegal” states cannot be reached, thereby verifying that the X value will not propagate. While highly useful, this covers a relatively small percentage of the X’s that might exist. In addition, four-state formal verification tools allow you to write explicit assertions to confirm that an X value cannot propagate to a specified point.  However, this requires knowledge of assertion languages, the ability to completely specify the applicable behavior of the inputs, and advance knowledge of every point in the design that must be covered by an assertion, which is highly impractical.

X-verification sign-off is not an easy problem to solve, because the mere existence of X values is not an issue. The issue is that hazardous X propagation is often elusive because it is transformed by X-optimism into supposedly known values. Moreover, X-optimism is an insidious and intermittent problem because it only becomes an issue if the X-optimized signal is being used in the design when the optimism occurs.  The functional issue that results may not be detectable for many clocks after the X-optimism occurrence, and there may be multiple sources of X in its fan-in, making root cause analysis very difficult.  Adding to that, if debug occurs at the gate level, simulations are very slow and the logic is not as readable as the original RTL.

What is needed is a comprehensive solution built on the existing RTL verification infrastructure that detects when the propagation of X values masks functional bugs.  Real Intent is developing just such a solution, called Ascent XV. Join us at our joint seminar with SpringSoft entitled “Latest Advances in Verification Sign-off” (sign-up at http://www.springsoft.com/ri-ss-seminar) for details on Real Intent’s comprehensive solution to the X-Verification problem.  Ascent XV is conquering the “unknown” so designers can sign-off with confidence.

Learn About the Latest Advances in Verification Sign-off!

Tuesday, April 5th, 2011

If you’ve been reading this blog for a while, you know that the industry is seeing big and rapid changes to the Verification Sign-off process. Simulation and Static Timing Analysis are not enough anymore! SoCs today are highly integrated, employing many disparate types of IP, running at different clock rates with different power requirements. Understanding the new failure modes that arise from confluences of all these complications, as well as how to prevent them and achieve sign-off, is important.

Fortunately, Real Intent and SpringSoft have teamed up to offer a free joint seminar at TechMart in Santa Clara on May 5, 2011, titled “The Latest Advances in Verification Sign-off”. The seminar features User Case Studies from Broadcom and Mindspeed, technical sessions on hot topics such as Clock Domain Crossing (CDC) Sign-off, Verification Closure, X-Propagation Verification, and efficient SystemVerilog Testbench development, and a keynote address by Anant Agrawal, Chairman of Verayo, Inc., and a founding member of the SPARC processor team at Sun Microsystems.

Lunch will be served before the keynote, and at the conclusion of the seminar, a very nice gift will be given away in a drawing. Registration is free, so sign up now at http://www.springsoft.com/ri-ss-seminar.

To tempt you a little further, here are abstracts of the technical sessions:

1. You are doing CDC verification, but have you achieved CDC Sign-off?

The trends toward SoC integration and multi-core chip design are driving an exponential increase in the complexity of clock architectures. Functionality that was traditionally distributed among multiple chips is now integrated into a single chip. As a result, the number of clock domains is dramatically increasing, making Clock Domain Crossing (CDC) verification much more complex and an absolute must-have in the verification flow.

However, doing CDC verification doesn’t mean you have achieved CDC sign-off. Lint-based CDC analysis identifies potential synchronization issues and risky CDC structures, but it does not guarantee that a CDC bug will not slip through to silicon. A systematic CDC verification methodology that applies different CDC verification technologies in a layered approach needs to be in place in order to achieve robust CDC designs and final CDC sign-off.

This presentation discusses what it means to achieve CDC sign-off, highlights the necessary steps in a CDC verification methodology that supports CDC sign-off, and uses customer experiences to showcase the real-life success of such a methodology. With this knowledge, you won’t be just doing CDC verification, but achieving CDC sign-off!

2. Don’t Let the X-Bugs Bite: Signing off on X-Verification

Designers spend many, many hours verifying that RTL provides the correct functionality. The expectation is that gate-level simulation produces the same results as RTL simulation.  X-propagation is a major cause of differences between gate-level and RTL simulation results, and these issues are not detected by logical equivalence checkers. Unfortunately, while most X’s are innocuous at the RTL level, they can also mask functional bugs.  Resolving gate-level simulation differences is painful and time consuming because X’s make correlation between the two difficult.  “X-prop” issues cause costly iterations, painful debug, and sometimes allow X-related functional bugs to slip through.  This presentation explains the common sources of X’s, shows how they can mask real functional issues, and explains why they are difficult to avoid. It also presents a unique, practical solution to assist designers in catching X-propagation bugs efficiently at RTL, avoiding iterations that delay sign-off.

3. SystemVerilog Testbench – Innovative Efficiencies for Understanding Your Testbench Behavior

The adoption of SystemVerilog as the core of a modern constrained-random verification environment is ever-increasing.  The automation and sophisticated stimulus and checking capabilities are a large reason why.  The supporting standard libraries and methodologies that have emerged have made the case for adoption even stronger, and all the major simulators now support the language nearly 100%.  A major consideration in verification is debugging, and naturally, debug tools have to extend and innovate around the language.  Because the language is object-oriented and more software-like, the standard techniques that have helped with HDL-based debug no longer apply.  For example, event-based signal dumping provides unlimited visibility into the behavior of an HDL-based environment; unfortunately, such straightforward dumping is not exactly meaningful for SystemVerilog testbenches.  Innovation is necessary.

This seminar will discuss the use of message logging and how to leverage the transactional nature of OVM- and UVM-based SystemVerilog testbenches to automatically record transaction data.  We’ll show you how this data can be viewed in a waveform or a sequence diagram to give you a clearer picture of the functional behavior of the testbench.  For more detailed visibility into testbench execution, we will also discuss emerging technologies that allow you to dump dynamic object data and view it in innovative ways, as well as to use this same data to drive other applications, such as simulation-free virtual interactive capability.

4. Getting You Closer to Verification Closure

Techniques for Assessing and Improving Your Verification Environment

Today’s leading-edge designs are verified by sophisticated and diverse verification environments, the complexity of which often rivals or exceeds that of the design itself.  Despite advancements in the area of stimulus generation and coverage, existing techniques provide no comprehensive, objective measurement of the quality of your verification environment.  They do not tell you how good your testbench is at propagating the effects of bugs to observable outputs or detecting the presence of bugs.  The result is that decisions about when you are “done” verifying are often based on partial data or “gut feel” assessments.  These shortcomings have led to the development of a new approach, known as Functional Qualification, which provides both an objective measure of the quality of your verification environment and guidance on how to improve it.

This seminar provides background information on mutation-based techniques – the technology behind Functional Qualification – and how they are applied to assess the quality of your verification environment. We’ll discuss the problems and weaknesses that Functional Qualification exposes and how they translate into fixes and improvements that give you more confidence in the effectiveness of your verification efforts.

 

Get a jump on DAC and find out what’s happening in the world of verification closure and sign-off! Or, if you can’t make it to DAC this year, this is your chance to learn this year’s hot topics. Either way, it’s a great opportunity to learn from the experts for free.
