Real Talk

Posts Tagged ‘verification’

DVCon Recap

Thursday, March 3rd, 2016

The Design and Verification Conference in Silicon Valley delivered the goods again this year. Here are some quick highlights from the show.


Graham talking to Koko and Pippa at the Oski Tech booth.

(more…)

Exposing and Eliminating X-optimism Bugs in RTL

Thursday, December 3rd, 2015

X-optimism occurs when an unknown X value is incorrectly resolved to a known value in RTL simulation. Optimism issues can be difficult to detect and debug because the X is no longer visible once the optimism occurs; the resulting functional issue may not show up at an output until many clock cycles later. X-optimism issues also appear in gate-level netlists and FPGA-based prototypes, but they are even harder to debug there. In an FPGA, limited internal visibility makes finding an X-optimism bug like looking for a needle in a haystack. In a post-synthesis netlist, the design is less familiar: the hierarchy is flattened, signal names are changed, and there is a danger that the X under consideration will be mistaken for a pessimistic node and forced to a known value that hides a functional issue.
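
To make the failure mode concrete, here is a minimal illustrative sketch of the classic if/else case (my own example, not taken from any Real Intent material):

    // X-optimism in an if/else: when 'sel' is 1'bx, RTL simulation
    // evaluates the condition as not-true and takes the else branch,
    // so 'y' emerges as a known value even though real hardware
    // could have resolved 'sel' either way.
    module mux_opt (input logic sel, input logic a, b, output logic y);
      always_comb begin
        if (sel)
          y = a;
        else
          y = b;   // optimistically chosen whenever sel is X
      end
    endmodule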

Real Intent’s Ascent XV uses static analysis to identify potential X-optimism issues at RTL so they can be fixed prior to simulation, ensuring efficient and accurate simulations. Fixing optimism issues in RTL gets netlist simulations and FPGA-based prototypes up and running faster and reduces costly iterations. (more…)

DO-254 Without Tears

Thursday, April 23rd, 2015

This article was originally published on TechDesignForums and is reproduced here by permission.

At first glance the DO-254 aviation standard, ‘Design Assurance Guideline for Airborne Electronic Hardware’, seems daunting. It defines design and verification flows tightly with regard to both implementation and traceability.

Here’s an example of the granularity within the standard: a sizeable block addresses how you write state machines, the coding style you use and the conformity of those state machines to that style.
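
For illustration only, here is the shape of a state machine written to the kind of style such a standard might require: an enumerated state type, a single registered state variable, and a default arm that recovers from illegal encodings. None of this is quoted from DO-254 itself.

    module ctrl_fsm (
      input  logic clk, rst_n, start, done
    );
      typedef enum logic [1:0] {IDLE, RUN, DONE} state_t;
      state_t state, next_state;

      // Registered state with an asynchronous reset to a safe state.
      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) state <= IDLE;
        else        state <= next_state;

      // Combinational next-state logic; the default arm traps any
      // illegal state encoding and forces recovery to IDLE.
      always_comb begin
        next_state = state;
        case (state)
          IDLE:    if (start) next_state = RUN;
          RUN:     if (done)  next_state = DONE;
          DONE:    next_state = IDLE;
          default: next_state = IDLE;
        endcase
      end
    endmodule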

This kind of stylistic, lower-level semantic requirement – and there are many within DO-254 – makes design managers stop and think. So it should. The standard is focused on aviation’s safety-critical demands, assessing the hardware design’s execution and functionality in appropriate depth right up to the consequences of a catastrophic failure.

Nevertheless, one pervasive and understandable concern has been the degree to which such a tightly-drawn standard will impact on and be compatible with established flows. This particularly goes for new entrants in avionics and its related markets.

Your company has a certain way of doing things so you inevitably wonder how easily that can be adapted and extended to meet the requirements of DO-254… or will a painful and expensive rethink be necessary? Can we realistically do this? (more…)

My Impressions of DVCon USA 2015: Lies; Experts; Art or Science?

Thursday, March 12th, 2015

Last week I attended the Design and Verification Conference in San Jose.  It had been six years since my last visit to the conference.  Before then, I had attended five years in a row, so it was interesting to see what had changed in the industry.  I focused on test bench topics, so this blog records my impressions in that area.

First, my favorite paper was “Lies, Damned Lies, and Coverage” by Mark Litterick of Verilab, which won an Honorable Mention in the Best Paper category.  Mark explained common shortcomings of coverage models implemented as SystemVerilog covergroups.  For example, a covergroup has its own sampling event, which may or may not be appropriate for the design.  If you sample when a value change does not matter to the design, the covergroup counts a value as covered when in fact it is not.  In the slides, Mark’s descriptions of common errors were pithy and, like any good observation, obvious only in retrospect.  More interestingly, he proposed correlating coverage events via the UCIS (Unified Coverage Interoperability Standard) to verify that they have the expected relationships.  For example, a particular covergroup bin count might be expected to equal the pass count of some cover property (in SystemVerilog Assertions) elsewhere in the environment, or to match some block count in code coverage.  It struck me that some aspects of this must be verifiable using formal analysis.  You can read the entire paper here and see the presentation slides here.
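
As a hedged illustration of the sampling pitfall (the covergroup below is my own sketch, not from Mark’s paper):

    module cov_sketch (input logic sample_clk, input logic [1:0] mode);
      // A covergroup with its own sampling event. If 'sample_clk'
      // ticks on cycles when the design is not actually consuming
      // 'mode', the bins below get credited for values that were
      // never really exercised: coverage that lies.
      covergroup mode_cg @(posedge sample_clk);
        coverpoint mode {
          bins idle   = {2'b00};
          bins active = {2'b01, 2'b10};
        }
      endgroup
      mode_cg cg = new();
    endmodule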

I was also impressed by the use of the C language in verification — not SystemC, but old-fashioned C itself.  Harry Foster of Mentor Graphics shared some results of his verification survey, and there were only two languages whose use had increased year over year: SystemVerilog and C.  For example, there was a Cypress paper by David Crutchfield et al. in which configuration files were processed in C.  Why is this, I wondered?  Perhaps because SystemVerilog makes it easy via the Direct Programming Interface (DPI): you can call SystemVerilog functions from C and vice versa.  Also, a lot of people know C.  I imagine if there were a Python DPI or a Perl DPI, people would use those a lot as well! (more…)
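
Here is a minimal sketch of what that DPI boundary can look like; the names are illustrative, not taken from the Cypress paper:

    module config_tb;
      // SystemVerilog side of the DPI boundary: pull a C parser into
      // the testbench ('context' because the C code calls back in).
      import "DPI-C" context function int parse_config(string filename);
      export "DPI-C" function set_param;

      // Called back from C as each name/value pair is parsed.
      function void set_param(string name, int value);
        $display("config: %s = %0d", name, value);
      endfunction

      initial
        if (parse_config("test.cfg") != 0)
          $fatal(1, "config parse failed");
    endmodule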

Reporting Happiness: Not as Easy as You Think

Sunday, January 18th, 2015

Like other successful design automation companies, we have many happy customers who use our tools.  Marketeers like me crave customer stories and comments to share with the world at large.  While an individual engineer is often happy to explain why he likes one of our tools, before his comments can be made public and ascribed to engineer X at company Y, they must pass through a gauntlet of approvals by upper management at the customer.  Often there is a “quid pro quo” in this process: to get company management to approve the quote, some benefit in the form of an additional pricing discount or extra short-term licenses is negotiated.  Or sometimes management sees Real Intent’s static solutions as one of its ‘secret weapons’ and doesn’t want to share the good news with potential competitors.

(more…)

The Evolution of RTL Lint

Thursday, November 27th, 2014

This article was originally published on TechDesignForums and is reproduced here by permission.

It’s tempting to see lint in the simplest terms: ‘I run these tools to check that my RTL code is good. The tool checks my code against accumulated knowledge, best practices and other fundamental metrics. Then I move on to more detailed analysis.’
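
As an illustrative example (mine, not a specific Ascent Lint rule) of what that accumulated knowledge catches:

    // 'q' is not assigned on every path through this combinational
    // block, so synthesis infers a latch the author almost certainly
    // did not intend; lint flags it long before synthesis does.
    module latch_trap (input logic en, d, output logic q);
      always_comb begin
        if (en)
          q = d;   // missing else branch: q silently holds its value
      end
    endmodule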

It’s an inherent advantage of automation that it allows us to see and define processes in such a straightforward way. It offers control over the complexity of the design flow. We divide and conquer. We know what we are doing.

Yet linting has evolved and continues to do so. It now covers more than just code checking. We began by verifying the ‘how’ of the RTL, but we have moved on to the ‘what’ and the ‘why’. We use linting today to identify and confirm the intent of the design.

A lint tool, like our own Ascent Lint, is today one of the components of early stage functional verification rather than a precursor to it, as was once the case.

At Real Intent, we have developed this three-stage process for verifying RTL: (more…)

Video Keynote: New Methodologies Drive EDA Revenue Growth

Thursday, August 21st, 2014

Wally Rhines from Mentor gave an excellent keynote at the 51st Design Automation Conference on how EDA grows by solving new problems.  In his short talk, he references an earlier keynote he gave back in 2004 and what has changed in the EDA industry since that time.

Here is a quick quote from his presentation: “Our capability in EDA today is largely focused on being able to verify that a chip does what it’s supposed to do. The problem of verifying that it doesn’t do anything it’s NOT supposed to do is a much more difficult one, a bigger one, but one for which governments and corporations would pay billions of dollars to even partially solve.”

Where do you think future growth will come in EDA?

(more…)

Static Verification Leads to New Age of SoC Design

Thursday, July 3rd, 2014

SoC companies are coming to rely on RTL sign-off of many verification objectives as a means to achieve a sensible division of labor between their RTL design team and their system-level verification team. Given the sign-off expectation, the verification of those objectives at the RT level must absolutely be comprehensive.

Increasingly, sign-off at the RTL level can be accomplished using static-verification technologies. Static verification stands on two pillars: Deep Semantic Analysis and Formal Methods. With the judicious synthesis of these two, the need for dynamic analysis (a euphemism for simulation) gets pushed to the margins. To be sure, dynamic analysis continues to have a role, but increasingly as a backstop rather than the main thrust of the verification flow. Even where simulation is used, static methods play an important role in improving its efficacy.

Deep Semantic Analysis is about understanding the purpose or role of RTL structures (logic, flip-flops, state machines, etc.) in a design in the context of the verification objective being addressed. This type of intelligence is at the core of everything that Real Intent does, to the extent that it is even ingrained into the company’s name. Much of sign-off happens based just on the deep semantic intelligence in Real Intent’s tools without the invocation of classical formal analysis.

(more…)

SoC CDC Verification Needs a Smarter Hierarchical Approach

Thursday, June 19th, 2014

This article was originally published on TechDesignForums and is reproduced here by permission.

Thanks to the widespread reuse of intellectual property (IP) blocks and the difficulty of distributing a system-wide clock across an entire device, today’s system-on-chip (SoC) designs use a large number of clock domains that run asynchronously to each other. A design involving hundreds of millions of transistors can easily incorporate 50 or more clock domains and hundreds of thousands of signals that cross between them.

Although the use of smaller individual clock domains helps improve verification of subsystems apart from the context of the full SoC, the checks required to ensure that the full SoC meets its timing constraints have become increasingly time consuming.

Signals involved in clock domain crossing (CDC), for example where a flip-flop driven by one clock feeds data to a flop driven by a different clock, raise the potential issue of metastability and data loss. Tools based on static verification technology exist to perform CDC checks and to recommend more robust synchronizers or other changes that remove the risk of metastability and data loss.
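
For a single-bit crossing, the usual remedy such tools recommend is a two-flop synchronizer. Here is a minimal illustrative sketch; multi-bit buses need handshakes or asynchronous FIFOs instead:

    module sync_2ff (
      input  logic clk_dst,   // destination-domain clock
      input  logic d_async,   // level signal from the source domain
      output logic q_sync     // safe to use in the destination domain
    );
      logic meta;
      always_ff @(posedge clk_dst) begin
        meta   <= d_async;    // first flop may go metastable
        q_sync <= meta;       // second flop lets it settle
      end
    endmodule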

(more…)
