
Reset Expectations with X-Propagation Analysis

June 25th, 2015 by Lisa Piper, Senior Technical Marketing Manager at Real Intent

The propagation of unknown (“X”) states has become a more pressing issue with the move toward billion-gate SoC designs, and especially so with power-managed SoC designs. The SystemVerilog standard defines an X as an “unknown” value used to represent the state in which simulation cannot definitively resolve a signal to a “1,” a “0,” or a “Z.”
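To see where such Xs come from, consider a minimal, hypothetical sketch (module and signal names are illustrative, not from the article): a flop with no reset starts simulation at X and drives that unknown onto its output until a known value is clocked in.

module x_source (
  input  logic clk,
  input  logic d,
  output logic q
);
  // 4-state 'logic' starts at X; with no reset, the flop stays X until
  // the first clock edge samples a known value on d.
  logic state;
  always_ff @(posedge clk)
    state <= d;
  assign q = state;
endmodule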

Synthesis, on the other hand, defines an X as a “don’t care,” enabling greater flexibility and optimization. Unfortunately, Verilog RTL simulation semantics often mask the propagation of an X value, while gate-level simulations show additional Xs that will not exist in real hardware.
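A small, hypothetical decoder fragment illustrates the divergence; the names are mine, not from the article. The default assignment of 'x is a don't-care that synthesis is free to exploit for optimization, yet RTL simulation will propagate a literal X if sel ever takes the default value.

module dontcare_demo (
  input  logic [1:0] sel,
  input  logic       a, b, c,
  output logic       y
);
  always_comb begin
    case (sel)
      2'b00:   y = a;
      2'b01:   y = b;
      2'b10:   y = c;
      default: y = 'x;  // don't-care for synthesis, unknown for simulation
    endcase
  end
endmodule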

The sheer complexity and common use of power management schemes increase the likelihood of an unknown “X” state in the design translating into a functional bug in the final chip. This possibility has been the subject of two technical presentations at the Design and Verification Conference during the last couple of years: “I’m Still in Love With My X! But, Do I Want My X to Be an Optimist, a Pessimist, or Eliminated?” and “X-Propagation Woes: Masking Bugs at RTL and Unnecessary Debug at the Netlist.” Let’s look more closely at this issue and the requirements for a solution.

Billion-gate designs have millions of flip-flops to initialize. Many of the IP blocks used in such designs also have their own initialization schemes. It is neither practical nor desirable to wire a reset signal to every single flop. It makes more sense to route resets to an optimal minimum set of flops that will initialize the remaining flops in the design. This approach poses a significant RTL coding challenge: the designer must ensure that the known values propagate, not unknown values masked by optimism.
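A minimal sketch of this style, with illustrative names: only the first stage of the pipeline below carries an explicit reset, and the known value it drives initializes the downstream flops over the next two clock cycles.

module pipe3 (
  input  logic       clk,
  input  logic       rst_n,
  input  logic [7:0] din,
  output logic [7:0] dout
);
  logic [7:0] stage1, stage2;
  // Only stage1 is reset explicitly.
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) stage1 <= '0;
    else        stage1 <= din;
  // stage2 and dout pick up known values one and two cycles after reset.
  always_ff @(posedge clk) stage2 <= stage1;
  always_ff @(posedge clk) dout   <= stage2;
endmodule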

The analysis of any system with a complex initialization scheme is bound to identify many Xs. The issue is in knowing which ones matter, because dealing with unnecessary Xs wastes time and resources. However, missing an X state that does matter can increase the likelihood of late-stage debug, cause insidious functional failures, and, ultimately, necessitate respins.

Today’s power schemes further complicate the analysis of X issues. Coming out of a suspended state, initialization may occur via a reset signal, from retention flops, or by signal propagation. Besides the power schemes, the interaction between blocks must also be considered in any analysis, because it impacts what occurs on the inputs to the blocks.

Two simulation behaviors are working against designers — X-optimism and X-pessimism. X-optimism is primarily associated with RTL simulation and is caused by the limitations of HDL simulation semantics. It occurs when a simulator converts an X state into a 0 or a 1, creating the risk that an X causes a functional failure that will be missed in RTL simulation.
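The classic case is an if-else whose condition is unknown; the module below is a hypothetical sketch, not code from the article.

module optimism_demo (
  input  logic enable, data_a, data_b,
  output logic q
);
  // If enable is X, RTL simulation treats the condition as false and
  // silently selects data_b; real hardware could resolve either way.
  always_comb begin
    if (enable) q = data_a;
    else        q = data_b;
  end
endmodule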

X-pessimism is primarily associated with gate-level simulation of netlists. As its name suggests, it happens when legitimate 0s or 1s are converted into an X state. This can lead to precious debug resources being directed toward unnecessary effort. Additionally, after synthesis has done its work, debug at the gate level is more challenging than in RTL.
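A common illustration, again with hypothetical names, is a mux described with AND/OR gates. With sel unknown but both data inputs at 1, the real hardware output is 1, yet the gate-level expression evaluates to X and triggers needless debug.

module pessimism_demo (
  input  logic a, b, sel,
  output logic y
);
  // With sel = X and a = b = 1: (1 & X) | (1 & X) evaluates to X in
  // simulation, even though silicon would output 1 for either value of sel.
  assign y = (a & sel) | (b & ~sel);
endmodule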

Some engineers point out that we have always had to deal with Xs and that nothing has really changed. However, today’s SoCs employ power management schemes that wake up or suspend IP blocks, and that changes the picture. When coming out of a low-power state, Xs that would propagate to X-sensitive logic must be cleared on reset or within a specific, short number of cycles afterward.
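One way to check such a requirement in simulation is an SVA property along these lines; the signal names and the four-cycle window are assumptions for illustration, and a checker like this could be bound to the block of interest.

module x_clear_check (
  input logic       clk,
  input logic       rst_n,
  input logic [3:0] ctrl_state
);
  // Require the control state to be free of Xs within four cycles of
  // reset deassertion.
  property p_x_cleared;
    @(posedge clk) $rose(rst_n) |-> ##[0:4] !$isunknown(ctrl_state);
  endproperty
  a_x_cleared: assert property (p_x_cleared);
endmodule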

The situation is now much more uncertain for designers who must determine whether all possible power scenarios are considered and whether all Xs will be cleared correctly. Simply throwing a switch on a simulator to make all of the control logic X-safe and eliminate the optimistic behavior is a possible solution, but there are drawbacks.

First, simulation throughput will drop dramatically, and runtimes will grow by at least 3x. Second, some design logic is sensitive to X-optimism and some is not; a brute-force approach leaves designers to decide on their own whether they are seeing a real problem. Finally, there is no industry standard for X-safe simulation, so the approach taken by one vendor’s tool likely will not be available in another’s.

The temptation is to supply a reset to every flop in the design, but this is costly in routing resources and power. Ideally, a static tool could analyze the reset scheme of a design and then suggest the minimum subset of flops that need reset lines. The focus of the analysis is on the initialization phase, because that is where Xs are most prevalent; propagating known values during initialization reduces the number of unknowns.

Figure 1. Analysis and optimization of the power-on and reset of digital designs has several phases.

What is required is a resetability analysis and optimization report that details which flops will be reset within a defined number of cycles, and which ones will not. Further, it should suggest which additional flops need to be reset and recommend which flops have resets that can be removed. There is also a need for a similar analysis for retention cells that retain the system state when moving in and out of suspended operation.

For SoC-scale designs, a fast static analysis is needed to determine where in X-sensitive logic an X might be propagated. This approach quickly gives designers the necessary information to decide which Xs will not cause problems, such as those in a data path, and lets them focus on the important subset of Xs that do need attention.

This is especially important for large designs, where designers can easily be overwhelmed by simulation results. Besides a text report, a schematic view that shows how an X can propagate to the suspect logic helps designers judge the importance of the sensitivity. They can then decide whether to eliminate the X source or code for X-accuracy. The key is having to address only specific points of concern, not every construct in the design. And the key to that is static analysis, which can deliver results in hours rather than days.
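The two remedies can be sketched side by side; the module below is illustrative only, with hypothetical names. One option removes the X at its source by resetting the control flop; the other codes for X-accuracy so that an unexpected unknown is reported instead of being silently resolved by optimism.

module x_fix_demo (
  input  logic clk, rst_n,
  input  logic next_mode, a, b,
  output logic y
);
  logic mode_sel;
  // Eliminate the X source: give the control flop a reset.
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) mode_sel <= 1'b0;
    else        mode_sel <= next_mode;
  // Code for X-accuracy: flag an unknown control value explicitly.
  always_comb begin
    assert (!$isunknown(mode_sel)) else $error("mode_sel is X");
    y = mode_sel ? a : b;
  end
endmodule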

With the advent of power-managed billion-gate SoCs, design teams now look for new ways to address the classic problem of X unknowns in their simulations. The latest generation of static verification tools has met this challenge, so designers can productively deliver X-safe designs.

This blog article was originally published on EETimes SoC Designlines.
