As the cost of failure continues to rise, SoC engineers see the growing importance of ensuring their work is as correct as possible as soon as possible in the design process. They cannot afford to carry errors forward from one stage to the next, where their impact grows while their causes become more obscured.
This requirement is driving the shift of design exploration and hand-off to the register transfer level. Using RTL for sign-off eases the integration of heterogeneous IP: it becomes easier to check that the blocks are interfacing correctly with the host design, easier to check how clocks will cross these interfaces, and easier to check power signatures and design testability. It also cuts the functional simulation load, especially when designs are being exercised at the system level, by reducing the number of states that must be covered to check for correct functionality.
What tools are available to improve the quality of RTL code before it reaches simulation and passes to synthesis? The latest generation of lint technology can handle full-chip designs of 500 million gates or more, and yet still offer concise reporting. Timing-constraints management and checking ensure correct timing at the block and full-chip levels, so long as any changes in the RTL are reflected in the SDC files for the design. The SDC itself needs to be verified for correctness and consistency, and is essential for sign-off-grade analyses such as clock domain crossing (CDC).
This month we are going to look at the use of the parameter statement in Verilog, which is used to define a constant local to a module. References to a parameter are made by using its name. A parameter can be redefined on an instance-by-instance basis in two different ways: parameter redefinition in the instantiation itself, or by using a defparam statement.
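As a hedged sketch of the two styles (the module, port, and instance names here are illustrative, not taken from the original article), parameter redefinition looks like this:

```verilog
// A module with two parameters and default values
module fifo #(parameter WIDTH = 8, parameter DEPTH = 16)
             (input clk, input [WIDTH-1:0] din);
  // ...
endmodule

module top (input clk, input [15:0] din);
  // 1. Redefinition in the instantiation itself (named association):
  fifo #(.WIDTH(16), .DEPTH(32)) u_fifo1 (.clk(clk), .din(din));

  // 2. Redefinition with a defparam statement:
  fifo u_fifo2 (.clk(clk), .din(din[7:0]));
  defparam u_fifo2.DEPTH = 32;   // overrides DEPTH for this instance only
endmodule
```

The first style keeps the override next to the instance it affects, which is why most coding guidelines prefer it.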
Use of the defparam statement can easily cause confusion and trouble in the following ways:
·Hierarchically changing the parameters of a module from a point in the design that may not be visible at the level of the affected module
·Placing the statement in a separate file from the instance being modified
·Using multiple statements in the same file to change the parameters of an instance
·Using multiple statements in multiple different files to change the parameters of an instance
To avoid unintended constant redefinitions, some companies disallow the use of defparam statements in their design flows. In the following example, the value of the parameter top.SIZE is changed at the very bottom level of the hierarchy and may easily be missed. The designer may think that the SIZE and WIDTH parameters are still set to 8, when they have actually become 4.
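A minimal sketch of such a trap, reconstructed from the parameter names mentioned above (the hierarchy itself is illustrative):

```verilog
module top;
  parameter SIZE  = 8;
  parameter WIDTH = SIZE;   // derived from SIZE, so it changes too
  reg [WIDTH-1:0] data;
  middle m0 ();
endmodule

module middle;
  bottom b0 ();
endmodule

module bottom;
  // Buried at the bottom of the hierarchy, this silently changes
  // top.SIZE — and therefore top.WIDTH — from 8 to 4 at elaboration.
  defparam top.SIZE = 4;
endmodule
```

Nothing in the source of module top hints that its parameters are being rewritten from two levels below.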
I spoke with Gary Smith on Tuesday, May 21, to get his thoughts on who made the list and why, and other events at DAC you should know about… like the Denali Party. The sound has some noise in it and I apologize for that. Still, the video is worth watching. Enjoy!
Today’s systems on chip (SoC) are deeply complex in new ways. A dozen or so years ago, a state-of-the-art processor such as the Intel Pentium 4 used 42 million transistors, was built on a 180nm process and relied upon discrete chips to handle its system interfaces. Scroll forward, and the Westmere-EX processor that Intel introduced in 2011 uses 2.6 billion transistors and is built on a 32nm process. The chip includes ten 64bit x86 cores, L3 cache, graphics processing, DDR3 interfaces, virtualisation support and more. This trend to massive integration is even stronger in the mobile space, where SoCs bring together complex computing, communications and entertainment functions on one die.
It’s no longer possible to design all the subsystems of an SoC from scratch and expect to get the chip out in a reasonable timeframe, so today’s SoCs are complex integrations of new logic, IP blocks brought forward from previous designs, and functional and interface IP licensed in from third parties. Some companies are even using third-party IP to build their system interconnect, on the basis that its communications management support and interfaces to other IP blocks will help get a design out more quickly. In effect, an SoC is a sea of interfaces.
On May 9, 2013, John Cooley’s DeepChip site published a DVCon trip report we put together at Real Intent. Here are some of the items covered: Wally Rhines’ keynote, verification project stats, SystemVerilog, ad-hoc techniques, coverage and power measurement, formal tools, UVM, CDC, Harry Foster, ARM, Intel, John Goodenough, Gary Smith, “AHA”, design breaking, no mention of C nor SystemC, Design to Help Verification, RTL, instrumenting, when to do formal?, assertion synthesis, no mention of Specman “e”, plus details from Stu Sutherland’s talk on X-optimism, X-pessimism, the 15 X sources, how X’s interact with constructs, assignments and operators, and three different ways to “fix” the X problem; plus the DVCon attendance numbers and who exhibited there, too.
The report is quite long, and you can see it here or by scrolling the embedded frame below. Enjoy!
Elsewhere, Pranav Ashar, CTO at Real Intent, pointed out that the management of unknowns (X’s) in simulation has become a separate verification concern of sign-off proportions. Modern power management schemes affect how designs are reset and brought up. X management and reset analysis are interrelated because many of the X’s in simulation come from uninitialized flip-flops and, conversely, the pitfalls of X’s in simulation compromise the ability to arrive at a clear understanding of the resettability of a design.
The SystemVerilog standard defines an X as an “unknown” value, used to represent when simulation cannot definitively resolve a signal to 1, 0, or Z. Synthesis, on the other hand, defines an X as a “don’t care,” enabling greater flexibility and optimization. Unfortunately, Verilog RTL simulation semantics often mask the propagation of an unknown value by converting the unknown to a known, while gate-level simulations show additional X’s that will not exist in real hardware. The result is that bugs get masked in RTL simulation, and while they do show up at the gate level, time-consuming iterations between simulation and synthesis are required to debug and resolve them. Resolving differences between gate and RTL simulation results is painful because the synthesized logic is less familiar to the user, and X’s make correlation between the two harder. The verification engineer must first figure out whether an X in gate-level simulation is genuine before figuring out whether there is a bug in the design. Unnecessary X-propagation thus proves costly, causes painful debug, and sometimes allows functional bugs to slip through to silicon.
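The RTL masking effect can be seen in a small sketch (signal names are illustrative): Verilog evaluates an if condition of X as false, so the else branch executes and the unknown is silently converted to a known value.

```verilog
module xopt_demo (input sel, input a, b, output reg y);
  // Classic RTL X-optimism: when 'sel' is X, 'if (sel)' is treated
  // as false, so 'y' takes the known value of 'b' and the X on
  // 'sel' never propagates. In the gate-level netlist, the same
  // mux with sel = X would drive 'y' to X whenever a != b.
  always @(*)
    if (sel)
      y = a;
    else
      y = b;
endmodule
```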
Continued increases in SoC integration and the interaction of blocks in various states of power management are exacerbating the X problem. In simulation, the X value is assigned to all memory elements by default. While hardware resets can be used to initialize registers to known values, resetting every flop or latch is not practical because of the routing overhead. For synchronous resets, synthesis tools typically merge the reset with data-path signals, thereby losing the distinction between X-free logic and X-prone logic. This in turn causes unwarranted X-propagation during the reset phase of simulation. State-of-the-art low-power designs have additional sources of X’s, with the added complexity that these manifest dynamically rather than only during chip power-up.
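For illustration (a hedged sketch, not from the original article), a conventional synchronous reset is written in the same if/else as the data path, which is why synthesis folds it into the logic feeding the flop’s D input:

```verilog
module sync_reset_ff (input clk, rst, input d, output reg q);
  // The reset here is coded in the same always block as the data
  // path, so after synthesis it becomes just another term in the
  // cone of logic driving D. Downstream tools then see one merged
  // cone and can no longer separate the X-free reset term from
  // the X-prone data term.
  always @(posedge clk)
    if (rst)
      q <= 1'b0;
    else
      q <= d;
endmodule
```

An asynchronous reset, by contrast, remains a distinct pin on the flop cell, which is one reason the reset-phase X behavior of the two styles differs.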