Jim Foley, Director of R&D, Real Intent
Jim Foley, prior to his role as R&D Director at Real Intent, was Head of Power Analysis Products at Sequence Design, and has held product management and development roles at Cadence Design Systems. Jim received a BS in Electrical Engineering from Worcester Polytechnic Institute. Go Engineers!
April 18th, 2013 by Jim Foley, Director of R&D, Real Intent
In recent postings I’ve been writing about the nits and details of the semantics of Verilog, so, in the interest of balance, it’s appropriate to spend some time on VHDL as well.
VHDL has a stronger type system than Verilog, and is rather more explicit in how logic is specified, so you might think that VHDL is less prone to legal but unintentionally incorrect modeling. It turns out that VHDL has coding gotchas of its own that are different from the ones in Verilog and SystemVerilog.
When you specify a subrange for a type or data declaration, it’s necessary to specify the direction using either ‘to’ or ‘downto’:
signal aa : std_logic_vector(7 downto 0); -- descending range, use 'downto'
signal bb : std_logic_vector(16 to 31);   -- ascending range, use 'to'
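A classic gotcha here, sketched below with illustrative signal names (not from the original post): if the direction keyword does not match the order of the bounds, VHDL quietly accepts a null (zero-length) range instead of flagging an error.

```vhdl
-- Legal VHDL, but almost certainly not what was intended:
signal cc : std_logic_vector(7 to 0);       -- null range: 'to' with descending bounds
signal dd : std_logic_vector(31 downto 16); -- OK: a 16-bit descending vector
-- cc'length is 0, so reads and writes of cc silently operate on nothing.
```

Because a null range is perfectly legal, the mistake typically surfaces not as a compile error but as a downstream width-mismatch or simulation surprise, which is exactly the kind of "legal but unintentionally incorrect" modeling this post is about.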
April 11th, 2013 by Graham Bell
On February 28, 2013, Ed Sperling, Editor-in-Chief of System-Level Design sat down to discuss verification strategies and changes with Harry Foster, chief verification scientist at Mentor Graphics; Janick Bergeron, verification fellow at Synopsys; Pranav Ashar, CTO at Real Intent; Tom Anderson, vice president of marketing at Breker Verification Systems; and Raik Brinkmann, president and CEO of OneSpin Solutions. Part 3 of the discussion is presented below from the content at http://chipdesignmag.com/sld/blog/2013/03/28/experts-at-the-table-verification-strategies-3/.
April 4th, 2013 by Vaishnav Gorur, Sr. Applications Engineer
Modern CDC Verification Approaches
Thanks to advances in process technologies and the surging demand for high-performance, low-power, feature-rich consumer devices, the problem of CDC verification and handling of the underlying metastability issues has gone mainstream. Traditional CDC verification using linting, template-based approaches, hacked simulation, or static timing has been rendered archaic; these methods are not scalable enough to stand up to the CDC verification challenge. There is an immediate need for a solid CDC verification tool with a robust methodology that not only plays well with the existing tool flow but is flexible enough to accommodate new power optimization flows without compromising on the quality or the extent of coverage.
Real Intent’s Meridian CDC was forged as a CDC tool from the get-go. It has evolved with the design industry, emerged as the market leader in CDC verification and has stood up to CDC verification challenges at major design houses worldwide. The specialized structural and formal analysis engines understand and analyze CDC issues at the grass-roots level. They are architected for high speed and high capacity, and generate concise low-noise reports that accurately pinpoint CDC issues to enable rapid debugging. The ability of Meridian CDC to run at both the RT level and the gate level gives verification teams the wingspan they need to keep designs CDC-clean across the complete design flow.
April 1st, 2013 by Graham Bell
On February 28, 2013, Ed Sperling, Editor-in-Chief of System-Level Design sat down to discuss verification strategies and changes with Harry Foster, chief verification scientist at Mentor Graphics; Janick Bergeron, verification fellow at Synopsys; Pranav Ashar, CTO at Real Intent; Tom Anderson, vice president of marketing at Breker Verification Systems; and Raik Brinkmann, president and CEO of OneSpin Solutions. Part 2 of the discussion is presented below from the content at http://chipdesignmag.com/sld/blog/2013/03/08/experts-at-the-table-verification-strategies-2/.
March 28th, 2013 by Rick Nordin, VP of Business Development, Breker
As vice president of business development for Breker Verification Systems, I meet with loads of verification engineers and development teams and always walk away with new insights. Any market analyst who wants help to identify a new or emerging trend in chip design and verification should network with business managers like me. All too often, we’re watching a chip verification shipwreck on par with the sinking of the Titanic, leaving us “Cold as Ice,” as Foreigner intoned in 1977.
You’re cold as ice
You’re willing to sacrifice your chip
This little refrain was playing in my head recently as I was driving away from a painful meeting with a development team working on the verification of a complicated system-on-chip (SoC) design. All was not going well. While the SoC design looked flawless and taped out with no problems, early samples of the chip were not working as expected in some scenarios. The verification engineers weren’t “cold” because they didn’t care; they were close to the iceberg and didn’t realize it.
You never take advice
Someday you’ll pay the price
This team, like so many others, got sucked into a “stitch and ship” mentality that could sink its corporate ship like an iceberg. While the electronics industry has benefited from reusing blocks of intellectual property (IP), it’s not a panacea. An IP block with a well-defined function can be reused in multiple designs and shared among numerous development teams or companies. All too often, though, development teams assume that because the IP, fabric and memory subsystem have been tested individually, the entire flow will work as intended. After all, if each IP block has been tested and works, it might seem as if the software should be able to stitch them together into a production-worthy device. Read the rest of SoC Verification Can be Cold as Ice
March 21st, 2013 by Vaishnav Gorur, Sr. Applications Engineer
C. The need for reset signals to be asynchronously asserted and synchronously de-asserted.
Although it appears that use of asynchronous resets is preferred due to the ability to reset a subsystem without an active clock edge, there is still a catch. Asynchronous resets are, by definition, asynchronous both during assertion and de-assertion of reset. The assertion, as discussed earlier, does not pose an issue as it is independent of the clock signal. However, the de-assertion is still subject to meeting reset recovery times. The reset recovery time is similar to a setup timing condition on a flip-flop; it defines the minimum amount of time between the de-assertion of reset and the next active clock edge.
If the asynchronous reset is de-asserted near the active edge of the clock and violates the reset recovery time, it could cause the flip-flop to go metastable, resulting in potential loss of the reset value of the flip-flop. A non-deterministic reset value defeats the whole purpose of using a resettable flip-flop. Hence, a fully asynchronous reset is also not a viable reset solution for systems with multiple clock domains.
As described above, synchronous resets have issues during reset assertion and asynchronous resets have issues during reset de-assertion. To overcome these obstacles, an ideal solution is to combine the best of both worlds: use a scheme that involves asynchronous assertion yet synchronous de-assertion of reset. Read the rest of Part Six: Clock and Reset Ubiquity – A CDC Perspective
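The asynchronous-assertion, synchronous-de-assertion scheme described above is commonly implemented with a small two-flop reset synchronizer per clock domain. Here is a minimal Verilog sketch (module and signal names are illustrative, not from the article):

```verilog
// Reset synchronizer: rst_n_sync asserts (goes low) asynchronously with rst_n,
// but de-asserts only on a clock edge, so downstream flops meet recovery time.
module reset_sync (
  input  wire clk,
  input  wire rst_n,      // raw asynchronous active-low reset
  output wire rst_n_sync  // conditioned reset for this clock domain
);
  reg [1:0] sync_ff;

  always @(posedge clk or negedge rst_n)
    if (!rst_n)
      sync_ff <= 2'b00;               // asynchronous assertion
    else
      sync_ff <= {sync_ff[0], 1'b1};  // synchronous de-assertion, two flops deep

  assign rst_n_sync = sync_ff[1];
endmodule
```

The two flip-flop stages give any metastability on the first flop (caused by rst_n rising near a clock edge) a full cycle to resolve before the reset release reaches the rest of the domain.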
March 21st, 2013 by Graham Bell
Ed Sperling, Editor-in-Chief of System-Level Design recently did a follow-on video interview after his Experts At The Table: Verification Strategies roundtable. Below, you can read Ed’s introduction to the video interview and the question he posed to Pranav Ashar, CTO at Real Intent. To hear Pranav’s answer, click on the embedded video (which starts at 3:56).
“When you think about the most complex SoCs that are going out the door these days, at 28 and 20nm, it’s a wonder that they still work. A good part of the reason is that they are verified very effectively. Verification traditionally has been 50% to 70% of the NRE that goes into designing these chips and that has not changed. But, the size of the chips and the complexity has grown significantly. So here to discuss what is going on in verification today we have:
… So Pranav, from your perspective what is the big change or big changes that have happened in verification in the past couple of years as we have rising complexity in a chip?” Read the rest of The BIG Change in SoC Verification You Don’t Know About
March 14th, 2013 by Jim Foley, Director of R&D, Real Intent
One of the first things you learn about when modeling logic in Verilog is to avoid race conditions. You can do this by coding clocked registers with non-blocking assignments. So why not make life simple, and use non-blocking assignments for combinational logic too?
Let’s back up a bit and review the basics:
always @(posedge clk)
  bb = f1(aa); // When clk rises, bb is determined by aa
always @(posedge clk)
  cc = f2(bb); // The same instant, cc could get the new result. This is not what we want!
Read the rest of Ascent Lint Rule of the Month: COMBO_NBA
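For contrast, here is a sketch of the same two registers coded with non-blocking assignments, the style for clocked logic that the post recommends (the surrounding code is illustrative, not from the post):

```verilog
always @(posedge clk)
  bb <= f1(aa); // bb updates at the end of the time step
always @(posedge clk)
  cc <= f2(bb); // cc samples the OLD bb, so the pipeline stages stay one cycle apart
```

With non-blocking assignments, both right-hand sides are evaluated before either register updates, so the result no longer depends on which always block the simulator happens to execute first.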
March 14th, 2013 by Graham Bell
On February 28, 2013, Ed Sperling, Editor-in-Chief of System-Level Design sat down to discuss verification strategies and changes with Harry Foster, chief verification scientist at Mentor Graphics; Janick Bergeron, verification fellow at Synopsys; Pranav Ashar, CTO at Real Intent; Tom Anderson, vice president of marketing at Breker Verification Systems; and Raik Brinkmann, president and CEO of OneSpin Solutions. Part 1 of the discussion is presented below from the content at http://chipdesignmag.com/sld/blog/2013/02/28/experts-at-the-table-verification-strategies/.
March 7th, 2013 by Vaishnav Gorur, Sr. Applications Engineer
An asynchronous reset control that crossed clock domains but was not synchronously de-asserted, causing a glitch in control lines to an FSM.
The scenario above is at the confluence of the following three design requirements, and resulted in a failure when one of them was not met:
A. The need for multiple clock domains in the design that can be independently reset.
Let us delve deeper into each of these design requirements in order to understand the context of the failure.
A. Need for multiple clock domains in the design that can be independently reset.
In the event of failure, a hardware reset is a necessity to restore the system to a known initial state from which it can start functioning deterministically. Power-cycling a modem is a classic example of allowing enough time for a system reset to propagate to all sub-systems, some of which might be operating at different clock frequencies. From a verification standpoint, since each of these subsystems is typically designed and verified separately, the presence of a reset in each subsystem enables effective block-level verification by ensuring that the design is in a known state for simulation.
It is good design practice for every flip-flop in a design to be resettable. However, to extract higher performance in functional mode, certain parts of the design (e.g. pipeline registers) may not themselves be resettable even though their upstream registers are. In such cases, the design takes more clock cycles to reach a known state, since the reset values must propagate down from the upstream registers. This is often an acceptable tradeoff, but one that the system designers need to be cognizant of when determining the reset strategy for the SoC.
Several benefits stem from the ability to independently reset subsystems, some of which are: