The Breker Trekker
Tom Anderson, VP of Marketing
Tom Anderson is vice president of Marketing for Breker Verification Systems. He previously served as Product Management Group Director for Advanced Verification Solutions at Cadence, Technical Marketing Director in the Verification Group at Synopsys, and Vice President of Applications Engineering at …
The Report of Simulation’s Death Was an Exaggeration
May 5th, 2016 by Tom Anderson, VP of Marketing
With a nod to Mark Twain, this week I’d like to comment on a recent three-part series with the provocative title “Are Simulation’s Days Numbered?” The articles were transcribed from one of the “experts at the table” events that Semiconductor Engineering does so well. Breker wasn’t involved in this particular roundtable, but I enjoyed reading the series and found that it stirred up some thoughts. As a blogger, of course I’m going to share them with you, and I hope you enjoy them in turn.
Let’s get this out of the way immediately: in three parts and more than 5,000 words, there was no mention of portable stimulus. That might not seem too surprising given the title, but in fact verification portability, both from IP to system and from simulation to hardware, did arise during the discussion. So I’ll comment on that but, given my background with vendors of formal EDA tools and reusable IP blocks, there are a few other topics that also piqued my interest.
The roundtable started by discussing how IP providers should use both simulation and formal property verification to produce high-quality products. Cadence’s Pete Hardee pointed out that formal can verify not only how the IP behaves across its full range of intended usage, but also outside of this range. One example I have used in the past: a minor protocol error on one of its interfaces should not cause the IP block to deadlock.
Dave Kelf from OneSpin correctly pointed out the value for a provider to add assertions on the IP interfaces to ensure that their customers are using the block correctly. He went further to suggest shipping the IP with internal assertions as well. I agree on both points. It saves a lot of support time if users catch their own misuse of the block. If a lingering bug in the IP block makes its way to a customer, it’s embarrassing but much better detected by an assertion in simulation than in silicon.
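To make the idea concrete, here is a toy sketch of the kind of interface check an IP provider might ship with a block. In practice these would be SystemVerilog assertions bound to the RTL; this Python monitor, with its made-up `req`/`ack` handshake rule, is purely illustrative of how such a check catches customer misuse before it becomes a support call.

```python
# Toy model of an interface assertion shipped with an IP block: once
# "req" is asserted it must stay high until "ack" arrives. Real IP would
# express this as a SystemVerilog assertion; this monitor just
# illustrates the concept on a recorded signal trace.
def check_req_held_until_ack(trace):
    """trace: list of (req, ack) samples, one per clock cycle.
    Returns the cycle numbers where the user dropped req too early."""
    violations = []
    pending = False  # True while a request is outstanding (no ack yet)
    for cycle, (req, ack) in enumerate(trace):
        if pending and not req and not ack:
            violations.append(cycle)  # misuse: req dropped before ack
        pending = bool(req) and not ack
    return violations

good = [(1, 0), (1, 0), (1, 1), (0, 0)]  # req held until ack: clean
bad  = [(1, 0), (0, 0), (0, 0)]          # req dropped with no ack
print(check_req_held_until_ack(good))    # []
print(check_req_held_until_ack(bad))     # [1]
```

A user integrating the block sees the violation at the offending cycle in their own simulation, rather than filing a bug against the IP.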
We all know that the limits of simulation are being stressed by large SoC designs, but consultant Lauro Rizzatti argued that even the largest IP blocks are hitting a wall and starting to move to emulation. Adapt IP’s Mike McNamara wasn’t quick to dismiss simulation, noting its unparalleled visibility and debug features. I certainly can’t argue with his comment “emulation, simulation, formal—you should use all of them.” As Arturo Salz from Synopsys commented, verification just keeps adding more techniques.
I do have to disagree with some of the comments about verifying low-power operation, especially using the propagation of unknown (X) values. It is critical to verify that the design, whether complex IP or full SoC, continues to operate properly as power domains are turned off and on. The lack of an unknown state in most emulation solutions limits the verification that can be performed there. Some formal tools do handle X values, but they have limited capacity to verify at the full-chip level where all the power states are visible.
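To show why the unknown (X) state matters here, this is a minimal, hypothetical sketch of three-valued logic, not any vendor’s simulator: a signal driven from a powered-down domain is modeled as X, and without a clamping isolation cell at the domain boundary, that X leaks into always-on logic.

```python
# Three-valued (0, 1, X) logic sketch. "X" models the unknown value
# driven by a powered-down domain. Illustrative only; real power-aware
# simulation and isolation are far more involved.
X = "X"

def and3(a, b):
    # 0 dominates AND, so a known 0 masks an unknown input.
    if a == 0 or b == 0:
        return 0
    if a == X or b == X:
        return X
    return 1

def or3(a, b):
    # 1 dominates OR symmetrically.
    if a == 1 or b == 1:
        return 1
    if a == X or b == X:
        return X
    return 0

# Without isolation, the X from the off domain corrupts downstream logic.
unisolated = and3(X, 1)
# A clamp-to-0 isolation cell forces a known value at the boundary.
isolated = and3(and3(X, 0), 1)
print(unisolated)  # X
print(isolated)    # 0
```

An emulator with only two states would silently resolve that X to a 0 or 1, which is exactly the coverage gap described above.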
I really appreciated how quickly the discussion moved to finding performance problems. As I like to tell our customers, “a performance bug is a functional bug” because it’s just as much a violation of their product specification. Relying on production software to verify performance adds more complexity to the mix, especially for the debug and diagnosis of any limitations uncovered.
Speaking of debug, Salz mentioned the difficulty of providing a test case to an IP vendor to reproduce problems found at the full-chip level. That led to the related topic of how an IP block can provide useful feedback without revealing the “secrets” of its design. IP providers prefer products that are encrypted or otherwise protected, but this may limit visibility for diagnostic purposes once the block is embedded within the customer’s chip.
At the very end, Hardee circled back to the opening topic of how to help users integrate IP blocks correctly. He proposed that shipping assertions and making more use of formal analysis would provide at least a partial solution. Then he made the most controversial statement of the roundtable for me: “The piece we haven’t figured out is the integration of the unit verification environment into the system verification environment.”
I respectfully disagree. We have figured this piece out. Portable stimulus using Breker’s graph-based scenario models solves this problem today. Any test cases developed for an IP block can be run at the subsystem, SoC, or multi-SoC level. Furthermore, portable stimulus reuses those same test cases horizontally, from simulation to emulation, FPGA prototyping, and silicon in the bring-up lab.
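For readers unfamiliar with the approach, here is a hypothetical sketch of the core idea behind a graph-based scenario model: nodes are verification actions, edges are legal orderings, and any walk from entry to exit is a valid test case. The action names and the walk generator are invented for illustration; this is not Breker’s actual tool or format.

```python
import random

# Illustrative scenario graph: each node is an action, each edge a legal
# "what can happen next". Hypothetical action names for a DMA-style block.
GRAPH = {
    "reset":     ["config"],
    "config":    ["dma_write", "dma_read"],
    "dma_write": ["dma_read", "done"],
    "dma_read":  ["done"],
    "done":      [],  # terminal node: the test case is complete
}

def generate_test(graph, start="reset", seed=0):
    """Randomly walk the graph from start to a terminal node.
    Each distinct walk is a distinct, legal test case."""
    rng = random.Random(seed)
    node, path = start, [start]
    while graph[node]:
        node = rng.choice(graph[node])
        path.append(node)
    return path

print(generate_test(GRAPH))
```

Because the generated path is an abstract action sequence rather than testbench code, the same walk can be rendered as a UVM sequence at IP level or as multi-core C test code at SoC level, which is the vertical reuse described above.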
With a nod to Monty Python, I assert not only that simulation is “not quite dead yet” but that it is very much alive. Breker preserves and even extends the value of simulation with efficient, automatically generated test cases, vertical reuse from IP to SoC, and seamless horizontal reuse to hardware. As always, your thoughts and comments are most welcome.
The truth is out there … sometimes it’s in a blog.