Archive for 2014
Thursday, August 14th, 2014
In our previous four posts, we have woven a story quite different from the way we’ve talked about Breker and our technology for the past few years. Regular readers know that our focus has been on verifying system-on-chip (SoC) designs by generating multi-threaded, self-verifying C test cases to run on the SoC’s embedded processors. TrekSoC generates these test cases for simulation with RTL or ESL models; TrekSoC-Si generates test cases for emulators, FPGA prototypes, and actual silicon.
The last few posts have pointed out that TrekSoC has had to handle running in a transactional testbench since many test cases send data on or off the chip. We’ve worked hard to ensure that we can integrate easily into testbenches compliant with the Universal Verification Methodology (UVM) standard. Today we leverage this knowledge as we introduce TrekUVM, which generates multi-threaded, self-verifying test cases for a purely transactional UVM testbench.
Thursday, August 7th, 2014
In our last blog post, we worked our way up to the conclusion that our TrekSoC product can be used to verify designs that do not contain embedded processors. As we noted, there is not a widely accepted industry term for such devices. For the moment, let’s call them “transactional designs” since the majority of them take transactions in at one end and generate transactions at the other end, sometimes for two very different protocols, and are often bidirectional in nature.
The technological argument is simple. Most SoCs also have I/O ports, both standard buses and proprietary protocols, and TrekSoC must be able to talk to them, coordinate among them, and synchronize their transactions with generated C code running in the embedded processors. A purely transactional chip and testbench form a subset of the challenge for which TrekSoC is designed, so it’s not surprising that we can help. Today’s post fills in some more details.
Wednesday, July 30th, 2014
In our previous two posts, we went into considerable detail on the vertical reuse of verification information from IP block to subsystem to system. We have focused on how graph-based scenario models enable simple composition as you move up the design hierarchy. This type of reuse is not possible with traditional testbench elements such as UVM scoreboards and virtual sequencers. Once again, this is not a slam against the UVM, but rather a basic trait of constrained-random testbenches.
We skimmed over one aspect of vertical reuse: the transition from a “headless” SoC subsystem with no CPU to full-chip simulation with our automatically generated multi-threaded C test cases running on the SoC’s embedded processors. We also skipped the question of whether or not our graph-based scenario models can generate full-chip tests for chips that do not contain processors and are not classified as SoCs. This post links these ideas together and answers the question.
Tuesday, July 22nd, 2014
In our last post, we went into quite a detailed discussion of how the Accellera Universal Verification Methodology (UVM) has limitations on reuse. Specifically, we showed why it is not possible to compose scoreboards and virtual sequencers together as you move up the design hierarchy from verifying blocks to verifying clusters or complete chips. In the process, information about how connected blocks communicate is lost and must be recreated in the higher-level sequencer.
We also claimed that graph-based scenario models provide more effective reuse, specifically because lower-level graphs can be composed into a higher-level graph as blocks are combined and you move up the chip hierarchy vertically. Block-level graphs compose cluster-level graphs, and cluster-level graphs compose full-chip graphs. In today’s post, we take the same example used last time and show how reuse works with graph-based scenario models rather than pure UVM testbenches.
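To make the composition idea concrete, here is a minimal sketch in Python. This is purely illustrative, not Breker’s actual implementation: the scenario graphs, node names, and the `compose` helper are all hypothetical, standing in for the way block-level graphs can be stitched together into a cluster-level graph by linking one block’s output scenario to another block’s input scenario.

```python
# Hypothetical sketch of vertical reuse: two block-level scenario
# graphs composed into a cluster-level graph. All names are
# illustrative only, not part of any real tool or API.

def compose(upstream, downstream, link):
    """Merge two scenario graphs, connecting an upstream output
    node to a downstream input node via the (src, dst) link."""
    graph = {**upstream, **downstream}
    src, dst = link
    graph[src] = graph.get(src, []) + [dst]
    return graph

# Block-level graphs: each maps a scenario node to its successors.
dma_block = {"dma_cfg": ["dma_xfer"], "dma_xfer": ["dma_done"], "dma_done": []}
mem_block = {"mem_write": ["mem_read"], "mem_read": ["mem_check"], "mem_check": []}

# Cluster-level graph: DMA completion now feeds the memory scenario,
# so the block-level graphs are reused unchanged one level up.
cluster = compose(dma_block, mem_block, ("dma_done", "mem_write"))
```

The key point the sketch captures is that neither block-level graph is modified during composition; only a new edge is added at the cluster level, which is what makes the reuse vertical rather than a rewrite.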
Thursday, July 17th, 2014
Over the lifetime of The Breker Trekker, we’ve published numerous posts about the inherent benefits of graph-based scenario models for verification. These models allow you to pull on a rope rather than push it. They allow you to begin with the end in mind, solving backwards to determine the necessary inputs. They support advanced verification planning and debug. They make verification modeling more pleasant. They enable both horizontal reuse over the course of a project and vertical reuse from IP block to subsystem to system.
Today we’d like to dig into a particular aspect of vertical reuse that we have not addressed in detail before. One of the goals of verification standards has been to define testbench elements that are reusable. This goal was very much in mind when the Accellera working group standardized the Universal Verification Methodology (UVM). By establishing a standard architecture, nomenclature, and application programming interface (API), UVM components are highly reusable from project to project and even company to company. However, the UVM fails at other forms of reuse.
Tuesday, July 8th, 2014
Last week we talked once again about our familiar mantra to “begin with the end in mind” when performing SoC verification. We described the enormous value that graph-based scenario models provide by enabling automatic test case generation from desired results. TrekSoC can walk the graph backwards, from result to inputs, and generate the C code necessary to exercise true user-level test cases across multiple threads and multiple heterogeneous processors.
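The backward walk can be sketched in a few lines of Python. This is an assumption-laden toy, not the actual TrekSoC algorithm: the graph, node names, and `solve_backwards` helper are hypothetical, illustrating only the general idea of inverting a cause-to-effect graph and tracing from a desired outcome back to the root inputs that produce it.

```python
# Hypothetical sketch of "begin with the end in mind": walk a scenario
# graph backwards from a desired outcome to the inputs that produce it.
# Edges point forward (cause -> effect); all names are illustrative.

forward = {
    "cpu_load_fw":  ["dma_xfer"],
    "io_rx_packet": ["dma_xfer"],
    "dma_xfer":     ["mem_image"],
    "mem_image":    ["display_frame"],
}

def solve_backwards(graph, outcome):
    """Return the root inputs (nodes with no predecessors) that
    must occur for the desired outcome to be reachable."""
    # Invert the graph so we can walk effect -> cause.
    parents = {}
    for src, dsts in graph.items():
        for dst in dsts:
            parents.setdefault(dst, []).append(src)
    roots, stack = set(), [outcome]
    while stack:
        node = stack.pop()
        preds = parents.get(node, [])
        if preds:
            stack.extend(preds)
        else:
            roots.add(node)  # no cause: this is a required input
    return roots

inputs = solve_backwards(forward, "display_frame")
# inputs == {"cpu_load_fw", "io_rx_packet"}
```

Starting from the result and solving for inputs is what distinguishes this from constrained-random generation, which pushes stimulus forward and hopes interesting outcomes fall out.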
It’s clear even to the biggest fans of the Universal Verification Methodology (UVM) that this standard breaks down at the full-chip level for an SoC containing one or more embedded processors. The UVM, for all its good points, does not encompass code executing on processors and does not provide any guidance on how to link such code with the testbench that connects the chip’s inputs and outputs. The value of scenario models for SoCs is clear. But what about large chips without embedded processors? Does Breker have a role to play there as well?
Monday, June 30th, 2014
I’ve written about formal analysis rather frequently in this blog, although I do not consider Breker’s products to be formal in nature. There are several reasons for this. After ten years working with formal tools, I remain personally interested in that market. I also see interesting parallels between the adoption of formal and graph-based technologies. Further, whenever we cover formal analysis we get a great response. Clearly our readers like the topic as well.
I’m returning to formal this week because of a provocative comment made by one of our customers at DAC a few weeks ago. Wolfgang Roesner from IBM participated on the show floor in a Pavilion Panel called “The Asymptote of Verification.” Among several astute observations about the attributes of graph-based scenario models, he made a comparison with formal analysis that I found especially perceptive.
Monday, June 23rd, 2014
Over the last few weeks, we’ve provided a look back at DAC from Breker, Jonah McLeod of Kilopass, and verification consultant Lauro Rizzatti. Today we wind up the series with some great insights and memories from five more DAC exhibitors.
For formal verification services provider Oski Technology, DAC confirmed what it’s experiencing: adoption of formal verification is on the rise worldwide, notes Jin Zhang, its senior director of marketing. As is often the case, along with adoption comes the need for training, and that’s certainly true for formal verification. Attendees and exhibitors alike stopped by the Oski booth to ask about advanced formal training. Yes, Oski offers several types of training customized to specific needs, and the company verified that DAC can be a great place to raise awareness and visibility.
Monday, June 16th, 2014
We hope you enjoyed last week’s guest post from Jonah McLeod of Kilopass with his experiences at this year’s Design Automation Conference (DAC) in San Francisco. We’ve invited several of our friends in the EDA industry to write in with their assessments of the show. Next up is Lauro Rizzatti, another industry veteran perhaps best known as general manager of EVE-USA. These days he’s a verification consultant, and he shares his story of going to DAC as a conference attendee rather than as a vendor:
This is the first DAC where I wasn’t responsible for an exhibitor booth and it was exhilarating. I was able to attend sessions, walk the exhibit floor and, generally, get a feel for what’s going on in our industry. I’m pleased to report the news is good. Very good, in fact.
Tuesday, June 10th, 2014
Last week, we offered Breker’s perspective on the recently concluded Design Automation Conference (DAC) in San Francisco. After last year’s DAC in Austin, in addition to our own summary we published several guest posts from other vendors in which they shared their impressions of the show. These proved quite popular, and so again this year we’ll be publishing some guest posts with interesting thoughts on DAC and how it’s evolving to meet the needs of the semiconductor industry. Today we begin with Jonah McLeod, director of corporate communications at Kilopass:
Three days of DAC as an attendee found me listening to presentations at the TSMC and SMIC booths from foundry partners. In between, I listened to two pitches from Monte Carlo simulation vendors Solido Design Automation and CLK Design Automation. Both promised to achieve SPICE-level accuracy within a couple of percentage points in a fraction of the time. I also checked out Verifyter AB, a company offering debug automation and analysis software.