 The Breker Trekker
Tom Anderson, VP of Marketing
Tom Anderson is vice president of Marketing for Breker Verification Systems. He previously served as Product Management Group Director for Advanced Verification Solutions at Cadence, Technical Marketing Director in the Verification Group at Synopsys, and Vice President of Applications Engineering at …

We Like the UVM, Really We Do!

January 28th, 2014 by Tom Anderson, VP of Marketing

When people first start reading about Breker and what we do, we make the point that transactional simulation testbenches are breaking down at the full-SoC level. Usually, we specifically mention the Universal Verification Methodology (UVM) standard from Accellera as not being up to the challenge of full-chip verification for SoC designs. We sometimes worry that someone will read into this that we don’t like the UVM, or Accellera, or even standards in general. Nothing could be further from the truth!

We have great respect for the UVM and other EDA-related standards developed by Accellera, IEEE, and other organizations. In this post, we’d like to discuss specifically what we see as the strengths and weaknesses of the UVM and explain how Breker’s technology complements rather than replaces this methodology. Yes, the UVM has limitations, and we address those with our tools and technologies. But the UVM forms a stable and standard base on which nearly all of our customers build their simulation-based verification environments.

The bottom line is that the UVM works very well for the verification of IP blocks, subsystems, small chips, and even some large chips without embedded processors. However, our customers have found that the methodology simply does not scale to verify a complete SoC with one or more processors. There are three main shortfalls, one related to testbenches in general, one related to constrained-random testbenches, and one related to the UVM itself.

The first issue is the nature of simulation testbenches, in which all behavior of the design being verified is exercised by manipulation of the design’s inputs. This works well for small and simple designs, but the larger the design and the greater the sequential depth of its logic, the harder it is to trigger corner-case behavior deep inside. For an SoC, it may take dozens or hundreds of very specific inputs to set up a single operation. Figuring out how to accomplish this with a hand-written test is challenging, time-consuming, and resource-intensive.

Moving to constrained-random stimulus generation automates the testbench but presents other problems. The testbench must be capable of generating long, precise input sequences with only carefully chosen randomized values if deep behavior is to be exercised. In UVM terms, this means setting up a sequencer for every input port and developing a virtual sequencer to tie all the sequencers together and figure out what results are expected as data moves from one port to another.

As IP blocks are combined into subsystems and subsystems into the full chip, much of the testbench work must be redone. As lower-level inputs are subsumed into internal interfaces, their sequencers are no longer relevant. A new virtual sequencer must be written to tie together the remaining inputs and sequencers at the full-chip level. Verification reuse is possible only for input ports that remain external to the SoC, and for some types of passive testbench components such as protocol and coverage monitors.

While the UVM did a lot to standardize testbenches, it can’t go beyond the limitations of the constrained-random approach. It also does not address or encompass any form of code running on the embedded processors, which are the brains of the SoC. Since production code isn’t ready and runs too slowly in simulation, the verification team often hand-writes 1000-2000 tests to compile and run on the SoC’s embedded processors. This effort is usually only loosely correlated with the testbench work, since the UVM does not connect the two worlds.

Breker’s approach, as you probably know, is to automatically generate self-verifying, multi-threaded C test cases that run efficiently in full-chip simulation. We don’t replace the use of the UVM at lower levels; we actually complement it by leveraging existing UVM verification components (UVCs) on the chip’s input and output ports. We plug into the UVCs by taking over the sequencer functionality, and we replace the virtual sequencer as well. The test cases take care of coordinating all the threads, processors, and I/O ports.

We improve on the UVM-based flow in another way as well. The constrained-random approach only works as long as there are testbenches available. Of course simulation is the primary platform, but acceleration with the design in hardware and the testbench in simulation also works. The EDA industry has made some effort to map more of the UVM testbench into hardware so that it can run on in-circuit emulators or FPGA prototypes. However, this is a complicated path that has not been very successful so far.

In contrast, we can generate the C test cases for simulation, acceleration, emulation, and even actual silicon in the lab. If access is provided to the chip I/O ports via a debug interface, we can exercise the complete chip as thoroughly as in simulation but orders of magnitude faster. If no I/O access is available, then the test cases concentrate on verifying internal data paths. The test cases for all the platforms are generated from the same graph-based scenario models. We call this “horizontal” verification reuse across the project.

Finally, graphs from IP blocks and subsystems can be simply combined to form a full-SoC scenario model. This level of “vertical” verification reuse is simply not possible with any other form of transaction-based or constrained-random testbench. So, while we like the UVM and respect the role it plays, when it comes to SoC verification across the project, our TrekSoC and TrekSoC-Si products are essential add-ons to the standard methodology.

Tom A.

The truth is out there … sometimes it’s in a blog.

Please request our new x86 server validation case study at





