 The Breker Trekker
Tom Anderson, VP of Marketing
Tom Anderson is vice president of Marketing for Breker Verification Systems. He previously served as Product Management Group Director for Advanced Verification Solutions at Cadence, Technical Marketing Director in the Verification Group at Synopsys and Vice President of Applications Engineering at …

Can Graphs Make Modeling More Pleasant?

 
January 7th, 2014 by Tom Anderson, VP of Marketing

This week’s blog post is inspired by Brian Bailey’s recent article “Making Modeling Less Unpleasant.” I noted with amusement that the link to his article ends with “making-modeling-pleasant” which I suspect was automatically generated from an early draft. So perhaps Brian started with the idea that modeling could be pleasant, but concluded that “less unpleasant” is as good as it can get? Is he too pessimistic? Can modeling actually be pleasant?

It depends in part on what aspect of design or verification modeling we consider. Brian’s primary focus is on system-level models of the design, also called electronic system-level (ESL) models, architectural models, or virtual prototypes. The appeal of a simulatable SoC model fast enough to run compiled code, capable of both functional and performance verification, is easy to understand. There have been many attempts to establish standard approaches, such as transaction-level modeling (TLM), and languages, such as SystemC.
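 
To make the appeal concrete, here is a minimal, loosely-timed TLM-2.0 sketch in SystemC: an initiator issues one blocking write transaction to a toy memory model. Everything here (the module names, the 10 ns latency, the memory size) is a placeholder for illustration, not a fragment of any real virtual prototype.

// Minimal loosely-timed TLM-2.0 sketch (illustrative only).
#define SC_INCLUDE_DYNAMIC_PROCESSES
#include <systemc>
#include <tlm.h>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>
using namespace sc_core;

struct SimpleMemory : sc_module {
    tlm_utils::simple_target_socket<SimpleMemory> socket;
    unsigned storage[256];

    SC_CTOR(SimpleMemory) : socket("socket") {
        for (unsigned i = 0; i < 256; i++) storage[i] = 0;
        socket.register_b_transport(this, &SimpleMemory::b_transport);
    }

    // Blocking transport: decode the generic payload and access the array.
    void b_transport(tlm::tlm_generic_payload& trans, sc_time& delay) {
        unsigned idx = trans.get_address() / 4;
        unsigned* data = reinterpret_cast<unsigned*>(trans.get_data_ptr());
        if (trans.get_command() == tlm::TLM_WRITE_COMMAND)
            storage[idx] = *data;
        else
            *data = storage[idx];
        delay += sc_time(10, SC_NS);   // approximate, not cycle-accurate, timing
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};

struct Initiator : sc_module {
    tlm_utils::simple_initiator_socket<Initiator> socket;

    SC_CTOR(Initiator) : socket("socket") { SC_THREAD(run); }

    void run() {
        unsigned value = 42;
        tlm::tlm_generic_payload trans;
        sc_time delay = SC_ZERO_TIME;
        trans.set_command(tlm::TLM_WRITE_COMMAND);
        trans.set_address(0x10);
        trans.set_data_ptr(reinterpret_cast<unsigned char*>(&value));
        trans.set_data_length(4);
        socket->b_transport(trans, delay);   // one call per transaction, not per clock edge
    }
};

int sc_main(int, char*[]) {
    Initiator init("init");
    SimpleMemory mem("mem");
    init.socket.bind(mem.socket);
    sc_start();
    return 0;
}

Because communication happens at the granularity of whole transactions rather than individual signal toggles, a model like this can run orders of magnitude faster than RTL, which is exactly what makes running compiled software on it plausible.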

Despite much attention for many years, true system-level models are hardly universal. It takes extra time and effort to write these models properly in order to achieve their potential. As Brian notes, just rewriting RTL models in SystemC is not the answer. Architects may be more interested in using the models than in writing them, so it’s not clear who will do the work. Also, much of the SoC’s content is reused from earlier chips or licensed in IP form. These portions of the design must also be modeled at a high level to have a complete SoC virtual prototype, but existing models are usually not available.

With the increasing popularity of high-level synthesis, there was a flurry of hope in the industry that a single SystemC model would suffice as both a TLM-based architectural model and as the source for the design itself. The appeal of this vision is also easy to understand. Looking back to an earlier transition, advanced design teams used RTL models for many years before the mainstream moved up from gates. The availability of logic synthesis was a key factor in this evolution, since a single model could be used for more efficient simulation than gates while serving as the design source via synthesis.

Despite the apparently obvious parallels, the system-level convergence we hoped for has not occurred. A virtual prototype fast enough to run production code requires trade-offs that favor performance at the expense of accuracy, while a more accurate model is required if high-level synthesis is to produce results as good as or better than RTL and logic synthesis. So the question of which design models are needed, and how they are produced, is not a simple one.

Things get more complicated when considering verification. One common definition of verification is that it involves comparing an independent model of the design intent with the implementation model. In the testbench world, this means developing components such as stimulus generators, result checkers, and coverage metrics to verify the RTL model in simulation. In formal analysis, sets of assertions and input constraints are mathematically evaluated against the RTL. In other forms of static analysis, the RTL model is checked against a set of rules.
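 
As a rough sketch of that "independent model" idea, here is a hypothetical reference model paired with a result checker, written in plain C++ for brevity; in a real flow this role is played by testbench components such as a UVM scoreboard, and the implementation being checked would be the RTL, not another C function.

#include <cstdint>
#include <cstdio>

// Hypothetical reference model of the design intent: an 8-bit saturating adder.
uint8_t ref_saturating_add(uint8_t a, uint8_t b) {
    unsigned sum = unsigned(a) + unsigned(b);
    return sum > 0xFFu ? 0xFF : uint8_t(sum);
}

// Result checker: compare what the implementation produced against the
// independently written model of what it was supposed to produce.
bool check(uint8_t a, uint8_t b, uint8_t dut_result) {
    uint8_t expected = ref_saturating_add(a, b);
    if (dut_result != expected) {
        std::printf("MISMATCH: a=%d b=%d dut=%d expected=%d\n",
                    a, b, dut_result, expected);
        return false;
    }
    return true;
}

The value of the comparison comes entirely from the independence of the two models: if the checker were derived from the RTL itself, it would faithfully reproduce the RTL's bugs.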

Many SoC teams also hand-write some C tests to run on the embedded processors; these can also be regarded as models because they must either be self-checking or interact with the testbench to determine pass or fail status. Testbench components, formal rules, static rules, and diagnostic code are all examples of models that someone has to write. The Universal Verification Methodology (UVM) and many other recipes have been developed to try to make this easier. While “unpleasant” may be a bit harsh, there is no doubt that model creation is hard work and not always much fun.
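 
For concreteness, here is the kind of hand-written, self-checking bare-metal test the previous paragraph describes. The register addresses, the DMA programming sequence, and the mailbox handshake are all hypothetical; a real test targets the chip's actual memory map and whatever pass/fail convention its testbench expects.

#include <stdint.h>

/* Hypothetical memory-mapped registers of a DMA engine and a testbench mailbox. */
#define DMA_SRC    (*(volatile uint32_t *)0x40001000u)
#define DMA_DST    (*(volatile uint32_t *)0x40001004u)
#define DMA_LEN    (*(volatile uint32_t *)0x40001008u)
#define DMA_CTRL   (*(volatile uint32_t *)0x4000100Cu)
#define DMA_STATUS (*(volatile uint32_t *)0x40001010u)
#define TB_MAILBOX (*(volatile uint32_t *)0x40002000u)   /* read by the testbench */

int main(void) {
    static uint32_t src[16], dst[16];
    for (int i = 0; i < 16; i++) { src[i] = 0xA5A50000u + (uint32_t)i; dst[i] = 0; }

    /* Program the DMA engine and start a transfer. */
    DMA_SRC  = (uint32_t)(uintptr_t)src;
    DMA_DST  = (uint32_t)(uintptr_t)dst;
    DMA_LEN  = (uint32_t)sizeof(src);
    DMA_CTRL = 1u;                       /* start */
    while ((DMA_STATUS & 1u) == 0u) { }  /* poll for completion */

    /* Self-checking: the test decides pass or fail on its own... */
    int pass = 1;
    for (int i = 0; i < 16; i++)
        if (dst[i] != src[i]) pass = 0;

    /* ...and reports the verdict to the testbench through the mailbox. */
    TB_MAILBOX = pass ? 0x00000001u : 0xFFFFFFFFu;
    return pass ? 0 : 1;
}

Writing one such test is easy; writing enough of them to cover the concurrency and resource conflicts of a multi-processor SoC by hand is where the effort, and the "unpleasantness," accumulates.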

So back to my original question: is there such a thing as pleasant modeling? Actually, we have seen a strong positive customer response to our graph-based scenario models. Graphs are a natural way to express SoC functionality since they look very much like the dataflow diagrams that architects and designers draw to document the design and explain its functionality to others. Scenario models capture both scenario generation and result checking in a single representation, a very different approach from other forms of modeling.
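 
To make that concrete without implying anything about our actual input format or tool flow, here is a toy sketch of a scenario graph: nodes are SoC actions, edges are legal orderings, and one walk from start to done is one generated use-case scenario. The application here (camera, JPEG, network, DMA, display) is invented purely for illustration.

#include <cstdlib>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical scenario graph: each action lists the actions that may follow it.
    std::map<std::string, std::vector<std::string>> graph = {
        {"start",          {"camera_capture", "net_receive"}},
        {"camera_capture", {"jpeg_encode"}},
        {"net_receive",    {"decrypt"}},
        {"jpeg_encode",    {"dma_to_memory"}},
        {"decrypt",        {"dma_to_memory"}},
        {"dma_to_memory",  {"display", "net_send"}},
        {"display",        {"done"}},
        {"net_send",       {"done"}},
    };

    // One random walk through the graph is one concrete scenario: it fixes the
    // stimulus order and, implicitly, the end-to-end result that must be checked.
    std::srand(2014);
    std::string node = "start";
    while (node != "done") {
        const std::vector<std::string>& choices = graph[node];
        node = choices[std::rand() % choices.size()];
        std::cout << node << (node == "done" ? "\n" : " -> ");
    }
    return 0;
}

The same structure pays off twice more: enumerating the distinct start-to-done paths gives a natural system-level coverage metric, and walking several paths concurrently is how multi-threaded test cases fall out, both of which come up below.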

System-level coverage can be automatically extracted from the scenario models, showing metrics for which use-case scenarios have been verified and which have not. A scenario model also serves as a form of verification plan since it guides automatic generation of multi-threaded, multi-processor, self-verifying test cases to run on the SoC’s embedded processors. While “pleasant” may be a bit of a stretch, our customers find graph-based scenario models a natural and even fun way to model for verification. If you haven’t given them a try yet, you know where to contact us!

Tom A.

The truth is out there … sometimes it’s in a blog.

Please visit us today at www.brekersystems.com






