 The Breker Trekker

Posts Tagged ‘TrekSoC’

Memories … Light the Corners of My Verification Space

Tuesday, December 17th, 2013

With due apologies to Barbra Streisand, the topic of today’s blog post is the verification of SoC memories and memory subsystems. Once upon a time, memories were considered just about the easiest design structure to verify. A simple automated test doing “walking 1s” and “walking 0s,” supplemented by some random reads and writes to random addresses with random data, seemed to be good enough.
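
For readers who never wrote one, here is a minimal C sketch of that style of test: it walks a single 1 (and then a single 0) across every bit of every word, then finishes with a few random accesses. The MEM_WORDS constant and the mem array are arbitrary placeholders standing in for the actual memory under test.

/* A minimal sketch of the classic "walking 1s / walking 0s" memory test,
 * plus a few random accesses. Illustrative only; mem stands in for the
 * device memory under test. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MEM_WORDS 1024                   /* size of the region under test */
static volatile uint32_t mem[MEM_WORDS];

static int walking_test(uint32_t background)
{
    for (size_t i = 0; i < MEM_WORDS; i++) {
        for (int bit = 0; bit < 32; bit++) {
            uint32_t pattern = background ^ (1u << bit); /* walk one bit */
            mem[i] = pattern;
            if (mem[i] != pattern)
                return -1;               /* stuck-at or coupling fault    */
        }
    }
    return 0;
}

int main(void)
{
    srand(1);                            /* fixed seed: reproducible runs */
    if (walking_test(0x00000000u))       /* walking 1s over a 0 background */
        return printf("walking 1s FAIL\n"), 1;
    if (walking_test(0xFFFFFFFFu))       /* walking 0s over a 1 background */
        return printf("walking 0s FAIL\n"), 1;

    /* Random writes to random addresses with random data, checked back. */
    for (int n = 0; n < 256; n++) {
        size_t addr = (size_t)rand() % MEM_WORDS;
        uint32_t data = (uint32_t)rand();
        mem[addr] = data;
        if (mem[addr] != data)
            return printf("random FAIL at %zu\n", addr), 1;
    }
    printf("PASS\n");
    return 0;
}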

“Can it be that it was all so simple then? Or has time re-written every line?” Actually, it really was that simple back then. But a lot of changes in memory subsystems have come along to complicate matters: memory regions, caches, multi-processor designs, shared memory, complex memory maps, etc. Verification of memories today is much more challenging, with many corner cases to be exercised, but it’s an essential part of the overall SoC verification effort.
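
To make just one of those complications concrete, here is a toy sketch (sizes and patterns invented for illustration) in which two threads write interleaved halves of a shared region before a final check. A real SoC test would also have to stress caches, coherence protocols, and the memory map.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define WORDS 1024
static uint32_t shared_mem[WORDS];

/* Each thread writes a recognizable pattern to every other word:
 * thread 0 takes even indices, thread 1 takes odd indices. */
static void *writer(void *arg)
{
    unsigned id = *(unsigned *)arg;
    for (size_t i = id; i < WORDS; i += 2)
        shared_mem[i] = (uint32_t)i ^ ((uint32_t)id << 31);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    unsigned ids[2] = { 0, 1 };

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, writer, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);

    /* Check that neither thread's writes were lost or corrupted. */
    for (size_t i = 0; i < WORDS; i++) {
        uint32_t expect = (uint32_t)i ^ ((uint32_t)(i & 1) << 31);
        if (shared_mem[i] != expect) {
            printf("FAIL at index %zu\n", i);
            return 1;
        }
    }
    printf("PASS\n");
    return 0;
}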


Will Graph-Based Scenario Models Dominate Verification?

Tuesday, November 19th, 2013

In last week’s post, I responded to an article in which Jasper’s CEO is quoted as saying that “formal will dominate verification” and which concluded that “at some point in the future, formal will be the default choice for every verification task in the way that simulation/emulation is today.” I challenged this statement, giving examples of SoC verification where I do not believe that formal analysis alone can provide the answer.

Thinking about formal in that way naturally led me to ask the same question about Breker’s technology. Will graph-based scenario models “dominate verification”? At some point in the future, will graph-based scenario models “be the default choice for every verification task in the way that simulation/emulation is today”? As I promised last week, I’ll offer my thoughts on these questions as well.


Will Formal Really Dominate Verification?

Wednesday, November 13th, 2013

Today’s post is prompted by a recent article on SemiWiki in which Jasper Design Automation’s CEO Kathryn Kranen is quoted as saying “formal will dominate verification.” There is a nice set of metrics from Jasper’s recent User Group meeting showing their impressive growth in revenue, logos, users, and licenses as supporting evidence for formal’s increasing footprint. The article concludes by stating “at some point in the future, formal will be the default choice for every verification task in the way that simulation/emulation is today.”

That made me sit up and take notice. Before joining Breker, I spent the previous 12 years of my career focused on formal analysis, about six of those years working on it full-time and the rest with formal as one component of a wider suite of verification products I managed. I’m a big fan of formal, but I don’t think that I can comfortably predict that it will “dominate” verification. Let me share my thoughts.


Emulation and Software-Driven SoC Verification: Two Peas in a Pod

Monday, November 4th, 2013

Emulation got its start in the late 1980s. As an early employee of the pioneering company in emulation, Quickturn Design Systems, I remember the enthusiasm created by the promises of the technology and the challenges that came with its delivery. It is not an exaggeration to state that many of the early adopters failed to get a decent ROI on their emulation investment because of finicky software or unreliable hardware.

However, emulation has come a long way in terms of performance, ease-of-use, reliability, and pricing. This maturity enables SoC design teams all over the world to make emulation a key component of their verification arsenal. The three major suppliers of emulation are enjoying steady growth and almost unstoppable momentum due to the increasing complexity of SoCs.


Sneak Preview of this Week’s ARM TechCon in Santa Clara

Monday, October 28th, 2013

Over the last couple of decades, vendor-specific conferences have complemented and in some markets even supplanted general industry events. Intel, Microsoft, Sun/Oracle, Apple, and many other companies have had huge, successful shows year after year. Perhaps it’s a sign of a certain level of maturity when a company has the resources to hold its own event and the appeal to attract a large crowd.

In the world of EDA (and IP, and embedded systems), ARM is certainly one of the biggest recent success stories. As the company has grown, its small technical events have evolved into a major show now known as ARM TechCon. Breker will be both speaking and exhibiting at this week’s event in Santa Clara, just down the road from Breker’s headquarters in San Jose.


Guest Post: Documentation Is Not Just a Requirement

Monday, October 21st, 2013

Breker customers have surely noticed that the quantity and quality of our product documentation have taken a huge leap in the last six months or so. This is due to the Herculean efforts of Bob Widman, a well-known documentation, training, and applications expert in the EDA industry. He has been working with Breker for most of this year and the results speak for themselves. We’re pleased that Bob has contributed the following guest post on the importance of documentation:

Why does a company provide documentation with its product? The typical answer is that the customer expects it. Often overlooked is how the process of creating the documentation has a positive impact on the product and the company that is developing it.


TrekSoC-Si: Achieving the Longstanding Goal of Horizontal Verification Reuse

Tuesday, October 15th, 2013

All of us at Breker are excited as we write this post, since we’ve just made our most important product announcement in several years. We’ve expanded the Breker product line by adding TrekSoC-Si, a brand-new tool that generates multi-threaded, multi-processor, self-verifying C test cases for in-circuit emulation (ICE), FPGA-based prototypes, and actual production silicon. In other words, TrekSoC-Si does for hardware platforms what TrekSoC did for simulation.

We’ll talk more about how TrekSoC-Si works in a moment. But first it’s important to note that both TrekSoC and TrekSoC-Si use the same graph-based scenario models as input to describe the intended behavior of the SoC and provide a test plan. This means that, for the first time in the industry, you can achieve horizontal verification reuse across your entire project schedule, from high-level simulation models all the way through your first chips arriving from the foundry.
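
To make the phrase “self-verifying C test case” concrete, here is a hand-written toy, not actual TrekSoC-Si output: each thread generates its own stimulus, performs a data movement, and checks its own results, so the same binary can report pass/fail on a simulator, an emulator, or silicon. The sizes and values are invented for illustration.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define N 64
static uint32_t src[2][N], dst[2][N];

/* Each thread creates its own stimulus, moves the data, and checks the
 * result itself; no external checker or testbench is needed. */
static void *copy_and_check(void *arg)
{
    int id = *(int *)arg;
    for (int i = 0; i < N; i++)
        src[id][i] = (uint32_t)(id * 1000 + i);   /* known stimulus        */
    for (int i = 0; i < N; i++)
        dst[id][i] = src[id][i];                  /* stands in for a DMA   */
    for (int i = 0; i < N; i++)                   /* built-in result check */
        if (dst[id][i] != (uint32_t)(id * 1000 + i))
            return "FAIL";
    return "PASS";
}

int main(void)
{
    pthread_t t[2];
    int ids[2] = { 0, 1 };
    void *result[2];

    for (int i = 0; i < 2; i++)          /* one thread per processor core */
        pthread_create(&t[i], NULL, copy_and_check, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], &result[i]);

    printf("thread 0: %s, thread 1: %s\n",
           (char *)result[0], (char *)result[1]);
    return 0;
}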


Two Peas in a Pod: Scenario Models and System Coverage

Tuesday, September 10th, 2013

In our last technical blog post, we surveyed some of the existing forms of coverage, including their virtues and limitations, and their applicability to SoC designs. We also introduced a new type of metric, system coverage, based on application scenarios that reflect how an end user would actually run applications on the SoC. We closed by claiming that “Breker’s graph-based scenario models are ideal for establishing, measuring, and refining system coverage.” This is the next in a series of posts to explain why and how.

Another earlier post described the Breker approach of “beginning with the end in mind” using graph-based scenario models. In the graphs used by TrekSoC, outcomes appear on the left and inputs appear on the right, reflecting the way that the test case generator works from the desired result toward the setup conditions needed for a particular application scenario.
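
As a rough illustration of working from the desired result back to the setup, the toy C fragment below starts at an outcome node and follows prerequisite links back to the setup steps. The node names and single-prerequisite chain are invented and far simpler than a real scenario model, which offers choices at each step.

#include <stdio.h>

struct node {
    const char *name;
    struct node *prereq;   /* the step that must happen before this one */
};

int main(void)
{
    struct node power_on = { "power on",          NULL      };
    struct node cfg_dma  = { "configure DMA",     &power_on };
    struct node dma_done = { "DMA transfer done", &cfg_dma  };

    /* Walk backward from the outcome to discover the required setup. */
    printf("to reach the outcome, execute these in reverse order:\n");
    for (struct node *n = &dma_done; n != NULL; n = n->prereq)
        printf("  %s\n", n->name);
    return 0;
}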


If You’re Not Measuring System Coverage, Your SoC Is at Risk

Monday, August 19th, 2013

No SoC verification engineer worthy of the title would argue that coverage is unimportant. Even back in the 1980s, before commercial coverage tools and industry standards were available, leading ASIC teams manually added coverage code into their testbenches. They checked that key state machines visited all legal states or made all legal transitions, or that a processor executed all opcodes in its instruction set, over the course of a simulation test.
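
Such hand-written coverage code was often as simple as the following sketch, a bitmap recording which opcodes a test has exercised. The opcode count and helper names here are invented for illustration.

#include <stdint.h>
#include <stdio.h>

#define NUM_OPCODES 16
static uint32_t opcode_seen;           /* one bit per opcode */

static void cover_opcode(unsigned op)  /* call from the execute path */
{
    if (op < NUM_OPCODES)
        opcode_seen |= 1u << op;
}

static void report_coverage(void)
{
    unsigned hit = 0;
    for (unsigned op = 0; op < NUM_OPCODES; op++)
        if (opcode_seen & (1u << op))
            hit++;
    printf("opcode coverage: %u/%u (%.1f%%)\n",
           hit, NUM_OPCODES, 100.0 * hit / NUM_OPCODES);
}

int main(void)
{
    /* Pretend the test executed a few opcodes during simulation. */
    cover_opcode(0); cover_opcode(3); cover_opcode(7);
    report_coverage();                 /* prints 3/16 (18.8%) */
    return 0;
}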

Verification teams who ignored coverage in those days were at risk of letting bugs slip through to silicon. The old maxim “if you don’t verify it, it’s broken” summed the situation up well. Today, leading SoC teams have adopted system coverage. Those who are ignoring this aspect of coverage are at risk of letting serious system-level bugs slip through. Let’s talk about system coverage and why it’s different from other metrics in use today.


Verification Beginning with the End in Mind

Tuesday, July 23rd, 2013

Folks who have been following Breker for a while know that we like the phrase “begin with the end in mind.” It succinctly summarizes why our use of graph-based scenario models is different from traditional constrained-random testbenches.

Suppose that you want to trigger a particular behavior within your design as part of your verification process. With a testbench, you have control over only the design’s inputs, so you might issue a series of input stimulus changes that you believe will cause the desired behavior. You may hit your target, or you may not. Automating your testbench with the constrained-random capabilities of the Universal Verification Methodology (UVM) reduces the manual effort, but there’s still no guarantee that you will trigger your targeted behavior.
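
The toy program below illustrates the point outside any particular testbench language: inputs are drawn at random within simple constraints, and a target condition occupying one narrow corner of the input space is hit only occasionally, if at all. The “design” function is, of course, invented for illustration.

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the design: the target behavior fires only for one
 * narrow combination of inputs. */
static int design_hits_target(unsigned a, unsigned b)
{
    return (a == 0xAB) && (b == 0xCD);
}

int main(void)
{
    srand(42);
    int hits = 0, trials = 100000;
    for (int i = 0; i < trials; i++) {
        unsigned a = (unsigned)rand() & 0xFF;  /* constraint: 8-bit input */
        unsigned b = (unsigned)rand() & 0xFF;
        if (design_hits_target(a, b))
            hits++;
    }
    /* Expect roughly trials/65536 hits; a deeper target might get zero. */
    printf("hit the target %d times in %d trials\n", hits, trials);
    return 0;
}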

