 The Breker Trekker

Posts Tagged ‘TrekSoC’

Decoding Formal Club Unlocks Some Mysteries

Tuesday, March 10th, 2015

Last May, I published two blog posts on the presentations made at a “Decoding Formal Club” event hosted by the smart folks from Oski Technology at the Computer History Museum in Mountain View. With everything else going on, I didn’t manage to make it to another of their regular meetings until last week. The first event of 2015 was very interesting, so again I’m returning to the popular topic of formal analysis and playing reporter. The line between media and blogging is rather thin these days anyway.

This edition of Decoding Formal featured three talks: one an end-user case study and the other two instructional presentations from well-known formal experts. I found all three worthwhile and will do my best to communicate some of the main points made. I also have to mention the final presentation, more a performance than a talk, by the inimitable and irrepressible Clifford Stoll. Lately he’s been manufacturing and selling Klein bottles, which you may remember from a geometry teacher trying to mess with your mind.


It Was a Great DVCon, and There Are Two More to Come

Thursday, March 5th, 2015

In last week’s blog post on The Breker Trekker, we previewed this week’s Design and Verification Conference (DVCon) in San Jose, the leading industry event for verification professionals. We had a really good time there, finishing up just this afternoon. We always enjoy DVCon, but this week was even more fun than usual. We met attendees from an amazing range of companies designing SoCs, from simple microcontrollers to some of the largest FPGAs and custom chips on the planet.

Three aspects of the show really stood out: intense interest in cache coherency verification, considerable curiosity about the Accellera Portable Stimulus Working Group (PSWG), and the number of people who started the conversation with “I’ve heard good things about Breker from a colleague” or “I was told that I really need to check you out.” Let’s discuss what each of these trends means for the industry and speculate about the impact on Breker.


The Importance of DVCon and Why Breker Will Be There

Tuesday, February 24th, 2015

Most of the time when we blog about upcoming conferences, report live from an ongoing show, or summarize one that’s just finished, we see a significant spike in readership. Clearly our followers want to keep up with what’s happening in trade shows, conferences, and other industry events. It may also be the case that tighter travel budgets have reduced the ability to attend conferences in person, driving all the more interest in reading the news from the field. A few weeks ago, we discussed DesignCon and explained how it had evolved to include almost no verification content.

Next week is the annual Design and Verification Conference (DVCon) in San Jose, an event that we have covered in considerable detail in several popular posts in the past. As we have discussed, this conference has become the main way to keep up on what’s happening in the ever-changing world of functional verification. We encourage you to check out their Web site and the complete program. The topics include the UVM, SystemVerilog, SystemC, code generation, multi-language, mixed-signal, formal techniques, coverage metrics, and low-power verification.


Blast from the Past: Verification in Silicon

Wednesday, January 7th, 2015

Late last year, we published a series of blog posts discussing how the world of large chip designs is moving toward multi-processor, cache-coherent SoCs. This trend is due to several sub-trends, including the addition of one or more embedded processors, the growth in the number of processors per chip, the use of shared memory, and the addition of caches to improve memory performance. The result of this movement is clear: large chips are becoming more difficult to verify than ever.

Verification teams face challenges at every turn. It’s hard to run a complete SoC-level model in simulation, especially if the team wants to boot an operating system and run production applications. This may be feasible in emulation or FPGA prototyping platforms, but these cost a lot of money. What we’re starting to see is the truly stunning trend that some teams are taping out SoCs without ever having run the entire design together. This means that full-chip verification and debug isn’t happening until first silicon is in the lab. Let’s explore why this is happening.


Top 5 New Holiday Gifts for the Verification Engineer

Tuesday, December 30th, 2014

Last year, we closed out December with a post on the “Top 5 Holiday Gifts for the Verification Engineer” and it proved very popular despite the holiday timing. To refresh your memory (and ours), here is the 2013 list:

#5: Relief from hand-writing verification test code.
#4: Relief from hand-writing validation diagnostics.
#3: Vertical verification IP reuse from block to system.
#2: Horizontal verification IP reuse from electronic system level (ESL) to silicon.
#1: Effortless system coverage reflecting end-use applications.

As you might expect, every one of these gifts is still available today for users of our Trek family of products. But over the last year we have added two new products, many new features, and deeper integration into existing verification flows. So we’d like to wrap up 2014 with an all-new list of holiday gifts for the verification engineer. We hope you like them as much as you liked last year’s offerings:


If Your Chip Is Not a Cache-Coherent SoC, It Soon Will Be

Tuesday, November 25th, 2014

Yes, we know that the title of this week’s post sounds a lot like two previous posts. We wanted to link together the two threads from those posts into a single message that we believe reflects what is happening right now in the world of complex chips. This is a short summary in line with the short week due to the Thanksgiving holiday here in the United States. The line of argument is straightforward:

  • Large chips are adding embedded processors to implement complex functionality while retaining flexibility
  • Single-processor chips are adding multiprocessor clusters to get better performance at a given process node
  • Multiprocessor chips are using shared memory for effective data transfer and interprocess communication
  • Neighbor-connected processor arrays are moving to shared memory to reduce cross-chip data latency
  • Multiprocessor designs are adding caches to reduce memory access time and bypass memory bottlenecks
  • Multiprocessors with caches require coherency in order to ensure that the right data is always accessed

While these statements are not universally true, together they reflect a significant sea change that we see every day when discussing current and future projects with our customers.
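
To make the last point in the list concrete, here is a minimal sketch in C (our illustration, not Breker code or a real protocol implementation) of what goes wrong without coherency: two simulated cores each hold a private cached copy of a shared location, and unless a write by one core invalidates the other core's copy, the second core keeps reading stale data. The core_read/core_write helpers and the single valid bit are simplifications invented for this example.

```c
/* Sketch: why caches need coherency. Two "cores" each hold a private
 * cached copy of one shared location; without a write-invalidate step,
 * core 1 keeps reading a stale value after core 0 writes. */
#include <stdio.h>
#include <stdbool.h>

typedef struct {
    int  value;   /* locally cached copy of the shared location */
    bool valid;   /* does this cache line hold usable data?     */
} cache_line;

static int shared_mem = 0;      /* the single shared memory location */
static cache_line cache[2];     /* one private cache line per core   */

static int core_read(int core) {
    if (!cache[core].valid) {           /* miss: fetch from memory */
        cache[core].value = shared_mem;
        cache[core].valid = true;
    }
    return cache[core].value;           /* hit: return cached copy */
}

static void core_write(int core, int v, bool coherent) {
    cache[core].value = v;              /* update the writer's cache */
    cache[core].valid = true;
    shared_mem = v;                     /* write through to memory   */
    if (coherent)
        cache[1 - core].valid = false;  /* invalidate the other copy */
}

int main(void) {
    /* Warm both caches with the initial value 0. */
    core_read(0);
    core_read(1);

    /* Case 1: no coherency protocol -- core 1 sees stale data. */
    core_write(0, 42, false);
    printf("no coherency:   core 1 reads %d (stale)\n", core_read(1));

    /* Case 2: write-invalidate coherency -- core 1 re-fetches. */
    cache[0].valid = cache[1].valid = false;
    shared_mem = 0;
    core_read(0);
    core_read(1);
    core_write(0, 42, true);
    printf("with coherency: core 1 reads %d (fresh)\n", core_read(1));
    return 0;
}
```

Real protocols such as MESI and its variants track far more state than this single valid bit, and they must do so across every cache in every cluster, which is exactly why multi-processor coherency has become such a demanding verification problem.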


Will Breker Become an IP Company?

Tuesday, November 18th, 2014

In last week’s blog post, we talked about the emergence of the commercial IP industry and shared some personal experiences. Although Breker is an EDA company and not known for IP products, we intersect with semiconductor IP (SIP) and verification IP (VIP) in important ways as we work with our customers. We’re also starting to offer our own scenario model IP (SMIP) as part of accelerating and improving verification even more. We’d like to expand on these topics in today’s post.

We have few if any customers or prospective customers who don’t use commercial VIP in their testbenches. After all, if you’re designing a standard interface, you want the best possible verification that you’re meeting the standard. A VIP model that’s been used by dozens or hundreds of other projects serves as a pre-silicon “plugfest” where you get to verify your implementation of the standard against what others have done. Now that the Universal Verification Methodology (UVM) is nearly ubiquitous, most VIP is developed in a fairly consistent manner.


If Your Chip Is Not an SoC, It Soon Will Be

Wednesday, November 5th, 2014

Last week’s post was addressed primarily to those of you who are already designing SoCs. We made the point that more and more SoCs have multiple processors, either homogeneous or heterogeneous, and that most or all of those processors do or will have caches. This led to the main conclusions of the post: that multi-processor cache coherency is necessary for most SoCs, and therefore that coherency is now a problem extending beyond CPU developers to many chip-level verification teams.

But what if you don’t have embedded processors in your design? There’s a clear sense emerging in the industry that more and more types of chips are becoming multi-processor SoCs, and most of these will require cache coherency for the CPU clusters and beyond. In this post we’ll describe the trends we see, based in part on what we learned at the recent Linley Processor Conference in Santa Clara. The world as we know it is changing rapidly, offering more challenges for verification teams but more opportunities for us to help.


If Your SoC Is Not Cache Coherent, It Soon Will Be

Thursday, October 30th, 2014

In last week’s post, we discussed in detail how Breker’s TrekSoC and TrekSoC-Si products can verify the performance of your SoC by stressing every aspect of its functionality. Shortly before that, we announced a partnership with Carbon Design Systems to complement their fast, accurate processor models with TrekSoC. About two months ago, we introduced the new Coherency TrekApp and described how it can verify multi-processor cache coherency with minimal effort.

You can see a strong theme here: multi-processor SoC designs, fast simulation models, automatic generation of multi-threaded, multi-processor test cases, and test cases powerful enough to gather realistic performance metrics from pre-silicon simulation. But what if you don’t have multiple processors or caches in your SoC design? There’s a clear sense emerging in the industry that more and more chips are becoming multi-processor SoCs, and most of these will require cache coherency for the CPU clusters and beyond. Let’s explore this topic more in this post.


Performance Verification: Bringing Your SoC to Its Knees

Wednesday, October 22nd, 2014

For those unfamiliar with the expression in the title, bringing someone (or something) to its knees means making it submissive. It’s a metaphor possibly derived from the act of hitting someone so hard that his knees buckle and he falls to a kneeling position. Why such a nasty term to start this post? Because when you want to verify the performance of your SoC you want to stress every aspect of it. You want to be mean to it. You want to bring it to its knees.

The most common way to do this is to run production software (operating systems plus applications) on a virtual prototype, a high-level system model created by architects before RTL implementation begins. This is not easy; it takes effort to set up workloads that will stress the design, and often the production software is not ready at this early stage of the SoC project. Further, this approach verifies only the high-level model; the RTL simulates too slowly to replicate the same tests, or often even to boot the operating system at all.


