 The Breker Trekker

Posts Tagged ‘Accellera’

Using Scenario Models to Capture Use Cases

Thursday, July 30th, 2015

One of the signs that a technological domain is still fairly young is frequently evolving terminology as the pioneers attempt to explain to the mainstream what problem needs to be solved and what solutions can be brought to bear on the problem. Such is the case with SoC verification. At Breker we used to start explaining what we do by talking about graphs, but shifted to “graph-based scenario models” to emphasize that graphs are perfect for expressing scenarios of real-world behavior.

Our friends at Mentor, also strong advocates for graphs, began using the term “software-driven verification” to describe their approach. We rather like this description, but feel that it can only be applied accurately when embedded test code is being generated and when the embedded processors are in charge of the test case. Now our friends at Cadence have been sprinkling the term “use case” throughout their discussions on SoC and system verification. Let’s try to sort out what all this means.


Guest Post: Rain or Shine for the EDA Cloud?

Wednesday, July 15th, 2015

Recent announcements from IBM and others about supporting EDA tools in the cloud have spurred renewed discussion on this topic, including here at The Breker Trekker. As expected, the recent posts have been very popular with our readers. Those of you who have been following this topic for a while may recall that, almost exactly two years ago, EDA vendor OneSpin announced cloud support for their formal tools. We invited their VP of Marketing, Dave Kelf, to fill us in on their experiences since then:

Two years ago OneSpin introduced the cloud version of its Design Verification (DV) formal-based products. Some commentators pointed at other failed EDA attempts to make the same move, suggesting more of the same. Others hailed the announcement as a bold move whose time had come. So… did it work out, and what have we learned? The results are surprising, and they suggest trends that make some EDA solutions a natural fit for the cloud, while others remain questionable.


What’s so Special about Your SoC Design Data?

Tuesday, June 30th, 2015

Last week on The Breker Trekker, we discussed the resurgence of interest in EDA tools in the cloud. Like our first post on the topic two years ago, last week's entry was very popular. Clearly this is a topic of interest to both our regular and occasional readers. Two more announcements regarding EDA in the cloud also surfaced during the recent Design Automation Conference (DAC), so it does seem as if there is more effort going toward finding a technically and financially successful industry solution.

Last week we summarized five barriers that have helped prevent cloud-based EDA from achieving mainstream adoption:

  • The EDA vendor’s effort to port to a cloud-based platform
  • Worries about GUI and interactive responsiveness
  • Challenges in supporting users of cloud-based tools
  • Lack of an established, proven business model
  • Concerns over security of the design and verification data in the cloud


Is the Forecast Cloudy Yet for EDA?

Wednesday, June 24th, 2015

It has been almost exactly two years since we discussed the possibility of EDA tools in the cloud here on The Breker Trekker. The post was popular then, and it remains so. In fact, of the more than 100 posts we've published, our cloud post remains the second most read. This week, the news that IBM will make its EDA tools available in the cloud through a partnership with SiCAD brought cloud computing back to the forefront. Let's discuss what has changed, and what hasn't, in the past two years.

The idea of users being able to run EDA tools as leased enterprise software on remote machines has been around for years, well before the term “the cloud” was widely used. Synopsys invested a great deal of time and effort into its DesignSphere infrastructure, initially more of a grid application than a cloud solution as we use the term today. But the difference is not very important; the key concepts are the same and they represent a major departure from the time-tested model of customers “owning” EDA tools and running them in-house.


A Look Back at the 52nd DAC

Thursday, June 11th, 2015

Last week we looked forward to the 52nd edition of the annual Design Automation Conference (DAC), held this week at Moscone Center in San Francisco. Today we look back at the past three days and all of the activity at the show. It was a very busy time for Breker as usual, but there were some special aspects this year that we’d like to mention. We also want to thank the many customers, prospects, colleagues, and even competitors who joined us at various times for provocative discussions and plenty of social networking. As always, we invite you to add your comments on DAC and what you thought about the show.

Overall, the exhibition floor seemed lively for most of the time. We frequently had multiple visitors in our booth, asking questions and watching demos. We focused on two aspects of our Trek product line: immediate availability of portable stimulus and pushbutton verification of cache coherency. We saw lots of interest in both topics and it’s hard to say which drew more attention. Our suite was booked for most of the time, with customers receiving updates from Breker and prospects discussing their verification challenges and how we might be able to help them.

Please Join Breker at DAC in San Francisco

Tuesday, June 2nd, 2015

We have less than a week to go before the most important event for EDA vendors and users: the annual Design Automation Conference (DAC). The show returns to Moscone Center in San Francisco, which has played host many times over the years. As one of the most popular tourist destinations in the world, San Francisco is a great draw for out-of-towners but also just a short trip from Breker’s headquarters in the heart of Silicon Valley. The combination of a strong peer-reviewed technical conference and a busy exhibition floor is unbeatable, making this a must-attend event for many in our industry.

When the technical program first came out two months ago, we posted about some of the interesting changes made this year. There are some innovative additions to the program, including keynotes from non-EDA vendors, “sky talks” from industry experts, a major focus on the Internet of Things (IoT), and tracks for such important topics as automotive electronics, IP, and security. The popular Designer Track returns with case studies from real users, and there are plenty of deep technical papers for those who spend their days coding algorithms and optimizing data structures. At least eight sessions have significant verification content.


Portable Stimulus Layer 3: Test Randomization

Thursday, May 28th, 2015

Over the lifetime of this blog, we’ve covered a lot of diverse topics regarding Breker’s products and technology, trends in SoC verification, and the EDA industry in general. For the last month, we’ve offered our longest series of posts ever on a single topic: portable stimulus. There’s a very good reason for this: Accellera’s Portable Stimulus Working Group (PSWG) is making good progress on defining a standard in this area. As one of the group’s leaders, Breker has been leveraging our many years of experience in SoC verification to develop the best possible industry solution. We’ve been using The Breker Trekker blog to share our thoughts and to encourage your feedback.

We begin the fifth, and perhaps most important, post in our series by reminding you that we split portable stimulus into three layers: defining the tests using abstract primitive operations, scheduling the tests across multiple threads and multiple processors, and randomizing the control flow to verify the full range of realistic use-case scenarios. We have shown over the last two posts that both the first and second layers can be defined easily by a simple application programming interface (API) providing access to a base-class library. This library includes the basic building blocks needed for a directed or automated test as well as scheduling control for processors, threads, and resources. It is natural to wonder whether the randomization layer can be handled in a similar way.
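
To make the question concrete, here is a minimal C++ sketch of one way a randomization layer could follow the same base-class-library pattern: a decision node in a graph-based scenario model picks one legal branch for each generated test. The class and method names below are hypothetical illustrations, not our actual product API or a PSWG proposal.

    // Hypothetical sketch: a randomized decision point in a graph-based
    // scenario model, in the same API style as the first two layers.
    // Names are illustrative assumptions only.
    #include <cstddef>
    #include <random>
    #include <vector>

    class Action;  // a primitive test operation, as in layer 1

    // A node whose outgoing branches are the legal continuations of a
    // scenario; exactly one branch is chosen for each generated test.
    class Choice {
    public:
        void add_branch(Action* a) { branches_.push_back(a); }

        // Seeded selection keeps every generated test reproducible,
        // which matters for debug. Assumes at least one branch.
        Action* pick(std::mt19937& rng) const {
            std::uniform_int_distribution<std::size_t> dist(0, branches_.size() - 1);
            return branches_[dist(rng)];
        }

    private:
        std::vector<Action*> branches_;
    };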


Portable Stimulus Layer 2: Test Scheduling

Wednesday, May 20th, 2015

Last week, we continued our series of posts on the topic of "portable stimulus" as defined by Accellera's Portable Stimulus Working Group (PSWG) and the standard they are working to define. We should say "we are working to define" since Breker is very much a part of the effort and is providing our many years of experience in this area to develop the best possible industry solution. As a reminder, we at Breker split the definition of a standard for portable stimulus into three parts: defining the tests using abstract primitive operations, scheduling the tests across multiple threads and multiple processors, and randomizing the control flow to verify the full range of realistic use-case scenarios.

Our most recent post dealt with the first layer: defining abstract tests using primitive operations and other elements within a base-class library, with access defined by a simple application programming interface (API). This week we consider the second layer: scheduling of tests and resources. If a verification engineer is writing a directed test for a single-processor SoC using the API calls from the first layer, then little of the second layer may apply. However, as we have discussed before, there is a clear trend of SoC designs moving to multi-processor designs with complex memory and cache subsystems and a rich variety of I/O protocols. These chips require automated test generation and the features provided by the test scheduling layer.
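
As a thought experiment, a scheduling layer built in the same API style might look roughly like the C++ sketch below. Every name here is a hypothetical illustration, not our product API or the emerging standard.

    // Hypothetical sketch of a layer-2 scheduling API. All names are
    // illustrative assumptions, not a product API or the standard.
    #include <cstddef>
    #include <vector>

    class Test;  // an abstract test, as in the layer-1 discussion

    // The execution topology the test generator is allowed to use.
    struct ScheduleConfig {
        std::size_t num_processors  = 1;  // embedded cores running tests
        std::size_t threads_per_cpu = 1;  // software threads per core
    };

    // A contended unit (DMA channel, crypto engine, I/O port) that
    // only one thread may drive at a time.
    class Resource {
    public:
        bool try_claim() { if (claimed_) return false; claimed_ = true; return true; }
        void release()   { claimed_ = false; }
    private:
        bool claimed_ = false;
    };

    class Scheduler {
    public:
        explicit Scheduler(ScheduleConfig cfg) : cfg_(cfg) {}

        // Assign a test to a (processor, thread) slot; a real generator
        // would also emit the cross-thread synchronization code.
        void place(std::size_t cpu, std::size_t thread, Test* t) {
            placements_.push_back({cpu, thread, t});
        }

    private:
        struct Placement { std::size_t cpu, thread; Test* test; };
        ScheduleConfig cfg_;
        std::vector<Placement> placements_;
    };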


Portable Stimulus Layer 1: Test Abstraction

Thursday, May 14th, 2015

From the number of blog views, it’s clear that the topic of “portable stimulus” is of considerable interest to our readers. As a reminder, Accellera’s Portable Stimulus Working Group (PSWG) is developing a standard in this area and Breker is helping to lead this effort. In our last two posts on this topic, we have outlined our guiding principles for any proposed standard, based on our own experience over the years with our most advanced customers. We also split the goal of the portable stimulus effort into three parts: defining the tests using abstract primitive operations, scheduling the tests across multiple threads and multiple processors, and randomizing the control flow to verify the full range of realistic use-case scenarios.

For this post, we're going to explore the first layer in more detail. We made the statement in our last post that the test abstraction layer can be standardized using a simple application programming interface (API) to specify the abstract steps of the test. The API defines the access to a base-class library providing the primitive operations used to create portable tests. First of all, let's be clear that this is not a theoretical proposal. We have provided a library with a defined API for several years, and it is a key building block of our own portable stimulus and test solution. We know from our own customers that this approach works, and we believe it would be an excellent foundation for a standard.
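
To make the idea concrete without describing our product specifics, here is a rough C++ sketch of the shape such a base-class library could take: primitives that carry both stimulus and results checking, composed into an abstract test. All names are illustrative assumptions.

    // Hypothetical sketch of a layer-1 base-class library. Class and
    // method names are illustrative assumptions, not a real API.
    #include <cstdint>
    #include <vector>

    // A primitive operation: the smallest reusable step of a test.
    // Note that it carries results checking, not just stimulus.
    class Action {
    public:
        virtual ~Action() = default;
        virtual void execute() = 0;        // retargeted per platform
        virtual bool check_result() = 0;   // pass/fail for this step
    };

    // One possible primitive: a memory write with read-back checking.
    class MemWrite : public Action {
    public:
        MemWrite(uint64_t addr, uint32_t data) : addr_(addr), data_(data) {}
        void execute() override      { /* emit a bus write or an embedded store */ }
        bool check_result() override { return true; /* read back and compare */ }
    private:
        uint64_t addr_;
        uint32_t data_;
    };

    // An abstract test is an ordered composition of primitives; the
    // generator retargets the same composition to each platform.
    class Test {
    public:
        void add(Action* a) { steps_.push_back(a); }
        bool run() {
            for (Action* a : steps_) {
                a->execute();
                if (!a->check_result()) return false;
            }
            return true;
        }
    private:
        std::vector<Action*> steps_;
    };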


Options for a Portable Stimulus Specification Format

Thursday, May 7th, 2015

In our last blog post we provided some updates on the ongoing effort by Accellera to standardize “portable stimulus” in its Portable Stimulus Working Group (PSWG). We mentioned our three guiding precepts as we participate in, and help lead, this industry effort:

  • Portable stimulus is not enough; portable tests must encompass stimulus, results checking, and coverage
  • Test portability must encompass both vertical reuse from IP to SoC and horizontal reuse across all verification platforms
  • The tests themselves are not portable, but are generated for multiple targets from an abstract specification of the verification space

We stated our view that the goal of the portable stimulus effort can be split into three parts: defining the tests using abstract primitive operations, scheduling the tests across multiple threads and multiple processors, and randomizing the control flow to verify the full range of realistic use-case scenarios. We mentioned that the first part can be standardized using a simple application programming interface (API) to specify the abstract steps of the test. We have also found that the scheduling part can be handled by an expanded API. The user might want to specify the available resources and how they should be used in a particular test, for example, the number of threads running on each processor. When it comes to the third part, the randomization, an API might be feasible but there are a number of candidate formats. We'd like to spend the remainder of this post examining these options.
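
For the API option in particular, one could imagine randomizable fields and constraints expressed as library calls, roughly as in the C++ sketch below; a declarative format would state the same relations as text rather than code. Everything in the sketch is a hypothetical illustration, not one of the actual candidate formats.

    // Hypothetical sketch of the API-flavored option for randomization:
    // randomizable fields plus constraints as callable predicates.
    // All names are illustrative assumptions, not a candidate format.
    #include <cstdint>
    #include <functional>
    #include <vector>

    // A test parameter the generator may randomize within [lo, hi].
    struct RandField {
        const char* name;
        uint64_t lo, hi;
        uint64_t value = 0;  // chosen by the generator for each test
    };

    // A constraint is a predicate over the chosen values; a declarative
    // format would state the same relation in a description file.
    using Constraint = std::function<bool(const std::vector<RandField>&)>;

    // Example relation: a DMA transfer must fit its destination buffer
    // (assumes fields[0] is the length and fields[1] the buffer size).
    inline Constraint fits_in_buffer() {
        return [](const std::vector<RandField>& f) {
            return f[0].value <= f[1].value;
        };
    }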




