What's PR got to do with it?

Granularity and complexity in low power verification

April 2nd, 2013 by Ed Lee

Cary Chin, Director of Technical Marketing at Synopsys, has an intriguing take on how to approach verification now that design project managers are mandated to meet the low power requirements of the target end-product. Chin says that if we look at verification in terms of fine and broad “granularity,” users will meet their verification goals with a lot less anguish. At first glance, though, I had no idea what Chin was talking about…which is why we asked him to join us and talk through the idea.

Ed: Cary, you’ve recently been talking about granularity in verification, especially in terms of low power. What does this all mean?

Cary:  When I think of granularity in low power design, I’m thinking about the size of the “chunks” that we manipulate to improve the energy efficiency (or “low power performance”) of a design.  For example, in most of today’s low power methodologies, large functional blocks are the boundaries we work within – we can shut down these blocks or manipulate the voltage to save energy when peak performance isn’t required.  This boundary level isn’t just a matter of convenience; our tools and methodologies for both implementation and verification can only deal with certain levels of complexity, so we are confined in many dimensions in how we can pursue finer granularity.
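For readers who haven’t worked with power intent descriptions, here is a minimal sketch of the block-level control Cary is describing, written in UPF (the power intent format he mentions later in this conversation). It is only an illustration, not anyone’s production flow; all instance, net, and signal names are hypothetical.

    upf_version 2.0

    # One switchable power domain wrapped around a large functional block
    # (u_dsp is a hypothetical block instance)
    create_power_domain PD_DSP -elements {u_dsp}
    create_supply_net VDD
    create_supply_net VSS
    create_supply_net VDD_DSP -domain PD_DSP
    set_domain_supply_net PD_DSP -primary_power_net VDD_DSP -primary_ground_net VSS

    # Header switch: VDD_DSP follows VDD unless dsp_sleep is asserted
    create_power_switch sw_dsp -domain PD_DSP \
        -input_supply_port  {vin  VDD} \
        -output_supply_port {vout VDD_DSP} \
        -control_port       {sleep dsp_sleep} \
        -on_state           {sw_on vin {!sleep}}

    # Clamp the block's outputs to 0 while it is shut down
    set_isolation iso_dsp -domain PD_DSP \
        -isolation_power_net VDD -clamp_value 0 -applies_to outputs
    set_isolation_control iso_dsp -domain PD_DSP \
        -isolation_signal dsp_iso_en -isolation_sense high -location parent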

Ed:  Why is it important that we see verification in terms of granularity?

Cary: A couple of reasons. First, finer granularity implies finer control. You can think of our previous “non-power-aware” methodologies as treating the whole chip as one big chunk – the whole thing is either powered up (voltage applied to the power pins) or not (no voltage, not much to verify). This view was just fine until chips got big enough, and power became important enough, that we needed to think about shutting down portions of the chip while others were still functioning in order to improve energy efficiency. On top of that, since we’re still operating (more or less) in the era of Moore’s Law, complexity increases quickly enough that yesterday’s humongous design is tomorrow’s ho-hum chip. Thinking about finer granularity lets us get an early look at the complexities we’ll be facing in the near future.

Ed: You mentioned somewhere that broad and fine granularity are a lot like big rocks and finer gravel: you fill the bucket with the rocks first and then fill in with the gravel. That’s an interesting image…but what do you mean?

Cary: Well, it’s certainly important to understand basic methodology, in our case how to properly handle shutdown and voltage control, before we try to tackle the general case.  In developing algorithms, we always think of the N=1 case, then the N=2 case, before trying to generalize to case N.

Ed:  OK, I’m beginning to get it.  So why can’t we use broad granularity chunks only?

Cary:  While N=1 and N=2 are simpler cases, they don’t adequately represent the general case.  Realistically, we can’t always practically implement solutions for all N, but developing a solution for general case N usually allows us to see potential problems that we can address early, rather than in the form of bugs later on.

Ed: So, what do we gain from finer-grained low power strategies?

Cary: It’s hard to come up with a general figure of merit, but both hardware optimization and software optimization have shown us that optimizations at the structural level, independent of the “function” being implemented, can result in significant improvements in metrics like performance (or runtime) and area (or memory). There’s no reason to believe that low power structural optimizations independent of (or contained within) functional blocks couldn’t deliver significant energy savings as well.

Ed: OK, sounds reasonable. So all users should adopt this method, right? After all, it’s a sure way to optimize verification. What’s the catch?

Cary:  Complexity.  As with most optimizations, the flow and the tools become much more complex.  As usual, each user needs to determine the “right” level of optimization for his needs, combined with the availability of tools to pull it off with minimal (or at least contained) risk.

Ed: So it’s a lot of work – a job that calls for automating the process. But that’s a problem in itself, as you’ve said elsewhere, right? What are the roadblocks to automating these kinds of optimizations?

Cary: This is one of the big challenges – there are many issues associated with finer-grained approaches, including potentially serious problems in the physical implementation. Power grids are a sensitive part of every design, and careful analysis is required to minimize noise and switching problems that could have an impact throughout the design. The explosion in complexity, just in terms of power states, would force us to rethink the way we construct our flows, capture power intent, and verify low power operation.
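To make that “explosion in power states” concrete, here is a rough UPF sketch of a power state table for just three hypothetical supply ports. Even in this toy case the raw state space is the product of the per-supply states – 1 × 3 × 2 = 6 combinations – and that product grows geometrically as independently controlled domains are added, with every legal state and transition adding verification work.

    # Hypothetical supply ports: one always-on rail, one scalable, one switchable
    add_port_state VDD     -state {ON 1.0}
    add_port_state VDD_CPU -state {HI 1.0} -state {LO 0.8} -state {OFF off}
    add_port_state VDD_DSP -state {ON 1.0} -state {OFF off}

    # Power state table: only the named combinations are legal system states
    create_pst chip_pst -supplies {VDD VDD_CPU VDD_DSP}
    add_pst_state run   -pst chip_pst -state {ON HI  ON }
    add_pst_state slow  -pst chip_pst -state {ON LO  OFF}
    add_pst_state sleep -pst chip_pst -state {ON OFF OFF}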

Ed:  Then why not just avoid this approach altogether? 

Cary:  We may never get down to cell-level power control, and it may not even make sense.  However, thinking about the limitations of our current flows allows us to identify areas for improvement.  Ultimately, one of the things we’ll have to deal with is the current separation of RTL (function description) and UPF (power intent description).  We are still thinking of power behavior as something that is “programmed” on top of the existing function.  As things progress, we might find ourselves asking why power intent and functional intent are separate at all; the idea that every cell in our design is controlled by logical input pins AND power pins, and produces an output (or not) based on the state of the FULL SET of those control pins, isn’t so foreign.  But it does make every cell more complicated than it is today, and that complexity grows quickly.
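Cary’s point about the RTL/UPF split is easiest to see side by side: the RTL file describes only function, while a separate UPF file “programs” the power behavior on top of it. A hypothetical fragment of that separate file might look like the sketch below – note that none of these retention controls appear anywhere in the RTL itself. File and signal names are invented for illustration.

    # chip.upf -- power intent for the design in chip.v, kept in its own file.
    # Nothing in the RTL mentions power; this file layers it on afterward.
    create_power_domain PD_DSP -elements {u_dsp}
    create_supply_net VDD_AON

    # Retain the block's registers across shutdown; the save/restore controls
    # exist only in this power intent description, not in the RTL
    set_retention ret_dsp -domain PD_DSP -retention_power_net VDD_AON
    set_retention_control ret_dsp -domain PD_DSP \
        -save_signal {dsp_save high} -restore_signal {dsp_restore low}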

Ed:  So it’s more of an approach for consideration and experimentation, at this point?

Cary:  Fine-grained power control isn’t something we’re likely to implement anytime soon, but examining the related issues gets us started thinking about methodology and tool extensions to allow for better power optimization and more complete verification going forward.

Ed:  Thanks, Cary, for giving us some insight into granularity in low power verification. 
