Dr. Pranav Ashar
Dr. Pranav Ashar is chief technology officer at Real Intent. He previously worked at NEC Labs developing formal verification technologies for VLSI design. With 35 patents granted and pending, he has authored about 70 papers and co-authored the book ‘Sequential Logic Synthesis’.
June 26th, 2014 by Dr. Pranav Ashar
This article was originally published on TechDesignForums and is reproduced here by permission.
Reset optimization is another one of those design issues that has leapt in complexity and importance as we have moved to ever more complex system-on-chips. Like clock domain crossing, it is one that we need to resolve to the greatest degree possible before entering simulation.
The traditional approach to resets was to route a reset signal to every flop. Back in the day, you might have done this even though it has always entailed a large routing overhead. It helped avoid X ‘unknown’ states arising during simulation for every state element that was not reinitialized at restart, and was a hedge against optimistic behavior by simulation that could hide bugs.
Our objectives today, though, include not only conserving routing resources but also catching problems as we bring up RTL for simulation, to avoid infeasible run times both at RTL and – worse still – at the gate level.
There is then one other important factor for reset optimization: its close connection to power optimization.
Matching power and performance increasingly involves the use of retention cells. These retain the state of design elements even when a block appears to be powered off: in fact, to allow a faster restart bring-up, they must continue to consume static power even when the SoC is ‘at rest’. So, controlling the use of retention cells cuts power consumption and extends battery life. Read the rest of Reset Optimization Pays Big Dividends Before Simulation
June 19th, 2014 by Sarath Kirihennedige
This article was originally published on TechDesignForums and is reproduced here by permission.
Thanks to the widespread reuse of intellectual property (IP) blocks and the difficulty of distributing a system-wide clock across an entire device, today’s system-on-chip (SoC) designs use a large number of clock domains that run asynchronously to each other. A design involving hundreds of millions of transistors can easily incorporate 50 or more clock domains and hundreds of thousands of signals that cross between them.
Although the use of smaller individual clock domains helps improve verification of subsystems apart from the context of the full SoC, the checks required to ensure that the full SoC meets its timing constraints have become increasingly time consuming.
Signals involved in clock domain crossing (CDC), for example where a flip-flop driven by one clock signal feeds data to a flop driven by a different clock signal, raise the potential for metastability and data loss. Tools based on static verification technology exist to perform CDC checks and recommend the inclusion of more robust synchronizers or other changes to remove the risk of metastability and data loss.
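The metastability risk that such synchronizers mitigate is usually quantified with the standard exponential MTBF model. The sketch below is a minimal illustration of that model; all device constants are assumptions picked for illustration, not vendor data. It shows why the extra resolution time bought by a second synchronizer flop improves MTBF astronomically:

```python
import math

def mtbf_seconds(resolve_time_s, tau_s, t_window_s, f_clk_hz, f_data_hz):
    """Standard metastability MTBF model:
    MTBF = e^(t_resolve / tau) / (T_w * f_clk * f_data)
    where tau is the settling time constant and T_w the capture window.
    """
    return math.exp(resolve_time_s / tau_s) / (t_window_s * f_clk_hz * f_data_hz)

# Assumed, illustrative numbers (not from any real process library):
tau = 25e-12      # settling time constant
t_w = 100e-12     # metastability capture window
f_clk = 500e6     # receiving-domain clock frequency (2 ns period)
f_data = 50e6     # toggle rate of the crossing signal

# One flop gives roughly one clock period to resolve; a second
# synchronizer flop adds a full extra period of settling time.
one_flop = mtbf_seconds(2e-9, tau, t_w, f_clk, f_data)
two_flop = mtbf_seconds(4e-9, tau, t_w, f_clk, f_data)

print(f"1-flop MTBF: {one_flop:.3e} s")
print(f"2-flop MTBF: {two_flop:.3e} s")
```

Because resolution time appears in the exponent, each added synchronizer stage multiplies the MTBF by an enormous factor, which is why the two-flop synchronizer is the default recommendation.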
June 12th, 2014 by Graham Bell
Real Intent had a photo booth at its exhibit in San Francisco at the Design Automation Conference. We thought it would be cool to give a photo souvenir of the 51st conference for anyone who strolled by and to celebrate the 2014 FIFA World Cup. On hand to work the booth was Jeremy who helped everyone with funny props or choosing the right World Cup team jersey.
Between Jeremy and myself we were able to get some great photos. Here are just a few for your viewing pleasure. And at the bottom of the page, you can click on the link to see all the blackmail photos for your fellow conference attendees and exhibitors. Enjoy!
June 5th, 2014 by Graham Bell
Thanks to everyone that came to the 2014 Design Automation Conference. It was a successful show with maximum traffic on Tuesday afternoon. At the Real Intent booth we were giving away Roses (yes they were real!) and had a photo booth as well. Visitors could dress up in world-cup soccer jerseys and hoist the World Cup 2014 Trophy.
May 28th, 2014 by Graham Bell
Real Intent is bringing lots of valuable information and fun to the Design Automation Conference (DAC) 2014 in San Francisco June 2-4, 2014, at Booth #1825:
Click here to see a very informative five-minute video by Real Intent President & CEO Prakash Narain. In it he references Real Intent’s mission to make RTL signoff more efficient and previews four products it will highlight at DAC 2014 that advance the state of the art.
May 22nd, 2014 by Graham Bell
Years ago when Real Intent began, 10 million logic gate designs were considered “top of the line.” Today you might be dealing with billion-gate designs – significantly more complicated across a far wider scope of applications. The sheer complexity leads to a whole new host of verification challenges. Because sign-off is an iterative process, you have to deal with things like capacity, performance, power and timing issues, and engineering effort at each step. There’s a real need for toolsets that handle functional verification tasks prior to simulation and synthesis to avoid the exorbitant cost of silicon failure – so much so, that even Synopsys is getting onboard with a new verification suite.
Real Intent is committed to delivering the industry’s best possible software tools for verifying next-generation digital designs for FPGAs and complex SoCs. Our Ascent products for static functional verification prior to synthesis, and our Meridian products for advanced sign-off verification of CDC and timing constraints, uniquely address specific SoC sign-off issues. They save design and verification engineers a lot of time and effort, and deliver 10x better design quality and productivity compared with alternative methods. We also work closely with industry leaders like Defacto, Calypto and our newest industry partner, MathWorks, to ensure quality and compatibility.
May 15th, 2014 by Ramesh Dewangan
In my last article, Redefining chip complexity, I touched on the challenge of using asynchronous interfaces for the integration of the various IPs in SoC design. In this posting, I want to drill down to expose more of the verification issues and to suggest the right approach to handling them.
Asynchronous clock domain crossing and the associated metastability issues are a well-researched problem. There are multiple verification solutions in the market and they do a fair job of reporting issues and pin-pointing what you can do next. So what’s new?
The asynchronous challenge has acquired additional dimensions. Read the rest of Gigagate SoC Designs and IP Growth Challenge Verification
May 8th, 2014 by Dr. Pranav Ashar
With the pending acquisition of Jasper Design Automation by Cadence, there is renewed discussion of the state of formal verification for RTL design. It has become a mainstream technology and no longer requires a PhD to use. It is now considered a part of the verification process. What has changed? The following is a perspective by Pranav Ashar, CTO at Real Intent.
The capacity of formal tools has certainly improved by orders of magnitude in the last 20 years, and done so far beyond the basic increase in speed of computers. This makes everything else I am saying here possible. The second very crucial change is that people have figured out where and how to apply formal so it makes a real difference. Turns out that the best places are where the scope is narrow and where there is a full understanding of the failure modes that need to be analyzed.
The hurdles to the ease of use of assertion-based formal analysis have been (1) coming up with the assertions, (2) controlling analysis complexity, (3) debug, and (4) the absence of a definitive outcome. Narrowly scoped verification objectives allow assertions to be generated automatically, keep formal analysis complexity manageable, provide precise debug information, and always return actionable feedback.
Clock domain crossing (CDC) analysis is a great example of how this can be done. The failure mode that needs to be covered involves a confluence of functionality and timing, which is where simulation falls short. Also, the logical scope of the failure analysis is local, i.e. the complexity of formal analysis does not directly scale with chip size. Finally, because the failure mode is well understood, the checks are generated automatically and the debug feedback is in the context of the designer’s intent. These are powerful reasons for using formal methods. Beyond CDC, there is an increasing number of such high-value verification objectives where static analysis and formal methods are essential and viable.
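A toy illustration of such a narrowly scoped, automatically generated check: for a multi-bit CDC crossing, tools commonly require the bus to be Gray-coded so that only one bit changes per transition, bounding any sampling skew to one count. The sketch below is a hypothetical stand-in for what a real formal engine would discharge; it proves the property exhaustively for a small encoder, which is feasible precisely because the scope is local and does not scale with chip size:

```python
def gray(n: int) -> int:
    """Binary-to-Gray conversion: n XOR (n >> 1)."""
    return n ^ (n >> 1)

def check_gray_property(width: int) -> bool:
    """Exhaustively prove that consecutive encoder outputs (including
    the wrap-around) differ in exactly one bit -- the proof obligation
    a CDC tool would generate for a Gray-coded crossing."""
    size = 1 << width
    for i in range(size):
        diff = gray(i) ^ gray((i + 1) % size)  # bits that changed
        if bin(diff).count("1") != 1:
            return False                        # counterexample found
    return True

print(check_gray_property(4))  # exhaustive over all 16 states
```

Real formal engines use SAT/BDD reasoning rather than enumeration, but the shape of the result is the same: a definitive proof or a concrete counterexample, with no assertion-writing effort from the designer.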
On always returning actionable feedback, the formal community has been too focused on improving the core engines in formal analysis. While that is essential, of course, the community has failed to realize that there is a gold mine of information in the intermediate (bounded) results produced during formal analysis that can be fed back to the user. This feedback provides a better handle on debug, coverage and on actionable next steps to improve verification confidence. We are missing out on some low hanging fruit by not exploiting this information, especially in the context of narrowly scoped verification objectives.
The simulation of an SoC or any other system is always a fallback option. You would always prefer an analytical model if it serves the purpose. The more you understand a problem, the more you can apply a precise specification and an analytical flavor to the analysis, i.e. formal analysis. As SoC design steps become better understood, more areas are becoming amenable to formal solutions.
The status today is that formal analysis is already being used under-the-hood for high-value sign-off verification objectives like the verification of clock domain crossings, power management, design constraints and timing exceptions, power-on and dynamic reset, X-effects and a few more. More high-value verification objectives will continue to be identified where the narrow-scope template can be applied to make formal analysis viable and simulation-like in its ease of use. Formal verification methods for SoCs are firmly anchored in terms of demonstrated value and I see a bright future ahead for them.
For further comments by Dr. Ashar, view the following video interview. He discusses the keynote speech he gave at the FMCAD 2013 conference on the topic “Static verification based sign-off is a key enabler for managing verification complexity in the modern SoC.”
May 1st, 2014 by Graham Bell
It’s only one month until the Design Automation Conference in San Francisco, June 1-5, and the process of getting ready is keeping me BUSY. This week, I would like to highlight the DVCon 2014 Best Oral Presentation by Kelly D. Larson from NVIDIA on “Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions.”
This paper describes an entirely different way to use these same SVA assertions. While the standard use of SystemVerilog assertions is typically targeted towards DESIGN QUALITY, this paper describes how to effectively use assertions to target individual TEST QUALITY. In many cases the same SystemVerilog assertions which were written for measuring design quality can also be used to measure test quality, but it’s important to realize that the fundamental goal is quite different. Read the rest of Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions
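The distinction can be sketched in a few lines. The mechanics below are an assumed illustration, not the paper’s code: the same request/acknowledge assertion yields both a failure count (design quality) and an activation count, where a test that never activates the assertion passes trivially and has told you nothing:

```python
def req_ack_assertion(trace):
    """Toy assertion checker: every 'req' must be followed by 'ack'
    on the next cycle. Returns (activations, failures) so the same
    assertion measures both test quality and design quality."""
    activations = failures = 0
    for cycle, nxt in zip(trace, trace[1:]):
        if cycle.get("req"):
            activations += 1          # antecedent seen: the test exercised it
            if not nxt.get("ack"):
                failures += 1         # consequent violated: a design bug
    return activations, failures

# A test that genuinely exercises the handshake twice:
good_test = [{"req": 1}, {"ack": 1}, {}, {"req": 1}, {"ack": 1}]
# A test that "passes" but never activates the assertion at all:
weak_test = [{}, {}, {}]

print(req_ack_assertion(good_test))   # exercised twice, no failures
print(req_ack_assertion(weak_test))   # zero activations: poor test quality
```

Zero failures on the weak test is vacuous success: monitoring activations at runtime, as the paper proposes for SVA, is what exposes the difference between the two tests.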
May 1st, 2014 by Graham Bell
Whether you are an entrepreneur or have even thought about being one, you know that there are two things to worry about – time and money.
Everyone is welcome at the upcoming installment of the popular EDAC – Jim Hogan Emerging Companies Series, where Jim and other panelists (Amit Gupta, President and CEO, Solido Design Automation, Inc. and Vishal Kapoor, Principal, Three Legged Stool, LLC.) will discuss the sources of money – from crowdfunding to institutional investment to intrapreneurial capital. The panel will debate the opportunities and risks associated with those sources and which one might be right for you.
Every entrepreneur believes that they know their product, but often only from a technical capability perspective. How you position, communicate and sell your product may be more important than its underlying capability. Without a keen understanding of what you are building, how capital efficient it is and who needs it, the chances of getting that capital are significantly diminished.