 Real Talk

Author Archive

The Future of 3D Technologies is Fast and Heterogeneous

Thursday, August 27th, 2015

With the slowdown in Moore's law, technologists are now speculating on what future integrated circuits will look like. One constraint is the clock frequency of CMOS processors, which is topping out at around 4 GHz for high-end processors in the 100 W range, and at around 1-2 GHz for ~5 W processors used in laptop and mobile applications. With this constraint on clock speed, IC designers are adding more cores to increase processing throughput. These additional cores bring an increasing need for easy access to high-speed memory: performance gains will not materialize if multiple processors are contending for access to shared memory.
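To put rough numbers on that contention point, here is a back-of-the-envelope sketch in Python; the core counts, per-core bandwidth demand and shared-channel bandwidth are illustrative assumptions, not measurements of any real part.

```python
# Illustrative model of shared-memory contention.
# All figures are assumed for illustration, not measured values.
CORES = [1, 2, 4, 8, 16, 32]
PER_CORE_DEMAND_GBS = 4.0    # assumed bandwidth each core wants (GB/s)
SHARED_CHANNEL_GBS = 25.6    # assumed bandwidth of one shared DRAM channel (GB/s)

for n in CORES:
    total_demand = n * PER_CORE_DEMAND_GBS
    # With fair sharing, each core gets at most 1/n of the shared channel.
    delivered_per_core = min(PER_CORE_DEMAND_GBS, SHARED_CHANNEL_GBS / n)
    efficiency = delivered_per_core / PER_CORE_DEMAND_GBS
    print(f"{n:2d} cores: demand {total_demand:6.1f} GB/s, "
          f"per-core efficiency {efficiency:5.1%}")
```

Once aggregate demand exceeds what the shared channel can deliver, every additional core simply gets a smaller slice; that is the bottleneck that the wider, shorter interconnect of 3D-stacked memory is meant to relieve.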

One solution to this challenge is to combine new 3D-manufacturing technologies with new chip architectures to overcome the bandwidth-latency barrier in high-core-count multi-core chips.

The following will be the key enablers for 3D manufacturing: (more…)

DO-254 Without Tears

Thursday, April 23rd, 2015

This article was originally published on TechDesignForums and is reproduced here by permission.

At first glance the DO-254 aviation standard, ‘Design Assurance Guideline for Airborne Electronic Hardware’, seems daunting. It defines design and verification flows tightly with regard to both implementation and traceability.

Here’s an example of the granularity within the standard: a sizeable block addresses how you write state machines, the coding style you use and the conformity of those state machines to that style.
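To make that concrete, here is a minimal sketch in Python of the kind of rule such a checklist might be mapped to by a checking script; the ST_* naming convention, the regex and the treatment of every localparam as a state constant are assumptions made purely for illustration, not text from the standard.

```python
import re

# Hypothetical style rule, for illustration only: state constants must be
# localparams whose names start with "ST_". The convention, the regex and
# treating every localparam as a state constant are assumptions.
STATE_PARAM = re.compile(r"^\s*localparam\s+(\w+)\s*=", re.MULTILINE)

def check_state_naming(verilog_text):
    """Return (name, message) pairs for constants violating the convention."""
    return [(m.group(1), "state constant does not follow the ST_* convention")
            for m in STATE_PARAM.finditer(verilog_text)
            if not m.group(1).startswith("ST_")]

example = """
localparam ST_IDLE = 2'b00;
localparam ST_RUN  = 2'b01;
localparam DONE    = 2'b10;
"""

for name, msg in check_state_naming(example):
    print(f"{name}: {msg}")   # flags DONE
```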

This kind of stylistic, lower-level semantic requirement – and there are many within DO-254 – makes design managers stop and think. So it should. The standard is focused on aviation’s safety-critical demands, assessing the hardware design’s execution and functionality in appropriate depth right up to the consequences of a catastrophic failure.

Nevertheless, one pervasive and understandable concern has been the degree to which such a tightly drawn standard will affect, and be compatible with, established flows. This particularly goes for new entrants in avionics and its related markets.

Your company has a certain way of doing things, so you inevitably wonder how easily that can be adapted and extended to meet the requirements of DO-254… or will a painful and expensive rethink be necessary? Can we realistically do this? (more…)

The Evolution of RTL Lint

Thursday, November 27th, 2014

This article was originally published on TechDesignForums and is reproduced here by permission.

It’s tempting to see lint in the simplest terms: ‘I run these tools to check that my RTL code is good. The tool checks my code against accumulated knowledge, best practices and other fundamental metrics. Then I move on to more detailed analysis.’
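As a toy illustration of that 'simplest terms' view, the sketch below applies one classic rule (no blocking assignments inside a clocked always block) with a deliberately crude text-level check; it shows the idea only and is not how Ascent Lint or any production linter is implemented.

```python
import re

# Toy lint rule (illustration only): flag blocking assignments ("=") inside
# a clocked always block, where nonblocking assignments ("<=") are expected.
CLOCKED_ALWAYS = re.compile(r"always\s*@\s*\(\s*(pos|neg)edge")
BLOCKING_ASSIGN = re.compile(r"[^<>=!]=[^=]")   # '=' not part of <=, >=, ==, !=

def lint_blocking_in_clocked(lines):
    warnings = []
    in_clocked = False
    for lineno, line in enumerate(lines, 1):
        if CLOCKED_ALWAYS.search(line):
            in_clocked = True
        elif line.strip() == "end":
            in_clocked = False               # crude block tracking for the sketch
        elif in_clocked and BLOCKING_ASSIGN.search(line):
            warnings.append((lineno, "blocking assignment in clocked block"))
    return warnings

example = """always @(posedge clk)
begin
  q = d;
end""".splitlines()

for lineno, msg in lint_blocking_in_clocked(example):
    print(f"line {lineno}: {msg}")   # flags line 3
```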

It’s an inherent advantage of automation that it allows us to see and define processes in such a straightforward way. It offers control over the complexity of the design flow. We divide and conquer. We know what we are doing.

Yet linting has evolved and continues to do so. It covers more than just code checking. We began by verifying the 'how' of the RTL, but we have moved on to the 'what' and the 'why'. We use linting today to identify and confirm the intent of the design.

A lint tool, like our own Ascent Lint, is today one of the components of early-stage functional verification rather than a precursor to it, as was once the case.

At Real Intent, we have developed this three-stage process for verifying RTL: (more…)

ARM Fueling the SoC Revolution and Changing Verification Sign-off

Thursday, October 2nd, 2014

ARM TechCon was in Santa Clara this week and Real Intent was exhibiting at the event.  TechCon was enjoying its 10th anniversary and ARM was celebrating the fact that it is at the center of the System-on-Chip (SoC) revolution.

The SoC ecosystem spans the gamut of designs from high-end servers to low-power mobile consumer segments. A large and heterogeneous set of players (foundries, IP vendors, SoC integrators, etc.) has a stake in fostering the success of the ecosystem model. While the integrated device manufacturer (IDM) model has undeniable value in terms of bringing large resources to bear on technology barriers, one could argue that the rapid-fire smartphone revolution of the last five years owes much to the broad-based innovation enabled by the SoC ecosystem model. How are the changing dynamics of SoCs driving changes in verification requirements, tools and flows, and thereby changing the timing sign-off paradigm?

It’s Time to Embrace Objective-driven Verification

Thursday, September 18th, 2014

This article was originally published on TechDesignForums and is reproduced here by permission.

Consider the Wall Street controversy over High Frequency Trading (HFT). Set aside its ethical (and legal) aspects. Concentrate on the technology. HFT exploits customized IT systems that allow certain banks to place ‘buy’ or ‘sell’ stock orders just before rivals, sometimes just milliseconds before. That tiny advantage can make enough difference to the share price paid that HFT users are said to profit on more than 90% of trades.

Now look back to the early days of electronic trading. Competitive advantage then came down to how quickly you adopted an off-the-shelf, one-size-fits-all e-trading package.


Executive Insight: On the Convergence of Design and Verification

Thursday, August 7th, 2014

This article was originally published on TechDesignForums and is reproduced here by permission.

Sometimes it’s useful to take an ongoing debate and flip it on its head. Recent discussion around the future of simulation has tended to concentrate on aspects best understood – and acted upon – by a verification engineer. Similarly, the debate surrounding hardware-software flow convergence has focused on differences between the two.

Pranav Ashar, CTO of Real Intent, has a good position from which to look across these silos. His company is seen as a verification specialist, particularly in areas such as lint, X-propagation and clock domain crossing. But talk to some of its users and you find they can be either design or verification engineers.

How Real Intent addresses some of today’s challenges – and how it got there – offer useful pointers on how to improve your own flow and meet emerging or increasingly complex tasks.


Static Verification Leads to New Age of SoC Design

Thursday, July 3rd, 2014

SoC companies are coming to rely on RTL sign-off of many verification objectives as a means to achieve a sensible division of labor between their RTL design team and their system-level verification team. Given the sign-off expectation, the verification of those objectives at the RT level must absolutely be comprehensive.

Increasingly, sign-off at the RTL level can be accomplished using static-verification technologies. Static verification stands on two pillars: Deep Semantic Analysis and Formal Methods. With the judicious synthesis of these two, the need for dynamic analysis (a euphemism for simulation) gets pushed to the margins. To be sure, dynamic analysis continues to have a role, but increasingly as a backstop rather than as the main thrust of the verification flow. Even where simulation is used, static methods play an important role in improving its efficacy.

Deep Semantic Analysis is about understanding the purpose or role of RTL structures (logic, flip-flops, state machines, etc.) in a design in the context of the verification objective being addressed. This type of intelligence is at the core of everything that Real Intent does, to the extent that it is even ingrained into the company’s name. Much of sign-off happens based just on the deep semantic intelligence in Real Intent’s tools without the invocation of classical formal analysis.
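As a rough illustration of what 'understanding the role of a structure' can mean, the sketch below infers which flip-flops in a toy netlist act as a two-deep synchronizer by combining structure with clock-domain context; the data model and the recognition rule are simplifying assumptions for illustration, not Real Intent's algorithm.

```python
# Toy netlist: flop name -> (data input, clock domain). Assumed for illustration.
flops = {
    "tx_reg":   ("tx_data",  "clkA"),
    "sync_ff1": ("tx_reg",   "clkB"),
    "sync_ff2": ("sync_ff1", "clkB"),
    "rx_reg":   ("sync_ff2", "clkB"),
}

def is_synchronizer_pair(first, second):
    """True if first -> second looks like a 2-flop synchronizer: both flops in
    the same domain, directly chained, with 'first' fed from another domain."""
    d1_in, d1_clk = flops[first]
    d2_in, d2_clk = flops[second]
    if d1_clk != d2_clk or d2_in != first:
        return False
    src_clk = flops.get(d1_in, (None, None))[1]
    return src_clk is not None and src_clk != d1_clk

for a in flops:
    for b in flops:
        if a != b and is_synchronizer_pair(a, b):
            print(f"{a} + {b}: inferred to be a clock-domain synchronizer")
```

The point is that the same two flip-flops mean something entirely different depending on context, and that context is what determines which checks apply to them.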


Reset Optimization Pays Big Dividends Before Simulation

Thursday, June 26th, 2014

This article was originally published on TechDesignForums and is reproduced here by permission.

Reset optimization is another one of those design issues that has leapt in complexity and importance as we have moved to ever more complex system-on-chips. Like clock domain crossing, it is one that we need to resolve to the greatest degree possible before entering simulation.

The traditional approach to resets was to route the reset to every flop. Back in the day, you might have done this even though it has always entailed a large routing overhead. It helped avoid X 'unknown' states arising during simulation for every memory element that was not reinitialized at restart, and it was a hedge against optimistic behavior by simulation that could hide bugs.
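A minimal sketch of that optimism, using a three-valued view of an uninitialized select signal; it illustrates the general X-optimism issue rather than the exact semantics of any particular simulator.

```python
# Illustration of X-optimism: an unknown ('x') select can be silently
# resolved by RTL-style evaluation, hiding a real initialization bug.
X = "x"

def mux_rtl_style(sel, a, b):
    """RTL-style 'if (sel) y = a; else y = b;': an unknown select simply
    falls into the else branch, producing a clean-looking value."""
    return a if sel == 1 else b

def mux_x_aware(sel, a, b):
    """X-aware evaluation: with an unknown select, the output is unknown
    unless both data inputs agree."""
    if sel == X:
        return a if a == b else X
    return a if sel == 1 else b

sel = X   # models a flop that was never reset
print("RTL-style result:", mux_rtl_style(sel, 1, 0))   # looks like a clean 0
print("X-aware result:  ", mux_x_aware(sel, 1, 0))     # really unknown
```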

Our objectives today, though, include not only conserving routing resources but also catching problems as we bring up the RTL, to avoid infeasible simulation run times at both RTL and, worse still, the gate level.

There is then one other important factor for reset optimization: its close connection to power optimization.

Matching power and performance increasingly involves the use of retention cells. These retain the state of parts of the design even when they appear to be powered off: in fact, to allow a faster restart, they must continue to consume static power even when the SoC is 'at rest'. So controlling the use of retention cells cuts power consumption and extends battery life. (more…)
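For a rough feel for that trade-off, the arithmetic sketch below compares the energy of holding state in retention cells during sleep against re-initializing after a full power-down; every number in it is an assumed, illustrative figure rather than data for any real process or SoC.

```python
# Illustrative-only figures: retention leakage and restore energy are assumptions.
RETENTION_LEAK_UW = 50.0     # assumed static power of the retention cells (uW)
RESTORE_ENERGY_UJ = 200.0    # assumed energy to rebuild state after a full power-down (uJ)

def sleep_energy_uj(sleep_ms, use_retention):
    if use_retention:
        return RETENTION_LEAK_UW * sleep_ms * 1e-3   # uW * ms -> uJ
    return RESTORE_ENERGY_UJ                         # pay the restore cost instead

for sleep_ms in (10, 100, 1000, 10000):
    with_ret = sleep_energy_uj(sleep_ms, True)
    without = sleep_energy_uj(sleep_ms, False)
    better = "retention" if with_ret < without else "full power-down"
    print(f"sleep {sleep_ms:6d} ms: retention {with_ret:7.1f} uJ, "
          f"power-down {without:6.1f} uJ -> {better}")
```

The crossover point is exactly the kind of decision a power-intent review has to weigh, which is why retention cells are applied selectively rather than everywhere.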

What’s Happening with Formal Verification?

Thursday, May 8th, 2014

With the pending acquisition of Jasper Design Automation by Cadence, there is renewed discussion on the state of formal verification of RTL designs. It has become a mainstream technology and no longer requires a PhD to use. It is now considered a part of the verification process. What has changed? The following is a perspective from Pranav Ashar, CTO at Real Intent.

The capacity of formal tools has certainly improved by orders of magnitude in the last 20 years, and has done so far beyond the basic increase in the speed of computers. This makes everything else I am saying here possible. The second very crucial change is that people have figured out where and how to apply formal so that it makes a real difference. It turns out that the best places are those where the scope is narrow and where there is a full understanding of the failure modes that need to be analyzed.

The hurdles to ease of use of assertion-based formal analysis have been (1) coming up with the assertions, (2) controlling analysis complexity, (3) debug, and (4) the absence of a definitive outcome. Narrowly scoped verification objectives allow assertions to be generated automatically, keep formal analysis complexity manageable, provide precise debug information and always return actionable feedback.

Clock domain crossing (CDC) analysis is a great example of how this can be done. The failure mode that needs to be covered involves a confluence of functionality and timing, which is where simulation falls short. Also, the logical scope of the failure analysis is local, i.e. the complexity of formal analysis does not directly scale with chip size.  Finally, because the failure mode is well understood, the checks are generated automatically and the debug feedback is in the context of the designer’s intent. These are powerful reasons for using formal methods. Beyond CDC, there is an increasing number of such high-value verification objectives where static analysis and formal methods are essential and viable.
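A simplified sketch of that narrow-scope idea, assuming a toy netlist annotated with clock domains: every crossing is found structurally and a check is emitted for it automatically, with no user-written assertions, and each check is local to one crossing rather than to the whole chip. This illustrates the principle only, not how Real Intent's tools are implemented.

```python
# Toy design: flop name -> (clock domain, source flops feeding its data input).
design = {
    "a_reg": ("clkA", []),
    "b_reg": ("clkB", ["a_reg"]),   # crosses clkA -> clkB
    "c_reg": ("clkB", ["b_reg"]),   # stays within clkB
}

def generate_cdc_checks(design):
    """Emit one synchronization check per clock-domain crossing."""
    checks = []
    for flop, (domain, sources) in design.items():
        for src in sources:
            src_domain = design[src][0]
            if src_domain != domain:
                checks.append(f"check_synchronized: {src} [{src_domain}] "
                              f"-> {flop} [{domain}]")
    return checks

for check in generate_cdc_checks(design):
    print(check)   # one automatically generated check, scoped to one crossing
```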

On always returning actionable feedback: the formal community has been too focused on improving the core engines of formal analysis. While that is essential, of course, the community has failed to realize that there is a gold mine of information in the intermediate (bounded) results produced during formal analysis that can be fed back to the user. This feedback provides a better handle on debug, on coverage and on actionable next steps to improve verification confidence. We are missing out on some low-hanging fruit by not exploiting this information, especially in the context of narrowly scoped verification objectives.

The simulation of an SoC or any other system is always a fallback option. You would always prefer an analytical model if it serves the purpose. The more you understand a problem, the more you can apply a precise specification and an analytical flavor to the analysis, i.e. formal analysis. As SoC design steps become better understood, more areas are becoming amenable to formal solutions.

The status today is that formal analysis is already being used under the hood for high-value sign-off verification objectives such as the verification of clock domain crossings, power management, design constraints and timing exceptions, power-on and dynamic reset, X-effects and a few more. More high-value verification objectives will continue to be identified where the narrow-scope template can be applied to make formal analysis viable and simulation-like in its ease of use. Formal verification methods for SoCs are firmly anchored in terms of demonstrated value and I see a bright future ahead for them.

For further comments by Dr. Ashar, view the following video interview. He discusses the keynote speech he gave at the FMCAD 2013 conference on the topic "Static verification based sign-off is a key enabler for managing verification complexity in the modern SoC."


