 Real Talk

Archive for July, 2010

Leadership with Authenticity

Monday, July 26th, 2010

An interesting title… what is leadership with authenticity?

Let's find out. First, let's break the phrase down, starting with leadership.

Is leadership telling people what they want to hear to keep them going in the direction you think they should go? Or is it simply taking flight and hoping that people follow? Wikipedia defines it as “a process of social influence in which one person can enlist the aid and support of others in the accomplishment of a common task…”

Leadership is a big responsibility, and it is something that must be done with finesse. If everyone is going in one direction and you decide to change course dramatically, the result can be very painful.

I think of it as steering a large passenger liner at full speed. It takes a lot to turn a ship of that size, and turning too abruptly has huge consequences. There would be chaos, because people wouldn't know what is happening, what to do, or what to expect. Anyone in the wrong place could be in real danger; unprepared, they might fall off the ship along with valuable cargo… and there goes your crew!

If chaos is the intention, then you have accomplished your goal, but it usually takes a lot to right the ship again, and sometimes it is impossible. So remember: changing course abruptly is bad practice, whether you are steering a ship or running a business.

Let’s bring Authenticity into the picture…What is Authenticity?

Again, I refer back to Wikipedia for a definition: it is “a particular way of dealing with the external world, being faithful to internal rather than external ideas.”

So authenticity means to uncover your true self.  “We live in a culture that is starving for authenticity.  We want our leaders, co-workers, friends, family members, and everyone else that we interact with to tell us the truth and to be themselves.  Most important, we want to have the personal freedom and confidence to say, do and be who we really are, without worrying about how we appear to others and what they might think or say about us.” (Mike Robbins)

Sadly, however, even though we may say we want to live in a way that is true to our deepest passions, beliefs, and desires, most of us don’t.   WHY? Starting at a very early age, we are taught by our parents, spouses, teachers, friends, co-workers, politicians and the media that it’s more important to be liked and to fit in than it is to be who we truly are.  In addition, many of us assume that who we are is not good enough and therefore we’re constantly trying to fix ourselves or to act like others who we think are better than us.

Oscar Wilde, the famous author and poet, said: “Be yourself; everyone else is already taken.” To me, this summarizes authenticity.

Bringing the two together is an art and a process that you develop along the way. I believe the most successful leaders are the authentic ones. We are all unique, so our styles differ, but if the basic foundation is authenticity, being real, that is a fantastic start. Enlisting the aid and support of others is more effective when you do it in your own style. Have fun! Lead with authenticity.

Clock Domain Verification Challenges: How Real Intent is Solving Them

Monday, July 19th, 2010

With chip-design risk at worrying levels, a verification methodology based on linting and simulation alone does not cut it. Real Intent has demonstrated the benefit of identifying specific sources of verification complexity and deploying automatic, customized technologies to tackle them surgically. At first glance, automatic and customized don't go together: automatic maximizes productivity in setup, analysis, and debug, while customized ensures comprehensiveness. That is the challenge for clock-domain verification, as it is for the plethora of other failure modes in modern chips, and clock-domain verification is certainly a case in point. Its complexity has grown tremendously:

Signal crossings between asynchronous clock domains: The number of asynchronous domains approaches 100 in high-end SoCs optimized for performance or power. The chip is too large to distribute a single clock to all its parts, and an SoC is more a collection of sub-components, each with its own clock. Given the large number of domains and crossings, the myriad protocols for implementing the crossings, and the correspondingly large number of failure modes, writing templates to cover every scenario is very expensive. Template-based linting on chips with millions of gates is very slow, taking days, and the report from a template-based analysis is so voluminous that teams struggle to analyze it manually, allowing real failures to be overlooked.

Widely disparate and dynamic clock frequencies: Analyzing crossings for data integrity and data loss under all scenarios is non-trivial and beyond linting alone.
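To see why data loss depends on the frequency ratio, here is a toy sketch (in Python rather than HDL, and purely illustrative; the waveforms and names are mine, not a Real Intent analysis): a single-cycle pulse generated in a fast domain is simply missed by a slow domain sampling at one third the rate, unless the pulse is stretched or handshaken.

```python
# Illustrative only: a one-cycle pulse in a fast clock domain,
# sampled by a slow domain running at 1/3 the frequency.
fast_pulse = [0, 0, 1, 0, 0, 0, 0, 0, 0]   # pulse on fast cycle 2
ratio = 3
seen_by_slow = fast_pulse[::ratio]          # slow domain samples every 3rd fast cycle
print(seen_by_slow)                         # [0, 0, 0] -- the pulse is lost

# Stretching the pulse to at least `ratio` fast cycles (as a handshake or
# feedback synchronizer effectively does) guarantees the slow domain sees it.
stretched = [0, 0, 1, 1, 1, 0, 0, 0, 0]
print(stretched[::ratio])                   # [0, 1, 0] -- captured
```

A real crossing must of course handle ratios that are unknown or dynamic, which is exactly why enumerating them by hand does not scale.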

Proliferation of gated clocks: Power-management and mode-specific gated clocks are now common, introducing a manifold verification problem. (1) The clock setup must be correct for verification to be meaningful; detailed setup analysis highlights errors in the clock distribution or the environment specification. (2) Designs with gated clocks must be functionally verified. (3) The variety of gated-clock implementations creates a variety of glitching possibilities. Clock glitches are very hard to diagnose, so you want to know about the possibility as early as possible. Given the variety of gated-clock types and glitching modes, a template-based approach is a recipe for productivity loss and slow analysis.
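One glitching mode can be sketched on the back of an envelope (in Python, as a waveform table; the sampling grid and signal names are my own illustration, not a Real Intent check): gating a clock with a bare AND gate lets an enable that changes during the clock's high phase produce a runt pulse, while a latch-based clock gate holds the enable stable until the next low phase.

```python
# Two samples per clock half-phase; illustrative waveforms, not a real tool.
clk = [0, 0, 1, 1, 0, 0, 1, 1]
en  = [0, 0, 0, 1, 1, 1, 1, 1]        # enable asserts mid-way through a high phase

and_gated = [c & e for c, e in zip(clk, en)]   # bare AND gate

latch_gated, q = [], 0
for c, e in zip(clk, en):
    if c == 0:                         # latch is transparent only while clk is low
        q = e
    latch_gated.append(c & q)

print(and_gated)    # [0, 0, 0, 1, 0, 0, 1, 1] -- runt pulse at sample 3
print(latch_gated)  # [0, 0, 0, 0, 0, 0, 1, 1] -- clean: the pulse waits a phase
```

The runt pulse at sample 3 is exactly the kind of half-width edge that downstream flops may or may not capture, which is why glitches are so hard to diagnose after the fact.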

Reset distribution: Power-up reset is now much more complex, optimized for power and routing. Full verification of the reset setup prior to subsequent analysis is essential.

Timing optimization: Optimizations such as retiming may violate design principles, creating glitch potential at the gate level even when none existed in RTL. Glitch analysis must be an integral part of verification, and the tool must operate on gates as well as RTL. Template methods make this harder, since multiple templates may be required to support RTL, gates, and mixed languages.

Clock distribution: Issues that used to be second order, such as clock jitter in data and control transfers, have more impact in deep-submicron (DSM) processes. Even synchronous crossings must now be designed carefully and verified comprehensively.

Full-chip analysis: Speed, scalability, precision, and redundancy control become key considerations in full-chip analysis of designs with many levels of hierarchy and 100 million gates.

Real chip respins are revealing: (1) An asynchronous reset control crossed clock domains but was not synchronously de-asserted, causing a glitch on the control lines of an FSM. (2) An improper FIFO protocol controlling an asynchronous data crossing caused a read-before-write and a functional failure. (3) Reconvergence of non-gray-coded, synchronized control signals at an FSM caused cycle jitter and an incorrect transition. (4) A glitch in a logic cone on an asynchronous crossing path was latched into the destination domain, corrupting the captured data. (5) Gating logic inserted by power-management tools resulted in a clock glitch.
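Failure (3) is worth a closer look. When a multi-bit control value crosses domains through independent per-bit synchronizers, a binary increment that flips several bits at once can be captured as a phantom intermediate value; gray coding flips exactly one bit per increment, so the destination can only ever see the old or the new value. A small Python check of that property (illustrative only, not part of any tool):

```python
def gray(n):
    # standard binary-to-gray conversion
    return n ^ (n >> 1)

# Binary 3 -> 4 is 011 -> 100: all three bits flip, so three independently
# synchronized bits could momentarily combine into any of 8 values.
assert bin(3 ^ 4).count("1") == 3

# Gray-coded, every increment changes exactly one bit.
for n in range(16):
    assert bin(gray(n) ^ gray(n + 1)).count("1") == 1
print("gray code: one bit per increment")
```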

CDC verification is not solved adequately by simulation or linting. It has become a true showstopper, and an effective solution is a must-have. Real Intent's approach understands the failure modes from first principles and develops symbiotic structural and formal methods to cover them comprehensively and precisely. Structural and formal methods combine to check the clock and reset setup, metastability errors, glitching, data integrity and loss, and signal de-correlation. This approach allows us to auto-infer designer intent and the checks for each crossing or clock/reset distribution. As a result, our structural analysis runs 10x faster and does not require the designer to develop templates. Formal methods analyze for failures under all scenarios efficiently and comprehensively, without a laborious enumeration of scenarios; for example, our free-running-clock feature checks for data loss across all frequency ratios. We complete the solution with an automatic link to simulation that models metastability and adds checks to the testbench. These solutions are offered in Real Intent's Meridian product family.

Building Strong Foundations

Monday, July 12th, 2010

I recently joined Real Intent, bringing over 10 years of experience developing and supporting assertion-based methodologies, and I have seen the technology move from research toward the mainstream. Formal technologies have proven to have a lot of value for functional verification and coverage, but having to learn evolving assertion languages and techniques has slowed adoption. I like Real Intent's approach of automating the verification effort.

In the very early stages of design, linting is a basic step. Lint checkers for HDL have been around for some time and continue to become more sophisticated. Ascent™ Lint runs very fast because the checks are all static, and the user can easily configure which checks are desired.

In the next stage, also early in the process but after linting, Real Intent has what is my favorite tool: the Implied Intent Verifier (IIV). It adapts formal-verification techniques to automatically detect issues that can result in bugs that are difficult to trigger and detect in simulation. Think of it as automatically generated assertions: formal verification without having to write assertions. It is all automatic, and IIV goes beyond static linting to detect bugs that require sequential analysis.

An example of a significant IIV check is the one for state-machine deadlocks. Deadlocks are the type of symptom that foreshadows bugs that can result in product recalls if not found, and finding them often depends on whether the testbench author thought to test the scenario. IIV detects deadlocks within one FSM and between two FSMs, without the need to write any testbench or assertions. For example,

[Figure lisa_article_2: state machine A, waiting on a signal from state machine B]

[Figure lisa_article_3: state machine B, waiting on a signal from state machine A]

This is the classic example of two state machines waiting on one another. In this case a single-state deadlock (SSD) is reported for both state machines, and the deadlocked state is state 00, because state machine A is waiting on a signal from state machine B and vice versa.

Many other errors with the same root cause are also reported. One of the unique features of IIV is that it distinguishes secondary failures: the report focuses your effort on the root cause, in this case the SSD, and you can ignore the secondary failures.

While this example was deliberately simple, you can imagine a similar scenario in protocols. Take, for example, a peer-to-peer handshake where both sides request to transmit at the same time, causing both to go to a state where each waits for an acknowledge from its peer. That would be a fundamental state-machine design issue, and simulations would pass unless the corner case where both request simultaneously is tested. As the simple example above shows, this can also happen as the result of a simple typo.
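The essence of what such a check computes can be sketched in a few lines of Python (a hypothetical toy model of the figures above, not IIV's actual algorithm): each machine waits in state 0 for a ready signal the other asserts only in state 1, and exploring the reachable product state space exposes the stuck state.

```python
def next_state(s, peer_ready):
    # WAIT (0) advances to GO (1) only when the peer is ready; GO returns to WAIT.
    return 1 if (s == 0 and peer_ready) else 0

def ready(s):
    return s == 1           # each machine asserts ready only in GO

# Explore the reachable product states of machines A and B from (0, 0).
seen, frontier = set(), [(0, 0)]
while frontier:
    a, b = frontier.pop()
    if (a, b) in seen:
        continue
    seen.add((a, b))
    frontier.append((next_state(a, ready(b)), next_state(b, ready(a))))

# A single-state deadlock: a reachable state whose only successor is itself.
deadlocks = [(a, b) for (a, b) in seen
             if (next_state(a, ready(b)), next_state(b, ready(a))) == (a, b)]
print(deadlocks)    # [(0, 0)] -- both machines stuck in state 00
```

No testbench drives this: the deadlock falls out of the state space itself, which is why a formal check finds it regardless of what stimulus anyone thought to write.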

You can get a fast start in functional verification by exploiting the verification features provided in Real Intent's tool suite. Common bugs are quickly and automatically weeded out, building a strong foundation for the real work of verifying your specific design intent. Check out Real Intent's complete product line at www.realintent.com.


Celebrating Freedom from Verification

Monday, July 5th, 2010

Happy Fourth of July!  If you’re celebrating Independence Day today, chances are you have the time to do so because of a set of tools that freed you from the drudgery of endless verification cycles.

Yes, let's give thanks as an industry for the plethora of commercial tools that reduce the time consumed by laborious verification tasks. They take many forms today, from hardware emulation and formal verification to simulation and acceleration, to name just a few. All have been developed to shrink the verification portion of the design cycle (purported to be in the range of 70%) and to lessen the burden you carry.

Each year the verification challenge gets worse as SoC design sizes and complexity increase, stressing and periodically breaking existing design flows. New data shows that the average design now exceeds 10 million ASIC-equivalent gates (don't get me started on what an ASIC-equivalent gate is; I'll save that for another post), with individual blocks running between two and six million ASIC-equivalent gates.

By an old rule of thumb, exercising each and every one of those gates would require a number of cycles equal to the square of the number of gates. For a 10-million-gate design, that is 100 trillion cycles (a one followed by fourteen zeros). That's a lot of verification cycles and a lot of headaches.
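The arithmetic, for the record (the rule of thumb is the post's own; the numbers are just plugged in):

```python
gates = 10_000_000                 # ~10 million ASIC-equivalent gates
cycles = gates ** 2                # rule of thumb: cycles ~ (number of gates)^2
print(f"{cycles:,}")               # 100,000,000,000,000 -- 100 trillion cycles
```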

And, lest we forget, the time-to-market push continues unabated.

How do we cope with this triple challenge of gates, cycles, and time to market, and tame the tiger? Only functional verification can thoroughly debug a design before silicon is available, if you have the time to do it.

Exhaustive functional verification carried out in an RTL simulator is no longer a practical or viable approach because of its abysmal performance: simulators are just too slow to fully analyze and verify larger chips, and almost all of today's chips are large and getting larger. But maybe not all is lost.

Emulation is a neat solution to the runtime problems that afflict these 25-year-old logic simulators. Emulators identify bugs and alleviate the functional-verification bottleneck by executing at megahertz speeds. They shorten the time needed to develop and validate hardware or embedded software within constantly shrinking schedules, and they improve product quality by raising the level of testing to meet the standards expected of today's feature-rich electronic devices.

You can forget whatever you may have heard about the older "big box" emulators. New generations of hardware emulators fit in small-footprint chassis and deliver execution speeds close to real time, making them useful as in-circuit test vehicles. And while their runtime performance is impressive, they are far less expensive, easier to use, and flexible enough for the current SoC project or the next one.

Even with these tools, verification continues to be a time-consuming process and often the bottleneck, but many of them have given you the freedom to enjoy the day off.  Celebrate the holiday and let freedom ring!
