Craig Cochran, VP of Marketing and Business Development, Real Intent
Craig is a 22-year EDA veteran who has developed markets for many implementation and verification product areas. He was most recently VP of Marketing for ChipVision Design Systems, a startup focused on low-power ESL design. Before that, he was VP of Marketing at Jasper Design Automation.
May 16th, 2011 by Craig Cochran, VP of Marketing and Business Development, Real Intent
Real Intent and SpringSoft got a head start on DAC this year when the companies co-hosted a seminar on May 5th that touched on four key technology areas related to Advanced Sign-off Verification. In addition to two great user sessions presented by engineering managers at Broadcom and Mindspeed Technologies, sessions by Real Intent covered Clock Domain Crossing (CDC) Sign-off and X Verification, while SpringSoft covered SystemVerilog testbenches and testbench verification. If the interest level shown by the audience is any indication, these will be hot topics at DAC 2011 in San Diego!
One great thing about such events is that they give us an opportunity to survey designers on SoC design trends and discover what they think is important in the area of Sign-off. Since they have taken the time to attend the seminar, clearly these designers have more interest in the subject matter than would be indicated in a purely random poll, but it is still very interesting to see what trends we can learn from the survey. Here are results from some of the questions.
As you might imagine, we asked a lot of questions about Clock Domain Crossing. The first was “How Many Clock Domains Will Your Next Design Have?”
As you can see, the results showed that over half of the designers expect to have 25 or more clock domains on their next chip. We continue to see an upward trend with more SoCs having greater than 50 clock domains and some SoCs already in the hundreds. This greatly increasing complexity is fueling the growth of CDC Verification and driving the capacity, performance and comprehensiveness requirements that have compelled more companies to choose Meridian CDC.
We then asked if designers had ever had a CDC-related bug slip through and cause late-stage ECO or a silicon respin.
Over three-fourths of these designers reported that they have had a CDC Bug slip through. Clearly, CDC Verification has become an imperative, and one of the most important requirements of a CDC Verification solution is comprehensiveness.
Based on this result, the result of the next question was not a surprise. We asked whether designers considered Clock Domain Crossing Verification to be a Sign-off Criterion. No graph is needed for this one, because 100% of the attendees answered Yes.
Since Sign-off requires a solution with the capacity and performance to handle full-chip designs, without producing noisy reports, we surveyed designers on these issues. For this question, we also broadened the poll to include Lint.
Clearly, noise is a major issue with many designers’ current tools. One user at the seminar reported that a CDC bug had caused a respin of a large SoC. He said that the tool he was using on that design (which was not Meridian CDC) did spot the CDC bug, but it was buried in a report containing 30,000 warnings. This elicited a collective groan from the audience, as many had obviously dealt with this issue. The user reported that he got rid of that tool and replaced it with Meridian CDC from Real Intent.
This graph also shows that performance and capacity are major issues. Designers seem more worried about performance today, since some tools are not able to give quick feedback, but we are hearing more and more concern about capacity as SoC design sizes grow into the hundreds of millions of gates.
Finally, as I mentioned earlier, one of the technical sessions in this seminar was on X Verification. We also surveyed designers about their level of concern for this hazard.
The result shows that most designers were very concerned about bugs slipping through to silicon due to X-Propagation, which can mask functional bugs through a phenomenon known as X-Optimism. X-Optimism is a coding hazard that occurs when a simulator assigns a known value when the value really should be unknown. Such bugs are highly elusive, and require a comprehensive solution to flag potential hazards and identify X-Optimism bugs when they occur in RTL simulation. Indeed, the session on Ascent XV for X Verification generated a lot of interest and excellent questions about how to detect and eliminate X bugs.
This seminar gave us the opportunity to preview some of our latest technology for designers in Silicon Valley, as well as to sense what concerns they have and what trends we should be aware of. And it also gave us an opportunity to get a jump on DAC, where we will be showing the latest developments in our solutions for Advanced Sign-off Verification.
Don’t Miss Us At DAC 2011 in San Diego! To sign up for a suite presentation and demo, email us today at firstname.lastname@example.org.
May 9th, 2011 by Lauro Rizzatti - General Manager, EVE-USA
I was driving back from a meeting one day last week with the car radio playing in the background, mulling over the development environment a senior hardware designer had just described. As you might expect, he depicted a scenario of tightened project cycles, reduced budgets and resources, added features, frustration and loads of late nights and aggravation. And, of course, bugs, bugs and more bugs in tens of millions of gates.
Then, he said complexity is rising due to the increased use of embedded software. According to his team’s calculations, the software portion of a system on chip (SoC) is growing at a rate of 140 percent per year, while hardware is expanding about 40 percent year to year.
With all the rolling around in my mind, I barely registered the radio announcer’s voice, but snapped to attention as Bon Jovi began singing:
Whoa, we’re half way there
Whoa oh, livin’ on a prayer
… we’ll make it I swear
Whoa oh, livin’ on a prayer
Whoa, is right! In an earlier career, Jon Bon Jovi must have been a hardware designer or a verification engineer. Otherwise, it’s hard to imagine him composing a song about livin’ on a prayer for anything but SoC design.
Verifying hardware design? An impossible task that, at times, seems to need some celestial intervention or as Bon Jovi intones, prayer. That’s what it may seem like for the hardware designer I met last week or verification engineer whose job it is to debug the design.
Functional verification is a way to thoroughly debug a design before silicon availability, though exhaustive functional verification using a software simulator is not a viable solution any longer because of its unsatisfactory performance. Moreover, simulation farms do not address large designs since they require long sequences of tests that consume billions of cycles and cannot be parallelized.
Fortunately, prayers do get answered. For example, EVE pioneered an approach to hardware-assisted verification that combines traditional emulation and rapid prototyping systems into a single, unified environment for ASIC and SoC debugging and embedded software validation. And Real Intent produces automatic verification solutions using innovative formal techniques in an easy-to-use methodology.
Hardware-based verification platforms are more than just another emulation product because they can be used by hardware designers to verify and debug SoC hardware designs, and by embedded software developers to validate SoC embedded software. The hardware and the embedded software can be debugged concurrently, giving engineering teams two simultaneous views of a design: the inner workings of the SoC hardware and the whole embedded software code. An engineering team can trace and change either of them and monitor the effects. A hardware bug that affects the embedded software code execution can be traced starting from the embedded software, and vice versa.
While Bon Jovi’s lyrics may seem apropos, don’t keep livin’ only on a prayer! EVE and Real Intent can help. They will be at the 48th Design Automation Conference next month in San Diego demonstrating their solutions. Stop by EVE in Booth #2836 and Real Intent in Booth #2131 to learn more.
Special thanks to Bon Jovi. Livin’ on a Prayer is from the album Slippery when Wet and was released as a single in 1986.
May 2nd, 2011 by Jin Zhang, Director of Technical Marketing
On Thursday May 5th, Real Intent and SpringSoft will co-host a seminar addressing “Latest advances in System-on-chip functional verification sign-off”. One of the topics is, “You are doing CDC verification, but have you achieved CDC sign-off?”, where I will be discussing the history of CDC verification, why it is important to focus on CDC sign-off today, and most importantly, how to achieve CDC sign-off.
Real Intent, as a leader in the space of CDC verification and sign-off, has made great contributions to the field by advancing the technology, as well as in educating the industry on key CDC issues. The Real Talk Blog, over the past year, has featured many articles discussing aspects of CDC verification and sign-off. A few highlights are included here:
Al Joseph, Sr. Application Engineer, wrote about the fact that while the industry was quick to adopt the acronym “CDC”, and many EDA vendors were quick to claim to have a CDC solution, users need to understand what CDC analysis really entails. A Real CDC solution must go far beyond just checking for single-bit and data-bus metastability management. It should also recognize and verify all asynchronous interfaces, support RTL and gate-level netlists, and incorporate structural, formal, and metastability simulation techniques.
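The "metastability simulation" technique mentioned above deserves a concrete picture. One way such techniques work is by injecting nondeterminism at each crossing: if the asynchronous data toggles too close to the capture edge, the receiving flop may latch either the old or the new value. The Python sketch below is a simplified illustration of that idea, not a description of how Meridian CDC actually implements it; all names are hypothetical.

```python
import random

def sample_crossing(old_val, new_val, changed_near_edge, rng):
    """Model a flop capturing an asynchronous signal. If the data changed
    inside the setup/hold window of the capture edge, the resolved value
    is nondeterministic: either the old or the new value may be latched."""
    if changed_near_edge:
        return rng.choice([old_val, new_val])  # metastability injection
    return new_val                             # clean, deterministic capture

# Re-running a simulation with different random resolutions exercises both
# outcomes, which is what flushes out protocol bugs that depend on timing.
rng = random.Random(0)
outcomes = {sample_crossing(0, 1, True, rng) for _ in range(100)}
print(sorted(outcomes))
```

Running the same test many times with different seeds is what lets a metastability-aware simulation expose design logic that silently assumes a crossing always resolves one way.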
Real Intent CTO, Dr. Pranav Ashar, discussed the mounting challenges facing clock domain crossing verification today, such as the number of signal crossings between asynchronous clock domains, the proliferation of gated clocks, widely disparate and dynamic clock frequencies, reset distribution, and timing optimization. Dr. Ashar also touched upon how Real Intent’s solution addresses these challenges.
Vishnu Vimjam, R&D Manager, discussed a unique capability Meridian CDC has in verifying designs with dynamically changing frequencies. With other solutions, users would have to perform multiple runs to verify the correctness of the design under different frequency combinations. Using Meridian CDC, however, users can determine in a single run whether there will be CDC design errors under all possible frequencies. This is the most advanced technique in formal CDC analysis, and no small achievement for Meridian CDC R&D.
People often underestimate the knowledge required to make the best use of a CDC tool. Sr. Application Engineer Al Joseph outlined the things a design or verification engineer needs to understand, including power, testability, quasi-static domains, and mode selection, in order to be the most productive and effective in using a CDC solution.
With the increase of design size and number of asynchronous clock domains (we have worked with companies with well over 100M gates with 200 asynchronous clock domains), CDC sign-off has become a must-have in the verification sign-off flow. However, not every CDC solution is up to the task of CDC sign-off. Al Joseph, Sr. Application Engineer, wrote about the criteria a CDC tool needs to have in order to enable CDC sign-off.
Rick Eram, Director of Sales at Real Intent, has first-hand knowledge of why designers are switching to Meridian CDC after trying out competitors’ products. In short, the main reasons are ease-of-setup, accuracy, performance and coverage. To sum it up, “Doing CDC verification takes a Real CDC tool architected to do the job, not a linter adapted to do CDC work”.
Real Intent has a wealth of knowledge and experience in CDC verification and sign-off. It is our mission to help every design and verification team succeed in achieving CDC sign-off. Come and join us for an informative and fun seminar on May 5th at noon! You can sign up at http://www.springsoft.com/ri-ss-seminar.
I look forward to seeing you there!
April 26th, 2011 by George Bakewell, Dir. of Product Marketing, SpringSoft
Today’s leading-edge designs are verified by sophisticated and diverse verification environments, the complexity of which often rivals or exceeds that of the design itself. Despite advancements in the area of stimulus generation and coverage, existing tools provide no comprehensive, objective measurement of the quality of your verification environment. They do not tell you how good your testbench is at propagating the effects of bugs to observable outputs or detecting the presence of bugs. The result is that decisions about when you are done verifying are often based on partial data or “gut feel” assessments. Clearly, verification environments need some verifying of their own, in order to measure and improve the quality of verification. This is why SpringSoft has developed the Certitude™ Functional Qualification System.
Certitude is the only solution to provide an objective measure of the quality of your verification environment and guidance on how to improve it. Certitude injects potential bugs into your design and evaluates the ability of your verification environment to catch them. It completely analyzes whether the potential bugs are activated, propagated to observable outputs, and detected by your environment, thus identifying whether you need to improve your tests, assertions or checkers. The result is higher confidence in your verification results and improved design quality. Certitude provides both guidance and a means of measuring progress throughout the functional verification closure process, and critical data points to support your signoff decisions.
On May 5th, SpringSoft and Real Intent will be co-hosting a seminar on The Latest Advances in System-on-Chip Functional Verification Sign-off. Join us there, where we will demonstrate how the Certitude system integrates easily with your existing simulation environment and applies these patented techniques to provide comprehensive, objective feedback on the quality of your verification environment and how to improve it. We will show how recent technical advances, such as an improved fault prioritization algorithm and enhanced fault-dropping techniques, enable Certitude to quickly find the most serious deficiencies in your environment with a minimum of simulation resources. We will also demonstrate how the tight integration with Verdi™ and the new Fault Impact Ranking engine minimize the analysis and debug effort required to understand and fix these problems.
April 11th, 2011 by Lisa Piper, Senior Technical Marketing Manager at Real Intent
As Craig Cochran so eloquently put it in the previous blog article, “SoCs today are highly integrated, employing many disparate types of IP, running at different clock rates with different power requirements. Understanding the new failure modes that arise from confluences of all these complications, as well as how to prevent them and achieve sign-off, is important.” As an example, Clock Domain Crossing issues are becoming a very big concern with all of this integration, but comprehensive tools like Meridian CDC enable sign-off with confidence. However, another issue that is bubbling up as a problem desperately in need of a solution is “X-verification”. While the issue of handling “X’s” in verification has always existed, it has been exacerbated by low-power applications that routinely turn off sections of chips, generating “unknowns”!
The “unknown” as it is called in digital design, is represented as an “X” logic level. This means that the signal might actually take on a value of “1”, “0”, or “Z” in 4-state logic. X values have existed in logic design forever, and are commonly used to represent the state of uninitialized signals, such as nets that are not driven, or storage elements that have no reset. “X-propagation” occurs when one of these X values feeds downstream logic, causing additional unknowns. For example, as shown below, when signal ‘a’ is an unknown value, that unknown value is sometimes, but not always, propagated to the output.
assign y = a && b;
a b output
0 0 0
0 1 0
1 0 0
1 1 1
x 0 0
x 1 x
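The table above falls out of standard 4-state AND semantics: a controlling 0 on either input masks the unknown, while any other combination involving an X stays unknown. As a rough illustration (a minimal Python model of the resolution rules, not any particular simulator's implementation), the same table can be generated mechanically:

```python
def and4(a, b):
    """Resolve a && b in 4-state logic ('0', '1', 'x'; 'z' treated as 'x').
    A controlling 0 masks an unknown; otherwise any X propagates."""
    if a == '0' or b == '0':
        return '0'   # 0 dominates, so the unknown is masked
    if a == 'x' or b == 'x':
        return 'x'   # no controlling value: the unknown propagates
    return '1'

# Regenerate the truth table from the text, including the X rows.
for a, b in [('0','0'), ('0','1'), ('1','0'), ('1','1'), ('x','0'), ('x','1')]:
    print(a, b, and4(a, b))
```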
X’s also take on a beneficial role in both synthesis and verification. Explicit assignments to an X value can signify a “don’t care” condition that grants synthesis tools greater flexibility to optimize the generated logic. The X value is also used in verification to flag illegal states, created by problems such as bus contention. Automatic formal checking tools like Ascent IIV can use these assignments to check that the illegal state cannot be reached.
Unfortunately, X’s can also mask functional bugs in the RTL due to an X-propagation interpretation hazard known as “X-optimism”. X-optimism is a trait of incomplete coding that incorrectly transforms an unknown value to a known value. “If-else” statements and “case” statements can be X-optimistic when the condition is evaluated as an X value. Simulation semantics do not propagate the X value but rather translate the unknown X value to a known value. The fact that the condition was an unknown is no longer visible – it is hidden, in a way that makes the X-propagation elusive. Here is an example:
// if-else conditionals
if (condition)
    out_1 = 1'b1;
else
    out_1 = 1'b0;

condition | out_1
    1     |   1
    0     |   0
    x     |   0
When condition is 1'b1, the output is 1'b1, and when condition is 1'b0, the output is 1'b0. But notice what happens when condition is an X value. Here the X value is an “unknown”. But the output is translated to a 1'b0, and the unknown X is now masquerading as though it were definitively a 1'b0, when in fact it could have been a 1'b1 or a 1'b0, depending on how it is synthesized into gates.
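The hazard is easy to mimic outside a simulator. In the Python sketch below (an illustration of the simulation semantics only; the function names are hypothetical), the optimistic resolution silently turns an X condition into a definite 0, while a pessimistic resolution would keep the unknown visible:

```python
def optimistic_if(condition):
    """Mimic Verilog if-else semantics: an X condition fails the test,
    so the else branch runs and the unknown silently becomes '0'."""
    if condition == '1':
        return '1'
    return '0'   # both '0' and 'x' land here: the X is hidden

def pessimistic_if(condition):
    """A pessimistic resolution keeps the unknown visible downstream."""
    if condition == 'x':
        return 'x'
    return '1' if condition == '1' else '0'

print(optimistic_if('x'))    # the unknown masquerades as a known value
print(pessimistic_if('x'))   # the hazard stays observable
```

The gap between the two functions is exactly what makes X-optimism bugs elusive: downstream logic sees a plausible known value and nothing in the RTL simulation flags that it was ever unknown.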
While X-optimism bugs can be detected in gate-level simulation, debugging them there is slow and painful. X-optimism may also be innocuous, yet still lead to differences between RTL and gate-level simulation that must be painstakingly resolved in order to achieve sign-off.
There are capabilities of existing tools that can help with X-verification. For example, RTL analysis tools like Ascent Lint will identify X assignments. Automatic formal tools such as Ascent IIV take it a step further and can verify that designated “illegal” states cannot be reached, thereby verifying that the X value will not propagate. While highly useful, this covers a relatively small percentage of X’s that might exist. In addition, four-state formal verification tools allow you to write explicit assertions to confirm that an X value cannot propagate to a specified point. However, this requires knowledge of assertion languages and the ability to completely specify the applicable behavior of the inputs, as well as the need to know every point in the design that needs to be verified by an assertion, which is highly impractical.
X-verification sign-off is not an easy problem to solve, because the mere existence of X values is not an issue. The issue is that hazardous X propagation is often elusive because it is transformed by X-optimism into supposedly known values. Moreover, X-optimism is an insidious and intermittent problem because it only becomes an issue if the X-optimized signal is being used in the design when the optimism occurs. The functional issue that results may not be detectable for many clocks after the X-optimism occurrence, and there may be multiple sources of X in its fan-in, making root cause analysis very difficult. Adding to that, if debug occurs at the gate level, simulations are very slow and the logic is not as readable as the original RTL.
What is needed is a comprehensive solution built on the existing RTL verification infrastructure that detects when the propagation of X values masks functional bugs. Real Intent is developing just such a solution, called Ascent XV. Join us at our joint seminar with SpringSoft entitled “Latest Advances in Verification Sign-off” (sign-up at http://www.springsoft.com/ri-ss-seminar) for details on Real Intent’s comprehensive solution to the X-Verification problem. Ascent XV is conquering the “unknown” so designers can sign-off with confidence.
April 5th, 2011 by Craig Cochran, VP of Marketing and Business Development, Real Intent
If you’ve been reading this blog for a while, you know that the industry is seeing big and rapid changes to the Verification Sign-off process. Simulation and Static Timing Analysis are not enough anymore! SoCs today are highly integrated, employing many disparate types of IP, running at different clock rates with different power requirements. Understanding the new failure modes that arise from confluences of all these complications, as well as how to prevent them and achieve sign-off, is important.
Fortunately, Real Intent and SpringSoft have teamed up to offer a free joint seminar at TechMart in Santa Clara on May 5, 2011, titled “The Latest Advances in Verification Sign-off”. The seminar features User Case Studies from Broadcom and Mindspeed, technical sessions on hot topics such as Clock Domain Crossing (CDC) Sign-off, Verification Closure, X-Propagation Verification, and efficient SystemVerilog Testbench development, and a keynote address by Anant Agrawal, Chairman of Verayo, Inc., and a founding member of the SPARC processor team at Sun Microsystems.
Lunch will be served before the keynote, and at the conclusion of the seminar, a very nice gift will be given away in a drawing. Registration is free, so sign up now at http://www.springsoft.com/ri-ss-seminar.
To tempt you a little further, here are abstracts of the technical sessions:
1. You are doing CDC verification, but have you achieved CDC Sign-off?
The trends toward SoC integration and multi-core chip design are driving an exponential increase in the complexity of clock architectures. Functionality that was traditionally distributed among multiple chips is now integrated into a single chip. As a result, the number of clock domains is dramatically increasing, making Clock Domain Crossing (CDC) verification much more complex and an absolute must-have in the verification flow.
However, doing CDC verification doesn’t mean you have achieved CDC sign-off. Lint-based CDC analysis, though it identifies potential synchronization issues and risky CDC structures, does not guarantee that a CDC bug will not slip through to silicon. A systematic CDC verification methodology utilizing different CDC verification technologies in a layered approach needs to be in place in order to achieve robust CDC designs and final CDC sign-off.
This presentation discusses what it means to achieve CDC sign-off, highlights the necessary steps in a CDC verification methodology that supports CDC sign-off, and uses customer experiences to showcase real-life successes of such a methodology. With this knowledge, you won’t just be doing CDC verification, but achieving CDC sign-off!
2. Don’t Let the X-Bugs Bite: Signing off on X-Verification
Designers spend many, many hours verifying that RTL provides the correct functionality. The expectation is that the gate level simulation produces the same results as the RTL simulation. X-Propagation is a major cause of differences between gate level and RTL simulation results, and issues are not detected by logical equivalence checkers. Unfortunately, while most X’s are innocuous at the RTL level, they can also mask functional bugs in RTL. Resolving gate level simulation differences is painful and time consuming because X’s make correlation between the two difficult. “X-Prop” issues cause costly iterations, painful debug, and sometimes allow X-related functional bugs to slip through. This presentation explains the common sources of X’s, shows how they can mask real functional issues and why they are difficult to avoid. It also presents a unique practical solution to assist designers in catching X-propagation bugs efficiently at RTL, avoiding iterations that delay sign-off.
3. SystemVerilog Testbench – Innovative Efficiencies for Understanding Your Testbench Behavior
The adoption of SystemVerilog as the core of a modern constrained-random verification environment is ever-increasing. The automation and sophisticated stimulus and checking capabilities are a large reason why. The supporting standards libraries and methodologies that have emerged have made the case for adoption even stronger, and all the major simulators now support the language nearly 100%. A major consideration in verification is debugging, and naturally, debug tools have to extend and innovate around the language. Because the language is object-oriented and more software-like, the standard techniques that have helped with HDL-based debug no longer apply. For example, event-based signal dumping provides unlimited visibility into the behavior of an HDL-based environment; unfortunately, such straightforward dumping is not exactly meaningful for SystemVerilog testbenches. Innovation is necessary.
This seminar will discuss the use of message logging and how to leverage the transactional nature of OVM- and UVM-based SystemVerilog testbenches to automatically record transaction data. We’ll show you how this data can be viewed in a waveform or a sequence diagram to give you a clearer picture of the functional behavior of the testbench. For more detailed visibility into testbench execution, we will also discuss emerging technologies that will allow you to dump dynamic object data and view it in innovative ways, as well as to use this same data to drive other applications such as simulation-free virtual interactive capability.
4. Getting You Closer to Verification Closure
Techniques for Assessing and Improving Your Verification Environment
Today’s leading-edge designs are verified by sophisticated and diverse verification environments, the complexity of which often rivals or exceeds that of the design itself. Despite advancements in the area of stimulus generation and coverage, existing techniques provide no comprehensive, objective measurement of the quality of your verification environment. They do not tell you how good your testbench is at propagating the effects of bugs to observable outputs or detecting the presence of bugs. The result is that decisions about when you are “done” verifying are often based on partial data or “gut feel” assessments. These shortcomings have led to the development of a new approach, known as Functional Qualification, which provides both an objective measure of the quality of your verification environment and guidance on how to improve it.
This seminar provides background information on mutation-based techniques – the technology behind Functional Qualification – and how they are applied to assess the quality of your verification environment. We’ll discuss the problems and weaknesses that Functional Qualification exposes and how they translate into fixes and improvements that give you more confidence in the effectiveness of your verification efforts.
Get a jump on DAC and find out what’s happening in the world of verification closure and sign-off! Or, if you can’t make it to DAC this year, this is your chance to learn this year’s hot topics. Either way, it’s a great opportunity to learn from the experts for free.
March 21st, 2011 by Carol Hallett, VP of World Wide Sales, Real Intent
As EDA is a global business, even for smaller companies, most of us periodically find ourselves on a plane to visit customers and partners in different countries in order to build a global presence and business. Japan is a key destination which many of us in EDA are quite familiar with, and as the Vice President of Worldwide Sales, you can imagine that I am on that plane frequently. I had planned a trip to Japan, and as luck would have it, my trip was moved up a week to accommodate a customer. I scrambled to make arrangements to visit one of my favorite places in the world in my usual manner.
My flight from SFO to Narita was familiar, typical and uneventful in every respect. Once in Japan, my itinerary was typical and as usual everything came off like clockwork. I made visits to customers, talked with vendors, had conversations with my GM, meals with colleagues, rides on the trains, and strolls along the bay in Yokohama; nothing unusual to report.
While waiting at the gate in Narita airport to board my flight home, suddenly we all found ourselves in a situation that was not part of our original itinerary, as Mother Nature hit us with a 9.0 earthquake. Nothing about this situation was business as usual. As a Californian, I’ve experienced earthquakes before, but nothing on this scale. The magnitude of the shaking and the duration of the quake astonished me, but what was even more amazing is that the airport terminal survived.
As air, rail and freeway travel were suspended and we did not know how long we would be living in the airport, what I then witnessed was much compassion, caring and sharing. Strangers helping strangers – people who in most situations would not have even noticed others were now helping to make sure that all survived this ordeal. We scavenged food, water and blankets and shared with all who were in need. With limited information, we didn’t know the scale of the disaster, and it turns out that Narita Airport was probably one of the safer places to be…but we did not know this at the time, let alone the devastation being wreaked by the tsunami or the pending nuclear reactor crisis.
While I don’t mean to sensationalize, a typical trip was turned very quickly into an adventure of survival. While I was lucky to be able to fly away after a couple of days, the people of Japan are left with destruction and an ongoing nuclear crisis. As I feel compelled to help, I am taking a break from talking about verification in this week’s blog to make a plea to my friends and allies in this industry, which has a global presence and a compassionate heart; and to tell you how you can help our friends in Japan.
Japanese NGOs have the staff and materials needed to help survivors in Northern Japan, but they need financial assistance for logistics. The best way to help is to donate to organizations that have links to Japan so that donations are quickly wired to where they are needed. One such organization is the American Red Cross.
To help, please make a contribution to the Red Cross, by visiting http://www.redcross.org/. Or, you can simply text the word “REDCROSS” to 90999 from any US cell phone. Each time you do this, you will automatically donate $10.00 (of which 91 percent will go directly to the relief effort in Japan).
March 15th, 2011 by Craig Cochran, VP of Marketing and Business Development, Real Intent
Anyone who was around the ASIC & EDA industries 20 years ago will remember that Sign-off Verification used to consist of one step: Sign-off Simulation. There were a number of choices of simulators from the big three “DMV” of that day – Daisy, Mentor and Valid – plus one called Verilog-XL from a little startup called Gateway. ASIC Vendors developed design kits and qualified the simulation libraries for these tools in order to sign-off on the expected function and timing of their designs.
Sign-off simulation in that day was a single process, run with full timing, thereby verifying function and timing simultaneously. As this was computationally expensive, it could not scale as designs grew larger with each process node.
The 90s: Function versus Timing
With full-timing sign-off simulation running out of steam, the industry looked for faster simulation methods that used unit-delay or cycle-based simulation. In addition, full-timing simulation did not check every timing condition in the design, leaving open the possibility of timing errors slipping through the sign-off process.
Fortunately, synthesized blocks were already using static timing verification, since it was built into Design Compiler, so a path existed to expand timing verification to full-chip with the introduction of PrimeTime. With full-chip sign-off timing verification now available, function and timing could be handled separately. However, a very important requirement to enable this abstraction was that designs had to be fully synchronous.
The 2000s: Intent versus Implementation
With timing abstracted away, sign-off simulation was able to use faster methods that didn’t look at propagation delays and focused only on cycle-accurate functionality. This was fine for RTL but started to break down at the gate level. Fortunately, the synchronous nature of these designs enabled another abstraction – formally verifying that a gate-level design is functionally equivalent to the original RTL source, thus creating the market for formal equivalence checking.
This separated verification of the design intent – primarily performed dynamically – from verification of implementation correctness for both function and timing – primarily performed statically using equivalence checking and timing analysis. Thus, the split between dynamic and static verification fell along the lines between intent and implementation.
Today: SoC Design and Asynchronous Verification
Today, Systems-on-Chip design involves the integration of fully asynchronously connected computation islands, many of which are imported IP with disparate clocking requirements. In addition, power requirements often necessitate that different parts of a chip be clocked at different and/or dynamically scalable rates. Thus, the requirement enabling separation of function and timing is no longer valid at the asynchronous interfaces between blocks. New failure modes arise from corner-case confluences of timing and functionality that cannot be found in either simulation or timing verification, thus breaking the current sign-off flow. A large SoC may have hundreds of clock domains, and communication between them must be synchronized to avoid data loss or corruption. An “Advanced Sign-off” flow for today’s SoCs and future billion-gate chips must be developed that includes full-chip CDC analysis to sign-off on all asynchronous interfaces between computation islands, on-chip interconnect and external interfaces.
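To make the synchronization requirement concrete, the most common remedy for a single-bit control signal crossing between asynchronous domains is a two-flop synchronizer. The sketch below is illustrative only (module and signal names are hypothetical, not from any Real Intent product):

```verilog
// Two-flop synchronizer for a single-bit signal entering the clk_dst domain.
// Names are illustrative. Note this scheme is for single bits only: a
// multi-bit bus needs gray coding, handshaking or an async FIFO, because
// each bit may settle on a different destination clock edge.
module sync_2ff (
  input  wire clk_dst,   // destination-domain clock
  input  wire rst_n,     // active-low reset in the destination domain
  input  wire d_async,   // signal launched from the source clock domain
  output reg  d_sync     // synchronized output, safe to use in clk_dst
);
  reg meta;  // first stage: may go metastable, never used directly

  always @(posedge clk_dst or negedge rst_n) begin
    if (!rst_n) begin
      meta   <= 1'b0;
      d_sync <= 1'b0;
    end else begin
      meta   <= d_async;  // capture: this flop can briefly be metastable
      d_sync <= meta;     // second stage gives metastability time to resolve
    end
  end
endmodule
```

The point of CDC sign-off is that structures like this must be verified to exist, and to be used correctly, on every one of the hundreds of asynchronous crossings in a large SoC.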
SoC Design Complexity and RTL Verification
Large SoCs are also fueling the demand for improved code quality before verification. With SoC design being increasingly driven by consumer product life cycles, we cannot expect the development timeline to grow with design size. In order to keep simulation from spiraling out of control, higher-quality RTL must be checked in for verification, and imported IP must be checked for code quality. RTL code must also be analyzed for efficient implementation in both silicon and emulation. Implementation constraints must also be analyzed for consistency with chip-level requirements. What is needed is a comprehensive RTL sign-off process that uses automatic checks to enable detection of dead code, FSM deadlocks, hazardous coding styles and analysis of X-Propagation risks before simulation begins, as well as dynamic checks to flag issues as they occur during simulation and emulation.
Thus, the sign-off flow must adapt again. Only with a comprehensive approach can an “Advanced Sign-off” flow scale to deliver defect-free SoCs over the coming decade.
March 7th, 2011 by Carol Hallett, VP of World Wide Sales, Real Intent
There are many trade shows & conferences for an EDA company to consider each year, and the decision may not be easy for small companies, as it involves tradeoffs on where the company should spend its resources. However, of all the options, DVCon has consistently proven to be of great value for Real Intent.
Larger trade shows like DAC offer an opportunity to reach more people, but with many different types of engineers in attendance, looking for everything from ESL to full custom design, the small guys can get lost in the shuffle. It can also be hard to get the approval to drive a panel, or hold a technical session. Therefore, small companies must work harder to get noticed among so many vendors, and most importantly, to reach the right audience for their products.
That’s why the smaller and more focused shows, like DVCon, offer a special opportunity for companies like Real Intent. We know that Design and Verification Engineers will be attending this show. We know that they are coming for the technical presentations. We know that the exhibit floor is a place they come to take a break from the brain dump they are getting upstairs. And we know that they will do a walk-through, stop by to hear about our products, and have a glass of wine and enjoy the appetizers that are being distributed by the smiling servers. One of the attendees mentioned to me that this was his favorite part of the show because it felt like old home week…we are all colleagues in this space, and you can feel the closeness in the room while we discuss the changing verification landscape. This has value!
This is exactly what we experience every year at DVCon. And, with an improved economy, this year at DVCon the attendance seemed to be very robust, with a lot of people coming from faraway places such as the East Coast, Canada, Korea, and India. Through conversations, we learned that many people were faithful readers of this blog; they followed our news releases and press coverage and came to talk to us specifically to find out more. That is the value of having such a focused event like DVCon.
Another thing we greatly value at DVCon is the opportunity to learn from the Design and Verification Engineers that attend. To that end, we conducted a survey to learn what concerns and attitudes they have about various verification topics. And, we selected one lucky survey respondent to win a special prize (more on that later)! Among the questions on the survey, several yielded some interesting statistics that I will share with you.
One interesting question asked whether respondents had ever had a bug slip through to silicon due to a CDC problem. The majority – 60% – replied “Yes”. With the number of clock domains in SoCs going up, this number will probably increase unless designers adopt a comprehensive CDC verification solution and make it a sign-off criterion.
Speaking of that, we also asked respondents whether they consider CDC Verification a sign-off criterion today. Two-thirds replied “Yes”, indicating just how serious they consider CDC as an emerging verification issue. Clearly, simulation and timing verification alone can no longer pass as the only technologies required for verification sign-off.
Finally, we took the opportunity to ask about an emerging issue: bugs that slip through to silicon due to differing interpretations of X (unknown) values by simulation and synthesis. This is an issue that has always been there, but with the rapid growth in SoC size and complexity, it is becoming a first-order concern for Design and Verification Engineers. When we asked about this, 40% of respondents told us that they are “Very Concerned” about X-Propagation bugs, while 45% said they were “Moderately Concerned”. Only 15% were not concerned. This is obviously an area in need of a verification solution and will get much more attention going forward.
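The classic source of this simulation-versus-silicon mismatch is X-optimism in RTL simulation. A minimal illustrative fragment (names are hypothetical):

```verilog
// Illustrative X-propagation hazard; module and signal names are hypothetical.
// Per the Verilog LRM, an 'if' condition that evaluates to X is treated as
// false in RTL simulation, so 'out' silently takes the value of 'b'. In the
// synthesized netlist, 'sel' will be a real 0 or 1, so silicon may take the
// other branch and diverge from what simulation predicted.
module x_optimism (
  input  wire sel,   // suppose this is X (e.g., uninitialized) at time zero
  input  wire a, b,
  output reg  out
);
  always @* begin
    if (sel)
      out = a;       // silicon can take this branch even when simulation never did
    else
      out = b;       // RTL simulation always lands here while sel is X
  end
endmodule
```

An X-verification tool's job is to find places where this kind of optimism (or the matching pessimism at the gate level) can mask or invent a bug.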
With such informed attendees and the resulting interesting discussions, it is pretty obvious why we love DVCon: it offers the opportunity not only to meet qualified users and build relationships with people in the industry, but also to have in-depth, intimate conversations with real people who have real design & verification challenges and are seeking real solutions, and to learn from them!
So – A big thanks to everyone who visited us at DVCon and participated in our survey. Of course, I shouldn’t forget to mention our lucky winner for the drawing of an Amazon Kindle:
Program Manager, Chipset Enablement
Industry Standard Servers
Congratulations Krishnan! We look forward to seeing you all next year at DVCon 2012!
March 2nd, 2011 by Dr. Roger B. Hughes, Director of Strategic Accounts
It is quite interesting to see how very difficult-to-find bugs in a synthesized netlist are often the result of simple errors in RTL code. There are many technologies available to help an RTL designer find coding mistakes, including formal checks and comparatively simple lint checking of the RTL code. Linting technology has been around a long time, but it is often not used as part of the design flow, and when it is used on an entire chip, the sheer quantity of rule violations reported makes any sensible analysis for real problems difficult. Why? Because most of the older lint checkers tend to produce very noisy reports where the vast majority of reported violations are of no real concern, yet buried inside several thousand violations are a few that will cause design problems. Older lint checkers also do not have the speed to do checks in real time, while the code is fresh in the designer’s mind and design productivity can be greatly increased on the fly.
One could always argue that any non-synthesizable RTL would be reported by the synthesis tool, so why bother to check for it? The answer is that it is the synthesizable – but incorrect – code that is of greater concern. Examples of such incorrect code include: assignments where the widths of the operands do not match, case statements with partially enumerated tags and no default tag, and arithmetic operations where the operand bit widths differ. Novice and experienced designers alike often make these mistakes, and the ability to detect them is crucial. Here is an example reported by Ascent Lint:
BA_NBA_REG: filename.v:100 Both blocking and non-blocking assignments to ‘VPipeLine’, other at filename.v:82
Example code at line 100:
VPipeLine = VDataIn;
Other use at line 82:
if (i != NSTAGES-2) VPipeLine[i] <= VPipeLine[i-1];
In the above fragment of Verilog code, it can be seen that a combination of blocking and non-blocking assignments to the same register is used. Clearly, this is a very bad coding style, and it is something that is very difficult for a designer to spot.
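The other synthesizable-but-incorrect patterns mentioned earlier are just as easy to write. A small hypothetical fragment (names are invented, not from the report above) showing two of them:

```verilog
// Hypothetical fragment with two of the synthesizable-but-incorrect
// patterns a lint tool should flag: a width-mismatched assignment and
// a case statement with partially enumerated tags and no default.
module lint_targets (
  input  wire [1:0] mode,
  input  wire [7:0] data_in,
  output reg  [3:0] nibble,
  output reg        flag
);
  always @* begin
    nibble = data_in;        // width mismatch: 8-bit value silently truncated to 4 bits
    case (mode)              // no default, and mode == 2'b11 is not covered:
      2'b00: flag = 1'b0;    //   'flag' holds its old value in simulation,
      2'b01: flag = 1'b1;    //   and synthesis infers an unintended latch,
      2'b10: flag = 1'b0;    //   a classic source of RTL/netlist mismatches
    endcase
  end
endmodule
```

Both constructs compile and synthesize without complaint, which is exactly why they need to be caught by lint rather than by the synthesis tool.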
A Lint check of the code can address issues like these very efficiently, provided modern approaches to linting are used. I have seen my customers choose the Ascent Lint v1.4 product from Real Intent for several important reasons.
Accuracy (Low Noise)
One of the most important reasons customers choose Ascent Lint is the low noise in the reports. This enables designers to get to a problem very quickly instead of wading through long reports of violations that are of no real concern. In addition, Ascent Lint 1.4 adds the capability to generate incremental reports, so that only new violations that occurred since the last run are reported to the designer, saving valuable time.
Speed
Another very important factor is the speed of Ascent Lint, which is at least 10 times faster than the leading competitive product. Often, I have seen speeds of 30 times faster than the competition when the run is done on a full chip. For example, a typical 5M gate design at RTL can easily be linted in just 10 minutes. The gate level netlist of that same design was run through the linting process in just 5 minutes and with very economical memory consumption of only a few gigabytes. The beauty of being able to do lint runs so quickly is that any designer working on the code may quickly run Ascent Lint, check the incremental report for any differences, and immediately correct the code while it is still fresh in his or her mind. This rapid turn-around is simply not possible with the leading competitor’s technology, which forces the designer to run lint overnight and therefore does not help get the code right as it is being written.
Flexibility and Ease of Use
Ascent Lint is very language-flexible. It can accommodate designs in Verilog, SystemVerilog and VHDL as well as designs containing a mixture of all these languages with ease. This enables full-chip Lint checks on designs containing IP from partner companies – a key requirement for some customers. Ascent Lint works at both RTL and gate level, and supports both hierarchical and flattened designs. It is also easy for any customer to develop separate policy files for RTL and for netlists.
Fast, accurate and flexible linting is crucial to our customers. Combining speed with informative and accurate reporting makes Ascent Lint v1.4 from Real Intent a definite winner. Call Real Intent to see for yourself!