Real Talk

Archive for 2010

Is Your CDC Tool of Sign-Off Quality?

Monday, March 29th, 2010

First-generation tools historically evolved as extensions of simple linters and source checkers, or from the application of basic formal engines, in an attempt to solve the CDC problem. These tools generated voluminous reports and required designers to painfully plow through tens of thousands of less-than-informative messages for days and weeks to overcome the “signal to noise” problem. Such a methodology is prohibitive for any practical chip-level analysis. Even post-analysis filtering of design structures such as FIFOs and handshakes has proven to be a daunting and time-intensive task for designers trying to find real CDC violations.

Multiple vendors offer CDC analysis tools. Some evolved as extensions of lint engines, while newer ones, like Real Intent’s Meridian CDC, were designed from the ground up with a first-principles understanding of CDC failure modes. Ultimately, any tool must provide value as a viable sign-off-quality tool for designers and project managers, with easy setup and report-analysis capabilities, comprehensive checking, and practical execution times.

A sign-off quality tool must meet the following criteria:

  • Easy to set up and use
  • Comprehensive analysis of all asynchronous crossing issues while being tolerant of design styles
  • Manageable analysis results and easy exception handling
  • Practical run time performance with full SoC flow management

A sign-off-quality tool must include easy and true automated setup for CDC, recognition of all metastability control structures, and not be limited to coding styles/structures in order to recognize, for example, FIFO structures.  It is imperative for such recognition technology to work equally well at the netlist level in addition to the RT level.  With Meridian’s automatic CDC recognition technology, for example, all asynchronous FIFO structures are recognized at both RT and netlist levels.

A sign-off quality tool must be comprehensive in its analysis capability. It must be able to catch all CDC and asynchronous crossing bugs while being tolerant of design practices. It must also be glitch-aware and detect all glitch sources, perform pulse-width verification, perform complete cycle-jitter analysis on clock and data paths, and support free-running clock analysis.

A sign-off quality tool must provide manageable analysis results and exception handling. Any limitation in the structural recognition of asynchronous controls leads to noise in the analysis reports and makes manual analysis of the report very time consuming. Full SoC analysis should complete on the order of hours, not days, for typical design sizes (5 million to 40 million gates). Because of transformations induced by timing-driven synthesis optimizations, and test-driven and power-optimization-driven modifications of the clock structures, running at least structural CDC analysis at the netlist level is a must, and it is becoming ever more important even for the post-layout stages of a design. With Meridian’s technology, these capabilities are all a reality today, with low noise due to Meridian’s complete asynchronous-control-structure recognition.

Finally, execution time must be manageable to the point that it can support quick feedback to the designer. Overnight runs are simply not practical for such analysis. Considering that CDC analysis is often performed at the tail end of the RTL sign-off process, when schedule pressures are greatest, using basic linters for CDC analysis generally leads to frustration. It is not uncommon for run times to prevent designers from ever finishing a complete full-chip analysis.

So, if your CDC tool does not pass muster on any of the criteria covered in this blog, please visit our web site or contact us for a consultation on how the Meridian CDC solution can make CDC sign-off a reality.

DATE 2010 – There Was a Chill in the Air

Monday, March 22nd, 2010

There was a chill in the air; people were bundled up with layers of clothes as they walked at a fast clip to get to the convention center in Dresden, Germany on Tuesday 09 March 2010 to attend DATE 2010. The coat rack was filled with all of our coats, hats, gloves and scarves…but the reception by the DATE organizers was anything but cold…

As we arrived on Monday to sign in, we were greeted with “Cheers” and asked our names. Everything was in order, in its place and waiting for us…finding where to go first was our only challenge and that did not prove to be too hard to tackle.

The EDA vendors were well cared for; all of the details were attended to without a hitch. The booths were up and waiting for us when we arrived, everything running like a well-oiled machine. The booth organizer was at our disposal, just in case we needed anything at all. The offer of help was not just a polite gesture but a genuine one.

The talk of the show was that the technical tracks, with their intriguing subject matter, were the real draw. The tracks were well attended, in some cases standing-room only. I noticed people briskly walking from one track to the next, and there was always a buzz of conversation that this show is truly morphing into a technical symposium.

One thing that was pleasantly surprising about this show was how heavily it was attended by students. The students were not just looking for jobs; they came with projects for products that they believe are needed now and in the future of EDA. The projects were well thought out, well researched and very well presented to the public. It was nice to see these students so eager to share their knowledge, but even more eager to learn from the EDA vendors.

There were fewer EDA companies as well as industry attendees at the show this year, and the focus seemed to be more on academia than on EDA companies. That said, the conversations with the show participants on the floor were purposeful, intimate and more in-depth than at most trade shows. As a result, attending had great value: you were given special attention, more time, and even a special status…hence the trip was more than worthwhile. Personally, I liked that aspect of the show.

As a vendor in the industry, I am hopeful that the organizers can find a good balance that allows the EDA vendors, academia, the technical symposium and the user community all to find purpose and value in joining DATE; I believe that balance is good for the industry, the vendors, the companies, and the individuals. As a responsible individual and a person who cares about this industry, I shared my ideas with the DATE organizers…I think they have a good plan for next year.

When the show came to a close we packed up our things, bundled up in our winter attire and went daringly out into the cold air…but inside we will keep warm, fond memories of the days spent in Dresden.

Drowning in a Sea of Information

Monday, March 15th, 2010

We are a society inundated with information. At no previous time in history has so much data been available to such a wide population with access literally at their fingertips. But information abundance has not necessarily translated into increased actionable knowledge. It seems we have reached a point where the more information we have available, the harder it becomes to make use of it.

I was recently reading an old Scientific American article by Hal Varian, then Dean of the UC Berkeley School of Information Management and Systems [1]. In this document, Dr. Varian argues that the fundamental limits of human comprehension will prevent the realization of a future “information economy”. He quotes Nobel laureate economist Dr. Herbert A. Simon as saying, “What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.” Dr. Varian then proceeds to frame his argument that information itself is meaningless without “some way to locate, filter, organize and summarize it.” [2]

In the 15 years since that article was written, the expanse of information has continued to grow, forming the basis for the Internet search engine market. Like many people, I have Google as my browser’s home page. This is how I deal with my desire to find relevant information among the vast quantities of data on the Internet. It not only helps when I know up front what I am looking for, but also helps me refine my understanding by highlighting the best representations of the type of information I want to find.

While Internet search engines have provided a step towards Dr. Varian’s vision of information organization, the Electronic Design Automation industry has lagged behind in this ability. Each year designs chase Moore’s Law, and the complexity of integrated circuits increases exponentially. But in many ways, EDA tools are still delivering data to design and verification engineers as if we were back in the days of schematic entry. Sure, there have been some advances, such as reporting data based upon design hierarchy instead of flat netlists, but the sheer size of designs produces volumes of data that cannot reasonably be parsed without hitting Dr. Simon’s “poverty of attention”.

One doesn’t need to look too far to see examples of this. Most engineers (especially in EDA) have had the experience of reporting a tool problem to an EDA company, only to have customer support point out a single line in a voluminous log file that explains exactly the issue reported. Does that mean engineers are more careless than previous generations? Not at all, but our predecessors rarely had to deal with log files tens of thousands of lines long, and “summary reports” longer than the United States Constitution. It’s gotten bad enough that many tools now let engineers turn off warning messages they know they don’t care about. Engineers are busy people, with many conflicting demands on their attention. EDA should be working more towards a Google-style model of information delivery: quickly finding the information the engineer knows they want, and guiding them towards the best representations of cases with which they are not familiar.

At Real Intent, we’ve made information organization a top priority in all our tools. Because our focus is on automating much of the work required to run verification analysis, we take our responsibility for organizing the analysis information seriously. Our lint tools provide more meaningful warnings with less noise, our automatic formal tools classify failures by cause and effect, and our clock domain crossing analysis aims to highlight the source of problems, not the symptoms. Have we achieved the goal of a Google-like repository of information management? Not yet, but we continue to strive towards making it faster and more efficient for our customers to consume the volumes of data and prevent “attention poverty”. We welcome your feedback on how we are doing.


1. Varian, Hal R., “The Information Economy – How much will two bits be worth in the digital marketplace?”, Scientific American, September 1995, pp. 200-201.

2. Ibid.

DVCon 2010: Awesomely on Target for Verification

Monday, March 8th, 2010

DVCon 2010 at the Doubletree Hotel in San Jose, California, was an important and successful event for all of us at Real Intent. We got to re-acquaint the design verification community with us and our products, and to learn more about the pressing needs of the industry. The conference proved itself yet again to be the forum for exchanging ideas and methodologies for increasing design verification productivity.

Attendance for the four-day conference, sponsored by Accellera, an industry consortium dedicated to the development and standardization of design and verification languages, was 625. It proved yet again to be the best conference for interacting with the EDA industry’s functional design and verification community.

At DVCon, our product demonstrations included the latest versions of the following software products:

  • Ascent for automatic formal, early functional verification, including lint
  • Meridian for comprehensive and precise Clock Domain Crossing (CDC) verification, and
  • PureTime for comprehensive Synopsys Design Constraint (SDC) validation, including glitch-aware timing exceptions verification.

We all enjoyed the DVCon Twitter Tower. Even now, if you search Twitter for #dvcon or @dvcon, you will see pointers to blogs, commentary and opinions about the event and what was best there.

We concur with the commentary from DVCon’s Chair:

“The conference was packed with valuable material all week,” commented Tom Fitzpatrick, DVCon General Chair. “More companies wanted to sponsor tutorials this year so we were able to accommodate the demand by adding a fourth day, giving attendees access to even more information and education. The exhibit halls and receptions were well-attended and the Twitter Tower added a social networking element to the traditional networking that is a mainstay of the conference. Many vendors also commented on the strength of the contacts they made during the week.”

Now we are getting ready for DATE in Dresden, Germany next week, and for SNUG San Jose at the Santa Clara Convention Center later in March.

Verifying CDC Issues in the Presence of Clocks with Dynamically Changing Frequencies

Monday, March 1st, 2010

Emerging systems have three dimensions of complexity when it comes to making them CDC-safe. First, the number of asynchronous clock domains can range from tens to as many as hundreds for complex systems with many components. Second, the primary clock frequencies vary per component. It is not uncommon for the ratio between the fastest and the slowest clocks to be greater than 10. Third, the clock frequencies themselves can change dynamically during the course of operation of the chip (for example, when switching from one mode to another to save power). As a result, CDC verification becomes critical to ensure that metastability is not introduced in the design.

The first and second complexity dimensions above are common causes of failures but can be overcome by customizing the CDC interfaces for the given set of clocks and their fixed frequencies. The third dimension, on the other hand, requires very strict hand-shaking or FIFO-based synchronization schemes that work across the entire range of frequencies. We focus on the third dimension here and provide an example.

Let us look at the schematic shown in Fig 1. There are two clocks, xmitClk (pink) and rcvClk (yellow). For simplicity, we assume all the flops in the design are posedge-triggered. There is a data transfer (shown as DATA) between the xmitClk and rcvClk domains that is controlled by the 3-flop toggle synchronizer (shown as SYNC). The toggle synchronizer has two flops that synchronize the control signal and a third flop to detect that a valid request signal was sent from the xmitClk domain. Notice that the output of the second flop is fed back into the xmitClk domain (shown as FEEDBACK) to acknowledge that the DATA has been received. Once the FEEDBACK is seen, the xmitClk domain sends new DATA across the interface.

First, let us consider the case when the xmitClk domain is faster than the rcvClk domain by up to 2 times. Since they are relatively asynchronous, there could be a maximum of 2 posedges of xmitClk within any given cycle of rcvClk. With Real Intent’s Meridian formal analysis, we can determine that the above feedback structure is sufficient for the CDC interface to work correctly (irrespective of the duty-cycles and transition times of the clocks). This is because by the time the xmitClk domain receives the feedback signal, the rcvClk would have received the data.

Now consider the case when the xmitClk domain is faster than the rcvClk domain by 3 times. In other words, there can be 3 posedges of xmitClk within a given cycle of rcvClk. In this case, the xmitClk domain receives the feedback signal before the rcvClk has received the data. The xmitClk can immediately proceed to transfer new data which can corrupt the old data and/or cause metastability. The probability of failure increases rapidly as the xmitClk frequency increases. The interface can be made frequency-independent by taking the feedback signal from the 3rd flop of the toggle synchronizer instead of the 2nd flop (which guarantees that the rcvClk domain receives data prior to sending a feedback).
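To make the discussion concrete, here is a minimal Verilog sketch of such a toggle-synchronizer interface with the frequency-independent fix applied, i.e., with FEEDBACK tapped from the third flop. The module and signal names are ours, not from the original schematic, and the send-side handshake is only indicated in a comment:

    // Toggle-synchronizer data transfer from xmitClk to rcvClk.
    // Hypothetical names; all flops posedge-triggered, as assumed above.
    module toggle_sync_xfer (
      input  wire       xmitClk, rcvClk, rst,
      input  wire       send,        // request a new transfer (xmitClk domain)
      input  wire [7:0] xmit_data,
      output reg  [7:0] rcv_data,
      output wire       rcv_valid    // one-cycle strobe in the rcvClk domain
    );
      // xmitClk domain: toggle a request line and hold DATA stable.
      reg       req_toggle;
      reg [7:0] data_hold;
      always @(posedge xmitClk or posedge rst)
        if (rst)       begin req_toggle <= 1'b0; data_hold <= 8'h00; end
        else if (send) begin req_toggle <= ~req_toggle; data_hold <= xmit_data; end

      // rcvClk domain: two flops synchronize the toggle; a third detects the edge.
      reg sync1, sync2, sync3;
      always @(posedge rcvClk or posedge rst)
        if (rst) {sync3, sync2, sync1} <= 3'b000;
        else     {sync3, sync2, sync1} <= {sync2, sync1, req_toggle};

      assign rcv_valid = sync2 ^ sync3;        // a request edge has arrived
      always @(posedge rcvClk)
        if (rcv_valid) rcv_data <= data_hold;  // DATA is stable at this point

      // FEEDBACK: tap the THIRD flop (sync3), not sync2, so the acknowledge
      // cannot reach the xmitClk domain before the rcvClk domain has captured
      // the data; re-synchronize it into the xmitClk domain before use.
      reg fb1, fb2;
      always @(posedge xmitClk or posedge rst)
        if (rst) {fb2, fb1} <= 2'b00;
        else     {fb2, fb1} <= {fb1, sync3};
      // The xmitClk domain should assert 'send' only when fb2 == req_toggle.
    endmodule

Tapping sync2 instead of sync3, as in the original interface, reproduces the failure described above once xmitClk runs three or more times faster than rcvClk.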

Without the fix, the CDC interface in the example clearly doesn’t work over the entire operating range of frequencies. Predictability becomes even worse when the clock frequencies are changed dynamically during the course of chip operation (in practice this is done by carefully applying appropriate software resets between the operating modes).

In order to ensure that the user does not miss any corner-case scenario, Real Intent has invented a new technology for verifying CDC interfaces with dynamic clock frequencies. It is available as the “free-running clocks” feature in the latest release of Real Intent’s Meridian (Meridian 3.0). By formally verifying the design over the entire frequency range in a single Meridian CDC run, we also simplify the otherwise cumbersome process of running the tool for each combination of operating frequencies. We make the case that a CDC tool is incomplete for modern chips without this ability to handle free-running clocks.

Fostering Innovation

Monday, February 22nd, 2010

The news dominating the EDA communications channels of late is Synopsys’ recent acquisition spree of several small, well-regarded emerging companies with innovative technology.  I’m not ready to debate whether these moves signal the demise of the virtual platform market segment.  Instead, my guest blog is on fostering innovation.

Synopsys’ corporate strategy confirms a theory held by me and by others as well: small companies are much better able to manage and promote innovation than the larger, more established players.

Most often, it’s a startup or emerging company that develops groundbreaking new technology and that’s due to any number of reasons.  Startups and small companies can offer an entrepreneurial environment conducive for innovative and creative thinking, and frequently encourage their employees to experiment.  These firms have a luxury not afforded by the large, more established companies –– they are not restricted by a hierarchy and a structure that can stifle creativity. 

Small and emerging companies are better able to focus an R&D team on a technical challenge, enabling them to take a fresh, even radical, approach.  They can be more aggressive in identifying and responding to market trends and industry needs, especially if the market capitalization is not large enough for an established player to justify an investment.

Two great examples of the entrepreneurial spirit of emerging verification companies are EVE and Real Intent.  With more than 70% of the development cycle of a system-on-chip (SoC) design being consumed by verification, these two companies are leading the way with innovative products.

I’m quite proud of EVE, an innovative emerging company formed in 2000 that’s turned the emulation market inside out.  Our goal, that we believe to be within reach, is to become the leader in hardware-assisted verification and embedded software validation for any-sized design, regardless of complexity and topology in any industry segment.  Over the years, EVE has unveiled several generations of emulation tools based on standard FPGAs that offer design teams a high return on investment. 

And Real Intent, the formal verification leader that pioneered intent-driven verification, underscores entrepreneurism at its best. No one believed that formal verification could be commercialized after one company’s failure in the 1990s. Instead, Real Intent, with its automatic formal verification software, has continued to defy expectations since 1999.

Of course, large companies have positive characteristics as well, just not entrepreneurial or especially innovative ones. They often have vast resources for marketing programs, and their sales channels are much better developed and coordinated than a startup’s. These same large companies carefully track the progress of startups and can be counted on to acquire them when the timing is right. This is all part of the EDA ecosystem that has worked for many years.

Startups, emerging companies, large established players: in an ecosystem such as EDA, we need both large suppliers and innovative small companies to keep driving and encouraging technological advances.

CDC (Clock Domain Crossing) Analysis – Is this a misnomer?

Monday, February 15th, 2010

The high-tech industry is chock-full of acronyms. Each time a new problem is identified, out comes a new acronym that quickly gets standardized. It is typical for complex new problems in, for example, our VLSI design industry to take years to understand and solve fully. Unfortunately, in many of these cases, the new acronym gets co-opted by a premature solution, or a solution to a subset of the actual problem, rather than being associated with the fundamental problem itself. The end result can be confusion and miscommunication, as customers who use these incomplete solutions with the fancy acronyms do not realize that their problem is not fully solved, and end up with project failure.


A revealing instance of this phenomenon is CDC – Clock Domain Crossing – analysis. As asynchronous crossings became more mainstream because of larger dies and greater system-level complexity on chip, it became clear that managing metastability was of paramount importance. The analysis of the design for proper metastability management came to be known as CDC analysis. Unfortunately, early solutions for CDC analysis only verified single-bit metastability management (synchronizers implemented as back-to-back flops) and data-bus metastability management (controlled by a synchronized common enable signal). Even today, as CDC analysis has become mission-critical for SoC designs, many designers’ understanding is that it requires checking only these two attributes.
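For reference, the single-bit structure those early tools looked for is the classic two-flop synchronizer. A minimal sketch, with illustrative names:

    // Classic single-bit metastability control: two back-to-back flops
    // in the receiving clock domain.
    module two_flop_sync (
      input  wire dstClk,
      input  wire async_in,   // driven from another clock domain
      output reg  sync_out
    );
      reg meta;
      always @(posedge dstClk) begin
        meta     <= async_in;  // first flop may go metastable...
        sync_out <= meta;      // ...and gets a full cycle to settle
      end
    endmodule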



In reality, the above two checks are just the tip of the CDC analysis iceberg. To begin with, they represent only a limited check of metastability management. In addition, clock domain analysis must check for many more issues than metastability management alone, such as the ones listed here:

  • Data correlation in fast-to-slow clock crossings
  • Cycle-jitter tolerance in data crossings
  • Cycle jitter in control crossings
  • Glitch issues even when control buses are gray-coded (see the property sketch after this list)
  • Glitch issues with clock-gating implementations
  • Re-convergence of signals synchronized separately into a single clock domain
  • Correct implementation of asynchronous FIFO protocols
  • Correct implementation of resets that cross multiple domains
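To give a flavor of what lies behind one of these checks, here is a sketch of an SVA property for the gray-coding item above: a multi-bit pointer that crosses domains must change by at most one bit per source-clock cycle, so the destination can only ever sample the old or the new value. The clock, reset and signal names here are hypothetical:

    // Hypothetical gray-coding check on a domain-crossing pointer 'ptr'.
    property p_gray_coded;
      @(posedge xmitClk) disable iff (rst)
        $countones(ptr ^ $past(ptr)) <= 1;
    endproperty
    assert property (p_gray_coded);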


As with metastability verification, all of the above issues are very difficult to characterize and verify with simulation-based techniques. There have been many silicon re-spins as a result of not verifying them comprehensively. Examples of failures we have seen in practice are quite revealing:

  • An asynchronous reset control that crossed clock domains but was not synchronously de-asserted, causing a glitch on control lines to an FSM
  • An improper FIFO protocol controlling an asynchronous data crossing, leading to a read-before-write and a functional failure
  • Re-convergence of synchronized control signals to an FSM that were not gray-encoded, resulting in cycle jitter that, in turn, caused a transition to an incorrect state
  • A glitch in a logic cone on an asynchronous crossing path that was latched into the destination domain, corrupting the captured data
  • Gating logic inserted by back-end tools for power management that caused glitches on a clock

Verifying the above issues requires a combination of structural analysis and static formal property checking. Older tools that do only a limited amount of checking but continue to use the CDC moniker do the customer a disservice. They also do a disservice to modern tools like the Meridian product family from Real Intent, which provides the most comprehensive analysis of CDC-related issues. Meridian does this by identifying all asynchronous crossings, verifying proper metastability management in crossings, comprehensively verifying that logic in asynchronous crossings is glitch-free, and verifying asynchronous crossing control protocols. Since logic is inserted into clock nets by back-end tools, it is important that the tool be able to run on a netlist. Meridian is the only CDC tool that can be run on netlists as well as on RTL, recognizing all asynchronous crossing controls, including FIFOs. Meridian is also the only tool that enables CDC checks to be performed in simulation in addition to static structural and formal analyses.


So, beware of acronyms! Make sure you know what they really represent.

EDSFair – A Successful Show to Start 2010

Monday, February 8th, 2010

From Prakash Narain, CEO of Real Intent

I have to admit that I was apprehensive going to EDSFair in Yokohama this year. Even though the economy is getting better, it was hard to know how many people would actually go to tradeshows. I was pleasantly surprised – EDSFair 2010 turned out to be a wonderful success for us.

Prior to the show, we announced that Professor Masahiro Fujita from the University of Tokyo had joined Real Intent as a technical advisor, and that we had shipped Ascent Lint 1.2 with new features.

The flow of visitors at EDSFair was steady throughout and kept us relatively busy for the two days. As we also noticed at DAC, visitors to the booth were very knowledgeable, patient and had done their homework.  They generally requested detailed demos of the Ascent, Meridian and PureTime product families, had excellent questions, and finished with follow up plans. Overall, it was a very productive two days allowing us to touch base with many of our key existing and potential customers.

I also had a press meeting and gave a seminar on cost effective application of formal technology to improve overall design verification flows. Both were well received.

The methodical work culture in Japan contributes in no small way to the success of EDSFair.  For example, new exhibitors at EDSFair are formally introduced to the audience by a Japanese expert.  This is followed by an organized tour where the audience is brought to the booth and the vendors are given an opportunity to highlight their products. Then there is a brief question and answer session.

We all had a good time at the show. The warmth of the people more than compensated for the cold weather outside!

With a successful EDSFair to start the year, we are now preparing for DVCon 2010 this month. We wish all exhibitors and attendees a successful DVCon 2010.

From Katsuhiko Sakano, General Manager of Real Intent K.K.

On Thursday, January 28 and Friday, January 29, 2010, Real Intent exhibited at EDSFair (Pacifico Yokohama), with demos of the Ascent (automatic lint verification without testbenches), Meridian (CDC verification) and PureTime (SDC and timing verification) product families. We were able to meet more than 120 people, including existing users and new visitors, which exceeded our expectations, and I was surprised by how many of them are considering formal verification and a rethinking of their design flows. Going forward, we will work to understand the design problems our customers face and propose our formal verification tools, which offer the highest cost-effectiveness.


Since the Lehman shock last year, Japan, like the US, seems to remain in a recession. It was striking this time how many recruiters came to the show, checking booth by booth whether anyone was hiring. EDSFair is also a big once-a-year event, and there is a pleasant, reunion-like side to it: you run into former colleagues and friends. Whether next year’s show will be held in the same format as this year’s, however, will depend on the participation of the three major vendors.


Ascent Is Much More Than a Bug Hunter

Monday, February 1st, 2010

Real Intent’s Ascent family of front-end RTL verification tools serves multiple functions in the verification flow.  The most important is that Ascent automatically finds functional bugs that are difficult to catch in simulation.   While finding bugs early on is very important in itself, Ascent is much more than a bug hunter.  Ascent also improves code coverage, saves simulation cycles, and reveals logic optimization potential. Some of these benefits are discussed here.

Code coverage is an important metric for simulation sign-off. Verification teams try to get as close to 100% coverage as possible. It is common for projects to struggle to achieve 100% coverage due to a combination of reasons:

  1. A logic bug prevents a code block from being exercised
  2. A hole in the simulation test plan prevents a block of code from being exercised

The typical hard-to-detect unreachability bug is caused by unintended correlation between deeply nested control statements. This is very hard to detect in simulation since the test plan must exhaust all combinations of control values to determine that the nested block is unreachable. In other words, deeply nested unreachable blocks waste many simulation cycles, result in less than desired coverage, and, at the end of the simulation, one may still not be sure whether the block is truly unreachable.
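A hypothetical fragment illustrates the pattern: the nested conditions below are correlated through 'state', so the inner block is dead code, yet simulation alone cannot easily establish that:

    // Hypothetical example of correlated nested controls. 'busy' and
    // 'idle' are mutually exclusive by construction, so the inner block
    // is unreachable and no test can ever cover it.
    module unreachable_demo (
      input  wire       clk,
      input  wire [1:0] state,
      input  wire [7:0] data_in,
      output reg  [7:0] data_out,
      output reg        err_flag
    );
      localparam IDLE = 2'b00;
      wire busy = (state != IDLE);
      wire idle = (state == IDLE);
      always @(posedge clk) begin
        if (busy) begin
          data_out <= data_in;
          if (idle)            // contradicts 'busy': can never be true here
            err_flag <= 1'b1;  // unreachable block -> permanent coverage hole
        end
      end
    endmodule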

Finding such unreachable blocks or demonstrating that hard-to-reach blocks are in fact reachable is relatively easy for Ascent’s formal engines. By using Ascent early on, the verification team can determine and fix unreachability issues before simulation is begun so that simulation cycles are not wasted trying to obtain unachievable coverage. On the flip side, Ascent can also help determine that a block not yet reached in simulation is in fact reachable, thereby indicating to the verification team that the test plan needs to be enhanced. Even better, Ascent can be used to find simulation traces to reach the difficult blocks.

Ascent also reveals optimization potential for simplifying designs.

For example, Ascent uses its deep-sequential formal engines to check for constant nets, constant expressions, unreachable states and unused state bits within a design. Because of the deep sequential analysis required to arrive at these results, the reported constant nets or expressions are often not easily identified manually. Those constants could be design bugs or opportunities for design simplification. A common reason for the presence of such constants is the interaction between new constraints and legacy RTL code, or the effect of system-level constraints on deeply embedded blocks.  An original fragment from a real design where Ascent detected a constant expression is shown in Figure 1(a).  Due to a programming requirement, the following constraint was imposed post facto on the inputs:

assume property (@(posedge clk) disable iff (rst)
                 (B == 1'b0) |-> ((A == 1'b0) && (C == 1'b0)));

As a result, Ascent reported Y as a constant, meaning that the logic can be simply replaced by Figure 1(b).  This change, of course, also results in further simplification in downstream logic.
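Figure 1 is not reproduced in this post, but a hypothetical fragment of the same flavor shows how such a constraint forces a constant. In the sketch below (our own example, not the customer’s logic), the assumption above makes Y identically zero, so the cone driving Y can be replaced by a constant:

    // Hypothetical stand-in for the kind of logic in Fig 1(a).
    // If B == 1, then ~B == 0 and Y == 0; if B == 0, the assumed
    // property forces A == 0 and C == 0, so Y == 0 again.
    assign Y = (A | C) & ~B;   // constant 0 under the constraint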

In summary, Ascent can play a key role in achieving very high coverage with a much smaller amount of simulation as well as find optimization potential in your design – it does much more than just find bugs.

Ascent Lint Steps up to Next Generation Challenges

Monday, January 25th, 2010

The key to greater design productivity is to detect bugs as early as possible and as close to their source as possible. Lint is the first, and a critical, component of the early-verification tool chain. It is easy to use and finds nontrivial bugs that can save your bacon later on. Real Intent has been a pioneer in developing technologies for early verification and in promoting the paradigm. Earlier this week, in response to customer demand, we announced the release of Ascent Lint 1.2, a next-generation lint tool that performs smart syntax and semantic lint checks for complex designs. While a variety of lint and RTL code-analysis tools are available, Real Intent has stepped up to introduce a distinctive lint tool that addresses the serious deficiencies in the existing offerings.
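As a hypothetical illustration of the kind of nontrivial bug lint catches long before simulation, consider the fragment below: the incomplete case statement silently infers a latch where purely combinational logic was intended.

    // Hypothetical lint catch: no 2'b11 arm and no default, so 'grant'
    // holds its previous value and a latch is inferred in what was meant
    // to be combinational arbitration logic.
    module arb_demo (
      input  wire [1:0] sel,
      input  wire       req0, req1, req2,
      output reg        grant
    );
      always @* begin
        case (sel)
          2'b00: grant = req0;
          2'b01: grant = req1;
          2'b10: grant = req2;
        endcase
      end
    endmodule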

First a little bit of history: Real Intent tools have always used lint technology to check the design prior to formal verification, and issued violation messages about the design to the user in a log file.  Over the years, upon customer requests, Real Intent exposed more information in a debug-able report file.  Sure enough, it was encouraging to hear from our customers that the Real Intent tool front-end was able to catch crucial issues which some of the lint tools in the market did not. Real Intent has always maintained close relationships with its customers. Ascent Lint 1.2 is a product of these relationships. 

Customer feedback indicates that as design complexity has increased, popular lint tools in use today are starting to show signs of severe performance degradation and noise. The lint reports generated by some tools have become cumbersome due to the large number of irrelevant messages they generate. While some lint tools offer custom rule creation by reusing source code from prepackaged rules, the common experience is that the rule language is pretty inscrutable and the rules are complex to implement. Also, if implemented in TCL, these custom rules can run very slowly.

Real Intent’s Ascent Lint speeds up the development of complex system-on-chip (SoC) designs by offering the ability to select from a comprehensive set of smart rules based on industry guidelines. These rules are implemented in an extremely fast engine, with runtimes of about a minute for checking 230 of our most comprehensive rules on a million-gate design. This data point was obtained at a customer site and turned out to be a real eye-opener for a customer deeply frustrated with the performance of their existing lint tools. Ascent Lint offers low-noise yet comprehensive reporting that is debug-able through a GUI with cross-probing to the design source. Ascent Lint enables customization of company-specific guidelines by graphically configuring existing rules, simply by choosing or entering values in a box. We are committed to continuing to provide smart industry-standard and customized rules that detect complex design and coding bugs.

As we see other lint offerings falling off a cliff in the face of rich HDLs and design complexity, we believe that Ascent Lint will be the next generation technology that saves the day.

A couple of other blogs have also written about Ascent Lint.
