Ramesh is VP of Product Strategy at Real Intent. He brings 25+ years of experience in engineering, customer management, product management and marketing to Real Intent. Prior to joining Real Intent, he led product marketing of the core RTL design product suite at Atrenta. Previously, …
April 2nd, 2015 by Ramesh Dewangan
The story of “David and Goliath” from the book of Samuel has taken on a secular meaning: it describes any underdog situation, a contest in which a smaller, weaker opponent faces a much bigger, stronger adversary. Companies across the technology industries, not just in EDA, deal with this struggle.
Organizations have moved from a “build once, last forever” approach to “build fast and improve faster” to meet the dynamic requirements of their customers. In order to scale, evolve and respond, companies are choosing between two business philosophies: one focuses on building larger, process-driven yet efficient organizations, and the other on smaller, more efficient teams.
The panel discussion “The paradox of leadership: Incremental approach to Big Ideas” at the recent Confluence 2015 conference addressed this question. It explored the pros and cons of each philosophy and tried to gauge whether there is a preferred way to create success, in keeping with the conference theme: “Building the Technology Organizations of Tomorrow.” In my previous blog, Billion Dollar Unicorns, I discussed which companies were leading innovators, but the question remains: how do companies get there? Read the rest of Underdog Innovation: David and Goliath in Electronics
March 26th, 2015 by Sarath Kirihennedige
This article was originally published on TechDesignForums and is reproduced here by permission.
Constraints are a vital part of IC design, defining, among other things, the timing with which signals move through a chip’s logic and hence how fast the device should perform. Yet despite their key role, the management and verification of constraints’ quality, completeness, consistency and fidelity to the designer’s intent is an evolving art.
March 19th, 2015 by Ramesh Dewangan
The business magazine Fortune, in a Feb. 2015 article, proclaimed this “The Age of Unicorns” — private companies valued at more than $1 billion by investors. Unicorns are the stuff of myth, but billion-dollar tech start-ups seem to be everywhere, backed by a bull market and a new generation of disruptive technology. According to a recent New York Times article, there are over 50 unicorns in Silicon Valley right now.
Upcoming unicorns were a popular discussion topic at the Confluence 2015 conference organized by Zinnov on March 12th in Santa Clara, Calif. The conference theme was “Building the Technology Organizations of Tomorrow.”
Here is a sampling of six unicorns that have emerged as real winners using innovative strategies: Read the rest of Billion Dollar Unicorns
March 12th, 2015 by David Scott
Last week I attended the Design and Verification Conference (DVCon) in San Jose. It had been six years since my last visit to the conference. Before that, I had attended five years in a row, so it was interesting to see what had changed in the industry. I focused on test bench topics, so this post records my impressions in that area.
First, my favorite paper was “Lies, Damned Lies, and Coverage” by Mark Litterick of Verilab, which won an Honorable Mention in the Best Paper category. Mark explained common shortcomings of coverage models implemented as SystemVerilog covergroups. For example, a covergroup has its own sampling event, which may or may not be appropriate for the design. If you sample when a value change does not matter to the design, the covergroup counts a value as covered when in fact it isn’t. In the slides, Mark’s descriptions of common errors were pithy and, like any good observation, obvious only in retrospect. More interestingly, he proposed correlating coverage events via the UCIS (Unified Coverage Interoperability Standard) to verify that they have the expected relationships. For example, a particular covergroup bin count might be expected to equal the pass count of some cover property (in SystemVerilog Assertions) elsewhere, or to match some block count in code coverage. It struck me that some aspects of this must be verifiable using formal analysis. You can read the entire paper here and see the presentation slides here.
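The sampling-event pitfall can be sketched in a few lines of SystemVerilog (the module and signal names here are hypothetical, not taken from Mark’s paper):

```systemverilog
module cov_sketch (
  input logic       clk,
  input logic       valid,   // design only consumes 'data' when valid is high
  input logic [3:0] data
);
  // Pitfall: samples on every clock edge, so 'data' values are counted
  // as covered even on cycles the design ignores them.
  covergroup cg_naive @(posedge clk);
    coverpoint data;
  endgroup

  // Guarded version: samples only when the value actually matters,
  // so a bin is hit only when the design consumes the value.
  covergroup cg_guarded @(posedge clk iff valid);
    coverpoint data;
  endgroup

  cg_naive   cg_n = new();
  cg_guarded cg_g = new();
endmodule
```

The only difference is the `iff valid` guard on the sampling event, but it is exactly the difference between coverage that reflects the design and coverage that merely reflects the clock.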
I was also impressed by the use of the C language in verification — not SystemC, but old-fashioned C itself. Harry Foster of Mentor Graphics shared some results of his verification survey, and only two languages showed year-over-year growth in use: SystemVerilog and C. For example, in a Cypress paper by David Crutchfield et al., configuration files were processed in C. Why is this, I wondered? Perhaps because SystemVerilog makes it easy via the Direct Programming Interface (DPI): you can call SystemVerilog functions from C and vice versa. Also, a lot of people know C. I imagine if there were a Python DPI or Perl DPI, people would use those a lot as well! Read the rest of My Impressions of DVCon USA 2015: Lies; Experts; Art or Science?
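As a rough illustration of why the DPI lowers the barrier, here is a minimal sketch (the function and module names are made up, not from the Cypress paper): a C helper is declared once, then called like any native SystemVerilog function.

```systemverilog
// C side (compiled separately, e.g. config_util.c):
//   #include <stdlib.h>
//   int parse_hex(const char* s) { return (int)strtol(s, NULL, 16); }

// SystemVerilog side: declare the C function once...
import "DPI-C" function int parse_hex(string s);

module tb;
  initial begin
    // ...then call it directly, e.g. while reading a configuration file.
    int value = parse_hex("1A");
    $display("parsed value = %0d", value);  // 26
  end
endmodule
```

No wrapper generation or pointer marshalling is needed for simple types like this, which goes a long way toward explaining C’s staying power in test benches.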
March 5th, 2015 by Graham Bell
The Design and Verification Conference Silicon Valley was held this week. During Aart de Geus’ keynote, he shared how SoC verification is “shifting left”, so that debug starts earlier and results are delivered more quickly. He identified a number of key technologies that have made this possible:
Real Intent has also been talking about this new suite of technologies that improve the whole process of SoC verification. Pranav Ashar, CTO at Real Intent, wrote about these in a blog posted on the EETimes website. Titled “Shifting Mindsets: Static Verification Transforms SoC Design at RT Level”, it introduces the idea of objective-driven verification:
We are at the dawn of a new age of digital verification for SoCs. A fundamental change is underway. We are moving away from a tool and technology approach — “I have a hammer, where are some nails?” — and toward a verification-objective mindset for design sign-off, such as “Does my design achieve reset in two cycles?”
Objective-driven verification at the RT level now is being accomplished using static-verification technologies. Static verification comprises deep semantic analysis (DSA) and formal methods. DSA is about understanding the purpose and intent of logic, flip-flops, state machines, etc. in a design, in the context of the verification objective being addressed. When this understanding is at the core of an EDA tool set, a major part of the sign-off process happens before the use or need of formal analysis. Read the rest of Smarter Verification: Shift Mindset to Shift Left [Video]
March 5th, 2015 by Graham Bell
This weekend on March 7, there will be Holi celebrations throughout San Jose and Silicon Valley. In a celebration of spring that first started in India, young people gather to throw colored powders on each other, and often water is used to smear the colors as well.
I have taken part several times with friends who grew up in India. It is a lot of fun, and the food and sweets are excellent. One popular celebration will be in Milpitas on March 7. You can find all the details here.
February 26th, 2015 by Graham Bell
New Ascent Lint with DO-254 Compliance Testing
On February 25 we announced the 2015 release of Ascent Lint for comprehensive RTL analysis and rule checking. The new version delivers enhanced support for the SystemVerilog language, DO-254 policy files for compliance testing of complex electronic hardware in airborne systems, deeper rule coverage and easy configurability. We believe it is the industry’s fastest, highest-capacity and most precise lint solution.
Additional enhancements and new features for Ascent Lint include:
To read further details about the announcement, click here. For additional insights and comments from Srinivas Vaidyanathan, staff technical engineer, including his take on the Cricket World Cup, please watch the video interview below.
February 19th, 2015 by Graham Bell
Lunar New Year Day is on Thursday, February 19, 2015. According to Chinese astrology, 2015 is the year of the Wooden Ram and the 4,712th year in the traditional calendar. The original Chinese word for this year’s animal is “yang,” a generic term for various horned ruminant mammals. In translation, people have interpreted the word differently, and communities pick the animal that represents the qualities they admire. For example, sheep are associated with mildness and moderation, seen as an ideal attitude by some Asian societies, so they call 2015 the Year of the Sheep.
There is an overwhelming amount of information on this topic across various web pages; the following Wikipedia page is a good place to start: Goat (zodiac). Let’s just say that the Year of the Ram will be an auspicious one and will bring a happy turnaround in fortunes in the coming months.
Happy New Year!
P.S. I am reminded of the stories about early computer translation programs that converted “hydraulic ram” into the equivalent of “water goat,” which is not the same thing!
February 12th, 2015 by Graham Bell
In the YouTube video interview below, Oren Katzir, vice-president of application engineering, introduces the topic of clock-domain crossing (CDC) verification. He identifies the four key issues that must be addressed to achieve SoC sign-off, and the features Real Intent’s Meridian CDC tool offers to handle the deluge of data that can arise in CDC analysis and to work effectively with different design methodologies. I am sure you will learn something from Oren’s experience with many customers’ designs.
February 5th, 2015 by David Scott
In part one, I shared how Dan Hafeman, CTO at IKOS Systems, championed the use of transaction-level interfaces to hardware emulation, but the approach had not caught on (except with one very large and generous customer) by the time of the dot-com crash in the early 2000s.
Since this is a personal history, I’ll start this part with a bike ride. In October 2013, John Stickley took me on a 40-mile bicycle tour from Fort Lee, NJ, to Brooklyn, NY, and back (amazingly, Manhattan is bike-friendly now!). John was one of the R&D engineers who had worked with our customer in the late ’90s on the SystemC modeling side. Ever since, John has been at the center of the emulation transaction-modeling world, first for IKOS and then for the Mentor Emulation Division after IKOS was acquired by Mentor Graphics.
Since I’d last seen John, I’d had the chance to write a SystemC-based interface to ARM Fast Models (ARM’s high-level processor and other models) using TLM2, the Transaction-Level Modeling API standard. I’d been out of transaction-based modeling for 10 years, but this was an “Aha!” moment. This was what we had needed back at the turn of the century! The rest of the world had finally caught up to what we had created back then.
John and I had a chance to talk about this after the bike ride, and he agreed with me completely. John was always enthusiastic, but now especially so.
At this point, I’m not the best person to write the history of transaction APIs, but I can try.
First, Accellera standardized the transaction interfaces to emulation as SCE-MI (Standard Co-Emulation Modeling Interface). The Open SystemC Initiative finally achieved what the Virtual Socket Interface Alliance could not, and created a TLM (transaction-level modeling) standard as part of SystemC. TLM2 followed, and I suspect what really launched it was the extensions mechanism.
People will criticize TLM2 for being simple, but it needs to be: basically address-based reads and writes — and the extensions mechanism offers at least a standard escape hatch for bus-specific features. When ARM created its mapped-instruction-set processor models, it could create the necessary extensions specific to the AMBA bus.
Then Accellera, spearheaded by John Stickley, moved to a later-generation standard, SCE-MI 2, which implemented a multi-channel transaction interface that integrated naturally with other modeling interfaces like the SystemVerilog DPI (Direct Programming Interface) and TLM2. With easily interoperable standards-based interfaces, plus the availability of high-level models and tools, an entire ecosystem became possible. Now everyone is talking about this use model with emulators, and even about higher-level use models like scenario-based test bench generation. In some cases, you can build transaction-based environments that test packet-based systems more richly than in-circuit emulation ever could!
It was good to see an idea that was ahead of its time, as originally conceived at IKOS, finally become well-established in verification. This involved the efforts of not just the IKOS people, most of whom are still developing the technology at Mentor, but also the other vendors, standards bodies, and customers throughout the market.
I was always especially proud of the original IKOS effort, because it was the first large and critical project I had led. But I was even prouder to tell its original inventor, Dan Hafeman, that his long ago idea was now mainstream, and had changed the industry.
This brings my retrospective to an end. If I missed an important piece of the story, let me know.