At this year’s ChipEx, STMicroelectronics (ST) will discuss how they used formal methods to improve the productivity of their verification. In particular, they had three key aims:
To close verification projects with appreciably less time and effort than that required by a constrained random approach;
To promote a greater use of assertions by encouraging designers to develop formal properties for their blocks;
To augment or replace legacy in-house flows with mature industry tools. This reduces maintenance overhead and promotes a more robust approach.
They applied formal methods at the unit, block, and system levels of an ARM-based CPU sub-system (see Figure 1). Each project gave different insights into the effectiveness of the formal approach. To make an effective evaluation, they also developed constrained random alternatives, which allowed them to make direct comparisons and reduced the project’s risk.
At ChipEx they will present a paper describing the productivity improvements they experienced using formal methods to verify a critical CPU sub-system targeted at mobile applications. In particular, they describe the challenges involved and how a formal tool (Jasper) delivered benefits in terms of effort savings, re-use, and insight into IP that was not fully characterized in the context of a new design. The full presentation will also describe their experiences using formal in the context of low-power verification, control/status register checking, and sequential equivalence.
Figure 1: An ST ARM-based CPU sub-system. The shaded blocks were verified using Jasper.
As consumers, we want more functionality from our electronic devices, whether smartphones or household appliances. The problem these demands create is not only an increase in power consumption, but also a significant increase in the complexity of managing the power in these devices. Consumers rarely think about these consequences, but the typical electronic design engineer certainly does.
Today’s electronic designs require that power management and reduction be a central concern throughout the chip design flow, from architectural design to RTL implementation and physical design. The power verification dilemma is two-fold: the design and verification engineer must confirm not only that the inserted power management circuitry functions correctly, but also that the overall chip functionality is not corrupted by the power intent described in the UPF or CPF descriptions.
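To make the second half of that dilemma concrete, here is a minimal, hypothetical sketch of one such power-intent check: when a domain is powered off, its outputs must be isolated to a known clamp value so downstream logic is not corrupted. Real flows express the intent in UPF or CPF and check it with power-aware assertions and formal analysis; this small Python model (all names and values invented for illustration) only shows the shape of the check, and the final trace deliberately drops the isolation enable a cycle early, so running it raises an AssertionError flagging the violation.

```python
# Hypothetical model of an isolation cell on a power-domain boundary.
CLAMP_VALUE = 0  # isolation clamp value assumed for this example


def isolated_output(domain_on: bool, raw_output: int, iso_enable: bool):
    """Return what actually propagates across the domain boundary."""
    if not domain_on:
        # With the domain off, only the clamp value may propagate;
        # None models a corrupted/unknown value leaking through.
        return CLAMP_VALUE if iso_enable else None
    return raw_output


def check_power_intent(trace):
    """Check every cycle of a recorded trace for isolation violations."""
    for cycle, (domain_on, raw_output, iso_enable) in enumerate(trace):
        out = isolated_output(domain_on, raw_output, iso_enable)
        assert domain_on or out == CLAMP_VALUE, (
            f"cycle {cycle}: domain off but output is {out}, expected clamp"
        )


# Example trace: isolation enable is dropped one cycle too early,
# so the last cycle trips the check.
trace = [(True, 5, False), (False, 7, True), (False, 3, False)]
check_power_intent(trace)
```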
When I, a high school senior, received an invitation to be a panelist at a discussion on “Engineering The Next Generation”, I was a bit surprised. I aspire to be an electronics or computer engineer, but have not yet entered college. My dad, Sunil Kakkar, who founded the chip design and verification company SKAK Inc. and serves on the technical program committee of DesignCon, told me that electronic design and semiconductor experts from all over the world would come to participate in the conference, which runs for four days. The conference would be a high-tech affair, so I would have to be technical in presenting my thoughts. I was somewhat apprehensive. Would I be able to get their attention, I wondered? Could I speak on topics that would make sense to them? What would their reaction be to a high school senior from Cupertino High School participating in the DesignCon conference?
MIT team finds way to manipulate and measure magnetic particles without contact, potentially enabling multiple medical tests on a tiny device.
If you throw a ball underwater, you’ll find that the smaller it is, the faster it moves: A larger cross-section greatly increases the water’s resistance. Now, a team of MIT researchers has figured out a way to use this basic principle, on a microscopic scale, to carry out biomedical tests that could eventually lead to fast, compact and versatile medical-testing devices.
The results, based on work by graduate student Elizabeth Rapoport and assistant professor Geoffrey Beach, of MIT’s Department of Materials Science and Engineering (DMSE), are described in a paper published in the journal Lab on a Chip. MIT graduate student Daniel Montana ’11 also contributed to the research as an undergraduate.
At DAC this year I had a lot of fun doing a live experiment to demonstrate some of the benefits and issues of concurrent design flows. I was at the Cadence Theatre giving a presentation called ‘Controlling the costs of SoC integration’, and I decided to make it more interactive by creating a design team and observing some of the effects of getting this team to work concurrently. We demonstrated how a little ‘twist’ caused a big upset to team deliveries!
The topic I introduced first was how system design flows are now highly concurrent. In the production of a system within a very tight timescale, it would be normal to have architecture definition, software development, virtual prototype development, and RTL design and verification all happening at the same time, whether at the IP, sub-system, or SoC level. I represented this as a set of rotating, interacting cogs.
The world population hit 7 billion last fall, with a billion more expected in a dozen years. “Lifecare” represents an incredible opportunity for the semiconductor industry to promote health, energy conservation, safety and productivity. From smart city infrastructure to medical care advances, from sensors and controls to nanotechnology, what new EDA ecosystems will emerge to better model the real world? Panelists participating in the discussion “Is Lifecare the Next Killer App?” at the Design Automation Conference on June 4, 2012 addressed the question, and their remarks are quite enlightening. Moderator Rick Merritt, editor at large of Electronic Engineering Times, led the discussion, which included Kristopher Ardis from Maxim Integrated Products, Fabrice Hoerner from QUALCOMM Inc., and Greg Fawcett from Palo Alto Research Center.
Jasper’s formal technology has advanced to the point that it can address a broad range of verification and design issues. With a strong foundation in fundamental proof technology and best-in-class capacity and performance, Jasper’s users now apply the tools and technology to address questions of connectivity, x-propagation, clock-glitch detection, cache coherence protocols, deadlock detection, property synthesis and more.
The added scope and breadth of use of Jasper’s tools and technology are leading users to demand a measurable, quantitative approach that helps correlate the results of formal proofs to verification closure, often expressed in terms of verification coverage. What is needed, first, is a methodology that correlates formal proof results with coverage. A second requirement is a methodology that can integrate the coverage results from Jasper’s formal technology with those from other verification tools, such as simulation. A third requirement is the ability for Jasper tools to use external coverage data to address areas of the design that are not covered by other verification methodologies.
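As a rough illustration of what the second and third requirements could look like, here is a small, hypothetical Python sketch that merges a set of simulation-covered targets with a set of formally proven targets, reports the combined coverage, and lists what is left for other methods. The names and data shapes are invented for illustration and do not reflect Jasper’s actual coverage formats or APIs.

```python
from typing import Dict, Set


def merge_coverage(sim_covered: Set[str],
                   formal_proven: Set[str],
                   all_targets: Set[str]) -> Dict[str, Set[str]]:
    """Classify coverage targets by which engine discharged them."""
    return {
        "both": sim_covered & formal_proven,
        "formal_only": formal_proven - sim_covered,   # value added by formal
        "sim_only": sim_covered - formal_proven,
        "uncovered": all_targets - sim_covered - formal_proven,  # remaining holes
    }


# Miniature example with invented target names.
all_targets = {"fsm_idle", "fsm_busy", "fifo_overflow", "iso_clamp"}
sim_covered = {"fsm_idle", "fsm_busy"}
formal_proven = {"fsm_busy", "fifo_overflow"}

report = merge_coverage(sim_covered, formal_proven, all_targets)
print(f"combined coverage: "
      f"{len(all_targets - report['uncovered'])}/{len(all_targets)} targets")
print("left for other verification methods:", sorted(report["uncovered"]))
```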
You can’t say enough about the power of educating from experience. At this year’s DAC, a few of Jasper’s top users volunteered to give seminars on their best practices for using Jasper Formal technology. If you missed DAC, or attended but didn’t get a chance to visit the Jasper booth, here’s your chance to view the online videos from ST, ARM, and NVIDIA on how they utilized Jasper Formal technology to get ahead in their designs.
ST: Low Power Verification and Optimization with Jasper Formal
STMicroelectronics talked about the verification challenges associated with sophisticated low-power designs, and the ways those challenges are being addressed by Jasper’s power-aware formal verification technology. The seminar detailed how Jasper’s low-power verification solution applies to:
Parsing CPF information to enable power-aware formal analysis
Cloud computing was the subject of much interest and discussion at this year’s DAC. While I acknowledge that the cloud will play an increasingly important role in our business, its displacement of today’s semiconductor design practices is easily a decade or more away.
The attraction of the cloud is increased access to raw computing power and software. If you need more speed, or a specialized program, just grab it and go. But the hard work and differentiation for our customers is still done in the trenches, not up in the sky. They are not about to put their proprietary designs on some server somewhere, or give up their customized and optimized design flows. What can we in EDA do for them today to help with increased design complexity and the need for higher performance?
For me the answer lies in maximizing throughput by leveraging the advantages of parallel computing. This is also a subject of great debate within our community, often involving the technical difficulties of parallelizing EDA software (particularly legacy software) and its impact on traditional business models. These are real challenges, but so are the benefits to our users when we get it right.
It is hardly news that the complexity of designing SoC devices keeps increasing, and it is a foregone conclusion that design is a relatively bounded problem compared to verification. Just as design reuse through semiconductor IP (a.k.a. design IP) helped bring designers up the productivity curve, in the last decade Verification IP (VIP) has done the same for verification engineers. Two leading methodologies, the Verification Methodology Manual (VMM) and the Open Verification Methodology (OVM), helped accelerate the adoption of structured verification methodologies using SystemVerilog, as well as the creation of commercially available verification IP to independently validate the integration of design IP in SoCs. Essentially, both methodologies are collections of SystemVerilog classes with inherent semantics for their behavior in different phases of the simulation. The user creates verification objects from these classes and attaches them to the design components as traffic/data generators, monitors, checkers, etc.
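To illustrate the phased-component idea behind these methodologies, here is a minimal, hypothetical Python sketch. The real VMM and OVM libraries are SystemVerilog class libraries with far richer phasing, factories, and configuration; this is only a language-neutral picture of “objects whose behavior is tied to simulation phases”, with all class and method names invented for the example.

```python
class Component:
    """Base verification component with simple build/connect/run phases."""
    def build(self):   pass   # construct sub-components
    def connect(self): pass   # hook up to the design or other components
    def run(self):     pass   # generate traffic, monitor, or check


class Driver(Component):
    def run(self):
        print("driver: sending stimulus to the design interface")


class Monitor(Component):
    def run(self):
        print("monitor: sampling bus activity and passing it to checkers")


class Env(Component):
    """Environment that owns its components and sequences them through phases."""
    def build(self):
        self.components = [Driver(), Monitor()]

    def run(self):
        for c in self.components:
            c.run()


# The "methodology" steps every component through the phases in order.
env = Env()
for phase in (env.build, env.connect, env.run):
    phase()
```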