Mentor Calibre nmDRC
There have been blogs debating whether this release, or by extension any release characterized chiefly as a performance improvement, can truly be considered significant. Most new releases tout performance improvements, and even existing applications benefit from the speedup of new computers.
Of course performance is a critical attribute of any software application. When I was first programming computers 100 years ago, it was routine to submit a deck and wait until the following day for your printed output. If there was a typo, an entire day was lost. Today I (and, I suspect, you) get frustrated when Google doesn't return Avogadro's number of hits in the blink of an eye. There is something about the human psyche: we can wait an hour for something but not a minute. This is probably because you can go do something else if you know the wait will be an hour, but not if the anticipated wait is a minute or two. For an application to be truly interactive, it must be just that: it must respond in a way that doesn't slow down the thought process. If echoing lagged typing, most software applications would be useless for all but the most accomplished touch typists.
In the case of EDA there are applications, such as verification regression tests, that run for hours, even days. This is true even when using multiprocessor machines and compute farms, and it has an obvious impact on the productivity of those waiting for the results. Since physical verification occupies a large chunk of the critical path, any significant improvement would reduce time to market, reduce the risk of missing the market window entirely, and thereby improve potential revenue and profit.
The use of multiple processors to improve performance can hardly be seen as revolutionary. Array processors, math co-processors, graphics processors and the like have been around forever. In some cases one must merely recompile or re-link an application to take advantage of this capability. In other cases the application must be re-coded and possibly re-designed to leverage these devices.
Grid computing allows one to unite pools of servers, storage systems, and networks into a single large system so the power of multiple systems' resources can be delivered to a single user point for a specific purpose. To a user or an application, the system appears to be a single, enormous virtual computing system. Virtualization enables companies to balance the supply and demand of computing cycles and resources by providing users with a single, transparent, aggregated source of computing power. Well-publicized uses of distributed computing include SETI@home (Search for Extraterrestrial Intelligence) and the Human Genome Project.
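The appeal of this approach for physical verification comes from the fact that a layout can often be cut into tiles and checked independently. The following sketch is purely illustrative (it is not Mentor's implementation, and the tile data and rule are invented): it farms hypothetical per-tile checks out to a worker pool and merges the results.

```python
# Illustrative sketch only: a DRC-style job parallelizes well when the
# layout can be partitioned into independent tiles checked in parallel.
from concurrent.futures import ThreadPoolExecutor

def check_tile(tile):
    """Hypothetical rule check: count shapes narrower than the minimum width."""
    min_width = 3
    return sum(1 for width in tile if width < min_width)

# Invented layout data: each inner list holds one tile's shape widths.
tiles = [[4, 2, 5], [1, 1, 6], [3, 3, 3], [2, 7]]

# Farm the tiles out to a pool of workers, then merge the violation counts.
with ThreadPoolExecutor(max_workers=4) as pool:
    violations = sum(pool.map(check_tile, tiles))

print(violations)  # total violations found across all tiles
```

In a real flow the hard part is exactly what this sketch glosses over: shapes that straddle tile boundaries, and merging results without double-counting.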
Instead of executing faster, another common approach is to avoid or minimize the work to be done, thereby reducing the time required. When doing system backups, it is routine to do a full backup weekly and daily incremental backups of only those files that have changed. In the area of software development there are build and source control systems. If an application consists of but a single module, any source code change requires re-compilation and re-linking. But what if there are thousands of modules and only a few have been modified? The build system is able to identify which modules are impacted and recompile only those. Even so, the entire application must be retested. I can speak from personal experience that the most innocuous change can have significant unforeseen consequences. In my last editorial I described Calypto, an equivalence checker that, among other things, will tell you whether micro-architectural optimizations have introduced any side effects requiring re-verification.
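The incremental idea behind tools like make can be sketched in a few lines (an assumption-laden illustration, not any particular tool's logic): rebuild a module only when its source is newer than the output of the last build.

```python
# Minimal sketch of incremental rebuilding: compare source timestamps
# against the timestamps of the previously built objects.
def modules_to_rebuild(src_mtimes, obj_mtimes):
    """Return the modules whose source changed since the last build,
    or which have never been built at all."""
    return [m for m, t in src_mtimes.items()
            if t > obj_mtimes.get(m, float("-inf"))]

# Invented timestamps for a hypothetical four-module application.
sources = {"parser": 100, "lexer": 100, "codegen": 250, "driver": 100}
objects = {"parser": 200, "lexer": 200, "codegen": 200}  # "driver" never built

print(modules_to_rebuild(sources, objects))  # only the stale or new modules
```

With thousands of modules and a handful of edits, this check turns a full rebuild into a rebuild of just the affected few, which is exactly the work-avoidance strategy described above.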
Other vendors have claimed runtimes of less than two hours. In the case of CPUs and graphics cards there are well-recognized standards and benchmarks for performance, so comparison across vendors is straightforward. When it comes to EDA applications, existing users of a given tool may be able to make quick performance comparisons between releases, but comparisons across vendors are not as easy.
I leave it to the end users to judge the significance of this release of Calibre.
The top articles over the last two weeks as determined by the number of readers were:
Magma Announces U.S. Patent & Trademark Office Asked to Re-Examine '446 and '438 Patents; Third Party Asks for Patents to Be Invalidated The patents are two of the three patents involved in a patent dispute between Magma and Synopsys Inc. that is pending before the U.S. District Court for the Northern District of California. Magma received notice of the requests for re-examination on Aug. 22, 2006 via U.S. Postal Service. Magma has been prevented from seeking re-examination of these patents as the result of a court order that was requested by Synopsys.
Incentia Timing Analysis and Constraint Management Software Adopted by Ambarella Incentia Design Systems announced that Ambarella has adopted Incentia's TimeCraft(TM) and TimeCraft-CM as its static timing analysis and constraint management software in its nanometer design flow. TimeCraft is a full-chip, gate-level static timing analyzer (STA) for timing sign-off. TimeCraft-CM, Incentia's Constraint Manager, consists of a constraint checker, a qualified SDC (Synopsys Design Constraint) writer, and a constraint debugger.
X-FAB Selects Cadence Solution for Maximum Yield; Cadence(R) Virtuoso(R) NeoCircuit DFM Aids Analysis, Optimization for Analog IP X-FAB Semiconductor Foundries AG, a leading analog/mixed-signal semiconductor foundry, announced it is implementing Cadence Design Systems' Virtuoso NeoCircuit DFM solution to identify and eliminate yield-related problems early in the design phase and fabrication process.
International Conference on Computer Aided Design (ICCAD) Previews Technical Program Focused on Today's Challenges and Emerging Technologies The Conference will be held November 5-9 at the DoubleTree Hotel in San Jose, California. ICCAD 2006 will feature industry-leading keynote addresses from Phil Hester, Chief Technology Officer of AMD, entitled "An Industry in Transition: Opportunities and Challenges in Next-Generation Microprocessor Design," and Leon Stok, Director of EDA at IBM, entitled "Innovation in Electronic Design Automation."
Other EDA News Jasper Design Automation Integrates Verific's SystemVerilog Component Software With JasperGold Verification System
Jasper Design Automation Announces Immediate Availability of GamePlan(TM) Verification Planner As A Free Download
Incentia Timing Analysis and Constraint Management Software Adopted by Ambarella
Grace Semiconductor Manufacturing Corporation (Grace) selects Silicon Canvas Laker Layout Solution for their Custom IC Designs
MEDIA ALERT/Discover How Altera's Programmable Solutions Drive Innovation in the Broadcast Industry at IBC2006
Agilent Technologies Completes Acquisition of Xpedion Design Systems, a Leading Provider of RFIC Simulation and Verification Software
Sequence Extends EM And V-drop Analysis To Full-Custom Designs
Toshiba Adopts Cadence QRC Extraction for 65-nm Design Flows
Novas Launches 2006 User Conference Program
Aprio Contributes Technology to Si2's Design to Manufacturing Coalition
Other IP & SoC News
FSA Announces 2006 FSA Suppliers Expo and Conference
JamTech Achieves Technology Breakthrough for Digital Audio
Samsung Electronics First to Mass-produce 1Gb DDR2 Memory with 80nm Process Technology
IBM, Chartered, Infineon and Samsung Announce Process and Design Readiness for Silicon Circuits on 45nm Low-Power Technology
New High-End Intel(R) Server Processors Expand Performance Leadership
Semtech Announces Selected Second Quarter Results
AMI Semiconductor, Inc. Appoints Ted Tewksbury as President and Chief Operating Officer
Sigma Designs, Inc. Reports Second Quarter Results