
EDA Magazine Review March 29, 2004

Hardware/Software Co-verification


by Jack Horgan - Contributing Editor
Posted anew every four weeks or so, the EDA WEEKLY delivers to its readers information concerning the latest happenings in the EDA industry, covering vendors, products, finances and new developments. Frequently, feature articles on selected public or private EDA companies are presented. Brought to you by EDACafe.com. If we miss a story or subject that you feel deserves to be included, or you just want to suggest a future topic, please contact us!

In our personal and professional lives we are all familiar with application software such as Internet browsers and office productivity tools. These programs, along with EDA software, are developed in high-level languages (C/C++, Java, ...) and targeted to run on COTS (commercial off-the-shelf) PCs and workstations under popular operating systems (Windows, Unix, Linux). The developers of these packages have access to powerful Integrated Development Environments (IDEs) for generating, managing and testing code. Programmers have the luxury of running their applications in a stable environment and in real time under debug mode. They are largely isolated from the details of the hardware design.


This is not the case for developers of embedded software. In particular, they do not have access to the target hardware until there is at least a physical prototype. This means that a significant portion of the software development effort is delayed, making the overall process more sequential and lengthier than desirable. In an era of shrinking product lifetimes and competitive time-to-market pressures, this is a serious issue. Also, problems, or even improvements, that could and should have been resolved in hardware remain undetected until the software debug phase. At that stage the time and cost of a hardware fix are excessive: at 130 nm, a mask set for a complex SoC exceeds $500,000, and at 90 nm, pricing approaches $1,000,000. Working around such problems in software rather than hardware can translate into reduced performance and/or dropped functionality. Lastly, having firmware tested against ideal hardware (no manufacturing defects) prior to first silicon can be very helpful to hardware engineers in their early testing.


These issues are becoming more serious as the amount of software content in embedded systems grows. As a measure of this growth, consider a 2002 study by VDC, a technology market research and strategy firm. VDC estimated the number of embedded software developers at 236,800, growing at 8.3% annually, versus 130,900 embedded hardware developers growing at only 4%. Industry experts see software costs equaling hardware costs at 130 nanometers and exceeding them at 90 nanometers.
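As a rough illustration of why this gap keeps widening, compounding VDC's 2002 headcounts at the stated growth rates gives the following back-of-the-envelope projection (the author's arithmetic, not a VDC forecast):

```python
# Compound the 2002 VDC headcounts at the stated annual growth rates
# for five years to see how the software/hardware ratio widens.
sw, hw = 236_800, 130_900          # 2002 embedded developer counts (VDC)
sw_rate, hw_rate = 0.083, 0.04     # annual growth rates cited above

ratio_2002 = sw / hw
for year in range(5):
    sw *= 1 + sw_rate
    hw *= 1 + hw_rate

print(f"After 5 years: ~{sw:,.0f} software vs ~{hw:,.0f} hardware developers")
print(f"Ratio grows from {ratio_2002:.2f} to {sw / hw:.2f}")
```

At those rates the software-to-hardware headcount ratio climbs from about 1.8 to over 2.2 in five years.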


Hardware verification has itself become more challenging. Verification times have increased with rising gate counts and growing overall design complexity. According to a 2002 survey by Collett International Research, only 39% of designs were bug free at first silicon, while 60% contained logic or functional flaws; more than 20% required three or more silicon spins. A Collett survey also showed that nearly 50% of total engineering time was spent in verification.


In the traditional development process the hardware and software portions proceed independently and in parallel, with little communication between the two teams. An ideal solution would enable developers to do software verification against an accurate model of the silicon before first silicon is available, and with sufficient performance to run the complex software expected in an advanced device. This is referred to as co-verification. Since software is coded rather than synthesized, some may prefer the term co-simulation. This approach should shorten product development time through greater parallelism, reduce risk through earlier testing, and improve the design through greater communication between the software and hardware design teams.
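The co-verification idea can be made concrete with a toy lockstep loop. The sketch below is the author's illustration only (a hand-written cycle-accurate "hardware" model of a register block, plus a polling "software" driver), not any vendor's actual interface:

```python
# Minimal lockstep hardware/software co-simulation sketch (illustrative).
# The "hardware" is a cycle-accurate model; the "software" is a driver
# that writes a control register and polls for completion, one clock at a time.

class HwModel:
    """Toy peripheral: asserts a DONE bit LATENCY cycles after START is set."""
    LATENCY = 4

    def __init__(self):
        self.regs = {"CTRL": 0, "STATUS": 0}
        self._countdown = None

    def clock(self):
        # Advance the model by one cycle, mimicking one RTL simulator step.
        if self.regs["CTRL"] & 1 and self._countdown is None:
            self._countdown = self.LATENCY
        if self._countdown is not None:
            self._countdown -= 1
            if self._countdown == 0:
                self.regs["STATUS"] |= 1   # raise DONE
                self._countdown = None

def driver(hw, max_cycles=100):
    """Software side: start the peripheral, then busy-wait on DONE."""
    hw.regs["CTRL"] = 1                    # write START
    for cycle in range(max_cycles):
        hw.clock()                         # hardware and software in lockstep
        if hw.regs["STATUS"] & 1:
            return cycle + 1               # cycles until DONE was observed
    raise TimeoutError("peripheral never signalled DONE")

cycles = driver(HwModel())
print(f"DONE observed after {cycles} cycles")  # prints: DONE observed after 4 cycles
```

In a real co-verification flow the `HwModel` role is played by an RTL simulator, emulator or FPGA prototype, and the driver is the actual firmware running under a debugger, but the lockstep structure is the same.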


One approach to hardware verification is to use high-end hardware emulation systems such as Mentor Graphics' VStation (a massive array of FPGAs) and Cadence's Palladium (based upon a custom ASIC design) to drive simulation acceleration and in-circuit emulation. These systems deliver high-capacity, high-performance real-system verification. They can scale above 100 million gates with performance up to 1 MHz in a simulation-like debug environment that allows 100% signal visibility into the design. The major drawback of these systems is their cost, which makes them inappropriate or inaccessible for some design projects and smaller firms.


While old-fashioned breadboarding is not possible with today's SoCs and ASICs, one can develop custom hardware emulators based on FPGAs. Using FPGA synthesis and partitioning tools such as Certify from Synplicity, an entire circuit can be mapped into one or more FPGAs, and the software development environment connected to the board via a standard JTAG interface. Unfortunately, the lack of internal visibility precludes debugging of the design hardware. This approach to rapid prototyping is of course its own hardware design project, consuming time, money and talent. Synplicity claims that a complete hardware prototype in the form of an FPGA-based board can be generated in under a month for less than $100K, including the tools. Time and cost would increase if multiple FPGA boards were required.


There are a few small firms that offer a general-purpose FPGA-based prototyping environment at a more reasonable price point than the high-end hardware emulators described earlier. Charles Miller, Aptix SVP of Marketing and Business Development, compared the roll-your-own approach to walking a tightrope without a net: what happens if the FPGA-based prototype doesn't come up when power is applied? Lauro Rizzatti, EVE-USA CEO, pointed out possible problems with clock distribution, memory mapping and limited FPGA I/O pins. The risk of the roll-your-own approach increases with the number of required FPGA boards, and any change in the design can force rework of the custom prototype.


One vendor of FPGA-based prototyping environments is Emulation and Verification Engineering (EVE), founded in France in 2000 by former executives of Meta Systems. EVE's product line is branded ZeBu, short for Zero Bugs, and consists of a family of hardware-assisted verification PCI platforms based on Xilinx Virtex-II FPGAs. The design under test is mapped onto one or several ZeBu boards via a Reconfigurable Test Bench (RTB) made up of additional Virtex-II FPGAs, SDRAM and SRAM chips, and proprietary firmware. The mapping is carried out through any of the most popular commercial ASIC/FPGA RTL synthesis tools plus the software compilation package included with the ZeBu system, which handles gate-level partitioning, clocks, and memory models. For hardware/software co-verification, the processing units can be connected to a software debugger running on the same PC, or on a separate PC/workstation, through a JTAG interface. ZeBu can execute software drivers, operating systems or applications at MHz speeds while providing hardware debugging capabilities. By connecting as many as eight ZeBu boards, the system can reach a maximum verification capacity of 12 million ASIC gates.


Another vendor, Aptix, is a fifty-person company based in Sunnyvale and founded in 1989. The company provides flexible hardware platforms for building reconfigurable pre-silicon prototypes (PSPs) based on its proprietary Field Programmable Interconnect (FPIC) technology. A monolithic SoC design can be synthesized and partitioned across multiple FPGA devices and integrated with CPUs, DSPs, memories and other IP blocks to complete the circuit. Aptix provides three modes of operation supporting a variety of performance levels. In co-emulation mode, the user runs an RTL testbench against the hardware prototype running in the Aptix platform at tens of KHz. In vector mode, test vector sets are streamed into the hardware at speeds of hundreds of KHz. Finally, in transaction mode, system-level models are run with the hardware to produce speeds measured in MHz. Aptix also offers the Software Integration Station, a low-cost Field Programmable Circuit Board replica without the hardware probing and debugging features, for distribution to software developers. Aptix supports and promotes a block-based approach to embedded SoC design: IP blocks are prototyped and validated independently using re-usable testbenches and co-emulation, and once validated, the IP blocks are easily assembled to produce a full system prototype.
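The performance ranking of these modes follows largely from how often the host must cross the link to the prototype per unit of useful work. The sketch below is a simplified illustration of that tradeoff; the eight-cycles-per-write expansion factor is an assumption for the example, not an Aptix figure:

```python
# Illustrative comparison of per-cycle vector streaming vs transaction-level
# stimulus: a single bus transaction expands into many per-cycle vectors
# inside the prototype, so the host crosses the link far less often.

VECTORS_PER_WRITE = 8   # assumed per-cycle vectors needed for one bus write

def vector_mode_crossings(writes):
    # One host<->prototype exchange per stimulus vector (per cycle).
    return writes * VECTORS_PER_WRITE

def transaction_mode_crossings(writes):
    # One exchange per bus transaction; cycle expansion happens in hardware.
    return writes

w = 10_000
print(f"vector mode:      {vector_mode_crossings(w):,} link crossings")
print(f"transaction mode: {transaction_mode_crossings(w):,} link crossings")
```

Since the host-to-prototype link is typically the bottleneck, moving the cycle-by-cycle expansion into the hardware is what lets transaction mode reach MHz speeds while vector mode stays in the hundreds of KHz.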


A third vendor is Axis Systems, which Verisity acquired in February 2004 for $80 million in cash and stock. Axis had revenues greater than $20 million in its last reported fiscal year. Its emulation and simulation products, based upon ReConfigurable Computing (RCC) technology, include Xcite for up to 10 million ASIC gates at speeds of 100K cycles/sec, Xtreme for up to 100 million gates at speeds up to 1 MHz, and Xtreme-II for a platform-based design flow.


One problem that FPGA-based prototypes face is incompatibility between ASIC and FPGA synthesis solutions. Usually, RTL code, synthesis constraints, scripts and the ASIC IP must be changed to move designs between the ASIC and the prototype, a time-consuming and error-prone manual effort. On March 15th Synopsys announced Design Compiler FPGA (DC FPGA), a new FPGA synthesis tool targeted at this problem. Built upon Design Compiler technology, DC FPGA enables the integration of the ASIC and FPGA design environments. It accepts the same RTL code, constraints, scripts and IP libraries as Design Compiler and provides the same interface to Formality formal verification. DC FPGA also offers Adaptive Optimization technology, which automatically activates the best core synthesis algorithms based upon multiple parameters, then dynamically controls and reorders how the algorithms are applied. In the press announcement, Synopsys SVP Antun Domic said, "Over forty percent of our customers are prototyping their ASICs in FPGAs."









-- Jack Horgan, EDACafe.com Contributing Editor.




