Looking at the mobile computing space today, we see a wide array of products filling every niche in the market. A consumer could easily own a smartphone, tablet, and eReader, even though there is significant overlap in the functionality of these devices. Each has its strengths (and weaknesses) that cause the consumer to purchase more than one. For example, an eReader may leverage display technology that makes it superior to a tablet for reading outdoors. In other cases, multiple devices can complement each other: a cellphone could provide tethering for a tablet. Consumers and manufacturers recognize that there is room for multiple products to coexist within the mobile ecosystem, and in recent years, I’ve seen the same pattern in hardware-assisted verification.
Products in the hardware-assisted verification space also have their strengths and weaknesses. Traditional emulators, typically based on custom processors, offer full-chip capacity, easy bring-up, and simulator-like debugging capabilities—features ideal for hardware verification. But their performance, typically in the 500 kHz to 1 MHz range, limits the effectiveness of traditional emulation for software validation and hardware/software co-verification, two requirements for modern SoC realization.
At the other end of the spectrum, FPGA prototypes are the converse of traditional emulation. FPGA prototypes offer significantly higher performance, enabling at-speed connections for real-time testing. But bring-up is a largely manual process, and the trade-off for reaching real-time speeds is capacity, typically limited to a sub-system that fits into a handful of FPGAs. FPGA prototypes also lack hardware debug capabilities, making them better suited to software validation than to hardware/software co-verification or hardware verification.
Falling into the middle of the spectrum are FPGA-based emulators such as EVE’s ZeBu. These systems leverage the higher performance FPGA components, but add a software layer to provide the ease of bring-up, capacity, and debugging features associated with traditional emulation. Operating at multi-MHz speeds, these systems enable full-chip hardware/software co-verification, a use model previously inaccessible with either traditional emulators or FPGA prototypes alone.
Given these trade-offs, I’ve seen increasing co-existence at companies at the forefront of ASIC/SoC development. These are the organizations with the largest designs and tightest time-to-market windows, and they therefore see the most value in using the right tool for the right job at every stage of the project. Traditional emulation is used for enhanced hardware verification, where RTL changes may be more frequent. But when the design is more stable and needs to boot Linux or process HD video frames, FPGA-based emulation is leveraged for its combination of higher performance and hardware debugging. In parallel, FPGA prototypes are used to validate critical sub-systems that interface to devices like radios or cameras that must be tested at-speed.
Design teams have also asked us to integrate emulators with other hardware-assisted verification products to extend functionality. That sub-system running at-speed on the FPGA prototype can now be synchronized with the rest of the emulated SoC, providing a full-chip hardware/software co-verification solution that also interfaces with the real-time target hardware.
So, is co-existence for everyone? Not quite. For many organizations, the resources required to own and manage multiple hardware-assisted verification platforms may be prohibitive. Design teams routinely use our FPGA-based emulators across the entire project development cycle for hardware verification, software validation, and hardware/software co-verification, opting for a single solution that meets most, if not all, of their needs. But for those organizations pushing the boundaries of capacity and performance in SoC realization, co-existence is a winning strategy.