The Breker Trekker
Tom Anderson, VP of Marketing
Tom Anderson is vice president of Marketing for Breker Verification Systems. He previously served as Product Management Group Director for Advanced Verification Solutions at Cadence, Technical Marketing Director in the Verification Group at Synopsys and Vice President of Applications Engineering at …
Ruminating about Accelerating, Emulating, and Prototyping
April 15th, 2014 by Tom Anderson, VP of Marketing
Last week I published a commentary on the Electronic Engineering Times site about the recent growth in the hardware emulation market. I noted that hardware-based platforms have become almost as big a market as software simulation and that some industry projections see them becoming dominant over the next few years. Of course, our friends at Jasper are predicting that formal will become the dominant verification technology, so it will be fun watching a three-way race.
For this post, I want to dig a bit deeper into hardware platforms in general. Historically, such platforms have been divided into three categories: simulation acceleration, in-circuit emulation (ICE), and FPGA prototyping. The reality is that these are no longer clearly distinct categories; there is a lot of fuzziness and even some overlap. While the market for all three types of hardware platforms is growing, I find that my observations and opinions vary depending upon which specific solution I’m considering.
Way back when I was a CAD manager in the late 80s and early 90s, there were only two viable options between software simulation and fabricated silicon. FPGAs were still quite limited, so people who built hardware prototypes prior to tape-out generally did so using a motley mix of existing components on printed circuit boards or even (gasp!) wire-wrapped boards. These were one-off projects where there was generally no commercial solution available.
But there was simulation acceleration, in which the RTL design was mapped into hardware while the testbench remained in simulation. Getting the design into the box was not always so easy, and the speed was limited by the testbench in accordance with Amdahl’s law. I remember evaluating a couple of acceleration platforms, I believe from IKOS and Zycad (which later merged), but I never found the speedup compelling enough to be worth the cost.
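The Amdahl’s-law limit is easy to make concrete. A short sketch (illustrative numbers only, not measurements from any real platform) computes the overall speedup when only the design portion of a simulation run is accelerated while the testbench stays in software:

```python
def amdahl_speedup(accelerated_fraction, acceleration_factor):
    """Overall speedup when only part of the workload is accelerated.

    accelerated_fraction: share of total runtime that moves into hardware
                          (e.g. the RTL design, not the testbench)
    acceleration_factor:  how much faster that portion runs in the box
    """
    return 1.0 / ((1.0 - accelerated_fraction)
                  + accelerated_fraction / acceleration_factor)

# Hypothetical example: if simulating the design is 80% of the run and the
# box runs that portion 1000x faster, the testbench left in software still
# caps the overall gain at roughly 5x.
print(round(amdahl_speedup(0.80, 1000), 2))  # ~4.98
```

That cap is why “acceleratable” testbench components matter so much: pushing the accelerated fraction toward 100% is the only way to approach the raw speed of the hardware.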
The other factor at work was the enormous increase in speed of general-purpose processors for many years. A new acceleration platform would come out, be interesting for a while, but lose much of its edge when faster processors sped up software simulation. I suspect that the asymptotic flattening of processor speeds is one reason for acceleration becoming more interesting again. Another big factor is the development of “acceleratable” testbench components so that little if anything remains running in software simulation.
In-circuit emulation dispensed with the testbench entirely, and used the target system as the verification environment. As with acceleration, the design was mapped into a box, but a big bunch of fickle cables connected the box to a target board. The ICE platform acted as a stand-in for the final chip, a great idea in theory. In practice, one had to deal with reliability issues and the speed difference between the emulated chip and the target board in addition to mapping the design (or a reduced version of the design) into the box.
I remember a very professional proposal from Quickturn around 1994 for an ICE platform that would have been able to fit much of a graphics subsystem we were developing. I was more intrigued by this approach than I had been by acceleration, but ultimately the million-dollar price tag was more than we could handle as a startup. Interestingly, my next job was at AMD during the height of their battle to challenge Intel in x86 processors, and there I found literally rooms full of Quickturn boxes all working to emulate the next-generation chip.
There was never a really clear line between the underlying technologies used for acceleration and emulation. Eventually vendors introduced boxes that could serve either role. However, these platforms tended to be expensive since they optimized compile time over density when mapping the design into hardware. It was a rare project that had more than one ICE box, so it was used for hardware verification and some hardware-software co-verification (booting the operating system and applications) but not for software development itself.
The desire to have a replicable hardware platform used by key software developers drove some companies to develop their own prototyping systems, usually based on boards stuffed with FPGAs. To keep the cost much lower than ICE, users tried to use as much of each FPGA as possible. This often required partially or totally manual partitioning of the design across the FPGAs as well as longer compile times. These issues have been eased somewhat by the advent of commercial FPGA prototyping platforms with better front-end software.
As I noted earlier, the lines are fuzzy. FPGA prototypes may operate in ICE mode, plugging into the target board. The same boxes may be used for acceleration and ICE. Synthesizable testbenches that can run on both of those platforms, and perhaps even on the FPGA prototypes, further blur the distinctions. But, also as noted, all forms of hardware platform are growing in popularity as chips get ever bigger and simulation comes under more pressure.
I’m still not a big believer in traditional simulation acceleration, but ICE and other platforms freed from software simulation clearly have considerable value. To be effective, it must be possible to diagnose RTL bugs and fix them with quick turn-around time. FPGA prototypes play an important role for software developers once the RTL is stable enough that the platforms can run for days if not weeks without hitting a must-fix hardware bug. All three types of platforms are used by our customers, and we support them completely with our TrekSoC-Si product.
The truth is out there … sometimes it’s in a blog.