The Breker Trekker

Blast from the Past: Verification in Silicon
January 7th, 2015 by Tom Anderson, VP of Marketing

Tom Anderson is vice president of Marketing for Breker Verification Systems. He previously served as Product Management Group Director for Advanced Verification Solutions at Cadence, Technical Marketing Director in the Verification Group at Synopsys, and Vice President of Applications Engineering at …
Late last year, we published a series of blog posts discussing how the world of large chip designs is moving toward multi-processor, cache-coherent SoCs. This trend encompasses several sub-trends: the addition of one or more processors, growth in the number of processors, the use of shared memory, and the addition of caches to improve memory performance. The result of this movement is clear: large chips are becoming more difficult to verify than ever.

Verification teams face challenges at every turn. It's hard to run a complete SoC-level model in simulation, especially if the team wants to boot an operating system and run production applications. This may be feasible on emulation or FPGA prototyping platforms, but these cost a lot of money. What we're starting to see is the truly stunning trend that some teams are taping out SoCs without ever having run the entire design together. This means that full-chip verification and debug don't happen until first silicon is in the lab. Let's explore why this is happening.
First, let's acknowledge just how big some of these designs are. GPUs may have hundreds of specialized graphics processors and a handful of general-purpose CPUs. Servers may have 8-16 CPUs with enormous memories and very high-speed interconnects. Sea-of-processors networking designs may have dozens of CPUs linked together by a complex grid or switching fabric. By any estimate of gate count, the newest chips are a big leap beyond the prior generation.

We've seen several projects now where the verification team has simply given up trying to run the entire SoC in simulation for any testing, let alone hardware-software co-verification. These teams race to solidify major subsystems so that they can assemble a full design in an emulation box or a prototyping system. It is rare for a project to afford more than one or two such platforms, so debug becomes a major choke point for verifying the complete chip.

Structured design techniques and design reuse make it more likely that well-verified subsystems can be plugged together and work in many cases. However, as we show in our famous SoC iceberg, there are many system-level features that cannot be verified until the entire design is assembled. Waiting until a hardware platform is ready delays finding corner-case bugs until fairly late in the project. The turn time to diagnose a bug, fix it, recompile and download the design, and verify the fix is many hours or even days.

In most cases, the goal remains to run the production software together with the complete SoC hardware design before tapeout. In reality, not every team achieves this. Many, if not most, SoC projects budget for one or more chip turns. They expect that they will not have achieved anything near complete verification by first tapeout, and so plan to do some of the verification in the lab using the first fabricated version of the chip. Of course, debugging is much harder in the real chip than in simulation or even emulation, and the turn time to make fixes and verify them can take months and cost millions of dollars.

This is troublesome enough, but we are now seeing projects that never manage to run the complete SoC even in emulation or an FPGA prototyping system before tapeout. They report that the entire chip simply will not fit into a hardware platform, or that they can't afford to buy a large enough box. In such cases, the complete SoC is assembled only in the lab, when first silicon is available.

The situation is becoming even worse, since some types of products use multiple SoCs on a board and multiple boards in the system. Even if it's possible to simulate or emulate a full SoC, the entire system still comes together only with actual silicon in the lab. So we have moved from complete hardware-software co-verification before tapeout, to hardware-only verification, to incomplete hardware verification. We have a blast from the past, in which at least some of the verification process is being performed, or will be performed, in silicon.

Doubtless, the EDA industry will help by enabling more verification at the virtual-prototype level, increasing the capacity of simulators and hardware platforms, and developing better techniques for plug-and-play design reuse. But such improvements will always be in a race with increasing design size and complexity. The good news, for Breker customers, is that we can help at any stage in the verification process.
Whatever portion of your design (IP, cluster, subsystem, SoC, or full system) you have running on any platform (virtual prototype, simulator, accelerator, emulator, FPGA prototype, or silicon), our Trek family of products can generate high-quality test cases that will stress and verify it. Our next blog post will describe the assistance we can provide in more detail.

Tom A.

The truth is out there … sometimes it's in a blog.

Tags: Breker, cache, coherency, DV, functional verification, IoT, IP, portable stimulus, SoC, SoC verification, TrekApp, TrekSoC, TrekSoC-Si, uvm, VIP