January 22, 2007
TotalRecall from Synplicity
We treated it as a language. We have, of course, compilers for SystemVerilog, VHDL and Verilog, and we are now developing a PSL compiler. We want to be able to take in mixed-language designs, which may even contain PSL. As we bring a mixed-language design into our synthesis engine, we want to bring its assertions into hardware in a functionally correct way. Once an assertion is embedded in the hardware, we have a mechanism for using it as the trigger; we can even use multiple assertions as a trigger. That is wrapped into the TotalRecall approach.
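The idea of an assertion compiled into hardware and used as a trigger can be illustrated with a minimal software sketch. The property checked here ("every req must be followed by ack within MAX_LATENCY cycles") and all names are hypothetical examples, not Synplicity constructs; the point is just that an assertion reduces to a small state machine whose failure output can serve as a trigger signal.

```python
MAX_LATENCY = 4  # cycles allowed between req and ack (illustrative)

class AssertionTrigger:
    """Cycle-by-cycle monitor; 'fired' goes True when the property fails."""
    def __init__(self):
        self.countdown = 0   # cycles remaining in which ack must arrive
        self.fired = False

    def clock(self, req, ack):
        if self.fired:
            return True
        if self.countdown > 0:
            if ack:
                self.countdown = 0
            else:
                self.countdown -= 1
                if self.countdown == 0:
                    self.fired = True   # assertion failure = trigger event
        if req and not self.fired:
            self.countdown = MAX_LATENCY
        return self.fired

mon = AssertionTrigger()
trace = [(1, 0), (0, 0), (0, 0), (0, 0), (0, 0)]  # req once, ack never arrives
fired = [mon.clock(r, a) for r, a in trace]
print(fired)  # [False, False, False, False, True]
```

In real hardware this monitor would be a few flip-flops and gates synthesized alongside the design, with `fired` routed to the capture logic as the trigger.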
Does Synplicity have any patents on this technology?
Patented technology is the key. We have already received one patent on this and have applied for others. This is a unique, novel approach we are employing here.
Editor: US Patent #6,904,576, "Method and System for Debugging Using Replicated Logic"
As the designer implements the logic into the FPGA(s), we create a separate logic element that we call the TotalRecall logic element. Within it we have a completely cloned version of the original design logic: we take the design module for the entire ASIC and replicate it fully within this TotalRecall logic element. Right in front of it we place a stimulus memory buffer. As stimulus arrives at the original design logic, it is also fed into the stimulus memory. The original design logic runs as it normally would, driven by the stimulus at its inputs and showing live output at the primary outputs. The replicated design, on the other hand, runs backwards in time. The user can dial in how far backwards in time they would like to run; that is basically dependent on memory depth, and we don't see a real limit. Today, if you go to any of our board partners, you can put off-chip memory onto their boards to virtually any depth. The user dials in how much memory depth they want, and we create the stimulus memory buffer out of FPGA memory, FPGA logic fabric, or off-chip memory.

The replicated design logic runs backwards in time, fed from the stimulus memory buffer. At some point a triggering event happens in the original design logic: an assertion fires, or a bug is otherwise detected. Once that happens, we freeze the stimulus memory buffer, and the replicated design yields the initialization information: all the initial signal values and state values, i.e. memory, DSP blocks and so forth. That allows the simulator to be run from that initialization data and then moved forward through the stimulus memory buffer, seeing the real stimulus coming through the design. That is what we form the testbench out of. We know that the end point will be the bug; that is essentially a 100% guarantee.
When this is brought into the simulator, the user knows he is headed towards that bug. If they need additional granularity, they can work fully within the simulator using the initialization data and testbench, or they can run, so to speak, in situ, using a hardware-in-the-loop approach to constantly seed the simulator with new values based upon the stimulus being exercised in the FPGA hardware.
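The capture-and-replay flow described above can be sketched in software terms. This is a toy model under stated assumptions, not Synplicity's implementation: a fixed-depth stimulus buffer trails the live design, a trailing checkpoint stands in for the replicated copy's state at the start of the buffer window, and when the trigger fires the checkpoint becomes the initialization data and the frozen buffer becomes the testbench, whose end point is guaranteed to be the bug.

```python
from collections import deque

DEPTH = 8  # user-dialed memory depth, in clock cycles (illustrative)

class ToyDesign:
    """Stand-in for the design logic: an accumulator whose 'bug'
    (and trigger condition) is its state exceeding 100."""
    def __init__(self, state=0):
        self.state = state
    def clock(self, stim):
        self.state += stim
        return self.state > 100   # bug condition doubles as the trigger

original = ToyDesign()
stim_buffer = deque(maxlen=DEPTH)   # the stimulus memory buffer
checkpoint = 0                      # replica's state at the buffer's start

stream = [5, 7, 3, 9, 11, 2, 8, 6, 4, 10, 12, 30, 40]  # live stimulus
for stim in stream:
    if len(stim_buffer) == DEPTH:
        # Buffer full: the oldest stimulus is about to fall out, so fold
        # it into the checkpoint, which trails DEPTH cycles behind.
        replica = ToyDesign(checkpoint)
        replica.clock(stim_buffer[0])
        checkpoint = replica.state
    stim_buffer.append(stim)
    if original.clock(stim):        # trigger fires in the live design
        break

# Freeze: checkpoint is the initialization data, the buffer is the testbench.
sim = ToyDesign(checkpoint)
hit_bug = [sim.clock(s) for s in stim_buffer]
print(hit_bug[-1])   # True: replaying the testbench ends at the bug
```

The design choice this illustrates is the one in the interview: only the last DEPTH cycles of stimulus need be stored, because the trailing replica supplies a consistent starting state from which replay deterministically reaches the trigger.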
You say the user can determine how far back in time the system goes. Is that across the board or on a case-by-case basis? In either case, how would a newly trained designer develop a feel for how far back he should go?
We would say treat it as a system-level event; treat it as your system clock. If you need to go back, say, 10,000 clock cycles, that is what you would dial in. It is basically clock cycles in the system that we reference. All of the timing issues across the board we handle automatically; that is part of the automation we are applying here. The ability to keep the design logic across multiple FPGAs in sequence, or synchronized, is, let's call it, an old problem, solved about eight years ago with our Certify technology. Certify has mechanisms for replicating logic across multiple FPGAs and keeping them synchronized with the overall design, and that same kind of approach is used here.

With the dialing in, you're right, a little bit of guesswork or feel for what is happening is required. Typically you would find where the bug is; if you do not have enough memory depth, you would essentially restart things. The nice thing about using real hardware is that it operates in the tens-of-MHz range. You are running at system-level speeds, probably using an operating system, actually transacting with a real system environment. That is where you are going to step back through to a line in your software code, and that is why we think there is a real nice connection to software debugging in the future. You can basically step back through to the line in your software you suspect might be the issue and then see how many clock cycles forward that event actually takes place. There is a feel that you have to develop for that. But we went out and talked to about 30 customers around the world about this approach. All of them viewed it as a huge productivity boost over what they do today, because they have to do the same kind of guesswork: they have to create a testbench from some point in time that they are not quite sure of, and a sequence from that point forward to the bug.
There will probably be guidelines or some level of insight provided, but at this point we do not have a direct guideline to offer.
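To get a rough feel for what dialing in a given depth costs, some back-of-envelope arithmetic helps. The widths and clock rate below are purely illustrative assumptions, not figures from Synplicity; the sketch just relates a chosen history depth to buffer size and to wall-clock time at the tens-of-MHz speeds mentioned above.

```python
# Hypothetical sizing arithmetic for the stimulus memory buffer.
input_width_bits = 256      # assumed bits of primary-input stimulus per cycle
depth_cycles = 10_000       # how far "backwards in time" the user dials in

buffer_bits = input_width_bits * depth_cycles
buffer_mbytes = buffer_bits / 8 / 1024 / 1024
print(f"{buffer_mbytes:.2f} MB of stimulus memory")   # 0.31 MB

# At tens of MHz, that history covers only a fraction of a millisecond:
clock_hz = 50_000_000       # assumed 50 MHz system clock
print(f"{depth_cycles / clock_hz * 1e6:.0f} us of real time")  # 200 us
```

Under these assumptions even a 10,000-cycle window is modest in memory terms but brief in real time, which is consistent with the interview's point that deep captures push the buffer into off-chip memory.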
This is a technology rather than a product announcement. Do you have a timeframe for when some product may be announced?
Not really! We are actually just starting. We are in alpha testing internally, using real customer designs on a potential product, and we will begin beta testing that product in January. That is one of the reasons for the timing of this. We feel that sometime in 2007 we are likely to have at least one product in the market that contains this. That is where the rubber hits the road. If our beta testing results in passing grades in all categories and a real desire to bring this to market quickly, then there would be two to three months of beta testing followed by two to three months of productization. We may have something by midyear. If beta testing shows that we need to do a little more of this and a little less of that, or that we have to make more substantial changes, we will do all those things. In no sense are we trying to rush a product to market, because we feel this is quite big in magnitude. You can use any clever phrase you want: it hastens the demise of simulation; it brings, for the first time, real hardware/software-level capability to the RTL verification process; it allows real-time traffic and real-time stimuli to be used in verification. There are a number of things that we think are tremendous improvements upon what exists today. We do not want to come into the market with something that is not very well proven. These are big claims. We are the first to acknowledge that claiming to cover all the functional bugs in a design is (wow) a big claim. We had better have some serious proof, not from our own tests but from our customers, that it genuinely does all that we claim it does. We are a cautious company, very R&D oriented. I can't give you any timeline; to be honest, we do not have an internal stake in the ground for when product launches will be. We will see how the beta test proceeds over the next two or three months.
Based on current thinking is the product likely to be a new product or an enhanced version of an existing product?
I think there are a couple of ways we can handle it. We are thinking of this as a new product; we will most likely have a new product that incorporates this technology. But we are developing a number of things: not just the TotalRecall part, but also the assertion-into-hardware synthesis. There is a potential that some of the underlying technology might find a home in our existing products. Certify, for example, is a very successful product today; it is used by design teams to take large ASIC designs and bring them into multiple FPGAs for prototyping. This technology is likely to be a boost to Certify, a new and improved Certify. Certify now contains assertion synthesis capability. Until we have the product itself nailed down in development, we will not be leaking this out into other products.
-- Jack Horgan, EDACafe.com Contributing Editor.