August 20, 2007
Helpful Advice for Entrepreneurs. Also: post-silicon validation, debug and in-system bring-up with ClearBlue from DAFCA
The solution has three components. The first is the ClearBlue ReDI (Reconfigurable Debug Instruments) Library, a simple library of RTL primitives. This library is used by the ClearBlue Implementation Studio, the tool the customer uses to insert instruments into their design. This is not just inserting instruments but creating and configuring them. The user reads their own circuit design into our tool in RTL form, navigates the design and makes selections: I would like to observe this bus, observe that interface over there, monitor what is going on around that shared memory subsystem. Essentially they walk around and point out to our tool where the areas of interest are. Using wizards in the Implementation Studio, they tell us what types of applications they would like to use post-silicon. From that information we decide what types of instruments to construct; the tool has RTL generators for this. We put all of that instrumentation into the design automatically, stitch everything together, close the design back up and write it out. In addition, we write constraint files and create a test bench and an equivalence checking script. Essentially we do everything to allow the customer to carry on with the rest of the design flow. It does not matter to us whether it is Cadence, Synopsys, Mentor, Magma or some combination of the four. The customer continues the design flow and fabricates the chip. Once the chip comes back, they use the third component, the ClearBlue Silicon Validation Studio. Once they have that up and running and communicating with the chip through the JTAG interface, they have access to all the applications: performance monitoring, logic analysis, debugging and so on.
Would you give us a quick synopsis of the major applications?
The most common application, and something most of our customers are already doing, is some form of logic analysis. We are doing essentially the same thing except our instruments can do not just logic analysis but a lot more. We designed the tool to emulate a commercial logic analyzer that you might rent or buy from Agilent or Tektronix. We have a slick graphical interface where customers create patterns for matching, design a state machine to build a triggering sequence, and control what data is written into embedded memory. They have all the basic controls they are used to. Most of our customers get this in a matter of minutes. They can use the tool without reading our user manual.
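The on-chip logic-analysis behavior described above — match patterns over sampled signals, step through a triggering sequence, then capture into embedded trace memory — can be sketched in software. This is a hypothetical model for illustration only; the class, the predicate-based stages and the buffer depth are invented, not DAFCA's actual instrument interface.

```python
CAPTURE_DEPTH = 8  # entries of embedded trace memory (illustrative)

class TriggerSequencer:
    """Toy model of a multi-stage logic-analyzer trigger state machine."""

    def __init__(self, stages, capture_depth=CAPTURE_DEPTH):
        self.stages = stages          # list of match predicates, one per stage
        self.state = 0                # current trigger stage
        self.triggered = False
        self.trace = []               # models the embedded capture memory
        self.depth = capture_depth

    def clock(self, sample):
        """Called once per clock with the sampled signal values (a dict)."""
        if self.triggered:
            if len(self.trace) < self.depth:
                self.trace.append(sample)   # capture post-trigger samples
            return
        if self.stages[self.state](sample):
            self.state += 1
            if self.state == len(self.stages):
                self.triggered = True       # full sequence matched

# Trigger when a write to address 0x40 is later followed by an error strobe.
seq = TriggerSequencer([
    lambda s: s["we"] and s["addr"] == 0x40,
    lambda s: s["err"],
])
samples = [
    {"we": 0, "addr": 0x00, "err": 0, "data": 1},
    {"we": 1, "addr": 0x40, "err": 0, "data": 2},   # stage 0 matches
    {"we": 0, "addr": 0x00, "err": 1, "data": 3},   # stage 1 matches -> trigger
    {"we": 0, "addr": 0x00, "err": 0, "data": 4},   # captured into trace memory
]
for s in samples:
    seq.clock(s)
```

In hardware the predicates would be pattern-match comparators configured from the graphical interface, but the control flow is the same.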
The next application we call transaction stimulus. This is the ability to modify circuit behavior in the system at speed, beyond just observing what is going on inside the chip. Some of our customers have validation requirements that necessitate the ability to control or stimulate circuit blocks in system at speed. We have a mechanism where we can wrap signals. By wrapping those signals we can gain control over them and supply functional stimulus: download that stimulus through the JTAG interface and flip a switch to create transactions in silicon. Next to logic analysis this is the most popular application. Infineon used us for on-chip validation, and this was one of the applications they used quite heavily.
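Conceptually, "wrapping" a signal means inserting a mux on the wire, with the select and the stimulus values loaded over JTAG. The following toy model illustrates that idea; the class and method names are invented for this sketch.

```python
class WrappedSignal:
    """Toy model of a wrapped signal: mux between functional value and stimulus."""

    def __init__(self):
        self.override = False   # mux select: functional value vs injected stimulus
        self.stimulus = iter(())

    def load_stimulus(self, values):
        """Model downloading a stimulus sequence through the JTAG interface."""
        self.stimulus = iter(values)
        self.override = True

    def drive(self, functional_value):
        """Value seen downstream of the wrapper each cycle."""
        if self.override:
            try:
                return next(self.stimulus)
            except StopIteration:
                self.override = False   # stimulus exhausted: back to functional mode
        return functional_value

sig = WrappedSignal()
out = [sig.drive(0) for _ in range(2)]      # functional mode: passes through 0
sig.load_stimulus([0xA, 0xB])               # inject two transaction values
out += [sig.drive(0) for _ in range(3)]     # 0xA, 0xB, then back to functional
```

The real instrument drives the wrapped wires at system speed; this model only captures the mux-and-sequence behavior.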
The next application is assertions in silicon. Many of our customers are using some form of assertion-based verification in pre-silicon. We allow them to take those assertions and pull them into silicon, into the reconfigurable instruments, to analyze the behavior of circuits.
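A temporal assertion of the kind that can be moved into a reconfigurable instrument might be "every request is granted within 4 cycles." The checker below is a hedged software sketch of such an assertion monitor; the request/grant protocol and the bound are invented for illustration.

```python
class GrantWithinN:
    """Toy checker for: every request is granted within n cycles."""

    def __init__(self, n=4):
        self.n = n
        self.pending = []   # cycles-remaining counters for outstanding requests
        self.fired = False  # latches True once the assertion has failed

    def clock(self, req, gnt):
        self.pending = [c - 1 for c in self.pending]
        if gnt and self.pending:
            self.pending.pop(0)          # oldest outstanding request is granted
        if any(c <= 0 for c in self.pending):
            self.fired = True            # a request aged out ungranted
        if req:
            self.pending.append(self.n)  # start the timer for a new request

chk = GrantWithinN(n=4)
# Request at cycle 0, grant at cycle 2: within the bound, so no failure.
for req, gnt in [(1, 0), (0, 0), (0, 1)]:
    chk.clock(req, gnt)
```

In silicon the same check would be built from the instrument's comparators and counters; if it fires, it can also serve as a trigger for the other applications.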
Event analysis is the combination of the three previous applications. Often, when customers are trying to validate their silicon, they have complex transactions that happen across the entire chip. Just logic analysis is not enough, just transaction stimulus is not enough, and just half a dozen assertions is not enough. They need to stitch all of these things together: triggering on one part of the system looking for certain events while assertions are running and while stimulating an IP block, to really get to the corner cases in silicon and analyze them thoroughly. They have analyzed those cases in simulation, and it took them weeks and weeks. They now want to analyze them in silicon, but that is often very difficult, especially when the chip first comes back into the lab. We are trying to provide a means to do it very early in that first-silicon phase: within a matter of days of getting the chip into the lab they can be using all of these applications and really seeing what is going on inside the chip in a much more efficient way.
Performance monitoring is a means to observe what is going on: to count events, measure latency between events, and track sequences of certain events. With this programmable instrumentation we have access to all kinds of counters, timers and programmable logic. The customer has the ability to construct a performance monitor unique to his application. For some type of mobile application, they can track the number of certain types of packets and certain types of error conditions. For a digital TV product, they can track the behavior of one of the coders or decoders. There are all kinds of interesting ways to apply performance monitoring. When people think about performance monitoring they think of instruction tracking: keeping track of what instructions were executed, what piece of memory was used, what threads were executed in a piece of software. While we can do some of that, what we provide is a much lower-level, customer-defined view of what is going on in silicon. It can be as simple and granular as watching how a single wire wiggles inside the silicon, or as complex as watching AMBA bus transactions over a period of hours. It is up to the customer to control the performance monitor in our solution.
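The counter-and-timer primitives described above compose into application-specific monitors. As an illustrative sketch only — the event names and monitor structure are invented — here is a monitor that counts one kind of event and measures the cycle latency between a start event and a stop event:

```python
class PerfMonitor:
    """Toy performance monitor: an event counter plus a latency timer."""

    def __init__(self):
        self.cycle = 0
        self.event_count = 0
        self.start_cycle = None
        self.latencies = []

    def clock(self, events):
        """events: the set of event names observed this cycle."""
        if "pkt" in events:                 # counter: packets seen
            self.event_count += 1
        if "req" in events:                 # timer start event
            self.start_cycle = self.cycle
        if "ack" in events and self.start_cycle is not None:
            self.latencies.append(self.cycle - self.start_cycle)
            self.start_cycle = None         # timer stop event
        self.cycle += 1

mon = PerfMonitor()
trace = [{"pkt"}, {"req"}, set(), {"pkt", "ack"}, set()]
for ev in trace:
    mon.clock(ev)
# Two "pkt" events were counted; the req-to-ack latency was 2 cycles.
```

In hardware the events would come from comparators on the watched wires, and the counters would be read back over JTAG.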
Snapshot is the scan-based debugging solution where we give the customer access to the scan chains. They can use the scan chains in conjunction with their at-speed logic. So they can create triggers to stop the chip and look at the scan chains. Or they can have an assertion, and if that assertion fires, they can extract the scan chain. They can deposit new values into the scan chain, restart the chip and create test conditions that would otherwise be very hard to set up.
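The extract/modify/restart cycle described above can be modeled very simply. This is a toy sketch, not DAFCA's interface: the chain length, contents and method names are all invented, and real scan access shifts bits serially rather than reading them in parallel.

```python
class ScanChain:
    """Toy model of reading and writing flip-flop state via a scan chain."""

    def __init__(self, bits):
        self.bits = list(bits)   # flip-flop contents in scan order

    def shift_out(self):
        """Extract the full chain for inspection (non-destructive here)."""
        return list(self.bits)

    def shift_in(self, new_bits):
        """Deposit new values into every flip-flop on the chain."""
        assert len(new_bits) == len(self.bits)
        self.bits = list(new_bits)

chain = ScanChain([0, 1, 1, 0])   # state captured when a trigger or assertion fired
state = chain.shift_out()         # inspect the stopped chip's state
state[0] = 1                      # force a condition that is hard to reach live
chain.shift_in(state)             # deposit and restart the chip from the new state
```

The interesting part in silicon is the coupling to the at-speed instruments: the trigger or assertion decides *when* to stop the clocks and snapshot the state.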
What were the chips that have come back from the fab intended for?
There are four examples: a serial ATA controller, a printer platform, an extended ARM based processor subsystem and a digital image processing solution. Three are at 90 nm and one (the processor subsystem) at 65 nm. They are relatively small in terms of gate count (2M to 4M gates). Most of our customers think about instrumenting their entire design, but in practice they typically instrument at a relatively high level in the design hierarchy, so only a handful of clock domains are instrumented. From a performance point of view, these examples are in the range of 200 MHz to 400 MHz. Two were ARM based and one was MIPS based; the serial ATA controller had no embedded processor. We are finding more and more customers putting us in chips with embedded processors.
In the case of the serial ATA controller, the customer was trying to prove that their new piece of IP was working and compliant with the standard, so they could demonstrate that compliance to their own customer; and once that customer integrated the IP into a system, they had the means to observe or diagnose any problems if and when they occurred. The solution served, in a sense, as a demarcation point between IP blocks.
The second and third examples are both ARM based solutions. They were using our solution not just to observe what is going on but also for some level of fault injection. They were using us to validate the design. They had software running on the ARM processor and needed to exercise corner cases of the processor. They did this by fault insertion, creating unexpected conditions and verifying that the software behaved correctly when those conditions occurred.
The last case was an observe-only use of our solution. The customer had been using ChipScope from Xilinx for a long time and loved the tool but could not use it for their ASIC. They asked if we could provide a similar tool for an ASIC. We instrumented 4,000 to 5,000 signals, brought these signals down to multiple debug modules and gave them system-wide observability of all the critical parts of their design.
When were the products released?
-- Jack Horgan, EDACafe.com Contributing Editor.