August 20, 2007
Helpful Advice for Entrepreneurs. Also post-silicon validation, debug and in-system bring-up from out of the ClearBlue by DAFCA
Earlier and separately I had an opportunity to interview Paul Bradley, VP of Engineering for DAFCA.
Would you give us a brief biography?
I am actually not from the EDA industry. I came from the datacomm and telecomm space. I was more a consumer of EDA tools, similar to what DAFCA is building today. I worked on some high end routers, switches, and Ethernet switches. I also did a fair amount of FPGA and semiconductor design work at Motorola in the early days. Then I worked for a company called Sonoma Systems and for CrossCom back in the early 90s. More recently I worked for Nortel Networks in their high end routing and VPN switching groups. I was doing some consulting work for a company called Internet Photonics. A friend of mine who had recently joined DAFCA, and whom I had worked with back at Motorola, told me what DAFCA was doing. It seemed pretty intriguing to provide all of this capability embedded inside a semiconductor chip. It was something I had been interested in after designing a number of chips and FPGAs. So I joined DAFCA 3½ years ago. Originally I joined the team as one of the architects and designers. More recently I have moved into a management and technical marketing role.
Editor: Paul left out that he was cofounder and hardware architect at Broadcastle Corporation.
How old is the company?
The company is almost 4 years old. We were founded in 2003 by Miron Abramovici and Peter Levin.
Where did they come from?
Miron’s background is Bell Labs. He is sort of the godfather of test. He wrote probably one of the most popular and heavily used books on test methodologies and DFT techniques, Digital Systems Testing and Testable Design. He stayed with Bell Labs and then went to Lucent and Agere Systems. He left Agere to start DAFCA. Peter’s background is fairly diverse. He was a college professor, worked in the White House, and worked for a venture capital firm (TVM).
How big a company is DAFCA?
Right now, we are about 25 people.
Editor: The company raised $8 million in its first round of funding.
DAFCA was awarded an Advanced Technology Program (ATP) grant totaling $1.8 million from the National Institute of Standards and Technology (NIST).
Only 30 applications were accepted out of hundreds. We were the semiconductor selection. It essentially funded our advanced R&D for the first three years of the company’s life.
Would you tell us a little about what DAFCA is up to?
DAFCA essentially delivers instrumentation IP and software for on-chip, at-speed, in-system validation. The instrumentation is important, but the primary value we deliver is the software that uses the instrumentation after the chip has been fabricated. We are providing a whole suite of applications.
People tend to think the chip is sitting on a tester or something like that. While that is possible, it is not the primary application. The primary application is when the new semiconductor is installed in a system that is running in the lab at speed, and the customer is trying to validate the system in an environment that replicates their customers’ environment.
We already have four chips back, three at 90 nm and one at 65 nm. Our solution provides a fair amount of productivity gain. It automates many of the things, from an instrumentation point of view, that people are already doing today. People are already putting instrumentation into their designs, but they are doing it by hand. Some firms do not spend a lot of time verifying their designs. It is pretty much an ad hoc solution.
We offer an easy way of inserting compact instruments. All of the instruments are inserted into an RTL design as synthesizable RTL. Our solution is designed to be compliant with all the major synthesis flows. The primary value is through the comprehensive analysis applications.
In the early part of the design cycle, in simulation or maybe emulation, you have pretty good observability into what is going on in your circuitry. You can see all of your transactions anywhere in the design you need to. There is a performance issue, because it often takes a long time to create all of the scenarios and run test verification sequences to completion. So while you can see everything, it is often very time consuming to do so. Once you have fabricated the chip, the observability drops off considerably. If after the fact you need to observe something that you don’t have access to, it is very expensive and difficult to gain that observability. In its simplest form, DAFCA is about providing that observability and doing it in a way that is seamless, easy to integrate with design flows, and very cost effective.
We had been talking to customers, a number of key companies, for about four years before we had a product in the field. A lot of folks already had the idea that what we are doing makes a lot of sense. They were trying to implement it themselves. On-chip instrumentation is something people have been doing for a long time. What is happening now, with the use of more and more third party IP and with design teams spread among many organizations across the globe, is that the instrumentation solutions, the post-silicon validation, test and debug solutions that have been created, are very disjointed. One piece of IP has one debug and observability structure; the next piece has something completely different. And by the way, none of the software that leverages these instruments works together. They are all fragmented ad hoc solutions. In the end there are no system-wide end user capabilities. They are too fragmented to be scalable and to be used throughout a large organization.
DAFCA is trying to automate the implementation of this solution. We are providing it through reconfigurable instrumentation. Many of our customers tell us the instrumentation overhead has to be low. They do not want to dedicate large chunks of their silicon to instrumentation. Our novel concept, and part of our patent portfolio, is a reconfigurable infrastructure with reconfigurable instruments. In a sense these instruments can serve multiple purposes on silicon. At one point in time they can serve as logic analyzer modules. At another point in time they can be used for built-in self test or for fault insertion, and still again for performance monitoring. We intentionally designed the instruments such that the overhead becomes less and less of a problem or a barrier to entry for us. The other thing we have done is provide a test platform, a post-silicon validation platform, that all of the applications run on. We provide not only graphical tools to configure, control and analyze the data that comes from all the on-chip instruments; we also have the ability to extend the capabilities or functions of our tools through a TCL interface. We provide that for customers who want to do more than our standard interface allows.
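The interview does not describe DAFCA's actual implementation, but the idea of one silicon block being repurposed at run time can be illustrated with a toy model. The sketch below is purely hypothetical Python (the class, mode names, and register-write analogy are illustration, not DAFCA's API): a single instrument block whose behavior changes when a mode register is rewritten, standing in for the logic analyzer / BIST / fault insertion / performance monitoring roles mentioned above.

```python
from enum import Enum

class Mode(Enum):
    """Hypothetical operating modes for one reconfigurable instrument block."""
    LOGIC_ANALYZER = 0
    BIST = 1
    FAULT_INSERTION = 2
    PERF_MONITOR = 3

class Instrument:
    """Toy model: one on-chip block, repurposed by rewriting a mode register."""
    def __init__(self):
        self.mode = Mode.LOGIC_ANALYZER
        self.trace = []

    def configure(self, mode: Mode):
        # In hardware this would be a register write through the debug port.
        self.mode = mode
        self.trace.clear()

    def sample(self, signals: dict):
        # The same block interprets the same inputs differently per mode.
        if self.mode is Mode.LOGIC_ANALYZER:
            self.trace.append(dict(signals))           # capture raw values
        elif self.mode is Mode.PERF_MONITOR:
            self.trace.append(sum(signals.values()))   # e.g. count events

inst = Instrument()
inst.configure(Mode.PERF_MONITOR)
inst.sample({"pkt_valid": 1, "pkt_drop": 0})
print(inst.mode.name, inst.trace)  # PERF_MONITOR [1]
```

The point of the model is the area argument made in the interview: because one block serves several purposes over the chip's bring-up life, the silicon overhead is paid once rather than once per instrument type.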
If there is a signal one wishes to observe and instrumentation has been inserted for that, then no problem. But if one did not know or suspect a priori that a particular signal would be of interest, then how does one get observability after the fact? Do you have access to all of the signals?
Good question! This is the first question we get from our prospects. The answer has two parts. First, understand that our solution is not just about debug. It is not just putting instruments in the areas you think are going to have trouble. It is about putting instruments into a design so that you can prove your design is working correctly.
For most people 99.9% of their design is just fine, no issues. But it takes a long time to prove that. Instrumentation is about choosing wisely, putting the instrumentation in the right places so that you can perform that validation step, not just the debug step. With that said, what happens if you fail to choose the signals that are important to look at after the fact? We have another solution called SnapShot. We combine our at-speed instrumentation and debug solutions. Most chips already have scan chains in them. Using our technology we wrap these scan chains so that we have access to them through the JTAG port. So if there happens to be a signal that the customer needs to get at that is not instrumented, we will typically have access to it through the scan chain. Using our software applications, the user now has the ability to stop the chip at the precise moment in time when the point of interest occurs, or when the signal is doing something interesting, and can extract the state of that signal through the scan chain. We provide high coverage through this scan chain debugging technique.
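The mechanism described above, stopping the chip and serially shifting flip-flop state out through a scan chain, is a standard DFT technique, and can be sketched generically. The Python below is an illustration only (the `CHAIN_MAP` names and values are made up, not any real chip's chain layout): the chip state is a list of latched flip-flop bits, one bit is shifted out per clock as a JTAG TDO read would do, and an un-instrumented signal is recovered afterward from its known position in the chain.

```python
def read_scan_chain(chain_length, shift_bit):
    """Serially shift the frozen chip state out of a scan chain.
    `shift_bit` stands in for one TDO read per scan clock."""
    return [shift_bit() for _ in range(chain_length)]

# Hypothetical chain map: signal name -> bit position in the chain.
CHAIN_MAP = {"fifo_full": 0, "pkt_err": 1, "state_q[0]": 2}

# Pretend the chip was stopped with these flip-flop values latched.
latched = [1, 0, 1]
bit_stream = iter(latched)
captured = read_scan_chain(len(latched), lambda: next(bit_stream))

# Recover an un-instrumented signal after the fact via its chain position.
print(captured[CHAIN_MAP["pkt_err"]])  # 0
```

This is why the technique gives such high coverage: any signal that feeds a scannable flip-flop is reachable this way, even if no dedicated instrument was placed on it before tape-out.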
-- Jack Horgan, EDACafe.com Contributing Editor.