 Real Talk
Lauro Rizzatti - General Manager, EVE-USA
Lauro is general manager of EVE-USA. He has more than 30 years of experience in EDA and ATE, where he held responsibilities in top management, product marketing, technical marketing and engineering.

Building Relationships Between EDA and Semiconductor Ventures

 
November 8th, 2010 by Lauro Rizzatti - General Manager, EVE-USA

Let’s change topics this month from various approaches to solving the verification challenge to ways to support semiconductor startups.  For this, I turn to EVE’s Vice President of Sales Ron Burns.  He has written a compelling article that will appear in the December issue of GSA Forum on the importance of building relationships between EDA companies and semiconductor startups.  He advocates finding new ways to partner with these new ventures because our synergistic industries have a shared destiny.

From his perspective and one shared throughout EVE, today’s fabless startups are in search of external partners willing to work with them and help them become successful.  These partners need to have a solid understanding of the challenges facing a startup, from funding, product development and globalized teams to core competencies and time to market.  And, they need to be ready to offer assistance and support, as appropriate.  In fact, with some creativity, an EDA company can be a collaborative partner able to help a startup navigate some of these challenges. 

He writes convincingly about the need to understand the process.  What he means is this:  Before engaging, the most pressing consideration is to establish a goal for working with a startup.  Thinking in terms of a shared destiny helps clarify short-term revenue objectives and the lifetime value of the relationship, with regard to both cost of sales and support, and technology adoption.

EDA business models have always had some elasticity, and discounted software and loaner programs are common.  Another option is a flexible payment model based on projects and key milestones, with the understanding that the EDA company will help the startup reach those milestones.  EVE, for example, offers a remote access model and peak-load onsite rentals.

Intellectual property (IP) is becoming more and more of an issue and many larger EDA companies are building portfolios useful for a new venture.  While that makes great strategic sense, the startup may not have the resources to buy it. 

There are other options, including several offered by EVE.  Our scalable emulation system supports a wide range of verification components that include transactors, memory models and speed-rate adapters.  The transactor catalog comprises, among others, PCIe, USB, FireWire, Ethernet, AHB, AXI, TLM 2.0, Video, HDMI, I2C, I2S and JTAG components.  The Memory Model catalog includes virtually all popular types of memories, such as DDR, DDR2, DDR3, GDDR5, mobile and flash parts.  A Speed-rate Adapter catalog offers PCI, USB and a multi-media card with a complete set of video protocols. 

We can license any verification component for the duration of a project, and we offer a program that lets startups swap certain pieces in and out under the same license for that duration.

The possibilities for EDA companies to build supportive relationships with startups are endless, especially for creative companies willing to do something different.  The strategy is to align the startup's goals with the EDA company's tools, services and IP.

Watch your email inbox next month for your link to the GSA Forum article by Ron Burns, and be sure to let us know if you agree that EDA and semiconductor ventures have a shared destiny.

Thoughts on Assertion Based Verification (ABV)

 
November 1st, 2010 by Howard L. Martin, President, Zocalo Tech

Verification approaches or methodologies that increase the probability of designs being correct the first time, and that can be easily integrated into the existing functional verification flow, will find a ready market. If, in addition, the technology reduces both verification time and cost, it will be a major winner.

Assertion Based Verification (ABV) is in that class of technology. However, ABV has not been widely adopted because of the time, cost and difficulty of deployment. The predominant language used today for coding assertions is the SystemVerilog Assertions (SVA) language. The difficulty of SVA, along with other factors, has limited the use of assertions to simple ones rather than the more useful and powerful SVAs.
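To make the temporal nature of assertions concrete, here is a minimal sketch, written in Python rather than SVA and not tied to any vendor tool, of the kind of requirement even a "simple" assertion expresses: every request must be acknowledged within a bounded number of cycles. The signal names and the trace below are hypothetical.

    # Minimal illustration (not SVA, not any vendor tool): the temporal intent
    # behind a simple assertion such as "every req must see an ack within 3 cycles",
    # checked here against a recorded signal trace.

    def check_req_ack(trace, max_latency=3):
        """trace: list of dicts with boolean 'req' and 'ack' values, one per clock cycle."""
        failures = []
        for i, cycle in enumerate(trace):
            if cycle["req"]:
                window = trace[i + 1 : i + 1 + max_latency]
                if not any(c["ack"] for c in window):
                    failures.append(i)   # the assertion would fire at this cycle
        return failures

    # Example: req at cycle 0 is acked at cycle 2 (pass); req at cycle 4 is never acked (fail).
    trace = [
        {"req": True,  "ack": False},
        {"req": False, "ack": False},
        {"req": False, "ack": True},
        {"req": False, "ack": False},
        {"req": True,  "ack": False},
        {"req": False, "ack": False},
        {"req": False, "ack": False},
        {"req": False, "ack": False},
    ]
    print(check_req_ack(trace))  # -> [4]

An SVA property states the same intent declaratively and is checked continuously by the simulator or formal tool, which is where both the power and the learning curve of the language lie.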

According to Harry Foster (verification methodologist at Mentor Graphics) in his paper "Assertion-Based Verification: Industry Myths to Realities" (invited tutorial, 2008):

"… it is a myth that ABV is mainstream technology. … What differentiates a successful team from an unsuccessful team is process and adoption of new verification methods. Unsuccessful teams tend to approach development in an ad hoc fashion, while successful teams employ a more mature level of methodology that is systematic."

Implementing ABV is somewhat of a "chicken and egg" situation. The industry accepts that ABV can reduce debug time by 50%, and there is no question about its value for increasing first-time design success. However, the implementation phase today is so open-ended and difficult that only "baby steps" have been taken. Even these small steps are useful; however, the industry hasn't come close to attaining the full benefits of ABV.

A good analogy to ABV is exercise. Everyone knows the benefits are major and any level of exercise is good. Most of us have taken "baby steps" toward exercise. However, attaining the full benefits requires, in addition to believing in the benefits, a way to reduce the open-ended nature of the effort. This can be done by creating an exercise program or approach that works within the user's lifestyle and becomes part of their weekly routine. This typically requires help in the form of books, trainers, equipment, etc., to get started and to maintain the motivation to continue the program and reap the benefits.

Clearly the opportunity exists to provide help for implementing ABV. The methodologies are already in place. We at Zocalo believe that automation plus metrics that take away or reduce the open-ended nature of the technology and work within the user's flow will motivate projects to start and continue using ABV, thereby reaping the full benefits of implementation.

To access a free assertions white paper (no registration required), please visit http://www.zocalo-tech.com/files/designer_assertions.pdf.

A version of this article was published earlier this month at http://www.electronicscomponentsworld.com/articleView~idArticle~73104_9374540205102010.html.

Who is the master, who is the slave?

 
October 25th, 2010 by Rick Eram, Sales & Marketing VP

I was recently involved in a panel of EDA vendors and EDA users. Several EDA vendors were present and the discussion was rather heated. The designers made the argument that tools don't address the difficulties and challenges of today's designs. The point was that designers have to deal with EDA tool shortcomings on a daily basis.

The vendors, in their own defense, had a simple argument: designers need to understand the limitations of the tools and design to the capability of the existing tools. Other reasons cited were that designers need more training or don't fully appreciate the technology. Maybe the customer is not always right! Follow-up discussions with the designers made it clear that they were disappointed. They said the tools don't work well together, and despite many attempts and claims of integration, the tools are at best a bunch of point tools strung together with many holes in the tool chain. The use models are patched up with scripts and the handiwork of designers. Chips have grown significantly, yet the tools have changed incrementally at best, and many tools in the flow are based on 10- to 20-year-old technologies.

A "sell what we have" mentality dominates a good number of EDA vendors, typically run by "industry veterans" who have forgotten about the innovative part of this business and don't spend the time to understand their customers' needs. Instead, these "veterans" focus on slideware and selling processes, wasting their own time and designers' time, raising cost and risk, and in the end not addressing the real problem. They try to convince customers with fancy slideware that their fix-and-patch approach will address the design challenges, without even understanding what the customer does on a daily basis and where they get stuck. The roadmap is often little more than a repetition of symptoms gathered from designers rather than a detailed understanding of the problem. So the vendor band-aids the symptoms and misses the root cause. Where is the innovation? A business that was born from innovation is dominated by recycling and rehashing decades-old technologies. What a pity.

Well, it is a matter of simple economics. Changes to the flows can be costly and can cause significant downtime risk for design teams. On the other hand, on the EDA side, the cost of tool development is high, and even with the best tools the company may miss the mark, adding to risk and delaying revenue. So what is the problem? How does one fix the broken model?

If you ask any experienced designer, they can quickly point out a bunch of issues with the tools they use today. These issues are often rooted in the EDA tool provider's weak understanding of the problem and of the tasks designers perform. Designers often notice that EDA tools are designed by people who don't understand SoC design. The software folks don't always translate hardware-related issues into the right applications. This simple fact causes many problems with the tools' completeness and effectiveness, and it all adds cost and risk to the development and deployment of EDA tools.

Generally, once an EDA vendor finds a formula that works, they stick to it and try to milk it for what it's worth. The tools often address only a portion of the designer's needs. Then the vendors apply the same engine in more ways than imaginable to solve mutually exclusive problems faced by designers. The result is discontinuity, noise, inaccuracy and lack of interoperability, causing nothing but pain for designers. Design flows are littered with such tools. Bottom line: the vendor must understand the most detailed issues the customer faces before designing tools. That's when innovation takes place. We are all familiar with tools that share nothing but a name, "the brand," and not much more. Either the integration is nothing but a few slides, or the problem spaces of the various tool options are so far apart that, despite marketing tricks, integration is impossible. Nothing but a marketing ploy. Tool vendors are interested in solving big problems, some of which are not even problems from the designer's perspective. Sometimes simple observation of the bottlenecks designers deal with on a daily basis will reveal a wealth of information and opportunities to improve the tools, often missed by tool vendors. A simple change, a small innovation in tool implementation, can have a huge impact on designer satisfaction.

Back to the original question: who is the master and who is the slave? If tool designers understand the problems faced by designers in detail and get beyond superficial problem-statement slides, then the goal of building better SoCs can be met at lower cost and risk. Bottom line: the chip has to be designed, and that's what makes our world go around. Efficiency and accuracy come at a cost, and if the gains are shared on both the tool and design sides, the result is higher-quality chips, better processes, and lower risk. Designers need to be aware of innovation and able to recognize a slap-and-patch approach compared to tools designed on sound engineering fundamentals. This saves everyone cost and increases profits.

Economics of Verification

 
October 11th, 2010 by Jin Zhang, Director of Technical Marketing

Introduction

In light of the ups and downs of the world economy, it is interesting to review how the principles of economics apply in the IC design industry, in particular with respect to verification. How much do the day-to-day decisions people make in design and verification reflect the principles of economics?  In this blog, we will look at three microeconomic principles and see how we can make the best choices by following them.

The 1st Principle – People face tradeoffs

The resources of our planet are scarce; therefore nobody can have all they want. Everybody has to face tradeoffs in making a decision. In today’s economy, the tradeoffs and choices one has to make can be particularly important.

Most managers in the IC design industry have been in a very tough situation the last few years. They have been faced with increased complexity of designs, reduced staff, tighter budgets, shortened project schedules and greater pressures from the market to perform. The choices they have to make under these constraints are challenging.

For example, at a higher level, managers may need to decide on:

  1. With a reduced staff, how many people should I put on the design team vs. the verification team? Or does one person do both jobs?
  2. With a shortened project schedule, which part of the design and verification cycle can be shortened?
  3. With a tighter budget, what kind of EDA tool investments will bring the best ROI?

At a lower level, decisions related to verification could be:

  1. Given that verification takes 70% of the whole design cycle, what technology can help reduce the verification bottleneck?
  2. How much verification can we afford to perform on the block-level vs. the system-level?
  3. How much verification is enough to deliver confidence?

Failure to make the right choices in these decisions could potentially lead to lower quality of product, loss of profit or even bankruptcy in the current economic climate.

To best assess the tradeoffs in making these decisions, one should look at the opportunity cost involved. That brings us to principle number 2.

The 2nd Principle – The cost of something is what you give up to get it

In evaluating each choice, the rule is to see which choice has the least opportunity cost. Opportunity cost is simply what you must give up (the next best alternative) in order to get what you want. For example, you have 2 hours of free time. You could either watch a movie or take a nap. The opportunity cost for taking a nap is the enjoyment from the movie you would have otherwise had. Similarly, the opportunity cost for watching the movie is the much needed rest you would have gotten otherwise. The decision comes down to what is most important to you. It is worth noting that opportunity cost is often hard to measure and depends very much on the individual and situation involved. Nonetheless, opportunity cost is useful when evaluating the cost and benefit of choices, and the choice to go with should be the one with the least opportunity cost.

Given that verification takes 70% of the design cycle and 60% of chip re-spins are due to logical/functional errors (Trends in ASIC Prototyping), it is important to invest in technology that can improve verification confidence and reduce the overall verification cycle. We will use the following hypothetical scenario to illustrate how opportunity cost comes into play in the decision-making process.

The project manager at company ABC is deciding between buying more simulation licenses to do more system-level verification and adding automatic functional verification software to the methodology for more block-level verification. In the current methodology, limited block-level simulation is performed by designers, due to the effort involved in creating block-level testbenches; most verification is done at the system level by verification engineers. The company recently had a chip re-spin due to a functional error found in silicon. The project manager sees the need for more verification at both the block level and the system level. However, due to a limited budget, they can only invest in one area. To make the best decision, they must evaluate the verification ROI at the block level vs. the system level and go with the option offering the most benefit, i.e., the least opportunity cost.

More and more companies are seeing the benefit of block-level verification using automatic functional verification tools. These tools operate with no testbench, so they require little time and effort to set up and run. They employ formal technology to exhaustively verify RTL blocks and catch bugs such as unreachable states, single or pair-wise state deadlocks, dead code, and synthesis pragma violations. By performing this kind of verification early in the design cycle, finding and fixing bugs becomes easier. This improves the overall quality of the RTL before system-level verification begins and, as a result, reduces the verification requirement at the system level. It is estimated that employing automatic functional verification tools can catch 50% of the design bugs early while saving 15% of the overall project cycle. This is the benefit (that is, the opportunity cost) that company ABC would have to forgo if the project manager goes with more simulation at the system level.

Similarly, additional simulation at the system level could also improve verification confidence. However, most things face the law of diminishing returns (also called the law of increasing opportunity cost). For example, in a production system with fixed and variable inputs (such as equipment and labor), beyond some point each additional unit of variable input yields less and less output. The same law holds for increased levels of simulation: the benefit of additional simulation at the system level is not as pronounced, because significant system-level simulation is already part of the current methodology. Therefore, the opportunity cost of investing in the automatic functional verification tool is lower, and that is where the investment should go.
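As a back-of-the-envelope sketch of that comparison: the 15% cycle-saving estimate comes from the scenario above, while the project length, weekly team cost and the incremental-simulation figure below are hypothetical placeholders, not data.

    # Back-of-the-envelope framing of the either-or choice above.  The 15% figure
    # comes from the scenario in the text; the project length, team cost and the
    # incremental-simulation estimate are hypothetical placeholders.

    project_weeks = 52
    cost_per_week = 10_000        # hypothetical fully loaded team cost, $

    options = {
        # benefit expressed as weeks of project cycle saved
        "block-level formal tools":     0.15 * project_weeks,  # ~15% cycle saved (from text)
        "more system-level simulation": 0.05 * project_weeks,  # diminishing returns (assumed)
    }

    benefit_dollars = {name: weeks * cost_per_week for name, weeks in options.items()}

    # Opportunity cost of picking one option = benefit of the best alternative forgone.
    for name, value in benefit_dollars.items():
        forgone = max(v for n, v in benefit_dollars.items() if n != name)
        print(f"{name}: benefit ${value:,.0f}, opportunity cost ${forgone:,.0f}")

    # Choose the option with the smallest opportunity cost (equivalently, the highest benefit).
    print("choose:", max(benefit_dollars, key=benefit_dollars.get))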

With this decision made, the next question is how much verification should be done at the block level. To answer this question, we need to examine principle number 3.

The 3rd Principle – “How much” decisions are made at the margin

Some decisions in life involve either-or choices, like the one we made earlier. Others involve “how much” choices, which require analysis at the margin. One needs to look at the marginal cost and marginal benefit and find the equilibrium to arrive at the optimum solution. Marginal cost is the additional cost imposed by performing one more unit of an activity. Similarly, marginal benefit is the additional benefit received from performing one more unit of an activity. The point where marginal cost and marginal benefit cross is where we achieve the most efficiency.

Following our hypothetical scenario, suppose the following table shows the marginal cost and marginal benefit of each additional week of automatic functional verification performed at the block level. It is easy to understand that fewer bugs will be found as time progresses. The marginal benefit ($) is the product of the number of bugs found per week and the cost to find one bug at the system level (assumed to be $200 in our analysis). The marginal cost is simply the salary cost of having the designer perform block-level verification. Comparing marginal benefit and marginal cost, it is easy to see that the optimal amount of block-level verification lies between weeks 3 and 4.

Weeks of Block-Level Verification | Total Bugs Found | Bugs Found Per Week | Marginal Benefit ($) | Total Cost ($) | Marginal Cost ($)
1 | 25 | 25 | 5,000 | 1,200 | 1,200
2 | 40 | 15 | 3,000 | 2,400 | 1,200
3 | 50 | 10 | 2,000 | 3,600 | 1,200
4 | 55 |  5 | 1,000 | 4,800 | 1,200
5 | 58 |  3 |   600 | 6,000 | 1,200
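To make the arithmetic explicit, here is a short sketch using the figures from the table ($200 for a bug that escapes to the system level, $1,200 per designer-week):

    # Marginal analysis using the numbers from the table above: $200 per bug if it
    # escapes to system level, $1,200 per week of designer time at block level.

    cost_per_escaped_bug = 200
    weekly_cost          = 1200
    bugs_per_week        = [25, 15, 10, 5, 3]     # weeks 1..5, from the table

    for week, bugs in enumerate(bugs_per_week, start=1):
        marginal_benefit = bugs * cost_per_escaped_bug
        worth_it = marginal_benefit >= weekly_cost
        print(f"week {week}: benefit ${marginal_benefit}, cost ${weekly_cost}, "
              f"{'continue' if worth_it else 'stop'}")

    # The output shows benefit >= cost through week 3 and benefit < cost from week 4 on,
    # i.e. the optimum lies between weeks 3 and 4, as stated above.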

 

Conclusions

Even though we may not know these microeconomic principles explicitly, most of our everyday decision making is done by implicitly evaluating opportunity cost and doing marginal analysis. By understanding these principles, one can form a clear framework and plug in real numbers on which to base decisions. This is all the more important in the current economic condition, because a bad decision could lead to some very undesirable consequences.

 

Hardware-Assisted Verification Tackles Verification Bottleneck

 
October 4th, 2010 by Lauro Rizzatti - General Manager, EVE-USA

An often-repeated industry mantra is that verification takes up about 70 percent of the development cycle, making it the most time-consuming piece of chip design today.  Every indication that we’ve seen over the past 10 years confirms this number.  And while a host of software-based verification tools have been deployed to tackle the verification bottleneck, design teams are turning to hardware-assisted verification platforms to accelerate hardware debugging and software test and integration.  As a result, they’re often successful at reducing their verification budget and beating time-to-market pressures.

Let’s examine this move toward hardware-assisted verification.

Software development can’t wait for working silicon, which means that design teams need a fail-safe way to verify that their chips will work as intended as they run embedded software.  All the while, they’re grappling with shortened development cycles and designs that reach billions of application-specific integrated circuit (ASIC) gates and millions of lines of code.

 This means that a design team needs to create a working prototype for software development as early as possible and before the end of the hardware design cycle.  The prototype must fit into the general hardware design flow or the design team risks extending the design cycle.

More and more, hardware-assisted verification platforms are used to simultaneously validate hardware and software and, generally, fall into either emulation or field programmable gate array (FPGA) prototyping categories. 

Emulation has had a reputation for offering large capacity and good hardware debug capabilities, but is reputed to be slow, expensive and poorly suited for validating embedded software.  Conversely, FPGA prototypes are cheaper and faster, but do not have hardware debug capabilities and take longer to build and test. 

Many design teams with large budgets use both approaches.

That’s changing with the latest generation of hardware-assisted verification platforms able to offer features and benefits of both.  Suppliers of these platforms have combined speed for embedded software validation with hardware visibility and debug, giving design teams a way to verify hardware and software as a fully operational embedded system.

One popular emulator based on an FPGA architecture is used for simultaneous hardware and embedded software verification.  It has the speed to validate embedded software and the ability to provide full internal signal visibility for effective hardware debug.

In general, ASIC prototypes require manual code changes for FPGA implementation, followed by logic synthesis and manual partitioning across multiple FPGAs, then place and route.  Designers repeat these steps each time the design is changed, making the prototype ineffective for hardware verification.  This latest generation emulator automatically completes these steps without modifying the original system-on-chip (SoC) source code.  It handles complex clock processing, memory generation, multiplier/ALU logic, bus resolution and multiple-data-rate (XDR) wrapper generation. 

Further, it can compile incremental changes to either the testbench or design under test (DUT).  And, it uses the same hardware and models across the design cycle, making it a single platform for hardware and software verification.

Hardware-based verification platforms are giving design teams a way to break the verification bottleneck and reduce the verification budget.  They’re finding that they can now use a single platform to handle hardware/software architectural tradeoff analysis, hardware debug, hardware regression, software integration and embedded software validation.  Now, that’s a mantra worth repeating.

Excitement in Electronics

 
September 27th, 2010 by Carol Hallett, VP of World Wide Sales, Real Intent

The year was 1972; I had just graduated from High School.  It was decided that I should be working…I was not sure what I was supposed to do for work.  I picked up a newspaper and there was a big article that National Semiconductor was hiring.  I decided to get a job there.  I was not sure what they did but they were hiring, I needed a job so it seemed like a fit to me. 

I went into the lobby of the main building (at that time there were only three) and asked for a job application.  The receptionist gave me one and I sat down in the chair to fill it out.  There were lots of people coming and going through the lobby.  One gentleman came up to me and asked me what I was doing.  I answered, “Filling out an application for a job”.  He asked me why, I said, “to get a job” (thinking this was a trick question).   He looked puzzled and said, “Why, you already work here”!  I assured him that I did not but I wanted to.  He smiled and said, “Well, your twin works here then, come with me” he continued, “you just got yourself a job”.

That is how I got into Electronics.

On my first day on the job my boss introduced me to the girl that he thought looked so much like me.  She had long straight hair (we all did back then), was my size and build but she was much prettier than me.  I was very grateful that she worked there and I thanked her for helping me to get my first job.  Needless to say we were fast friends and like most twins, inseparable.

I worked at National Semiconductor for 8 years.  National Semiconductor was great about education.  They sent me to Electrical Engineering classes; I was the only girl there.  My bosses wanted me to be an engineer.  The best part was that most of the classes were held right there on their premises.  I could take college classes at work, earn college credits and get paid at the same time.  I loved the classes because they were well organized, well taught and I could usually relate them back to the work I was doing…so it made it very interesting.

When I first started at National I worked in the test area on swing shift and ran a TAC tester.  The goal was to get as many units tested as possible…oh, a goal.  Cool, I can do that!  Each night I tested more units than the night before.  I streamlined the input and output of the machine so that I never let the machine stop.  I organized the paperwork so that it was completed as the parts were being tested.  I learned how to fix my machine so that I did not need to wait for maintenance if my machine went down.  I did preventive maintenance on my machine so that it was working better than any of the other machines on the line.  Everyone hated me; I kept increasing their quotas because I could do more.  Soon I was made lead of the area and I taught everyone else to be more productive.

National was a wonderful place to work.  Each time, I got bored or wanted to learn something new, there was always that opportunity.  After a while I did not have to petition for jobs, I had managers coming to me to ask me to help with a new department, organize a production flow or train others to be more effective.  I worked in Masking, Diffusion, Design, Engineering, and Mask Making and got to be an expeditor, which was fabulous…, it matched my personality…a runner!  As an expeditor, I needed to produce a new product (for example: the very first Ladies LED watch was made by me and I still have it in my jewelry box), fast and without a production line.  So I needed to come up with the flow to produce the item, get time on different lines so that I could do the work and not interrupt their production flow…while at the same time making my schedule.  I met with the product line managers, made a deal with them to use their machines and a time schedule as to when I would need them…and made it all fit my product schedule.  Then I ran from one production line to another to meet or exceed my target…I loved it.

The other memorable position that I had was offered to me by Pierre Lamond.  Pierre was the Executive VP of R&D at the time.  He had heard about me and was pulling together a team of people to open up the “Bubble Memory” production line.  He asked me if I wanted to join.  I said YES, of course!

I had no idea what to expect.  I left my current job and department without question and on Monday morning went to HR to find out where I should report.  They told me the room number.  I thought it was odd because I knew this building very well and the room number she had given me was an empty part of the building.  When I arrived…I was right, it was empty.  The team that was assembled started to arrive and then Pierre came in.  He said that he had chosen us to build the line from the floor up, and he meant it literally.  We were in a room with no walls.  We drew up the plans for the production line, met with vendors to get the right equipment, and worked with the plumbers, electricians, etc., to build out the space per our specifications.  When needed, we went to Sears to buy tools, pipe, whatever it took to keep the project on schedule.  Then one day we were able to run our first wafer through the line…it was really an exciting time.

Achieving Six Sigma Quality for IC Design

 
September 17th, 2010 by Jin Zhang, Director of Technical Marketing

The manufacturing industry has seen significant improvement in quality over the last few decades due to the implementation of Lean Manufacturing processes and Six Sigma quality-control measures.

Lean Manufacturing, also called Just-in-time (JIT), was pioneered by Toyota to reduce non-value-added waste in the manufacturing process through continuous improvement and producing only when needed, with minimum inventory of raw materials and finished goods. Six Sigma is a well-known, data-driven set of standards that uses in-depth statistical metrics to eliminate defects and achieve exceptional quality at all levels of the supply chain. Lean Manufacturing and Six Sigma quality (Lean Six Sigma) have merged in theory and practice [1]. This new paradigm requires each employee to assume responsibility for the quality of their own work. To create higher quality, defects need to be detected and fixed at the source. Quality is built and assured at each step in the process rather than through inspection at the end. Adoption of Lean Six Sigma in production resulted in the high quality of goods and services that we all enjoy today.

These same principles and philosophy can be applied directly to the IC design industry to improve the quality of chips. Defects discovered in silicon at the end of the manufacturing process are costly, inefficient and wasteful. Instead, bugs should be detected at the RTL source where they are created. The traditional way of designers writing the HDL code, performing a minimal amount of verification and throwing it over the wall to the verification team is the ultimate cause of poor quality, long project cycles and wasted money for investors and stockholders alike. It is time for the IC design industry to adopt the Lean Six Sigma philosophy and build quality designs from the very beginning.

There are a couple of reasons that account for the divide between design and verification. First is the notion that it is better to have another pair of eyes to examine and verify the HDL design rather than trusting the designers who write the RTL. The second is the low verification ROI achieved by using the traditional simulation technique to perform block level verification. A lot of time and effort is needed to create the verification infrastructure, thus negating the productivity gains from early verification.

The first factor requires a change of attitude, as happened in the manufacturing industry. People need to be made responsible and accountable for the quality of their own work. Detecting failures at the source costs the least time, money and effort. Quality can only improve when individuals are held responsible and results are measurable.

The second factor can be eliminated with advances in formal verification technology. Formal verification requires no testbench, reducing the need to build verification infrastructure; it performs exhaustive analysis and can often catch corner-case bugs that are hard to find through simulation. Debugging at this stage is more efficient because of the designer's intimate knowledge of the code, the limited scope of logic involved, and the fact that formal tools show the source of the problem through error traces. Using these tools early in the design flow detects bugs at the source and thus significantly improves design quality.

There are two types of formal functional verification tools on the market. The first is automatic functional verification. Automatic functional verification tools take the RTL design alone and perform exhaustive formal analysis to catch design bugs that show up as symptoms such as dead code and single and pair-wise state-machine deadlocks. This significantly improves the quality of the design with essentially zero effort, offering the best verification ROI.

Another type of formal functional verification is property verification (also called model checking). Designers write assertions in the RTL to describe the constraints of the environment and the desired behavior of the block. Property verification tools perform exhaustive formal analysis to detect situations that violate the desired design behavior. They produce error traces that show the sequence of events leading to the violations. Designers can debug and fix the errors easily because verification is performed within a limited scope at the block level.
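Purely as an illustration (a toy Python sketch, not how any particular formal tool works), the exhaustive analysis both tool types rely on amounts to exploring the design's reachable state space. The FSM below is hypothetical, and "deadlock" is simplified here to a state with no outgoing transition:

    # Minimal sketch of exhaustive state-space exploration: a breadth-first search
    # over a (hypothetical) FSM finds unreachable states, detects deadlock states
    # (simplified to "no outgoing transition"), and returns a shortest trace to any
    # offending state, much like the error traces described above.

    from collections import deque

    fsm = {                      # hypothetical FSM: state -> set of next states
        "IDLE":  {"REQ"},
        "REQ":   {"GRANT", "IDLE"},
        "GRANT": {"DONE"},
        "DONE":  set(),          # deadlock: no way out
        "DEBUG": {"IDLE"},       # unreachable from reset
    }

    def explore(fsm, reset="IDLE"):
        parent, seen = {reset: None}, {reset}
        queue = deque([reset])
        while queue:
            s = queue.popleft()
            for nxt in fsm[s]:
                if nxt not in seen:
                    seen.add(nxt)
                    parent[nxt] = s
                    queue.append(nxt)
        unreachable = set(fsm) - seen
        deadlocks = {s for s in seen if not fsm[s]}

        def trace(state):                    # error trace back to reset
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))

        return unreachable, {s: trace(s) for s in deadlocks}

    print(explore(fsm))
    # -> ({'DEBUG'}, {'DONE': ['IDLE', 'REQ', 'GRANT', 'DONE']})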

If every design team adopts these early functional verification (EFV) tools in the design stage and creates accountability measures to make designers responsible for the quality of their own code, we will see significant improvement in design quality, as we have seen in the manufacturing industry. This in turn leads to reduced project cycles, saved investment and even competitive advantage in the marketplace. Achieving Six Sigma quality in IC design is possible with early functional verification.

[1] F. Jacobs, R. Chase, N. Aquilano, Operations & Supply Management, 12th Edition, McGraw-Hill.

A Look at Transaction-Based Modeling

 
September 6th, 2010 by Lauro Rizzatti - General Manager, EVE-USA

A rather new methodology for system-on-chip (SoC) project teams is transaction-based modeling, a way to verify at the transaction level that a design will work as intended with standard interfaces, such as PCIe, and SystemVerilog-based testbenches. 

 

This methodology enables project teams to synthesize the processing-intensive protocols of a transaction-based verification environment into an emulation box, along with the design under test (DUT).  They can then accelerate large portions of the testbench with the DUT at in-circuit emulation (ICE) speeds.  Increasingly, this is done concurrently with directed and constrained random tests.  The adoption of this methodology has been accelerated by the advent of high-level synthesis from providers such as Bluespec, Forte Design Systems and EVE.

 

Today’s emulators look and act nothing like previous generations.  They are fast, allowing project teams to simulate a design at high clock frequencies, and more affordable than ever.  For an emulator to be a complete solution, however, it must be able to interact effectively with designs without slowing them down.  This is where transaction-level modeling can help, by providing checkers, monitors and data generators with the throughput the DUT requires.

 

Benefits of transaction-level modeling include speed and performance to handle bandwidth and latency.  For example, the latest generation emulators can stream data from a design and back at up to five million transactions per second.

 

Reuse is another benefit because emulation can separate protocol implementation from testbench generation in a way that testbenches can be assembled from building blocks. 

 

Various languages can be used to build transaction-based testbenches, including C, C++, SystemC or SystemVerilog with the Standard Co-Emulation Modeling Interface (SCE-MI) from Accellera.  Testbenches drive the data to register transfer level (RTL) design blocks. 

 

Project teams most frequently buy off-the-shelf transactors for common protocols and design their own for a unique interface or application.  Typically, a custom transactor for an interface is a Bus Functional Model (BFM) or Finite State Machine (FSM) written in Verilog register transfer level (RTL) code or behavioral SystemVerilog using a transactor compiler.  Often, project teams already have a similar piece of code that can be converted into a transactor.
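As an illustrative sketch only (Python pseudocode, not SCE-MI or any vendor's transactor API), the essence of a BFM-style transactor is to expand one untimed transaction into the cycle-by-cycle pin activity a bus interface expects; the protocol and signal names below are made up:

    # Illustration only (Python pseudocode, not SCE-MI or any vendor transactor):
    # a bus-functional-model-style driver expands one untimed "write" transaction
    # into the cycle-by-cycle pin values a hypothetical simple bus expects.

    def drive_write(addr, data):
        """Yield one dict of pin values per clock cycle for a single write."""
        yield {"valid": 1, "write": 1, "addr": addr, "data": data}   # address/data phase
        yield {"valid": 0, "write": 0, "addr": 0,    "data": 0}      # idle recovery cycle

    def run_testbench(transactions):
        cycles = []
        for addr, data in transactions:             # transaction level in ...
            cycles.extend(drive_write(addr, data))  # ... signal level out
        return cycles

    # Two write transactions become four cycles of pin activity.
    for c in run_testbench([(0x10, 0xAB), (0x14, 0xCD)]):
        print(c)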

 

Project teams have reported numerous benefits from this emerging methodology, especially because they can develop tests faster than with directed testing.  Moreover, they don’t need in-depth knowledge of the SoC or protocol.  And testbenches can be reused when the target standard appears in another design.

 

Pay a visit to any project team anywhere in the world and you’ll find that they implement a whole host of verification and test methodologies on an SoC design.  More and more, transaction-based modeling is gaining widespread acceptance on even the most complex of designs, shortening time to market and easing the project team’s anxiety.

The 10 Year Retooling Cycle

 
August 23rd, 2010 by Prakash Narain, President and CEO, Real Intent

I still remember the enthusiastic talk around the 10-year EDA retooling cycle in 2000.  There was optimism fueled by the dot-com boom. Moore’s Law was in full force. The communications industry was in its infancy, ready for innovative new products. Products were evolving quickly, pressuring designers to produce more and more in less time. This, in turn, fueled an unprecedented demand for new and innovative EDA solutions.

 

Those were the days…  EDA startups were abundant. There were many trade shows, most notably DAC.  Hotels were sold out! The big 3 had huge parties, and oh yes, design engineers could learn of all the new developments over the week.  You really needed a good pair of walking shoes in those days… It was like going to a candy store!

 

From a methodology perspective, automation and re-use quickly became a big focus. Mixed signal designs, multiple clock domains and advanced power management schemes became the norm. Simulators did not have enough horsepower to test all aspects of a chip. Accelerators and emulators became more heavily used, but with them came additional issues.

 

Standards evolved around key issues. The Verilog language evolved into SystemVerilog. Standards defined good coding practices, including re-use practices. Lint tools became more heavily utilized to improve design quality and to ensure that re-use guidelines were followed.

 

It is now 2010. The big EDA companies have adopted an all-inclusive volume sales model, putting the squeeze on smaller companies that have to compete with their “free” software.  As a result, there are fewer EDA companies providing innovation. DAC is a much smaller show. And we don’t hear much about the 10-year retooling cycle.

 

But Moore’s Law is still active, albeit at a slower pace.  Chip sizes continue to grow and complexity continues to increase.  Time-to-market pressures are as strong as before, if not stronger. Verification continues to pose key challenges that beg for automation. And, not surprisingly, the 10-year-old software has slowly aged and is no longer meeting today’s design requirements.

 

Some lint tools run for tens of hours on designs when it should be possible to run in minutes.  Some CDC tools run for days when it should be possible to run in hours.  Some rule-checking tools produce hundreds of thousands of warnings; the wasted debugging effort may add up to an army of engineers.  The confluence of clock domains, power domains and DFT requirements has added significant pressure on design methodologies.

 

There may be fewer EDA companies these days, but innovation is still going strong.  Products for the next 10 years are available and being adopted. Precise lint tools with blazing performance are available. Precise CDC tools make it possible to achieve reliable sign-off on today’s designs. New innovations are underway to solve complex issues such as X-optimism and X-pessimism in simulation.  Automatic formal analysis tools quickly improve design quality with minimal effort.  SDC tools ensure the effectiveness of time-consuming STA efforts. The 10-year retooling cycle is in effect again.

 

So what tools are in your flow?  Are they current?  Are they working well?  Can your supplier respond to your needs?  Are you getting what you paid for?

 

You need today’s innovations to deal with tomorrow’s problems!

Hardware-Assisted Verification Usage Survey of DAC Attendees

 
August 2nd, 2010 by Lauro Rizzatti - General Manager, EVE-USA

Tradeshows and technical conferences serve as great places to survey the verification landscape and the Design Automation Conference in June was no exception.

 

EVE took the opportunity to poll visitors to its booth with a survey similar to the one used at EDSFair in Japan earlier in the year.  Interestingly enough, some of our findings in the DAC survey tracked with findings from EDSFair.  In other cases, they were widely dissimilar.

 

The DAC attendees who took part in the survey included designers/engineers, managers, system architects, verification/validation engineers, and EDA tool support or CAD managers.

 

Both sets of respondents noted that challenges are getting more complex as design teams merge hardware and software into systems on chip (SoCs).  The Verilog Hardware Description Language (HDL) wins out as the number one language for both ASIC and testbench design, with SystemVerilog a distant second.  DAC attendees ranked SystemC ahead of VHDL for ASIC design, while VHDL is used more than SystemC for testbench design.

 

Surprisingly, while more than 70% answered that they own between one and 100 simulation seats, 17% claimed to have more than 200 seats, compared to only 12% with between 100 and 200 seats.  Our conclusion is that very large farms are more popular than large ones.

 

Unlike their counterparts at EDSFair, DAC attendees are less than satisfied with their current verification flow; almost 70% of EDSFair attendees claimed to be satisfied with theirs.

 

DAC attendees noted the same dissatisfaction with runtime performance and rated the setup time of their verification flow poorly.  Efficiency in catching corner cases and reusability were both ranked between less than satisfied and fairly satisfied.

 

When asked to rate the importance of various benefits of a hardware-assisted verification platform in making a purchasing decision, they chose runtime performance, followed by price, as most important.  Visibility into the design and in-circuit emulation (ICE) came next.  Compilation performance, simulation acceleration and transaction-based design, while considered important, received lower grades than the other criteria.

 

While simulation acceleration doesn’t rank highly as a purchasing criterion, those surveyed said that simulation acceleration is the mode they use most on their hardware-assisted verification platform.  ICE is the second most used mode, and stand-alone emulation came in third.  Few use transaction-based emulation.  By comparison, the EDSFair survey revealed that transaction-based emulation was second after simulation acceleration and significantly more popular than stand-alone emulation and ICE.

 

The primary use for hardware-assisted verification is ASIC validation, with hardware/software co-verification a close second, a trend we also observed with EDSFair attendees and one most likely driven by the move to include embedded software in SoCs.

Emulation can be used for hardware/software co-verification because it verifies the correctness of hardware and embedded software simultaneously.  It can quickly process billions of verification cycles at high speed.  Unlike older generations that were prohibitively expensive, pricing for today’s emulators is competitive, a key consideration for EDSFair and DAC attendees.

 

The news from Japan in January was positive, and I projected that the widespread adoption of hardware/software co-verification would be good for EDA’s verification sector in 2010.  While the DAC survey didn’t offer the same encouraging signs, it did confirm that hardware/software co-verification is taking root.  At EVE, we consider that a plus for the hardware-assisted verification market segment.
