Real Talk

Archive for September, 2010

Excitement in Electronics

Monday, September 27th, 2010

The year was 1972; I had just graduated from high school.  It was decided that I should be working…I was not sure what I was supposed to do for work.  I picked up a newspaper, and there was a big article saying that National Semiconductor was hiring.  I decided to get a job there.  I was not sure what they did, but they were hiring and I needed a job, so it seemed like a fit to me.

I went into the lobby of the main building (at that time there were only three) and asked for a job application.  The receptionist gave me one, and I sat down in a chair to fill it out.  There were lots of people coming and going through the lobby.  One gentleman came up to me and asked me what I was doing.  I answered, “Filling out an application for a job.”  He asked me why.  I said, “To get a job” (thinking this was a trick question).  He looked puzzled and said, “Why, you already work here!”  I assured him that I did not, but I wanted to.  He smiled and said, “Well, your twin works here then. Come with me,” he continued, “you just got yourself a job.”

That is how I got into Electronics.

On my first day on the job, my boss introduced me to the girl he thought looked so much like me.  She had long straight hair (we all did back then) and was my size and build, but she was much prettier than me.  I was very grateful that she worked there, and I thanked her for helping me get my first job.  Needless to say, we were fast friends and, like most twins, inseparable.

I worked at National Semiconductor for 8 years.  National Semiconductor was great about education.  They sent me to Electrical Engineering classes; I was the only girl there.  My bosses wanted me to be an engineer.  The best part was that most of the classes were held right there on the premises.  I could take college classes at work, earn college credits, and get paid at the same time.  I loved the classes because they were well organized and well taught, and I could usually relate them back to the work I was doing…so it made it very interesting.

When I first started at National, I worked in the test area on swing shift and ran a TAC tester.  The goal was to get as many units tested as possible…oh, a goal.  Cool, I can do that!  Each night I tested more units than the night before.  I streamlined the input and output of the machine so that it never stopped.  I organized the paperwork so that it was completed as the parts were being tested.  I learned how to fix my machine so that I did not need to wait for maintenance if it went down.  I did preventive maintenance on my machine so that it worked better than any of the other machines on the line.  Everyone hated me; I kept increasing their quotas because I could do more.  Soon I was made lead of the area, and I taught everyone else to be more productive.

National was a wonderful place to work.  Each time I got bored or wanted to learn something new, there was always an opportunity.  After a while I did not have to petition for jobs; I had managers coming to me to ask me to help with a new department, organize a production flow, or train others to be more effective.  I worked in Masking, Diffusion, Design, Engineering, and Mask Making, and I got to be an expeditor, which was fabulous…it matched my personality…a runner!  As an expeditor, I needed to produce a new product (for example, the very first ladies’ LED watch was made by me, and I still have it in my jewelry box) fast and without a production line.  So I needed to come up with the flow to produce the item and get time on different lines so that I could do the work without interrupting their production flow…while at the same time making my schedule.  I met with the product line managers, made a deal with them to use their machines, and worked out a time schedule for when I would need them…and made it all fit my product schedule.  Then I ran from one production line to another to meet or exceed my target…I loved it.

The other memorable position that I had was offered to me by Pierre Lamond.  Pierre was the Executive VP of R&D at the time.  He had heard about me and was pulling together a team of people to open up the “Bubble Memory” production line.  He asked me if I wanted to join.  I said YES, of course!

I had no idea what to expect.  I left my current job and department without question, and on Monday morning I went to HR to find out where I should report.  They told me the room number.  I thought it was odd because I knew this building very well, and the room number she had given me was in an empty part of the building.  When I arrived…I was right, it was empty.  The team that had been assembled started to arrive, and then Pierre came in.  He said that he had chosen us to build the line from the floor up, and he meant it literally.  We were in a room with no walls.  We drew up the plans for the production line, met with vendors to get the right equipment, and worked with the plumbers, electricians, etc., to build out the space to our specifications.  When needed, we went to Sears to buy tools, pipe, whatever, to keep the project on schedule.  Then one day we were able to run our first wafer through the line…it was really an exciting time.

Achieving Six Sigma Quality for IC Design

Friday, September 17th, 2010

The manufacturing industry has seen significant improvement in quality over the last few decades due to the implementation of Lean Manufacturing processes and Six Sigma quality control measures.

Lean Manufacturing, also called Just-in-Time (JIT), was pioneered by Toyota to reduce non-value-added waste in the manufacturing process through continuous improvement and by producing only when needed, with minimum inventory of raw materials and finished goods.  Six Sigma is a well-known, data-driven set of standards that uses in-depth statistical metrics to eliminate defects and achieve exceptional quality at all levels of the supply chain.  Lean Manufacturing and Six Sigma quality (Lean Six Sigma) have merged in theory and practice [1].  This new paradigm requires each employee to assume responsibility for the quality of their own work.  To create higher quality, defects need to be detected and fixed at the source.  Quality is built and assured at each step in the process rather than through inspection at the end.  Adoption of Lean Six Sigma in production has resulted in the high quality of goods and services that we all enjoy today.

These same principles and philosophy can be directly applied to the IC design industry to improve the quality of chips.  Defects discovered in silicon at the end of the manufacturing process are costly, inefficient, and wasteful.  Instead, bugs should be detected at the RTL source where they are created.  The traditional flow, in which designers write the HDL code, perform a minimum amount of verification, and throw it over the wall to the verification team, is the ultimate cause of poor quality, long project cycles, and wasted money for investors and stockholders alike.  It is time for the IC design industry to adopt the Lean Six Sigma philosophy and build quality into designs from the very beginning.

There are a couple of reasons that account for the divide between design and verification.  First is the notion that it is better to have another pair of eyes examine and verify the HDL design than to trust the designers who write the RTL.  The second is the low verification ROI achieved by using traditional simulation techniques to perform block-level verification.  A lot of time and effort is needed to create the verification infrastructure, negating the productivity gains from early verification.

The first factor requires a change of attitude, as happened in the manufacturing industry.  People need to be made responsible and accountable for the quality of their own work.  Detecting failures at the source costs the least amount of time, money, and effort.  Quality can only improve when individuals are held responsible and results are measurable.

The second factor can be eliminated with the advancement of formal verification technology.  Formal verification requires no testbench, reducing the effort of building verification infrastructure; it performs exhaustive analysis and can often catch corner-case bugs that are hard to find through simulation.  Debugging at this stage is more efficient because of the intimate knowledge the designer has of the code, the limited scope of logic involved, and the fact that formal tools show the source of the problem through error traces.  Using these tools early in the design flow can detect bugs at the source and thus significantly improve design quality.

There are two types of formal functional verification tools on the market.  The first is automatic functional verification.  Automatic functional verification tools take the RTL design alone and perform exhaustive formal analysis to catch design bugs that show up as symptoms such as dead code or single and pair-wise state-machine deadlock.  This significantly improves the quality of the design with virtually no setup effort, offering the best verification ROI.
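
As a hypothetical illustration (the module, signal, and state names below are all invented for this sketch), here is the kind of bug such tools flag: once this machine enters its ERROR state it can never leave, a single-state deadlock that an automatic tool reports from the RTL alone, with no testbench.

    // Sketch of an FSM bug that automatic formal analysis can flag.
    // All names are hypothetical, invented for illustration.
    module pkt_rx_fsm (
      input  logic clk, rst_n,
      input  logic start, done, err,
      output logic busy
    );
      typedef enum logic [1:0] {IDLE, RUN, ERROR} state_t;
      state_t state;

      always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n) state <= IDLE;
        else
          case (state)
            IDLE:  if (start) state <= RUN;
            RUN:   if (err)       state <= ERROR;
                   else if (done) state <= IDLE;
            ERROR: state <= ERROR; // BUG: no recovery path; a single-state
                                   // deadlock, reported with zero setup
            default: state <= IDLE;
          endcase
      end

      assign busy = (state == RUN);
    endmodule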

Another type of formal functional verification is property verification (also called model checking).  Designers write assertions in the RTL to describe the constraints of the environment and the desired behavior of the block.  Property verification tools perform exhaustive formal analysis to detect situations that violate the desired design behavior, producing error traces that show the sequence of events leading to each violation.  Designers can debug and fix the errors easily because verification is performed within a limited scope at the block level.
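
For example, a designer might embed SystemVerilog Assertions directly in the block.  The fragment below is a hypothetical sketch (the FIFO signal names and the DEPTH parameter are invented), showing one environment constraint and one desired-behavior check that a property checker either proves exhaustively or refutes with an error trace:

    // Hypothetical in-RTL properties, written inside a FIFO module;
    // all signal names and the DEPTH parameter are invented here.

    // Constraint on the environment: upstream never pushes when full.
    assume property (@(posedge clk) disable iff (!rst_n)
                     full |-> !push);

    // Desired behavior: occupancy can never exceed the FIFO depth.
    assert property (@(posedge clk) disable iff (!rst_n)
                     count <= DEPTH);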

If every design team adopts these early functional verification (EFV) tools in the design stage and puts accountability measures in place to make designers responsible for the quality of their own code, we will see significant improvement in design quality, just as we have in the manufacturing industry.  This in turn leads to shorter project cycles, saved investment, and even competitive advantage in the marketplace.  Achieving Six Sigma quality in IC design is possible with early functional verification.

[1] F. Jacobs, R. Chase, N. Aquilano, Operations & Supply Management, 12th Edition, McGraw-Hill.

A Look at Transaction-Based Modeling

Monday, September 6th, 2010

A relatively new methodology for system-on-chip (SoC) project teams is transaction-based modeling, a way to verify at the transaction level, using SystemVerilog-based testbenches, that a design will work as intended with standard interfaces such as PCIe.


This methodology enables project teams to synthesize the processing-intensive protocols of a transaction-based verification environment into an emulation box, along with the design under test (DUT).  They can then accelerate large portions of the testbench with the DUT at in-circuit emulation (ICE) speeds.  Increasingly, this is done concurrently with directed and constrained random tests.  The adoption of this methodology has been accelerated by the advent of high-level synthesis from providers such as Bluespec, Forte Design Systems and EVE.


Today’s emulators look and act nothing like previous generations.  They are fast, allowing project teams to run a design at high clock frequencies, and more affordable than ever.  For an emulator to be a complete solution, however, it must be able to interact with designs effectively without slowing them down.  This is where transaction-level modeling can help, by providing checkers, monitors, and data generators with the throughput the DUT requires.


Benefits of transaction-level modeling include the speed and performance to handle bandwidth and latency requirements.  For example, the latest generation of emulators can stream data from a design and back at up to five million transactions per second.


Reuse is another benefit, because emulation can separate protocol implementation from testbench generation, so that testbenches can be assembled from building blocks.


Various languages can be used to build transaction-based testbenches, including C, C++, SystemC, and SystemVerilog, together with the Standard Co-Emulation Modeling Interface (SCE-MI) from Accellera.  Testbenches drive the data to register transfer level (RTL) design blocks.
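
As a rough sketch of what the function-based (DPI) flavor of SCE-MI 2 looks like from the SystemVerilog side, the fragment below is hypothetical: the imported function name, the transaction fields, and the bfm instance are all invented.  The C side of the testbench would implement next_write_txn(), and the HDL side turns each transaction into pin activity:

    // Hypothetical sketch of a DPI-based, SCE-MI 2 style hookup,
    // inside a testbench module.  next_write_txn() would be
    // implemented in the C/C++ testbench; bfm is an instance of a
    // transactor like the one sketched below.
    import "DPI-C" function int next_write_txn(output bit [31:0] addr,
                                               output bit [31:0] data);

    initial begin
      bit [31:0] addr, data;
      // Pull transactions until the C side signals it is done (returns 0).
      while (next_write_txn(addr, data) != 0)
        bfm.write(addr, data);
    end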


Project teams most frequently buy off-the-shelf transactors for common protocols and design their own for a unique interface or application.  Typically, a custom transactor for an interface is a Bus Functional Model (BFM) or finite state machine (FSM) written in Verilog RTL code or in behavioral SystemVerilog using a transactor compiler.  Often, project teams already have a similar piece of code that can be converted into a transactor, as the sketch below illustrates.
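
Here is a minimal sketch of such a BFM.  The bus protocol is invented for this example (a single outstanding write with a valid/ready handshake), and all names are hypothetical; a task accepts one transaction and drives the pins, which is the kind of behavioral SystemVerilog a transactor compiler can map onto the emulator:

    // Minimal BFM sketch for an invented valid/ready write bus.
    // All names are hypothetical; a real transactor implements the
    // target protocol's full handshake and error cases.
    module bus_bfm (
      input  logic        clk,
      output logic        valid,
      output logic [31:0] addr,
      output logic [31:0] data,
      input  logic        ready
    );
      initial valid = 1'b0;

      // One transaction in, pin wiggles out: assert valid and hold
      // the address and data until the DUT accepts them with ready.
      task automatic write(input bit [31:0] a, input bit [31:0] d);
        @(posedge clk);
        valid <= 1'b1;
        addr  <= a;
        data  <= d;
        do @(posedge clk); while (!ready);
        valid <= 1'b0;
      endtask
    endmodule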


Project teams have reported numerous benefits from this emerging methodology, especially that transaction-based tests can be developed faster than directed tests.  Moreover, test writers do not need in-depth knowledge of the SoC or the protocol.  And testbenches can be reused when the target standard appears in another design.


Pay a visit to any project team anywhere in the world and you’ll find a whole host of verification and test methodologies implemented on an SoC design.  Transaction-based modeling is steadily gaining acceptance on even the most complex designs, shortening time to market and easing the project team’s anxiety.
