Hard IP, an introduction

Many of the books on subjects related to VLSI design still describe classic design flows. However, the push towards smaller and smaller minimum layout geometries, as in DSM technologies, is increasingly changing design flows, often rendering them more iterative. Needless to say, one of the key goals is still to design a chip that will perform as expected the first time around. Accordingly, the steps taken in the design flow should lead to predictable results. There are many questions concerning DSM design flows, and there seems to be only one recent book that focuses on some of them [2].

Meeting the time-to-market schedule is probably the single most critical requirement. To accomplish this, the number of steps in a design flow, and the number of iterations through certain sequences of steps to get things right, needs to be reduced or at least made predictable. The complexity and time it takes to go through these steps have to be as controllable and predictable as possible. The steps required to design a VLSI chip are generally known; the number of iterations needed to get things right is generally not. Because of DSM effects, design flows are in a dramatic state of flux.


Design methodologies have changed, or should have changed, dramatically in the transition from pre-DSM to DSM technologies.

The following statements can be made about pre-DSM design methodologies:

  1. Before DSM technologies, the resistance and capacitance of metal lines could be ignored; only interconnects in poly could not. So to avoid problems on timing-critical interconnects, metal was simply used instead of poly. The exception was a few specific interconnects, such as clocks: for such critical nets, even metal lines had to be carefully designed.
  2. Before DSM technologies, the timing models for an entire chip could often be taken from a library of characterized blocks. So for Gate Arrays, Sea of Gates, Standard Cell designs and programmable arrays, a netlist was all that had to be provided to the foundry. The exceptions were, of course, fully custom designs, for which no precharacterization was possible. Only a functional simulation, with timing analysis of setup and hold times, was required to verify that everything was connected correctly.

The timing of a chip could be based on library elements alone because, before DSM technologies, the on-chip timing of digital ICs was dominated by the active parts of the circuit, the transistors and their associated parasitics.

Timing was localized by the active parts!

Accordingly, careful characterization, and often precharacterization for various technologies, of the library blocks to be used on a chip provided all the necessary timing information. For communication between active blocks, a simple netlist sufficed; a netlist is merely a logical assignment between communicating contact points. It supplies none of the information needed for DSM designs on the physical characteristics or timing of the paths between the active blocks, the interconnects. So for pre-DSM technologies, the active parts of a VLSI chip, whether transistors or blocks such as gates, standard cells and macros, determined and dominated the timing of the entire chip.

This localization in timing allowed the timing analysis of an entire chip, no matter how large, to be done on relatively small pieces in isolation. Parasitics could be modeled as lumped elements. Parasitic capacitances were directly determined by the size of the active devices, so their values were known. Because of these relatively small, uncoupled building blocks, the required accuracy could be achieved relatively easily with switch-level or transistor-level models. This brought about a high level of confidence in the predicted performance of the chip.
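The consequence of this localization can be sketched in a few lines of code. In a pre-DSM, library-based flow, a path delay is simply the sum of per-stage delays, each computed from the characterized intrinsic delay and drive resistance of a cell and the lumped capacitance it drives. All names and numbers below are illustrative, not taken from any real library:

```python
# Pre-DSM style path timing: each stage is fully characterized by its
# library data, and interconnect contributes only a small lumped
# capacitance folded into the load. Illustrative values only.

def stage_delay(t_intrinsic, r_drive, c_load):
    """Linear (lumped) delay model: intrinsic delay plus an RC charging term."""
    return t_intrinsic + r_drive * c_load

# (intrinsic delay [s], drive resistance [ohm], lumped load capacitance [F])
path = [
    (50e-12, 2e3, 20e-15),   # gate 1 driving gate 2's input capacitance
    (40e-12, 1e3, 30e-15),   # gate 2
    (60e-12, 3e3, 10e-15),   # gate 3
]

# The whole path is just the sum of independently characterized stages.
path_delay = sum(stage_delay(t, r, c) for t, r, c in path)
```

Because each term depends only on library data and a known lumped load, the sum can be formed before any physical layout exists, which is exactly why pre-DSM timing closure was so predictable.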

One word of caution: to be really sure that a chip works, a worst-case, state-dependent and consequently vector-dependent simulation is needed. However, a full, functional, worst-case simulation is very time-consuming and often avoided.

In conclusion, small building blocks of a chip could be carefully characterized for pre-DSM technologies and digital designs, as if they were standalone. Then, once the active parts were characterized, the timing of the entire chip was under control. During the physical layout of the chip, such as floorplanning and routing, the timing of the chip remained unchanged. Constraints for the interconnect routing had to be specified only for a very limited set of nets, such as clock lines for very high-speed digital circuits.

For the sake of completeness, and in contrast to digital circuits, layout issues were always part of the design challenge for analog circuits, even in pre-DSM designs. This was not so much because of interconnects, but because of symmetry or tracking requirements between pairs of transistors, resistors or capacitors. Thermal considerations, voltage gradients and noise in the chip were other critical issues. For Hard IP migration, the layout of analog circuits poses challenges in both pre-DSM and DSM technologies. Integrated analog circuits have always been special from many viewpoints. In Chapter 6, we devote some time to discussing analog problems in conjunction with Hard IP migration. Analog migration can be performed successfully, and some companies actually do so routinely, but it needs to be carried out with caution.


For DSM technology chips, timing is no longer limited to the active parts. Timing is determined by the active and passive parts together, with interconnects dominating much of the passive parts. In fact, the following general statements can be made for DSM technology chips:

  1. The timing performance of a chip will be determined by both the active parts and the interconnects. The active parts of the circuit still need to be characterized carefully, although it is the interconnects, not the active parts, that dominate the timing.
  2. Accurate timing performance cannot be determined until the chip is completely laid out, the physical parameters have been extracted and the simulation models back-annotated. All the data available from front-end design practices is adequate only for estimates.
  3. For DSM technologies, a back-end, postlayout optimization can significantly improve chip performance. In Chapter 3, we discuss just how much one can affect the performance of a chip with postlayout optimization through small adjustments of the location of polygons.

Interconnects constitute an additional difficulty for DSM technologies. Often, interconnects can no longer be modeled as “lumped” R and C values. They now need to be modeled as distributed R/C loads. As technology advances, interconnects may even have to be modeled as distributed L/R/C loads and finally as transmission lines. The larger the vertical distance between an interconnect and the back plane (the silicon), the stronger the inductive effects become. Accordingly, with more and more metal layers, and with the top layers farther and farther away from the “ground plane,” the inductive effects will be strongest for the top metal layers.
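The difference between the lumped and distributed views of a wire can be quantified with the Elmore delay of an RC ladder: splitting a wire of total resistance R and capacitance C into n equal segments gives a delay of RC(n+1)/2n, which is RC for the lumped case (n = 1) and approaches the distributed-RC limit of RC/2 as n grows. A minimal sketch, with illustrative values:

```python
# Elmore delay of a wire with total resistance R and capacitance C,
# split into n equal RC segments. n = 1 reproduces the lumped model;
# large n approximates a distributed RC line. Illustrative values only.

def elmore_ladder(R, C, n):
    r_seg, c_seg = R / n, C / n
    delay = 0.0
    for k in range(1, n + 1):
        # resistance from the driver to node k, times the capacitance at node k
        delay += (k * r_seg) * c_seg
    return delay

R, C = 1e3, 1e-12                        # 1 kOhm, 1 pF wire
lumped      = elmore_ladder(R, C, 1)     # = R*C          (1.0 ns)
distributed = elmore_ladder(R, C, 1000)  # ~ R*C/2        (~0.5 ns)
```

The factor-of-two gap between the two models is exactly the kind of error that made lumped wire models unusable once interconnect delay began to dominate.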

We address the interconnect modeling question in Chapter 3. There is also some good news. We will see that in most cases, there are good approximate and relatively simple models that yield an accurate time delay analysis for many situations.
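One such approximate model, widely used for interconnect delay, is the Elmore delay for RC trees: the delay from the driver to a node is the sum, over every capacitance in the tree, of that capacitance times the resistance shared between the driver-to-node path and the driver-to-capacitance path. A small sketch on a made-up three-node tree:

```python
# Elmore delay for an RC tree. Each node has a parent, a branch
# resistance, and a node capacitance. The tree topology and values
# below are hypothetical, for illustration only.

tree = {                        # node: (parent, R [ohm], C [F])
    "A": (None, 100.0, 10e-15),
    "B": ("A", 200.0, 20e-15),  # branch off A
    "C": ("A", 150.0, 15e-15),  # sibling branch off A
}

def path_from_root(node):
    """Nodes on the path from the driver to `node`, root first."""
    p = []
    while node is not None:
        p.append(node)
        node = tree[node][0]
    return list(reversed(p))

def elmore_delay(target):
    """Sum over all capacitances of (shared path resistance) * C."""
    target_path = set(path_from_root(target))
    delay = 0.0
    for node, (_, _, c) in tree.items():
        shared = [n for n in path_from_root(node) if n in target_path]
        delay += sum(tree[n][1] for n in shared) * c
    return delay
```

Despite its simplicity, this first-moment approximation captures the essential behavior of distributed RC interconnect well enough for many timing-analysis situations.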
With much of the recent focus on improving design productivity through IP reuse, many challenging questions remain. While migrating individual blocks is straightforward, mixing and matching designs based on various design methodologies and processes is not. Nor is this just a migration problem; it is also a question of how to make all these designs work together on one chip, how to interface them. Presently, these problems must be solved on a case-by-case basis. Considerable progress has already been made, particularly through the efforts of the Virtual Socket Interface (VSI) Alliance, which has played a major role in clarifying some of the issues.
As far as Hard IP reuse is concerned, many in the engineering community are still skeptical about fully embracing this type of reuse methodology. Much of the skepticism seems based on past experiences with compaction. A lot of progress has been made with this methodology since then, but, as with many new design methodologies, it takes time for it to become part of generally accepted engineering practice. Scheduling pressures in particular often force engineers to do what is familiar and known to work within a predictable time schedule.
