Peggy Aycinena is a contributing editor for EDACafe.com

DAC 2014: Advanced nodes and going back in time for a solution

 
June 12th, 2014 by Peggy Aycinena

In response to my blog this week about the June 5th panel at DAC, “Advanced Node Re-spins: Be Very Afraid (maybe)”, Bill Martin, President/VP of Engineering at E-System Design, sent the following comments.

******************

For 15 years, I was on the same process-node-jumping bandwagon, always looking to the next node to improve cost, performance, area, and speed, and perhaps help the overall schedule. Even in those older (larger) processes, each new node required 2x the resources (people, time, machines, etc.) of its predecessor to achieve tape-out.

Fortunately, I was happily ‘stuck’ using VLSI Technology’s foundries, processing, and wafers. Although we were not perfect, we quickly learned to hone our processes, models, design flows, etc., to minimize rework. But the world I knew predates the ASIC disaggregation of the past two decades, and because that disaggregation has both pros and cons, your summary of Thursday afternoon’s DAC panel brought back some pleasant memories as well as some nightmares. Clearly we need a new mindset!

Since the 1970s, the industry has been pouring significant dollars into silicon advancement: linear shrinks, larger wafer sizes, additional processing steps to combine (and compromise) ever more diverse functionality onto a homogeneous die, etc. For most of the industry, these silicon advancements were “free”, expected every few years, and immediately consumed by users.

Users could keep the same design tools with only minor enhancements to their flows; purchasing personnel expected each next-generation node to cost less while delivering 2x the transistor count; and foundries had honed the rollout of new nodes, quickly improving yields and costs thanks to high volumes in short time spans. The entire industry was living in nirvana and had no reason to explore other innovations that might have provided significant benefits. Life was great.

However, as we continued down the silicon innovation path, cracks started to appear: each generation took 2x the resources of the previous one, mask costs kept rising, and yield learning curves began to slow as volume growth slowed. Meanwhile, recent articles have predicted that EUV and 450mm wafers will never come to fruition, for technical and/or economic reasons, even as FDSOI, FinFETs, and homogeneous 3DIC show promise. Silicon nirvana has lost its luster.

Now go back in time to the 1980s and rediscover an overlooked solution: multichip modules (MCMs). While working at Mostek, I marveled at the complexity and density of those monster packages, but I knew commercial volumes and acceptance would not happen; in the 1980s, only military and other very expensive programs would work with MCM packaging innovations.

At the same time, the commercial crowd continued on its scaling crusade, demanding cheaper but denser packaging (more pins per area, smaller footprints, tighter bonding pitches, and a conversion from through-hole to surface mount), while high-volume demand was satisfied by incremental plastic packaging.

With the progress of the past 30+ years, however, MCMs can now be realized in silicon, glass, or other materials rather than ceramic, allowing interposers to be denser, cheaper to produce, and capable of tighter packing of die. So if our new mindset is interposer-based, what else can be rethought from this perspective?

Homogeneous silicon has always compromised somewhere. Logic, CPU, memory, RF/AMS, etc., all have unique requirements, and forcing them onto one homogeneous die means compromises everywhere: more mask layers, more expensive processing, longer processing times, and so on.

But with interposers, the game has changed. No longer do we need to force different engineering teams to over-engineer their designs to fit a given process; each team can have a process specifically tuned to its needs. No longer is AMS/RF IP the ‘long pole’ in the tent. If you have a fantastic A/D converter in 130nm, continue to build it there, where it is optimized for performance, yield, and therefore cost. The same applies to any function within the design: rather than compromise on every function, users can optimize each one and connect them via an interposer.

How would this impact the design process? Just imagine if various portions of the design were done in older technologies that met their goals, rather than pushing the boundaries of schedule, cost, and risk. Teams could focus on the small fraction of the design that truly requires the most advanced silicon node, while the majority could be done in older technologies with easier design rules, flows, etc. Maybe this new paradigm would be hip, and provide the new mindset we need?

“…I like things that don’t change
‘Cause the more something changes, the more it stays the same
I might be simple, take it easy sometimes
But I can be stubborn when I’ve made up my mind…”

Huey Lewis and the News, “I Know What I Like”

******************


One Response to “DAC 2014: Advanced nodes and going back in time for a solution”

  1. John Swan says:

    Love it, Bill. Especially the part about having the fab process tuned for the need, such as memory or super-speed I/O. I liked the Xilinx white paper on the topic.
