 Real Talk
Graham Bell
Graham is VP of Marketing at Real Intent. He has more than 20 years of experience in the design automation industry. He has founded startups, brought Nassda to an IPO, and previously was Sales and Marketing Director at Internet Business Systems, a web portal company. Graham has a Bachelor of Computer …

New 3D XPoint Fast Memory a Big Deal for Big Data

 
August 6th, 2015 by Graham Bell

After years of research, a new memory technology emerges that combines the best attributes of DRAM and NAND, promising to “completely evolve how it’s used in computing.”

Memory and storage technologies such as DRAM and NAND have been around for decades, with their original implementations able to perform only at a fraction of the level achieved by today’s latest products. But those performance gains, like most in computing, are typically evolutionary, with each generation incrementally faster and more cost effective than the one preceding it. Quantum leaps in performance often come from completely new or radically different ways of solving a particular problem. The 3D XPoint technology announced by Intel in partnership with Micron comes from the latter approach.

The initial technology stores 128Gb per die across two memory layers.

“This has no predecessor and there was nothing to base it on,” said Al Fazio, Intel senior fellow and director of Memory Technology Development.  “It’s new materials, new process architecture, new design, new testing. We’re going into some existing applications, but it’s really intended to completely evolve how it’s used in computing.”

Touted as the biggest memory breakthrough since the introduction of NAND in 1989, 3D XPoint is a new memory technology that is non-volatile like NAND, but up to 1,000 times faster, approaching speeds previously attainable only with DRAM, and with endurance up to 1,000 times better than NAND. Read the rest of New 3D XPoint Fast Memory a Big Deal for Big Data

Technology Errors Demand Netlist-level CDC Verification

 
July 30th, 2015 by Dr. Roger B. Hughes, Director of Strategic Accounts

Multiple asynchronous clocks are a fact of life on today's SoCs. Individual blocks have to run at different speeds so they can handle different functional and power payloads efficiently, and the ability to split clock domains across the SoC has become a key part of timing closure, isolating each clock domain to a subsection of the device within which traditional skew control can still be used.

As a result, clock domain crossing (CDC) verification is required to ensure logic signals can pass between regions controlled by different clocks without being missed or causing metastability. Traditionally, CDC verification has been carried out on RTL descriptions on the basis that appropriate directives inserted in the RTL will ensure reliable data synchronizers are inserted into the netlist by synthesis. But a number of factors are coming together that demand a re-evaluation of this assumption.
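Before looking at those factors, it helps to see why a crossing needs synchronization at all. The toy model below is a purely illustrative Python sketch of the timing problem, not Real Intent's methodology; the clock rates, times and names are assumed. It shows a single-cycle pulse launched in a fast clock domain being sampled by an unrelated, slower receiving clock and never being observed.

# Minimal, illustrative model of the basic CDC hazard (assumed example):
# a one-cycle pulse from a fast transmit clock is sampled by an unrelated,
# slower receive clock and is simply missed.

def pulse(start_ns, width_ns):
    """Return a function giving the transmitted signal's value at time t (ns)."""
    def value(t_ns):
        return 1 if start_ns <= t_ns < start_ns + width_ns else 0
    return value

def sample_at_rising_edges(signal, period_ns, horizon_ns):
    """Sample the signal at every rising edge of the receiving clock."""
    samples = []
    t = 0.0
    while t < horizon_ns:
        samples.append(signal(t))
        t += period_ns
    return samples

# Transmit domain: 1 GHz, so a single-cycle pulse lasts 1 ns (launched at t = 10 ns).
tx = pulse(start_ns=10.0, width_ns=1.0)

# Receive domain: 333 MHz (3 ns period), asynchronous to the transmitter.
rx_samples = sample_at_rising_edges(tx, period_ns=3.0, horizon_ns=60.0)

# The receiver samples at 0, 3, 6, 9, 12, ... ns and steps right over the pulse.
print("Pulse seen by the receiving domain:", any(rx_samples))  # -> False

A synchronizer plus a handshake or qualifier would make this crossing safe; RTL-level CDC verification checks that such structures are present, and netlist-level checks confirm that implementation has not disturbed them.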

A combination of process technology trends and increased intervention by synthesis tools in logic generation, both intended to improve power efficiency, is leading to a situation in which a design that is considered CDC-clean at RTL can fail in operation. Implementation tools can fail to take CDC into account and unwittingly increase the chances of metastability. Read the rest of Technology Errors Demand Netlist-level CDC Verification

Video: SoC Requirements and “Big Data” are Driving CDC Verification

 
July 23rd, 2015 by Graham Bell

Just before the Design Automation Conference in June, I interviewed Sarath Kirihennedige and asked him about the drivers for clock-domain crossing (CDC) verification of highly integrated SoC designs, and the requirements for handling the "big data" that this analysis produces. He discusses these trends and how the 2015 release of Meridian CDC from Real Intent meets this challenge.

He does this in under 5 minutes!   You can see it right here…

50th Anniversary of Moore’s Law: What If He Got it Wrong?

 
July 16th, 2015 by Graham Bell
Electronics magazine, April 19, 1965

On April 19, 1965, Electronics magazine published an article that would change the world. It was authored by Fairchild Semiconductor's R&D director, who made the observation that transistors would decrease in cost and increase in performance at an exponential rate. The article predicted the personal computer and mobile communications. The author's name was Gordon Moore and the seminal observation was later dubbed "Moore's Law." Three years later he would co-found Intel. The law defines the trajectory of the semiconductor industry, with profound consequences that have touched every aspect of our lives.

The period is sometimes quoted as 18 months because of Intel executive David House, who in 1975 predicted that chip performance would double every 18 months, a combination of the effect of more transistors and their faster switching times.

What if Gordon Moore had gotten his math wrong and, instead of the number of components on an integrated circuit doubling every couple of years, he had said every three years? Read the rest of 50th Anniversary of Moore's Law: What If He Got it Wrong?
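For a rough sense of scale (my own back-of-the-envelope arithmetic, not from the article), here is what two-year versus three-year doubling amounts to over the 50 years since the 1965 paper:

# Back-of-the-envelope comparison of Moore's Law growth under two doubling periods.
years = 50

doublings_2yr = years / 2   # 25 doublings
doublings_3yr = years / 3   # about 16.7 doublings

growth_2yr = 2 ** doublings_2yr   # roughly 33.6 million-fold
growth_3yr = 2 ** doublings_3yr   # roughly 104 thousand-fold

print(f"Two-year doubling over {years} years:   {growth_2yr:,.0f}x")
print(f"Three-year doubling over {years} years: {growth_3yr:,.0f}x")
print(f"Shortfall: {growth_2yr / growth_3yr:,.0f}x")

After five decades the two cadences differ by a factor of more than 300 in component count, roughly eight "missing" doublings, which is the scale of the gap the question implies.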

The Interconnected Web of Work

 
July 9th, 2015 by Ramesh Dewangan

“Imagine stepping into a car that recognizes your facial features and begins playing your favorite music. A pair of gloves that knows the history of your vehicle from the time of its inception as a lone chassis on the factory floor. “ –Doug Davis on IoT@Intel

Trends in the Internet of Things (IoT) have been fascinating to follow.

In my last blog on the topic I mentioned the four challenges facing an IoT system, as spelled out by James Stansberry, SVP and GM of IoT Products at Silicon Labs: functionality, energy, connectivity and integration.

Four elements make up successful IoT hardware

This had me thinking… Does this paradigm apply only to the hardware of IoT?

Read the rest of The Interconnected Web of Work

In Fond Memory of Gary Smith

 
July 6th, 2015 by Graham Bell

Long-time EDA industry analyst Gary Smith passed away in Flagstaff, Arizona, on Friday, July 3, 2015, after a short bout of pneumonia. He died peacefully, surrounded by his family.

Gary Smith, United States Naval Academy graduate, 1963

Gary was from Stockton, CA and graduated from the United States Naval Academy in 1963 with a bachelor of science degree in engineering. His class yearbook says: "He managed to maintain an average grade point despite the frantic efforts of the Foreign Language Department. Tuesday nights found Gary carrying his string bass to NA-10 practice." Gary continued to be a musician and played his electric bass for years with the Full Disclosure Blues band at the Design Automation Conference Denali party, alongside other industry figures. The band started out of a jam session in 2000 with Grant Pierce, who asked Gary to help put together a group for the following DAC. Gary had suggested Aart de Geus as lead guitar, and Aart ended up giving the band its name.

Gary got into the world of semiconductors in 1969. He had roles at the following companies:

LSI Logic, Design Methodologist (and RTL evangelist), 2 years
Plessey Semiconductor, ASIC Business Unit Manager, 3 years
Signetics, various positions, 7 years

In 1994 he retired from the semiconductor industry and joined Dataquest, a Gartner company, to become an Electronic Design Automation (EDA) analyst. Gary described his retirement this way: "instead of having to worry about Tape Outs and Product Launches, I get to fly around the world and shoot off my big mouth (which I seem to be good at) generally playing the World's Expert role. Obviously there isn't much competition. Now if I could only get my 'retirement' under sixty hours a week I'd be happy." Read the rest of In Fond Memory of Gary Smith

DAC Panel Recap: “You Say You Want a Verification Revolution?”

 
July 2nd, 2015 by Graham Bell

I was an organizer for the industry DAC panel on "Scalable Verification: Evolution or Revolution?" held during the second week of June in San Francisco. While the industry generally agrees on methodologies used to verify IP blocks or subsystems, we lack consensus on the approaches required to verify SoC integration and the system-level functionality of embedded systems. One of the questions addressed by the panelists was "Can existing standards and methodologies be extended to address system-level challenges, or are new approaches required?"

The panel was moderated by veteran verification technologist Brian Bailey, who was excellent in steering the panelists through this deep topic. The panelists were all from semiconductor companies (not EDA) and included the following:

  • Ali Habibi, design verification methodology lead at Qualcomm
  • Steven Jorgensen, architect for the Hewlett-Packard Networking Provision ASIC Group
  • Bill Greene, CPU verification manager at ARM
  • Mark Glasser, verification architect at Nvidia

Brian has written an excellent article on the panelists' insights in his column on SemiEngineering.com. Here are a few quick snippets to entice you into reading the entire piece:

“Simulators are not making effective use of the advances in the underlying hardware. Design sizes are growing faster than the improvements they are making.”

“Design reuse has not helped us, and even if you change only 20% of a design you still have to completely re-verify it. We need to be able to describe features and functionality in an abstract manner, and from that derive the inputs to the verification tools.”

“You might think that we are able to re-use much of our verification collateral from the IP, unit and top levels into the system-level environment, but this isn’t the case. You can’t find new bugs by running stimulus that was used in the past, and this means that the notions of coverage are different.”

You can read the entire column at this link:  Wrong Verification Revolution Offered.   Be sure to add your comments at the end and let Brian know what you think is missing in verification.

Richard Goering and Us: 30 Great Years

 
July 1st, 2015 by Graham Bell


Richard Goering at his 30th DAC, San Francisco in 2014

Richard Goering, the EDA industry's distinguished reporter and most recently a Cadence blogger, is finally closing his notebook and retiring from the world of EDA writing after 30 years. I can't think of anyone who is more universally regarded and respected in our industry, even though all he did was report and analyze industry news and developments.

Richard left Cadence Design Systems at the end of June (last month). According to his last blog posting, EDA Retrospective: 30+ Years of Highlights and Lowlights, and What Comes Next, he will be pursuing a variety of interests other than EDA. He will "keep watching to see what happens next in this small but vital industry".

When Richard left EETimes in 2007, there was universal hand-wringing and distress that we had lost a key part of our industry. John Cooley did a Wiretap post on his DeepChip website with contributions from 20 different executives, analysts and other media heavyweights. Here are just a few quotes that I picked out for this post: Read the rest of Richard Goering and Us: 30 Great Years

Reset Expectations with X-Propagation Analysis

 
June 25th, 2015 by Lisa Piper, Senior Technical Marketing Manager at Real Intent

The propagation of unknown ("X") states has become a more pressing issue with the move toward billion-gate SoC designs, and especially so with power-managed SoC designs. The SystemVerilog standard defines an X as an "unknown" value used to represent the state in which simulation cannot definitively resolve a signal to a "1," a "0," or a "Z."

Synthesis, on the other hand, defines an X as a “don’t care,” enabling greater flexibility and optimization. Unfortunately, Verilog RTL simulation semantics often mask the propagation of an X value, while gate-level simulations show additional Xs that will not exist in real hardware.

The sheer complexity and common use of power management schemes increase the likelihood of an unknown “X” state in the design translating into a functional bug in the final chip. This possibility has been the subject of two technical presentations at the Design and Verification Conference during the last couple of years: I’m Still in Love With My X! But, Do I Want My X to Be an Optimist, a Pessimist, or Eliminated? and X-Propagation Woes: Masking Bugs at RTL and Unnecessary Debug at the Netlist. Let’s look more closely at this issue and the requirements for a solution.
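As a toy illustration of the optimism problem (a small Python model of the semantics, purely hypothetical and not the Meridian tool or SystemVerilog itself), consider a mode select that comes out of power-up uninitialized: RTL simulation quietly reports one deterministic outcome, while the silicon could go either way.

# Tiny, illustrative model of RTL "X-optimism" (assumed example):
# in Verilog RTL simulation an if-condition that evaluates to X falls through
# to the else branch, so one deterministic value is reported even though the
# real hardware could resolve the unknown either way.

X = "x"  # unknown value

def rtl_if(select, then_value, else_value):
    """Verilog-style RTL semantics: an X condition behaves like false."""
    return then_value if select == 1 else else_value

def hardware_outcomes(select, then_value, else_value):
    """In silicon the unknown resolves to 0 or 1, so both results are possible."""
    if select == X:
        return {then_value, else_value}
    return {then_value if select == 1 else else_value}

# An uninitialised (unreset) mode register drives the select after power-up.
mode = X

print("RTL simulation reports:", rtl_if(mode, "secure_boot", "normal_boot"))
print("Hardware could produce:", hardware_outcomes(mode, "secure_boot", "normal_boot"))
# RTL simulation reports: normal_boot
# Hardware could produce: {'secure_boot', 'normal_boot'}  (set order may vary)

Gate-level simulation has the opposite bias, showing additional Xs that will not exist in real hardware, which is why dedicated X-propagation analysis is needed to separate real bugs from noise.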

Read the rest of Reset Expectations with X-Propagation Analysis

Last Call for Kaufman Award Nominations

 
June 23rd, 2015 by Graham Bell

The EDA Consortium and the IEEE Council on EDA are seeking qualified nominations for the 2015 Phil Kaufman Award. The nomination deadline is June 30.

Presented by the EDA Consortium and the IEEE Council on EDA, this prestigious award honors an individual who has had a demonstrable impact on the field of electronic design through contributions in Electronic Design Automation. See the press release for details.

Additional information on the nomination process is available here.

Information on previous Phil Kaufman Award recipients is available here.

Download the nomination form here.
