EDACafe Editorial
Peggy Aycinena is a contributing editor for EDACafe.Com

Wally Rhines: Grand Challenges in EDA

 
June 8th, 2017 by Peggy Aycinena


This is the second in a 4-part series on Grand Challenges in EDA. Last week’s entry featured Adapt-IP Chair John Sanguinetti. This week’s conversation is with Mentor Graphics CEO Walden C. Rhines.

Rhines has led Mentor Graphics since 1996, following a distinguished career at TI heading up the ginormous semiconductor group there. His PhD is in materials science, but his interests are far more diverse. His name and his company have been in the news of late because Mentor was just acquired by Siemens, where he continues to serve in a leadership role. In 2015, Rhines received the Phil Kaufman Award, the EDA industry’s highest honor.

Given Dr. Rhines’ storied career as a keynote speaker, it’s not surprising that he came to our May 26th phone call fully prepared to articulate what he sees as today’s Grand Challenges in EDA. Rhines says there are “at least three big ones.”


*****************

System Design & Verification …

“You’ve heard a lot of my talks,” Dr. Rhines began, “and know this issue just keeps getting bigger: system design and verification.

“And you’re hearing it from the whole EDA industry: System design now includes the hardware, the software, putting everything together and making it work, and then verifying it.

“The system companies – those who make cars, planes, trains, and other systems – are adopting electronic design automation and verification into their design environments in the early stages, a [strategy] that’s growing really rapidly.

“But while the companies are building these systems, remember that EDA requires developing ways to design and verify the electrical along with the mechanical, and other requirements as well.

“It’s a big problem, a very broad problem, and expands the number of people who can be involved. This is a big game changer.

“Meanwhile, if you look at even the simplest IoT elements, they almost always have analog, digital, MEMS, and RF – and all of that has to be simulated. But our industry doesn’t have a long history of doing co-simulation, whether for big airplanes or small systems.”
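To make the co-simulation idea concrete, here is a minimal lockstep sketch in Python – a toy digital square-wave driver feeding a toy first-order analog model, both invented for this illustration rather than drawn from any Mentor tool:

    # Minimal lockstep co-simulation sketch; both models are toys
    # invented for illustration, not any vendor's engine.

    class DigitalModel:
        """Toy digital block: drives a square wave."""
        def __init__(self, period_steps=10):
            self.period = period_steps

        def step(self, t_index):
            return 1.0 if (t_index // self.period) % 2 == 0 else 0.0

    class AnalogModel:
        """Toy analog block: first-order RC low-pass, forward-Euler."""
        def __init__(self, tau=5e-6):
            self.tau = tau
            self.v = 0.0

        def step(self, v_in, dt):
            self.v += (v_in - self.v) * dt / self.tau
            return self.v

    def cosimulate(n_steps=100, dt=1e-6):
        dig, ana = DigitalModel(), AnalogModel()
        trace = []
        for i in range(n_steps):
            d = dig.step(i)        # digital solver advances one step
            a = ana.step(d, dt)    # analog solver consumes its output
            trace.append((i * dt, d, a))
        return trace

    for t, d, a in cosimulate()[:3]:
        print(f"t={t:.1e}s  digital={d:.0f}  analog={a:.3f}")

Real engines add handshaking, rollback, and wildly different time scales per domain – which is exactly where the analog/digital/MEMS/RF mix gets hard.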

“Will you have the chance,” I asked, “to explore that co-simulation solution space more effectively now, given the new reality of Mentor being a Siemens company?”

“Yes,” Rhines said with palpable enthusiasm, “that’s the great thing!

“They’ve been involved in system design for decades – systems which, from cars to planes, are distinguished by their mechanical aspects. Of course, there were not a lot of electrical aspects to those systems 30 years ago.

“Nonetheless, companies like Siemens are [accustomed] to providing design platforms for customers who need answers for the verification of disparate systems – all the way from concept through to design and development.

“Clearly, if you have always done the mechanical and thermal part of design, and now you add the electrical, a company like Siemens is well prepared [to address] the new challenges in design.”

“Still, it all sounds pretty hard to do,” I fretted.

“Yes,” Rhines concurred, “and in addition to the technical challenges, there are organizational challenges.

“In any system company, there are people who design the circuits, the PCBs, the mechanical system, and there are the system architects. It’s difficult for all of these groups to communicate, to work together, to make [the appropriate] trade-offs.”

Reaching back in time, Rhines added, “You were quite flattering in an article you wrote about a keynote presentation I made at DATE 15 years ago, one that addressed these organizational challenges.

“Recapping that talk: It is easy enough to say ‘make trade-offs between the electrical and mechanical aspects of a system,’ but these can span completely different divisions of a huge organization. Often these trade-offs are being evaluated [across organizational] divides, which is very difficult.

“As I pointed out 15 years ago, however, the reason that startups can survive and be successful is because they have just 1 or 2 people covering multiple domains within the company.”

“Hardware can talk to the software experts without leaving a single brain,” Rhines added, chuckling.

*****************
Next-generation Computer Architecture …

Moving to the next Grand Challenge on his list, Rhines said, “This one is very fascinating for me: next-generation computer architecture.

“Look at everything going on today in deep learning – driver-less cars, and so on. All of the companies focused on pattern recognition, the ability to recognize if something is an animal or a person. Companies developing the ability to look at images, to process sensory data.

“All of these things are brain functions, done in your brain very efficiently.

“Unfortunately, these things are not done efficiently in a von Neumann architecture – the chief barrier [to next-generation computing], because it’s an architecture not conducive to deep learning.”
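A back-of-envelope calculation shows why; the layer sizes below are assumed purely for illustration. On a von Neumann machine every weight of a neural-network layer must cross the memory bus, so data movement, not arithmetic, sets the speed limit:

    # Arithmetic intensity of one dense layer (assumed sizes).
    n_in, n_out = 4096, 4096      # assumed layer dimensions
    macs = n_in * n_out           # multiply-accumulates per inference
    bytes_moved = macs * 4        # 4-byte weights, each fetched once
    flops = 2 * macs              # one multiply + one add per MAC

    print(f"FLOPs per byte moved: {flops / bytes_moved}")  # 0.5
    # At ~0.5 FLOPs/byte the workload is memory-bound: the bus,
    # not the ALU, limits throughput -- the von Neumann bottleneck.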

“There are a lot of projects around the world,” Rhines noted, “lots of organizations and companies with proposals to replace von Neumann architecture. And this is really good for the design industry, for EDA.

“There will be lots of proposals; we will find some computer architectures that will succeed in one domain or another, and some that won’t. And the discussion will include everything from neural networks to the latest Nvidia processor.”

“Is that sort of intelligence being built into the latest EDA tools,” I asked, “the ability to provide design assistance to whichever computer architecture the users are working to achieve?”

Rhines responded, “The tools can only build in whatever algorithmic development environment you have presented, although it is possible they’ll learn from experience.

“But the real value the tools provide? They make it easier, more efficient, and faster to create alternative [design solutions] and to evaluate those alternatives. They also allow you to test how well an architecture does with a particular task.”

Invoking today’s received wisdom, Rhines said, “People are talking about computers generating their own algorithms, following on the deep-learning theme.

“But to the extent that computers do that, the results will be much more a matter of randomness – the system testing to see how one algorithm does, and how it does after changing one variable or another. It will come down to testing the alternatives.”
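What Rhines describes – perturb one variable, test, keep the winner – is essentially random local search. A minimal sketch, with an invented objective standing in for “how one algorithm does”:

    import random

    def score(params):
        # Invented stand-in for "how well the algorithm does".
        return -sum((p - 3.0) ** 2 for p in params)

    params = [0.0, 0.0, 0.0]
    best = score(params)
    for _ in range(1000):
        trial = list(params)
        i = random.randrange(len(trial))     # change one variable
        trial[i] += random.gauss(0.0, 0.5)   # random perturbation
        if score(trial) > best:              # test the alternative
            params, best = trial, score(trial)

    print(params)  # drifts toward [3, 3, 3] by trial and error alone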

“Pattern recognition is another great problem, a sort of next frontier,” Rhines added. “Classic von Neumann architecture was set up for efficiency, not pattern recognition.”

“Will the EDA tools have to change radically to absorb all of this?” I asked, repeating a theme.

“Tools that do development,” Rhines replied, “will run on von Neumann architectures, but will be used to develop computers able to execute simulation algorithms.

“Nonetheless, for the first generation of non-von Neumann computing, pattern recognition will be the challenge.”

*****************
Next-generation Memory & Logic Elements …

Rhines moved to item 3 on his list of Grand Challenges in EDA: “There are two reasons I include next-generation memory and logic elements.

“If you look at the mix of memory to logic transistors in 1995, the split was approximately equal between transistors doing memory and those doing logic.

“Today that number is radically different: 99.8 percent of the transistors are in the memory architecture, and only 0.2 percent in the logic. The SoCs being designed today are looking more and more like memory.”
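A quick back-of-envelope – with an assumed SRAM size and logic budget, not Rhines’ figures – shows how lopsided the count becomes once on-chip memory dominates:

    # Rough transistor bookkeeping for a hypothetical SoC.
    sram_bits = 64 * 2**20 * 8        # assumed 64 MB of on-chip SRAM
    mem_transistors = sram_bits * 6   # standard 6-transistor SRAM cell
    logic_transistors = 200e6         # assumed logic budget

    share = mem_transistors / (mem_transistors + logic_transistors)
    print(f"memory share: {share:.1%}")   # ~94% even at these sizes
    # Larger embedded memory arrays push the share toward 99.8%.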

With a chuckle, Rhines continued, “That number is closer to what’s going on inside of my own head actually, where 99 percent of the brain cells are memory cells.

“So, looking at the people who are currently studying the human brain, they are asking how [the architecture of] analog human thinking might apply to next-generation computer memory. Because it’s pretty clear we will have to be able to distribute the memory [to be co-located] with the logic.

“Some think this is an endorsement for the memristor – an element half-way between logic and memory, just as the human synapse is composed of a dendrite and an axon.

“Meanwhile, other people are also looking at spin-based memories, [and others at] phase-change memory.”
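For context, the textbook (Chua) definition of the memristor – background, not something Rhines spelled out here – ties flux linkage φ to charge q, giving a resistance that remembers the history of the current:

    v(t) = M(q)\, i(t), \qquad M(q) = \frac{d\varphi}{dq}, \qquad \frac{dq}{dt} = i(t)

Because M depends on the accumulated charge q, the element behaves as a resistor whose value is set by its past – the “half-way between logic and memory” property Rhines refers to.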

Channeling his penchant for data, Rhines said, “What I look at in all of this, actually, is the quantitative charts that say when we will run out of current memory of one kind or another.

“NAND Flash is way ahead of dynamic RAM in the market today, for instance. Enormous amounts of Flash memory are being produced, and the technology is growing in three dimensions – up to 64 layers, and probably soon up to 128. And the cost per bit is dropping at the same time.”

“At this point in my keynote,” Rhines added merrily, “the plot on my slide would say Flash is going to continue to grow over the next decade – and then run out of gas.

“So what is different about the brain, where it only takes a few hundred cycles [of computing] for us to recognize something, but a few million cycles for a von Neumann machine to do the same thing?

“The answer is in hierarchical memory.

“The memory cell in your brain [is built] on a hierarchical pattern. I don’t have to see all of your face, or always in the same amount of shadow or light, to recognize you. Our brains are very tricky in what data they store to do this recognition.”
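A toy content-addressable lookup makes the point: recall by nearest match, rather than by exact address, tolerates a partial or corrupted cue. The stored patterns below are invented for illustration:

    # Toy associative memory: recall the stored pattern nearest the cue.
    stored = {
        "alice": [1, 0, 1, 1, 0, 1, 0, 1],
        "bob":   [0, 1, 0, 0, 1, 0, 1, 0],
    }

    def recall(cue):
        # Hamming distance to each stored pattern; return the closest.
        def dist(p):
            return sum(a != b for a, b in zip(cue, p))
        return min(stored, key=lambda name: dist(stored[name]))

    cue = list(stored["alice"])
    cue[0] ^= 1                  # corrupt one bit: face half in shadow
    print(recall(cue))           # still recognizes "alice"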

Referring to a controversy that erupted during last October’s CEDA Design Automation Futures Workshop hosted at Mentor’s Fremont campus, I asked, “Where would you have been in that fracas? Does memory need to be housed close to the logic to facilitate the move to next-generation computing?”

Rhines chuckled again, “Well, there are certainly religious advocates in each space.

“There’s the memristor crowd, they’re pretty religious. And we did have an event at Mentor specifically about the memristor.

“Then there’s a crowd at HP who have a bunch of articles saying there’s got to be a single memory.

“And there are the spintronics people – I get their report. Several semiconductor companies did a 5-year prediction recently, and they seemed to say that spintronics will not be it.”

“In my opinion, the partitioning of memory and logic that we have today is incompatible with making the kinds of orders-of-magnitude increases in power efficiency that are needed.

“We have 11 orders of magnitude in power efficiency to overcome to equal the human brain. This is undoubtedly going to require an architectural change.”

“And there are a lot of people spending a lot of money to [figure this out], although right now nobody really knows which will hit the cost/performance/power goals.”

“Clearly,” Rhines concluded, “you have to have an evangelist for whichever [technology] to get to the finish line.”


*****************

Efficient brain …

As a courteous closer, I asked Rhines, “Just as an FYI, what is your title at Siemens?”

“I’m now Chairman and CEO of Mentor Graphics, a Siemens company,” Rhines replied, adding with a distinct note of joy, “and now I’m even more efficient.”

“It must be your efficient brain,” I said.

*****************


One Response to “Wally Rhines: Grand Challenges in EDA”

  1. Karl Stevens says:

    The von Neumann architecture problems are compounded by RISC computers which have excessive loads and stores.

    Superscalar further compounds the problem, because it is widely believed that data will magically be in local register files as the result of previous computation. But this blog is about applications that are not computation-intensive; rather, they are about accessing data that is randomly scattered across random locations.

    It is ironic that FPGAs as accelerators can be given a block of data in local memory and process that data faster than a CPU with a 10x faster clock rate.

    Of course the FPGA (an IC itself) can do more things in parallel… just as an FPGA can emulate ICs, an IC can “emulate” an FPGA and do it more efficiently, because the FPGA does not use all of its logic cells and uses active cells for the interconnect fabric.

    In both cases it is the on chip memory that is key.




