Grant Pierce: Grand Challenges in IP
May 18th, 2017 by Peggy Aycinena
Here begins the first of four dialogues about Grand Challenges in IP. This first installment is a conversation with Sonics co-founder and CEO Grant Pierce, who also currently serves as Chair of the ESD Alliance. We spoke by phone earlier this week.
Asked to enumerate the Grand Challenges in IP he sees today, Pierce began: “Having been in the industry for 20 years myself, I am surprised that we still have some challenges ahead of us. We have new entrants into the industry, however, that are more focused at the system level, with customers coming in to interact with the IP guys directly to get their custom designs done.
“What I am seeing today, versus 20 years ago, is the emergence of Machine Learning. And that brings with it some technical challenges. On the one hand, they are very familiar – the age-old challenges about bandwidth and throughput – but on the other hand, they are also very new. Today’s applications are driving things together in a totally new way.
“Look at how we have gone from an automobile which was a purely mechanical device, to an auto which is a very intelligent electronic device. In so doing, we are now trying to process a whole lot of data in real-time, while also sharing a huge amount of data back up into the Cloud.
“What that does is to combine the idea that mobile devices will be an implementation of Machine Learning, processing huge amounts of data, with the idea of running one of the lowest-power applications on the planet – namely, mobile apps.
“Essentially, therefore, in designing for such Machine Learning applications we must figure out where to apply the algorithms that will be developed and how to filter the data set.
“As I mentioned, bandwidth challenges have been with us since the dawn of the digital age, but now we are processing 256 gigabytes per second as systems companies [develop products] that collect tensors at that rate, and higher.
“At the same time, these companies are trying to work in the lowest possible power environment. We will have to make significant improvements in power to meet these needs, and do it under this new emerging application of Machine Learning.
“This is the extreme challenge: How do we get the most amount of data throughput for a whole different kind of processing in a Machine Learning environment?”
These seem like problems that should be solved by your customer’s customer, not by the IP vendor, I noted.
Pierce responded: “It goes back to who is carrying the flag for the system being developed.
“The semiconductor guys are trying to focus on the individual capabilities from the processor point of view – reflected perhaps in today’s best GPUs – or they are trying to focus on the best power performance in their products. But rarely both.
“They tend not to address the combinatorial issues that the system builder is facing. The system builder is trying to address new algorithms on the SoC that have never been seen before.
“It falls to the IP guy now to meet the need. The IP guy has to change the context into which he is delivering the IP.
“Traditionally, GPU providers could burn all the power they wanted to because [their processors] were working in a machine environment with access to all the power needed.
“Now however, companies like Nvidia are going after the Machine Learning market – perhaps a GPU in an Amazon Alexa, which can answer questions in a home setting. This is the very different application environment into which Nvidia now has to play.
“Again, back to the car: The largest amount of data on the planet will be collected from the hundreds of sensors sitting inside the automobile. Designers are trying to use the data to help the driver, plus monitor the data to [enable] learning from up in the Cloud.
“But there’s no way a car, even at 5G, can share that amount of data up into the network at an adequate rate. Therefore, the car locally will have to do a lot of processing on its own.
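Pierce’s point about the car can be made concrete with some back-of-the-envelope arithmetic. The figures below are purely illustrative assumptions – the article gives no specific sensor or 5G rates – but the shape of the conclusion holds for any realistic numbers:

```python
# Back-of-the-envelope sketch of why a car must process sensor data locally.
# All figures are illustrative assumptions, not numbers from the interview.

sensor_data_gbps = 10.0   # assumed aggregate raw sensor output (gigabits/s)
uplink_gbps = 0.1         # assumed sustained 5G uplink from a moving car (gigabits/s)

# Fraction of the raw sensor stream the uplink could actually carry:
uplink_fraction = uplink_gbps / sensor_data_gbps
print(f"Uplink can carry {uplink_fraction:.0%} of the raw sensor stream")

# Everything else must be filtered or processed on the vehicle itself:
local_fraction = 1 - uplink_fraction
print(f"{local_fraction:.0%} must be handled locally")
```

Under these assumptions the network absorbs only a sliver of the raw stream, which is exactly why the filtering and processing burden lands on the in-vehicle silicon – and, as Pierce argues, on the IP inside it.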
“Again, it falls to the IP provider to recognize that he may have to change the fundamental requirements of the IP.
“For example, the environment within which that IP operates has to be able to comply with the architecture requirements for low power.
“That means that every IP provider has to be a low-power expert, needs to understand how to manage power on the eventual chip, and how to do it in the reconfigurable way that may be needed on that SoC.
“And that means we have to be able to build entire sub-systems – traditionally separate IP blocks – into pretty cogent systems that can have both memory bandwidth and low power.
“These are the challenges I see, not only for Sonics, but for the industry at large. We are moving Machine Learning straight out to the edge, flipping from a central command mentality to computing on the edge.”
Pierce recalled his history with MIPS: “I started with MIPS in the earliest days. We were developing reduced instruction-set computing, pursuing the efficiencies we thought we could implement in our computer architecture executing those instructions in the pipeline of the CPU.
“At the end of the day, we built a computer that was able to fetch instructions and data at a higher rate from the memory system, which translated directly into higher performance.
“We had twice the bandwidth to memory, and did an efficient job of retrieving those instructions through our pipeline, which gave us a 5x improvement on performance.
“Similarly, this is why GPUs are better today. They’re just faster than a general purpose architecture from Intel or ARM.
“But now there are architectures challenging GPUs for the Machine Learning market, essentially going back to those same themes from MIPS: How much bandwidth can you handle, and how much power do you burn to get there?
“In the old days, there was limitless power. But now it’s all about: ‘I want to run this huge data set and find the answer quicker than I can blink my eye, but I do not expect to use any additional power in doing it.’
“These demands are crazy, but also doable if as an IP provider I can contribute a block or a sub-system that’s been built to execute very efficiently.
“To do that at Sonics, what we talk about is not the most active moments when the chip is at its peak performance and can process things as fast as possible with the largest bandwidth and lowest latency. Instead, we talk about what happens when the chip is at the edge of the network.
“To do that, we look at what the entire chip is doing and work to maximize what can be done during the idle moments of the chip – keeping circuitry that is not needed for a particular application turned off. We believe it’s the idle moments that are the key.”
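The intuition behind Pierce’s focus on idle moments can be sketched with a simple time-weighted power model. This is a hypothetical illustration, not a description of Sonics’ actual technology; the wattages and duty cycle are invented for the example:

```python
# Minimal sketch of why idle moments dominate energy use on an SoC.
# We compare a block that stays powered while idle (leaking) against one
# that is gated off when not in use. All numbers are assumptions.

active_power_w = 2.0    # assumed power of a block while working (watts)
idle_power_w = 0.5      # assumed leakage of a powered-but-idle block (watts)
gated_power_w = 0.01    # assumed power of a gated-off block (watts)

duty_cycle = 0.10       # assume the block is busy only 10% of the time

def average_power(idle_w):
    """Time-weighted average power for one block."""
    return duty_cycle * active_power_w + (1 - duty_cycle) * idle_w

ungated = average_power(idle_power_w)   # idle circuitry left powered
gated = average_power(gated_power_w)    # idle circuitry turned off

print(f"ungated: {ungated:.3f} W, gated: {gated:.3f} W")
print(f"savings: {1 - gated / ungated:.0%}")
```

Because the block is idle 90% of the time in this model, most of its energy is spent doing nothing, so gating the idle circuitry cuts average power by far more than any optimization of the active 10% could.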
Pierce invoked a musical note: “I’m a musician, so I like quotes from genius musicians.
“Miles Davis said, ‘When it comes to music, it’s not the notes you do play. It’s the notes you don’t play.’
“And so it is for SoC architectures and edge computing for Machine Learning. It’s not what’s going on when the chip is active, but what’s going on when it’s not active. Utilizing that situation means not wasting energy.
“The IP vendor today needs to address the challenges of the huge amounts of data and low-power demands of edge computing, and must provide the IP solutions that will meet the needs of today’s emerging markets.
“It’s tough, but we believe it’s doable.”
Grant Pierce co-founded Sonics in September 1996. Prior to Sonics, he held technology industry executive roles at MicroUnity Systems Engineering, ParcPlace Systems, and was the corporate controller for MIPS Computer Systems. His professional history also includes positions at Convergent Technologies, ITT Qume, and Arthur Andersen. Pierce currently serves as Chair of the Board of Directors of the Electronic System Design Alliance.
Tags: ARM, ESD Alliance, Grant Pierce, Intel, Machine Learning, Miles Davis, MIPS, NVIDIA, RISC architecture, Sonics