IBM's neuromorphic initiative keeps heading TrueNorth
July 25th, 2019, by Roberto Frazzoli
Three weeks ago EDACafe took a look at Intel's neuromorphic computing initiative based on the Loihi chip, which was described in a paper in January 2018. Let's now move a few years back and a few miles south – from Intel Labs in Santa Clara to the IBM research center in Almaden Valley – for a quick overview of Big Blue's neuromorphic computing initiative based on a chip called TrueNorth, developed by a team led by Dharmendra S. Modha.

The DARPA grant and TrueNorth's ancestors

As recalled by Modha in his blog, in 2018 the TrueNorth project celebrated its tenth anniversary. The year 2008 marked a key milestone in the history of this initiative, when the IBM team – along with partners from several universities – was awarded a contract under the DARPA SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) program. Researchers from the IBM team then spent a couple of years studying the most recent findings from neuroscience – paying special attention to the brain of the macaque monkey – and running a series of simulations on IBM Blue Gene supercomputers. In 2010 the team started building two prototype silicon chips, dubbed San Francisco and Golden Gate; both had 256 neurons, roughly the number found in the nervous system of a worm. Spiking neural networks implemented on those chips proved capable of simple cognitive behaviors, such as playing Pong, recognizing handwritten digits, or driving a simple simulated car.
The TrueNorth chip

The next goal of the IBM project was scaling to much larger numbers of neurons and synapses. That was achieved, essentially, by building a two-dimensional array of an improved, shrunk-down version of the Golden Gate chip. Tiling 4,096 neurosynaptic cores interconnected via an intra-chip network, the team designed a device that integrates 1 million programmable spiking neurons and 256 million configurable synapses: the TrueNorth chip. Wafers arrived at the IBM research center in 2013, providing the researchers with perfectly working chips. Each chip consists of 5.4 billion transistors and occupies a 4.3 cm² die area in Samsung's 28nm process technology.

As described by the research team in an article published by Science magazine in August 2014, TrueNorth's architecture is based on a building block (a core) that consists of a "self-contained neural network with 256 input lines (axons) and 256 outputs (neurons) connected via 256-by-256 directed, programmable synaptic connections. (…) Each neuron on every core can target an axon on any other core. Therefore, axonal branching is implemented hierarchically in two stages: First, a single connection travels a long distance between cores (akin to an axonal trunk) and second, upon reaching its target axon, fans out into multiple connections that travel a short distance within a core (akin to an axonal arbor). Neuron dynamics is discretized into 1-ms time steps set by a global 1-kHz clock. (…) The architecture is scalable because cores on a chip, as well as chips themselves, can be tiled in two dimensions similar to the mammalian neocortex."

One of the major benefits of the neuromorphic architecture based on spiking neurons is, of course, high energy efficiency: according to the research team (as reported in the above-mentioned article), TrueNorth can deliver 46 billion SOPS per watt for a typical network (where SOPS stands for synaptic operations per second) and 400 billion SOPS per watt for networks with high spike rates and a high number of active synapses. In a paper published by PNAS in 2016, the team led by Modha demonstrated that neuromorphic computing can implement deep convolutional networks approaching state-of-the-art classification accuracy, can perform inference while preserving energy efficiency and high throughput, and can be specified and trained using backpropagation with the same ease of use as contemporary deep learning.
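To make the core description above a bit more concrete, here is a minimal Python sketch of a single neurosynaptic core modeled as a 256-by-256 binary crossbar feeding simple integrate-and-fire neurons, advanced in discrete 1-ms ticks. This is only an illustrative toy under assumed parameters – the ToyCore class name, the threshold and leak values, and the random connectivity are all inventions for readability – and not IBM's implementation; real TrueNorth neurons are considerably more configurable.

```python
import numpy as np

AXONS, NEURONS = 256, 256   # one core: 256 input lines (axons), 256 neurons

class ToyCore:
    """Toy discrete-time spiking core: a 256x256 binary crossbar feeding
    simple integrate-and-fire neurons, stepped at 1-ms ticks (1-kHz clock).
    Threshold and leak values are illustrative assumptions, not TrueNorth's."""

    def __init__(self, threshold=4, leak=1, density=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # synapses[a, n] == 1 means axon 'a' is wired to neuron 'n'
        self.synapses = (rng.random((AXONS, NEURONS)) < density).astype(np.int32)
        self.potential = np.zeros(NEURONS, dtype=np.int32)
        self.threshold, self.leak = threshold, leak

    def tick(self, axon_spikes):
        """Advance one 1-ms step; axon_spikes is a length-256 boolean vector.
        Returns the boolean vector of neurons that fired during this step."""
        self.potential += axon_spikes.astype(np.int32) @ self.synapses  # integrate
        self.potential = np.maximum(self.potential - self.leak, 0)      # leak
        fired = self.potential >= self.threshold                        # threshold
        self.potential[fired] = 0                                       # reset
        return fired

if __name__ == "__main__":
    core = ToyCore()
    rng = np.random.default_rng(1)
    # Drive the core with random input spikes for one simulated second.
    total = sum(int(core.tick(rng.random(AXONS) < 0.1).sum()) for _ in range(1000))
    print("output spikes in 1 s:", total)
```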
TrueNorth-based systems

As a group of guest authors recall in Modha's blog, the basic TrueNorth-based system built by IBM is a board with a single neuromorphic chip and a Xilinx Zynq-7000 SoC, called NS1e. The next step was putting sixteen of these boards in a single enclosure with an Ethernet backbone, thus creating a system called NS1e-16. Then IBM built a board with sixteen TrueNorth chips (the NS16e), exploiting their ability to tile seamlessly and communicate directly with each other. Both the NS1e-16 and the NS16e offer the same number of neurons and synapses, but the former consists essentially of sixteen separate 1-million-neuron systems working in parallel, while the latter is a single 16-million-neuron system, which allows the exploration of much larger neural networks. The next step was the NS16e-4, consisting of four NS16e systems operating in parallel to provide 64 million neurons and 16 billion synapses in a single enclosure. The first of these systems was ordered by the U.S. Air Force Research Lab. To build this large piece of hardware, IBM researchers came up with the idea of placing the four subsystems in a unique V-shaped arrangement in a drawer.

The ecosystem and the applications

To help users create their own applications, the IBM team has developed a whole ecosystem around TrueNorth. As described in a paper presented at the 2016 IEEE Supercomputing conference (which can be downloaded from Modha's blog), the ecosystem includes hardware, firmware, software, training algorithms and applications – now used by many university, corporate and government laboratories around the world. Part of the stack are the Eedn framework, for developing energy-efficient deep neuromorphic networks, and a NeuroSynaptic Core Placement (NSCP) algorithm that maps neurosynaptic cores efficiently onto the hardware substrate, minimizing the sum of all the paths from source neurons to destination neurons (a toy sketch of this placement problem is given at the end of this post). The above-mentioned Supercomputing conference paper also describes applications created by the U.S. Army Research Lab, the U.S. Air Force Research Lab and Lawrence Livermore National Lab. Other applications explored by the research community around the world include gesture recognition, emotion recognition, image classification and object tracking, robotics, always-on speech recognition, text image recognition, mobile ultrasound, etc. Gesture recognition has been used, for example, for a TV remote-control demo.

A large user community

The TrueNorth user community involves over 200 partners at more than 50 institutions around the world – mostly universities, government labs, and corporate research centers. Six years after first silicon, with the largest system offering 64 million neurons, IBM's neuromorphic initiative keeps heading TrueNorth – and we can expect further evolution along this path.
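As a side note on the core-placement step mentioned above: the problem is essentially to assign logical cores to physical positions so that heavily connected cores end up close together. The Python sketch below illustrates that idea with an assumed traffic matrix, a greedy heuristic and a Manhattan-distance cost; the toy_place function and all its parameters are made up for illustration, and the actual NSCP algorithm described in the Supercomputing paper is far more sophisticated.

```python
import numpy as np
from itertools import product

def toy_place(traffic: np.ndarray, grid_w: int, grid_h: int) -> dict:
    """Deliberately naive placement sketch (not IBM's NSCP algorithm).

    traffic[i, j] counts connections from core i to core j. Returns a
    dict core -> (x, y) on a grid_w x grid_h grid, greedily trying to
    keep heavily connected cores close in Manhattan distance."""
    n = traffic.shape[0]
    assert n <= grid_w * grid_h, "grid too small for the number of cores"
    sym = traffic + traffic.T                    # treat traffic as undirected
    free = set(product(range(grid_w), range(grid_h)))
    placement = {}
    placed = np.zeros(n, dtype=bool)

    # Seed: put the most heavily connected core in the middle of the grid.
    seed = int(np.argmax(sym.sum(axis=1)))
    placement[seed] = (grid_w // 2, grid_h // 2)
    free.remove(placement[seed])
    placed[seed] = True

    for _ in range(n - 1):
        # Next core: the unplaced one with the most traffic to placed cores.
        scores = sym[:, placed].sum(axis=1)
        scores[placed] = -1
        core = int(np.argmax(scores))

        # Best free slot: minimizes traffic-weighted Manhattan distance to
        # the cores already placed (a stand-in for total path length).
        def cost(slot):
            return sum(sym[core, c] * (abs(slot[0] - x) + abs(slot[1] - y))
                       for c, (x, y) in placement.items())
        best = min(free, key=cost)
        placement[core] = best
        free.remove(best)
        placed[core] = True
    return placement

# Example: 16 hypothetical cores with random traffic, placed on a 4x4 grid.
rng = np.random.default_rng(0)
print(toy_place(rng.integers(0, 10, size=(16, 16)), 4, 4))
```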