
Neuromorphic chip research around the world

 
September 27th, 2019 by Roberto Frazzoli

Carver Mead, the father of neuromorphic engineering, is a giant in the history of the semiconductor industry. He taught the world’s first VLSI design course; he designed the first gallium arsenide gate FET; he co-created the first silicon compiler; he played a key role in the fabrication of the first CMOS chip; he co-founded at least twenty companies; he is even credited with coining the expression “Moore’s law”. This makes his interest in mimicking biological brains – dating back to the late 1980s – all the more significant: if Carver Mead took neural inspiration so seriously, it must be a serious thing indeed.

Carver Mead. Image credit: Norman Seeff

Today, thirty years later, neural networks are booming; however, most commercial AI/ML applications are only loosely inspired by biological brains. Most of them do not use spiking neural networks, and most employ a training technique (backpropagation) that has no direct equivalent in nature. The approach pioneered by Carver Mead, more closely inspired by biological brains, is today embodied in neuromorphic research, still mostly carried out in labs and universities – but holding the potential for more practical applications. Over the past few months, EDACafe has provided overviews of three neuromorphic chips: Loihi (Intel), TrueNorth (IBM) and SpiNNaker (University of Manchester). This week we will take an extremely quick look at other neuromorphic devices developed in recent years by universities around the world. Technical details about most of these chips can be found in a paper (the main source for this article) co-authored by fifteen prominent researchers, including some of those mentioned below.

MINIFAT

MINIFAT (Mihalas–Niebur And Integrate-And-Fire Array Transceiver) was developed in the lab of Ralph Etienne-Cummings at Johns Hopkins University (Baltimore, MD). It consists of 2,040 neurons based on the Mihalas–Niebur model, which produce nine prominent spiking behaviors using an adaptive threshold. Each neuron can also operate as two independent integrate-and-fire (I&F) neurons, resulting in 2,040 M–N neurons and 4,080 leaky I&F neurons. The neural array was implemented in 0.5 μm CMOS technology with a 5 V nominal power supply voltage.
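As a rough illustration of the adaptive-threshold idea (a simplified Python sketch with invented parameters, not the actual Mihalas–Niebur circuits on the chip), the toy model below shows how raising the firing threshold after each spike produces spike-frequency adaptation:

```python
# Minimal sketch: a leaky integrate-and-fire neuron with an adaptive threshold,
# the basic ingredient the Mihalas-Niebur model uses to produce different
# spiking behaviors. Parameter values are illustrative, not MINIFAT's.
import numpy as np

def adaptive_lif(i_in, dt=1e-4, tau_v=20e-3, tau_th=80e-3,
                 v_rest=0.0, th_rest=1.0, th_jump=0.5):
    v, th = v_rest, th_rest
    spikes = []
    for t, i in enumerate(i_in):
        v += dt / tau_v * (v_rest - v + i)   # leaky membrane integration
        th += dt / tau_th * (th_rest - th)   # threshold decays back to rest
        if v >= th:                          # spike when V crosses the moving threshold
            spikes.append(t * dt)
            v = v_rest                       # reset the membrane
            th += th_jump                    # raise the threshold -> adaptation
    return spikes

# A constant input current yields a decelerating spike train (adaptation):
spikes = adaptive_lif(np.full(5000, 2.0))
print(len(spikes), "spikes; first ISIs lengthen:", np.round(np.diff(spikes[:5]), 4))
```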

Ralph Etienne-Cummings. Image credit: Johns Hopkins University

HiAER-IFAT

HiAER-IFAT (where HiAER stands for Hierarchical Address-Event Routing, and IFAT for Integrate-and-Fire Array Transceiver, as above) was developed in the lab of Gert Cauwenberghs at the University of California San Diego. Hierarchical address-event routing offers scalable long-range neural event communication tailored to locally dense and globally sparse synaptic connectivity (as in grey matter vs. white matter), while IFAT CMOS neural arrays with up to 65k neurons integrated on a single chip offer a low-power implementation of continuous-time analog membrane dynamics at energy levels down to 22 pJ/spike.
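The hierarchical routing concept can be caricatured in a few lines of Python: an address event climbs the router tree only as far as it needs to, so dense local traffic stays cheap and only the sparse long-range traffic uses the upper levels. This is a conceptual toy, not the actual HiAER protocol:

```python
# Toy illustration of hierarchical address-event routing over a router tree.
class Router:
    def __init__(self, addr_range, parent=None):
        self.addr_range = addr_range   # (low, high) neuron addresses served by this subtree
        self.parent = parent
        self.children = []

    def route(self, dest, hops=0):
        low, high = self.addr_range
        if low <= dest < high:                 # destination inside this subtree: go down
            for child in self.children:
                lo, hi = child.addr_range
                if lo <= dest < hi:
                    return child.route(dest, hops + 1)
            return hops                        # leaf reached: deliver the event
        return self.parent.route(dest, hops + 1)   # otherwise climb one level up

# Two leaf arrays of 4 neurons each under one root:
root = Router((0, 8))
left, right = Router((0, 4), root), Router((4, 8), root)
root.children = [left, right]
print(left.route(1))   # local delivery, few hops
print(left.route(6))   # long-range delivery via the root, more hops
```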

Gert Cauwenberghs. Image credit: University of California San Diego

DeepSouth

DeepSouth is actually a cortex emulator designed for simulating large spiking neural networks on FPGAs. It was developed in the lab of André van Schaik at the MARCS Institute, Western Sydney University, Australia. Its fundamental computing unit is called a minicolumn (after the ‘columns’ of the biological cortex) and consists of 100 neurons. The architecture stores all the required parameters and connections in on-chip memory, and a hierarchical communication scheme allows one neuron to have a fan-out of up to 200k neurons. DeepSouth can simulate up to 2.6 billion leaky integrate-and-fire (LIF) neurons in real time; when running five times slower than real time, it can simulate up to 12.8 billion LIF neurons.
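For readers who want a concrete picture of what gets time-multiplexed on the FPGA, here is a minimal and purely illustrative Python sketch of a leaky integrate-and-fire update for one 100-neuron minicolumn; the parameter values are invented, not DeepSouth’s:

```python
# One vectorized LIF update step for a 100-neuron "minicolumn";
# the emulator repeats this kind of step for many minicolumns per time step.
import numpy as np

def minicolumn_step(v, i_syn, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    v = v + dt / tau * (-v + i_syn)   # leaky integration of the synaptic input
    fired = v >= v_th                 # boolean spike vector for the 100 neurons
    v[fired] = v_reset                # reset the neurons that fired
    return v, fired

v = np.zeros(100)                     # membrane potentials of one minicolumn
for _ in range(50):                   # 50 ms of simulated time at dt = 1 ms
    v, fired = minicolumn_step(v, i_syn=np.random.uniform(0.0, 2.5, size=100))
print(int(fired.sum()), "neurons fired on the last step")
```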

André van Schaik. Image credit: Western Sydney University

BrainScaleS

BrainScaleS (Brain-inspired multiscale computation in neuromorphic hybrid systems) was developed at the University of Heidelberg in collaboration with the Technical University of Dresden and the Fraunhofer IZM in Berlin. It was a research project funded by the European Union and headed by Karlheinz Meier, who passed away in 2018. The BrainScaleS neuromorphic system is based on the direct emulation of model equations describing the temporal evolution of neuron and synapse variables, on an accelerated timescale compared to biological systems. The whole neuron, including all its synapses, is implemented as a continuous-time analog circuit. BrainScaleS uses wafer-scale integration: connections between individual dies, and from wafer to PCB, are obtained through post-processing of the manufactured wafer, with a multi-layer wafer-scale metallization scheme. The development of BrainScaleS now continues within the Human Brain Project (HBP).

Karlheinz Meier (1955-2018). Image credit: University of Heidelberg

Dynap-SEL

Dynap-SEL (Dynamic Neuromorphic Asynchronous Processor – Scalable and Learning) was developed in the lab of Giacomo Indiveri at the University of Zurich, Switzerland, with support from the European Union through the NeuRAM3 project. It is a mixed-signal processor comprising four cores, each with 16 × 16 analog neurons and 64 4-bit programmable synapses per neuron, plus a fifth core with plastic synapses and on-chip learning circuits. All synaptic inputs are triggered by incoming address events, which are routed by asynchronous Address-Event Representation (AER) digital router circuits. Neurons integrate the synaptic input currents and produce output spikes, which are translated into address events and routed to the desired destination. Dynap-SEL is being offered to the market by aiCTX, a spinoff of the University of Zurich, along with two other versions of the chip called Dynap-SE2 and Dynap-CNN. Also developed in the lab of Giacomo Indiveri is the ROLLS (Reconfigurable On-line Learning Spiking) neuromorphic processor, which comprises 128K analog synapses and 256 neuron circuits with spike-based plasticity mechanisms.
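The AER mechanism described above can be caricatured in software: a spike becomes a (timestamp, source address) event, a routing table fans it out to destination synapses, and each destination neuron accumulates the resulting input. The addresses and weights below are invented for illustration and do not reflect Dynap-SEL’s actual configuration:

```python
# A software caricature of the address-event representation (AER) flow.
from collections import defaultdict, deque

routing_table = {                         # source address -> [(dest address, weight), ...]
    ("core0", 3): [(("core1", 7), 0.8), (("core4", 12), -0.3)],
}
membrane = defaultdict(float)             # destination neuron state (toy accumulator)

events = deque([(0.001, ("core0", 3))])   # one spike event: (time in s, source address)
while events:
    t, src = events.popleft()
    for dest, weight in routing_table.get(src, []):
        membrane[dest] += weight          # the event triggers a synaptic input at the destination

print(dict(membrane))
```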

Giacomo Indiveri. Image credit: University of Zurich

2DIFWTA

The 2DIFWTA (2D Integrate-and-Fire Winner-Take-All) chip was developed in the lab of Elisabetta Chicca at CITEC (Cluster of Excellence in Cognitive Interaction Technology), a research center of Bielefeld University, Germany. In this chip, neurons have both excitatory and inhibitory connections; the group of neurons with the highest response suppresses all the other neurons and wins the competition. The 2DIFWTA chip uses a standard 0.35 μm four-metal CMOS technology and comprises a two-dimensional array of 32 × 64 (2,048) integrate-and-fire neurons; the neurons and synapses are subthreshold analog circuits. The device was explicitly designed for the exploration of cooperative-competitive network dynamics: recurrent connections are internally hard-wired and do not need to be routed through the AER (Address-Event Representation) bus.
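A rate-based toy model (deliberately ignoring the chip’s spiking, subthreshold analog dynamics) is enough to show the competition at work: self-excitation plus shared inhibition lets the most strongly driven group suppress the others:

```python
# Rate-based winner-take-all toy: the most strongly driven unit survives,
# the others are pushed to zero by the shared inhibition.
import numpy as np

def wta(inputs, steps=400, dt=0.05, w_exc=1.2, w_inh=1.5):
    r = np.zeros_like(inputs, dtype=float)          # firing rates of the competing groups
    for _ in range(steps):
        inhibition = w_inh * r.sum()                # global inhibition from all groups
        drive = inputs + w_exc * r - inhibition     # input + self-excitation - shared inhibition
        r += dt * (-r + np.maximum(drive, 0.0))     # rectified rate dynamics
    return r

print(wta(np.array([1.0, 1.1, 0.9])))               # the group with the largest input wins
```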

Elisabetta Chicca. Image credit: Bielefeld University

PARCA

PARCA (Parallel Architecture with Resistive Crosspoint Array) was developed in the lab of Yu Cao at Arizona State University. PARCA performs massively parallel read-and-write operations using resistive-RAM (RRAM) cells. Recently, a 64×64 neurosynaptic core with an RRAM synaptic array and CMOS neuron circuits at the periphery was designed and fabricated, with the RRAM monolithically integrated in a 130 nm CMOS process. Part of the research was devoted to mitigating the non-ideal effects of resistive synaptic devices: limited weight precision, imperfect weight update, finite on/off ratio, device variation and IR drop.
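The principle of a crossbar read, together with some of the non-idealities mentioned above, can be sketched in a few lines of Python; the numbers below (on/off ratio, 4-bit precision, variation) are assumptions for illustration, not PARCA’s measured device data:

```python
# Back-of-the-envelope crossbar model: each weight is stored as a conductance,
# the input is applied as voltages, and the column currents implement a
# parallel vector-matrix multiply.
import numpy as np

rng = np.random.default_rng(0)
G_ON, G_OFF = 100e-6, 1e-6            # siemens; on/off ratio of 100 (assumed)
LEVELS = 16                            # assumed 4-bit weight precision

def program_crossbar(weights, variation=0.05):
    w = (weights - weights.min()) / (weights.max() - weights.min())   # normalize to [0, 1]
    w = np.round(w * (LEVELS - 1)) / (LEVELS - 1)                      # quantize to 16 levels
    g = G_OFF + w * (G_ON - G_OFF)                                     # map weight -> conductance
    return g * rng.normal(1.0, variation, g.shape)                     # device-to-device variation

def crossbar_read(g, v_in):
    return v_in @ g                    # column currents = parallel multiply-accumulate

g = program_crossbar(rng.normal(size=(64, 64)))                # a 64 x 64 synaptic array
i_out = crossbar_read(g, v_in=rng.uniform(0, 0.2, size=64))    # read voltages in volts
print(i_out[:4], "A")
```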

Yu Cao. Image credit: Arizona State University

ODIN

ODIN was developed by a team from the ICTEAM Institute at the Université catholique de Louvain (Louvain-la-Neuve, Belgium). It is a 0.086 mm², 64k-synapse, 256-neuron online-learning digital spiking neuromorphic processor implemented in a 28 nm FDSOI CMOS process, achieving a minimum energy per synaptic operation (SOP) of 12.7 pJ. Details about ODIN can be found in this paper.

Braindrop

Braindrop was developed by a group of researchers that includes Chris Eliasmith (head of the Computational Neuroscience Research Group at the University of Waterloo, Canada) and Kwabena Boahen (head of the Brains in Silicon Lab at Stanford University). It is designed to be programmed at a high level of abstraction: computations are specified as coupled nonlinear dynamical systems and synthesized to the hardware by an automated procedure, which also compensates for the mismatched and temperature-sensitive responses of the chip’s analog circuits. Fabricated in a 28 nm FDSOI process, Braindrop integrates 4,096 neurons in 0.65 mm². Details can be found in this paper.
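Braindrop’s programming model follows the Neural Engineering Framework, and the flavor of its synthesis step can be conveyed with a toy example: the mismatched responses of a neuron population are treated as a basis, and a least-squares fit finds decoding weights that compute a target function despite the mismatch. Everything below (tuning-curve shapes, population size) is illustrative, not Braindrop’s actual calibration flow:

```python
# Toy NEF-style synthesis: fit decoding weights over mismatched tuning curves.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)                       # the represented variable

# Mismatched rectified-linear "tuning curves": random gains, biases and signs,
# standing in for the variability of the analog soma circuits.
gains = rng.uniform(0.5, 2.0, 64)
biases = rng.uniform(-1.0, 1.0, 64)
signs = rng.choice([-1.0, 1.0], 64)
activity = np.maximum(gains[:, None] * signs[:, None] * x + biases[:, None], 0.0)  # (64, 200)

target = x ** 2                                    # the function the population should compute
decoders, *_ = np.linalg.lstsq(activity.T, target, rcond=None)   # least-squares synthesis step
print(np.max(np.abs(activity.T @ decoders - target)))            # small residual error
```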

Chris Eliasmith. Image credit: University of Waterloo, Canada

Kwabena Boahen. Image credit: Stanford University

Other developments

This quick overview is not exhaustive; for example, results of the NeuRAM3 project include ReASOn (Resistive Array of Synapses with ONline learning), a test-chip featuring 2048 memristive devices that implement 1024 memory cells connected to two neurons via a programmable routing fabric. And transistor-channel models of neural systems have been developed in the lab of Jennifer Hasler at the Georgia Institute of Technology. Doubtless, more neuromorphic devices are being developed by other research institutions around the world, and more startups – such as BrainChip with its Akida device – will gain market traction in practical applications. EDACafe will keep an eye on those developments.
