 EDACafe Editorial

Archive for July, 2019

IBM’s neuromorphic initiative keeps heading TrueNorth

Thursday, July 25th, 2019

Three weeks ago EDACafe took a look at Intel’s neuromorphic computing initiative based on the Loihi chip, which was described in a paper in January 2018. Let’s now move a few years back and a few miles south – from Intel Labs in Santa Clara to IBM’s Almaden Research Center – for a quick overview of Big Blue’s neuromorphic computing initiative based on a chip called TrueNorth, developed by a team led by Dharmendra S. Modha.

The DARPA grant and TrueNorth’s ancestors

As recalled by Modha in his blog, in 2018 the TrueNorth project celebrated its tenth anniversary. The year 2008 marked a key milestone in the history of this initiative, when the IBM team – along with partners from several universities – was awarded a contract under the DARPA SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) program. Researchers from the IBM team then spent a couple of years studying the most recent findings from neuroscience – paying special attention to the brain of the macaque monkey – and running a series of simulations on IBM Blue Gene supercomputers. In 2010 the team started building two prototype silicon chips, dubbed San Francisco and Golden Gate; both had 256 neurons, about the number found in the nervous system of a worm. Spiking neural networks implemented on those chips proved capable of simple cognitive behavior, such as playing Pong, recognizing handwritten digits, or driving a simple simulated car.


A more flexible Arm; memristors; record-breaking GPUs; motors into the wheels; and more news from industry and academia

Thursday, July 18th, 2019

Experimenting with Arm technologies before committing to the manufacturing license fee: this is the new opportunity now offered by Arm. The company has announced it is expanding the ways existing and new partners can access and license its technology for semiconductor design. Called ‘Arm Flexible Access’, the new engagement model enables SoC design teams to initiate projects before they license IP, and to pay only for what they use at production. This way, design teams will get more freedom to experiment and evaluate different options. As the company explained in a press release, partners typically license individual components from Arm and pay a license fee upfront before they can access the technology. With ‘Arm Flexible Access’ they pay “a modest fee” for immediate access to a broad portfolio of technology, then pay a license fee only when they commit to manufacturing – followed by royalties for each unit shipped. The portfolio made available through this new engagement model includes all the essential IP and tools needed for an SoC design: the majority of Arm-based processors within the Arm Cortex-A, -R and -M families, as well as Arm TrustZone and CryptoCell security IP, select Mali GPUs, and system IP, alongside tools and models for SoC design and early software development. Access to Arm’s global support and training services is also included.

Memristor advancements

Researchers at the University of Michigan have built the first programmable memristor processor, or – as it is described in their paper published in Nature Electronics – “a fully integrated reprogrammable memristor–CMOS system for efficient multiply–accumulate operations.” Besides the memristor array itself, the chip integrates all the other elements needed to program and run it. Those components include a conventional digital processor and communication channels, as well as digital/analog converters to interface the analog memristor array with the rest of the chip. As reportedly claimed by the researchers, memristors promise a 10-100 times improvement – in terms of performance and power – over GPUs in machine learning applications, thanks to their in-memory processing capabilities.
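The in-memory multiply–accumulate idea can be illustrated with a toy simulation. In an ideal crossbar, each weight is stored as a memristor conductance; applying input voltages makes each cell contribute a current by Ohm’s law, and Kirchhoff’s current law sums the currents along each output line – so a whole matrix-vector product happens in one analog step. The sketch below (names and values are illustrative, not from the Michigan chip) models that behavior:

```python
import numpy as np

def crossbar_mac(conductances, voltages):
    """Ideal memristor crossbar: each cell passes current G[i,j] * V[j]
    (Ohm's law); currents on each row wire sum (Kirchhoff's law),
    so the output current vector is simply I = G @ V."""
    return conductances @ voltages

# A 3x4 weight matrix encoded as (scaled) conductances
G = np.array([[0.1, 0.2, 0.0, 0.3],
              [0.0, 0.1, 0.4, 0.1],
              [0.2, 0.0, 0.1, 0.2]])
# Input activations applied as voltages on the column wires
V = np.array([1.0, 0.5, 1.0, 0.0])

I = crossbar_mac(G, V)
print(I)  # three dot products, computed "in memory" in a single step
```

In digital hardware each of those dot products costs one multiply and one add per weight; in the crossbar the physics performs them all simultaneously, which is where the claimed performance and power advantage comes from.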

The memristor chip from University of Michigan. Image credit: Robert Coelius, Michigan Engineering


Cheating neural networks; Intel advanced packaging technologies; edge AI; faster verification

Friday, July 12th, 2019

What times we live in. Even neural networks cheat. Some have learnt to recognize a horse by spotting a source tag belonging to an archive of horse pictures; so if you paste this tag onto a picture of a Ferrari, the network will classify it as a horse. Some, when playing the Atari Pinball game, have learnt to rack up high scores by repeatedly nudging the table, taking advantage of the high threshold of the tilt mechanism. Some have learnt to recognize airplanes just from the smooth blue landscape surrounding them in most pictures. These examples are among the results of work carried out by a group of researchers from the Fraunhofer Heinrich Hertz Institute and Technische Universität Berlin (Germany).

Researchers used the layer-wise relevance propagation (LRP) method with the help of a semiautomatic tool called SpRAy (spectral relevance analysis), thus producing “heatmaps” where colors represent how important each pixel was for the neural network to make its decision (ranging from green, low relevance, to red, high relevance). As the researchers observe, “The above cases exemplify our point, that even though test set error may be very low (or game scores very high), the reason for it may be due to what humans would consider as cheating rather than valid problem-solving behavior. It may not correspond to true performance when the latter is measured in a real-world environment, or when other criteria (e.g. social norms which penalize such behavior) are incorporated into the evaluation metric.”
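To give a flavor of how LRP works, here is a minimal sketch of its widely used epsilon rule for a single dense layer (the function name, weights, and constants are illustrative, not taken from the SpRAy tool). The rule redistributes the relevance assigned to each output unit back onto the inputs in proportion to each input’s contribution to that unit’s pre-activation, so that total relevance is approximately conserved layer by layer:

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """LRP epsilon rule for one dense layer.
    a: input activations, shape (n,); W: weights, shape (n, m);
    R_out: relevance of the layer's outputs, shape (m,).
    Returns the relevance attributed to each input, shape (n,)."""
    z = a @ W                          # pre-activations of the m outputs
    s = R_out / (z + eps * np.sign(z)) # relevance per unit of pre-activation
    return a * (W @ s)                 # each input's share of the relevance

a = np.array([1.0, 2.0, 0.5])          # toy activations
W = np.array([[ 0.5, -0.2],
              [ 0.1,  0.4],
              [-0.3,  0.6]])
R_out = np.array([1.0, 1.0])           # relevance arriving from above

R_in = lrp_epsilon(a, W, R_out)
print(R_in)  # larger values = inputs that mattered more for the decision
```

Applying this rule backwards through every layer of a classifier, down to the input image, yields exactly the kind of per-pixel heatmap described above.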


Intel’s neuromorphic computing initiative

Friday, July 5th, 2019

Dubbed Pohoiki Springs, the new system based on Intel Labs’ “Loihi” neuromorphic processor is expected to be available to the research community soon. It will contain up to 768 Loihi chips, totaling 100 million neurons. Introduced in November 2017, Loihi is Intel’s fifth and most complex chip in a family of different neuromorphic devices, and its architecture is optimized for Spiking Neural Networks (SNNs). Compared to ‘regular’ Artificial Neural Networks, SNNs can be considered more ‘similar’ to biological neural networks in that they incorporate time as an explicit dependency in computations, and their neurons fire (produce a spike) only when certain parameters reach a specific threshold. In the outgoing ‘spike train’, information is represented by the frequency of spikes or the timing between them. SNNs promise great benefits over ‘regular’ ANNs in terms of performance and power consumption. Among the reasons currently preventing a wider adoption of SNNs in practical AI applications is that conventional processing architectures – such as CPUs and GPUs – are not ideally suited to implement these networks. Hence the need for SNN-specialized architectures such as the Loihi chip.
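The threshold-and-spike behavior described above can be sketched with a minimal leaky integrate-and-fire neuron, the textbook building block of spiking networks (the function name and constants below are illustrative and do not reflect Loihi’s actual neuron model):

```python
def lif_spike_train(inputs, threshold=1.0, leak=0.9):
    """Minimal leaky integrate-and-fire neuron.
    Each timestep the membrane potential decays by `leak`, integrates
    the incoming value, and emits a spike (1) when it crosses
    `threshold`, after which the potential resets to zero."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x  # leaky integration over time
        if potential >= threshold:
            spikes.append(1)              # fire a spike
            potential = 0.0               # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold drive produces a regular spike train:
# the information is carried by the spike rate and timing,
# not by a continuous activation value as in a 'regular' ANN.
print(lif_spike_train([0.4] * 10))
```

Note how time is an explicit part of the computation: the neuron’s output depends on the history of its inputs, which is precisely what makes SNNs awkward to run on CPUs and GPUs built for dense, clocked matrix arithmetic.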

The Loihi chip. Image credit: Intel


© 2022 Internet Business Systems, Inc.