 EDACafe Editorial

Archive for August, 2019

The one million ARM-core neuromorphic machine

Thursday, August 29th, 2019

Favorable winds keep blowing for SpiNNaker, a research project led by the University of Manchester (UK), which in November 2018 reached its ambitious goal: a neuromorphic machine containing one million ARM cores, capable of simulating one billion simple neurons in biological real time. This result came after several years of development, as the SpiNNaker project – funded by the European Union – formally began in 2005, with the first working silicon delivered in 2011. The second phase of the initiative, called SpiNNaker2, has already been planned, aiming for a 10x boost in performance and efficiency over the first generation. Let’s take a closer look at SpiNNaker (a contraction of Spiking Neural Network Architecture), as it is described on the project’s website and in two papers – here and here – authored by the research group, where many more details can be found.
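Taking those figures at face value, one billion neurons spread across one million cores works out to roughly 1,000 simple neurons per ARM core, which gives a sense of the granularity of the machine.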

The one million ARM-core SpiNNaker machine. Image credit: University of Manchester

A novel approach to parallel computing

Led by professor Steve Furber of the University of Manchester’s Advanced Processor Technologies (APT) Research Group, the SpiNNaker project is based on a novel approach to parallel computing, placing special emphasis on low power consumption and fault tolerance. As the researchers explained, “SpiNNaker breaks the rules followed by traditional supercomputers that rely on deterministic, repeatable communications and reliable computation. SpiNNaker nodes communicate using simple messages (spikes) that are inherently unreliable. (…) Three of the principal axioms of parallel machine design – memory coherence, synchronicity and determinism – have been discarded in the design without, surprisingly, compromising the ability to perform meaningful computations.”
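To make the event-driven idea more concrete, here is a minimal Python sketch (purely illustrative, and not SpiNNaker code or its software stack) in which each neuron holds only local state and reacts to small spike messages that may occasionally be lost, with no shared memory or global synchronization:

import random

class LeakyNeuron:
    """A toy leaky integrate-and-fire neuron held entirely in core-local memory."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def receive_spike(self, weight):
        """Process one incoming spike message; return True if this neuron fires in turn."""
        self.potential = self.potential * self.leak + weight
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True
        return False

# Spikes are tiny messages that may be dropped, echoing the "inherently unreliable"
# communication described above; nothing depends on every message arriving
# or on a global clock.
random.seed(1)
neurons = [LeakyNeuron() for _ in range(4)]
for step in range(10):
    for i, neuron in enumerate(neurons):
        if random.random() < 0.9:  # roughly 10% of spike messages are silently lost
            if neuron.receive_spike(weight=0.4):
                print(f"step {step}: neuron {i} fired")

The point of the sketch is the programming model: useful collective behavior can still emerge even though individual messages are unreliable and no global state is maintained.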


A wafer-scale AI engine; the largest capacity FPGA; AMD’s new server processors; and more news from the industry

Friday, August 23rd, 2019

Processing hardware took center stage in August, with a number of announcements regarding AI chips, FPGAs, server processors, and open-source ISAs. As for the electronics industry in general, acquisition deals – one of them still in the making – attracted attention too.

The 1.2-trillion-transistor chip

With more than 1.2 trillion transistors and an area of 46,225 square millimeters (8.5 x 8.5 inches), the Wafer Scale Engine (WSE) designed by Cerebras (Los Altos, CA) and manufactured by TSMC in its 16nm process technology is definitely an outstanding engineering achievement. Introduced at the Hot Chips conference – which took place at Stanford University from August 18 to 20 – the WSE is an Artificial Intelligence processor, aiming to compete with the GPUs commonly used for these applications. As Cerebras explained in a press release, it offers “400,000 AI-optimized, no-cache, no-overhead, compute cores and 18 gigabytes of local, distributed, superfast SRAM memory as the one and only level of the memory hierarchy. Memory bandwidth is 9 petabytes per second. The cores are linked together with a fine-grained, all-hardware, on-chip mesh-connected communication network that delivers an aggregate bandwidth of 100 petabits per second.”

According to Cerebras, chip size is profoundly important for reducing training time and power consumption in AI. A large silicon area provides more cores to do calculations, more memory closer to the cores, and the possibility of keeping all communication on-silicon. One of the features specifically optimized for AI applications is the ‘sparsity harvesting’ technology invented by Cerebras to boost performance on workloads that contain zeros. As the company explained, “Zeros are prevalent in deep learning calculations: often, the majority of the elements in the vectors and matrices that are to be multiplied together are zero. And yet multiplying by zero is a waste of silicon, power, and time (…). Because graphics processing units and tensor processing units are dense execution engines—engines designed to never encounter a zero—they multiply every element even when it is zero. When 50 to 98 percent of the data are zeros, as is often the case in deep learning, most of the multiplications are wasted.”

Besides its significance from the AI point of view, the mega-chip is obviously interesting from a nanoelectronics technology standpoint: “The Cerebras WSE contains fundamental innovations that advance the state-of-the-art by solving decades-old technical challenges that limited chip size—such as cross-reticle connectivity, yield, power delivery, and packaging,” said Andrew Feldman, founder and CEO of Cerebras Systems.

Cerebras' Wafer Scale Engine. Image credit: Cerebras
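As a rough illustration of the sparsity reasoning quoted above (plain Python, not Cerebras hardware or software), compare a dense dot product that multiplies every element with a sparsity-aware loop that simply skips zero activations:

import random

def dense_dot(weights, activations):
    """Dense engine: performs every multiplication, zeros included."""
    total, multiplies = 0.0, 0
    for w, a in zip(weights, activations):
        total += w * a
        multiplies += 1
    return total, multiplies

def sparse_dot(weights, activations):
    """Sparsity-aware loop: skips multiplications whose activation is zero."""
    total, multiplies = 0.0, 0
    for w, a in zip(weights, activations):
        if a != 0.0:
            total += w * a
            multiplies += 1
    return total, multiplies

# Vectors with about 80% zeros, within the 50-98% range quoted above.
random.seed(1)
activations = [random.random() if random.random() < 0.2 else 0.0 for _ in range(1000)]
weights = [random.random() for _ in range(1000)]

_, dense_ops = dense_dot(weights, activations)
_, sparse_ops = sparse_dot(weights, activations)
print(f"dense multiplies: {dense_ops}, sparsity-aware multiplies: {sparse_ops}")

The WSE performs this kind of filtering in hardware, of course; the sketch only illustrates how much arithmetic a zero-skipping scheme can avoid when most activations are zero.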


Apple-Intel deal; Alibaba’s RISC-V chip; semiconductor forecasts; EDA updates; 2D transistors

Monday, August 5th, 2019

Two major announcements attracted general attention over the past couple of weeks, and many more interesting stories poured in from both industry and academia. Let’s take a look at some of them.

Apple to acquire Intel’s smartphone modem business

As widely reported by many media outlets, on July 25 Intel and Apple announced an agreement for Apple to acquire the majority of Intel’s smartphone modem business. Approximately 2,200 Intel employees will join Apple, along with intellectual property, equipment and leases. The transaction, valued at $1 billion, is expected to close in the fourth quarter of 2019, subject to regulatory approvals and other customary conditions. Consistent with Apple’s strategy, the deal will allow the Cupertino tech giant to gradually become independent of external suppliers for this key technology; Intel, for its part, declared last April its intention to exit the 5G smartphone modem business, citing “no clear path to profitability.”

Alibaba introducing its own RISC-V-based processor

Dubbed Xuantie 910, the processor recently announced by Alibaba Group’s chip subsidiary, Pingtouge Semiconductor, is based on sixteen RISC-V cores and manufactured on a 12nm process. As reported by EETimes, Alibaba claims this to be the most powerful RISC-V-based processor to date, achieving 7.1 CoreMark/MHz at a 2.5 GHz clock. New features enabling this performance level include a 12-stage pipeline and the addition of fifty instructions. With this move, Alibaba joins the other so-called “hyperscalers” (Internet giants) that have already developed their own chips. Many Chinese media outlets, however, have interpreted this announcement in the context of current trade tensions, with China striving to become independent of US technologies.
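For context, if the per-MHz figure holds at the quoted clock, 7.1 CoreMark/MHz at 2.5 GHz works out to roughly 7.1 × 2,500 ≈ 17,750 CoreMark, presumably per core.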




