Makers taste a new Raspberry Pi; ML inference gets benchmarks; and more weekly news from industry and academia

June 27th, 2019 by Roberto Frazzoli
Hungry makers have started tasting the new Raspberry Pi 4 earlier than expected, well before the scheduled 2020 release date. Also making news this week: inference benchmarks to advance the growing machine learning industry; Apple reportedly buying an autonomous vehicle startup; a new candidate for the role of “universal memory”; and a 2.5 µW/MHz MCU promising to make batteries a negligible item.

Raspberry Pi 4 is here

Raspberry Pi 4 is on sale from June 24, starting at $35. According to Eben Upton, Chief Executive of Raspberry Pi Trading, “this is a comprehensive upgrade, touching almost every element of the platform. For the first time we provide a PC-like level of performance for most users, while retaining the interfacing capabilities and hackability of the classic Raspberry Pi line.” Raspberry Pi 4 is built around the Broadcom BCM2711 chip, a complete re-implementation of BCM283X on a 28nm process. The power savings delivered by the smaller process geometry have allowed Broadcom to replace the Cortex-A53 with the more powerful 1.5GHz quad-core 64-bit Cortex-A72, yielding performance increases over Raspberry Pi 3B+ of between two and four times, depending on the benchmark. The process change has also made it possible to overhaul many other elements of the Raspberry design. The new board adopts a more modern memory technology, LPDDR4, tripling available bandwidth; the entire display pipeline has been upgraded, including video decode, 3D graphics and display output, to support 4Kp60 (or two monitors at 4Kp30); and the non-multimedia I/O limitations of previous Raspberries have been addressed by adding on-board Gigabit Ethernet and PCI Express controllers. Other onboard features include dual-band 802.11ac wireless networking, Bluetooth 5.0, two USB 3.0 ports and two USB 2.0 ports. To support Raspberry Pi 4, the organization is shipping a new operating system based on the forthcoming Debian 10 Buster release.

A standard benchmark for inference performance

With the growing range of AI solutions and accelerators, neural network users need standard benchmarks to compare performance, both in the training phase and in the inference phase. Developing these benchmarks is the mission of MLPerf (where ML obviously stands for Machine Learning), a consortium involving more than 40 leading companies and university researchers – including ARM, Cadence, Cisco, Cray, Facebook, Google, Harvard University, HP, Intel, Microsoft, NVIDIA, Stanford University, Synopsys, University of Toronto and many more. Last year MLPerf launched a benchmark for measuring training performance, based on the time it takes to train deep neural networks to perform tasks including recognizing objects, translating languages, and playing the ancient game of Go. A few days ago the consortium introduced MLPerf Inference v0.5, the first industry-standard machine learning benchmark suite for measuring inference performance and power efficiency. The benchmark suite covers a wide range of applications including autonomous driving and natural language processing, on a variety of hardware platforms – including smartphones, PCs, edge servers, and cloud computing in data centers. MLPerf Inference v0.5 consists of five benchmarks, focused on three common ML tasks: image classification on the ImageNet dataset, object detection on the MS-COCO dataset, and machine translation using the WMT English-German benchmark.
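To give a concrete feel for what such a suite measures, here is a minimal sketch of an inference latency and throughput measurement in Python. It is not the official MLPerf harness – the consortium ships its own load generator and scenario rules – and the model and input below (a stand-in callable and an ImageNet-sized dummy tensor) are placeholder assumptions.

```python
# Minimal sketch of the kind of measurement an inference benchmark performs:
# time repeated single-sample queries, then report latency percentiles and
# throughput. NOT the official MLPerf harness; model and input are placeholders.
import time
import statistics

import numpy as np

def run_inference(model, sample):
    # Placeholder: substitute the framework call of your choice,
    # e.g. an ONNX Runtime session or a TFLite interpreter invocation.
    return model(sample)

def benchmark(model, sample, queries=1000, warmup=50):
    for _ in range(warmup):                # warm caches before timing
        run_inference(model, sample)
    latencies = []
    for _ in range(queries):
        start = time.perf_counter()
        run_inference(model, sample)
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "mean_ms": 1000 * statistics.mean(latencies),
        "p90_ms": 1000 * latencies[int(0.90 * len(latencies))],
        "p99_ms": 1000 * latencies[int(0.99 * len(latencies))],
        "throughput_qps": len(latencies) / sum(latencies),
    }

if __name__ == "__main__":
    dummy_model = lambda x: x.sum()            # stand-in for a real network
    dummy_input = np.zeros((1, 3, 224, 224))   # typical ImageNet-sized tensor
    print(benchmark(dummy_model, dummy_input))
```

Reporting tail latencies (p90/p99) rather than only the mean matters for inference, since real deployments must meet latency targets on nearly every query, not just on average.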
Apple buys Drive.ai – plus, more autonomous vehicle news

Speaking of AI applications, autonomous vehicles are making news this week, too. Apple has reportedly bought self-driving vehicle startup Drive.ai (Mountain View, CA). And recently another company developing self-driving vehicles, Argo.ai, announced a partnership with Carnegie Mellon University to form an autonomous vehicle research center. To fund the new initiative, Argo has pledged $15 million over five years. The center will be led by a team of five world-renowned experts, with support from graduate students conducting research in pursuit of their doctorates. On the EDA side, Ansys and AVSimulation (a joint venture between Oktal and Renault) have announced a partnership to improve autonomous driving simulation solutions. AVSimulation’s SCANeR Studio – a product that creates an ultra-realistic virtual world – has been integrated with Ansys’ VRxperience, a platform combining virtual reality capabilities with physics-based simulation. The resulting solution aims to drastically reduce physical prototype testing and save time in the validation of automotive safety, making it possible to simulate millions of driving scenarios.

Is the “universal memory” approaching?

As fast as DRAM or even SRAM, as non-volatile as Flash, and with low energy consumption: the quest for the “universal memory” doesn’t stop. Researchers from Lancaster University (Lancaster, UK) have come up with a new technology that could be a promising candidate for this role. As described in an article published in Scientific Reports, the new device is an oxide-free, floating-gate memory cell based on III-V semiconductor heterostructures, with a junctionless channel and non-destructive read of the stored data. It exploits the “spectacular” conduction-band line-up of AlSb/InAs (aluminium antimonide/indium arsenide) for charge retention. In other words, the device uses a charge-confinement system that isolates the electrons stored in the InAs floating gate by leveraging the “anomalously-large” conduction-band discontinuity with AlSb. The same properties of the two materials are used to form a resonant-tunnelling barrier. The researchers claim that the new memory combines “the contradictory requirements of non-volatility and low-voltage switching” (2.6V, as opposed to 20V for Flash memories). More data about read/write speed – a key requirement for a “universal memory” – can be expected from further research.

An ultra-low power microcontroller for IoT devices

A 32-bit RISC microcontroller presented recently at IEEE CICC in Austin, TX, demonstrated a power consumption of just 2.5 µW/MHz. Built in a 55nm CMOS process, the device was designed by Swiss R&D center CSEM, which specializes in ultra-low-power ASIC design, and manufactured by wafer foundry Mie Fujitsu Semiconductor (MIFS) using its Deeply Depleted Channel (DDC) technology. The two partners have joined forces to develop a near-threshold 0.5V ecosystem. The DDC technology is suitable for low-voltage operation thanks to its immunity to random dopant fluctuations. To overcome the impact of process and temperature variations, CSEM and MIFS have applied a variety of design techniques and implemented Body-bias-based Adaptive Dynamic Frequency Scaling (ADVbbFS) as one of the key IPs. The ultra-low power microcontroller targets IoT applications powered by tiny batteries or by energy harvesting.
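To put that 2.5 µW/MHz figure in perspective, here is a back-of-the-envelope estimate of battery life. Only the µW/MHz figure comes from the CICC presentation; the coin-cell capacity, 10 MHz clock, 1% duty cycle and zero sleep power are illustrative assumptions, not CSEM/MIFS data.

```python
# Rough battery-life estimate for a duty-cycled IoT node built on a
# 2.5 uW/MHz core. All inputs except the uW/MHz figure are assumptions.
ACTIVE_POWER_PER_MHZ_W = 2.5e-6       # 2.5 uW/MHz, figure reported at CICC
CLOCK_MHZ = 10                        # assumed clock frequency
DUTY_CYCLE = 0.01                     # assumed 1% active time, typical IoT node
CR2032_ENERGY_J = 0.225 * 3.0 * 3600  # ~225 mAh at 3 V, roughly 2430 J nominal

avg_power_w = ACTIVE_POWER_PER_MHZ_W * CLOCK_MHZ * DUTY_CYCLE   # ~250 nW
lifetime_years = CR2032_ENERGY_J / avg_power_w / (3600 * 24 * 365)
print(f"average power: {avg_power_w * 1e9:.0f} nW")
print(f"ideal battery life: {lifetime_years:.0f} years")   # ~300 years
```

The ideal figure of roughly three centuries is obviously academic: sleep current and the cell’s own self-discharge (lithium coin cells are typically rated for a shelf life of around ten years) dominate long before compute energy does. That is precisely the sense in which a 2.5 µW/MHz core makes the battery a negligible design item.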