Catching up on some recent news after a two-week summer break, let’s start with a brief report on GPT-3, the new language model from OpenAI. Other updates concern EDA, processors, and more.
Natural language processing with 175 billion parameters
San Francisco-based OpenAI has developed GPT-3, an autoregressive language model with 175 billion parameters – ten times more than Microsoft’s Turing Natural Language Generation model. As explained in a paper, GPT-3 achieves strong performance on many NLP (natural language processing) datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation. GPT-3 can also generate samples of news articles that human evaluators can hardly distinguish from articles written by humans. As for energy usage, the researchers explained that “training the GPT-3 175B consumed several thousand petaflop/s-days of compute during pre-training, compared to tens of petaflop/s-days for a 1.5B parameter GPT-2 model.” But they added that “Though models like GPT-3 consume significant resources during training, they can be surprisingly efficient once trained: even with the full GPT-3 175B, generating 100 pages of content from a trained model can cost on the order of 0.4 kW-hr, or only a few cents in energy costs.”
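As a quick sanity check on that last figure – our own back-of-the-envelope arithmetic, assuming a typical retail electricity price of about $0.12 per kWh, which is not stated in the paper:

```python
# Rough check of the "a few cents" claim for generating ~100 pages of text.
energy_kwh = 0.4        # energy figure quoted in the GPT-3 paper
price_per_kwh = 0.12    # assumed electricity price in USD/kWh (varies by region)
print(f"~${energy_kwh * price_per_kwh:.2f} in energy costs")  # ~$0.05
```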
TSMC 5-nanometer customers
According to a report quoted by Gizmochina, so far the 5-nanometer manufacturing capacity from TSMC has been divided mainly among eight major customers: Apple, Qualcomm, AMD, Nvidia, MediaTek, Intel, Bitmain, and Altera (the last one listed in the report as a company by itself, separate from Intel). Gizmochina adds that Apple’s demand – “40,000 to 45,000 5nm process capacity in the first quarter of 2020” – concerns its upcoming A14 and A14X Bionic chips and MacBook processors, while Qualcomm intends to use the 5nm process for its next flagship Snapdragon 875 processors, and MediaTek for the next generation of its Dimensity chips.
Called MISIM (which stands for “machine inferred code similarity”), the new “machine programming system” developed by Intel in conjunction with MIT and Georgia Tech is an automated engine that uses neural networks to learn what a piece of software intends to do, by studying the structure of the code and analyzing syntactic differences of other code with similar behavior. As explained in a press release, Intel’s ultimate goal for machine programming is to enable software creation based on human intention expressed in any fashion, whether that’s code, natural language or something else. From an EDA perspective, it will be interesting to see if some aspects of this AI-based code analysis will prove applicable to HDL code in chip design, too.
Credit: Intel
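To make the idea more concrete, the snippet below shows the kind of code pair a semantics-based similarity engine like MISIM is meant to recognize as equivalent (an illustrative example of ours, not taken from Intel’s paper): the two functions are syntactically quite different, yet compute the same thing.

```python
# Two syntactically different functions with the same intent (sum of the
# squares of the even numbers in a list) -- exactly the kind of pair that a
# purely syntactic comparison misses but a code-similarity engine should match.

def sum_even_squares_loop(values):
    total = 0
    for v in values:
        if v % 2 == 0:
            total += v * v
    return total

def sum_even_squares_functional(values):
    return sum(v ** 2 for v in values if v % 2 == 0)

assert sum_even_squares_loop([1, 2, 3, 4]) == sum_even_squares_functional([1, 2, 3, 4]) == 20
```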
NXP microcontrollers gain Glow neural network compiler
NXP’s eIQ Machine Learning Software Development Environment now supports the Glow neural network compiler, with the goal of delivering high-performance inferencing for NXP’s i.MX RT series of crossover MCUs – especially for vision and voice applications at the edge. NXP’s implementation of Glow targets Arm Cortex-M cores and the Cadence Tensilica HiFi 4 DSP, with platform-specific optimizations for the above-mentioned series of NXP products. Glow (the Graph Lowering NN compiler) was introduced by Facebook in May 2018 as an open source community project, with the goal of providing optimizations to accelerate neural network performance on a range of hardware platforms. The term “crossover” used by NXP to designate this MCU series refers to the convergence of low-power applications processors and high-performance microcontrollers. Besides Glow, NXP’s eIQ Machine Learning Software Development Environment also includes inferencing support for TensorFlow Lite.
Open-source processor IP keeps improving: SiFive has recently launched the new 20G1 release of its RISC-V Core IP portfolio, claiming up to 2.8x more performance, up to 25% lower power, and up to 11% smaller area (“based on SiFive internal engineering measurement”). This is one of the many updates from the last few days, which also include EDA innovations, new Arm rumors, details on Intel’s technology roadmap, AI research advancements, and some standards news.
Mentor boosts analog verification speed
Designers of PLLs and SerDes implemented at advanced node geometries will be among the Mentor users who benefit from the new Analog FastSPICE eXTreme technology, targeted at nanometer-scale verification of large, post-layout analog designs. Citing several innovations – such as new RC circuit reduction algorithms, performance improvements to the Analog FastSPICE core SPICE matrix solver, and better device noise analysis capabilities – Mentor claims a 10X simulation performance boost for the new eXTreme technology compared to its previous-generation Analog FastSPICE offering, and a 3X simulation performance acceleration compared to commercially available solutions at similar accuracy settings. According to Mentor, Analog FastSPICE eXTreme is especially valuable for analog designs containing high levels of parasitic complexity and contact resistance.
Nvidia reportedly interested in Arm acquisition
Last week EDACafe briefly informed readers about SoftBank reportedly “exploring alternatives including a full or partial sale or public offering” of Arm. A recent update on this story is that Nvidia is reportedly interested in acquiring Arm from SoftBank, “in what could become the biggest-ever semiconductor deal.” Anonymous sources quoted by Bloomberg pointed out that “Nvidia’s interest may not lead to a deal, and SoftBank could opt to pursue a listing of the business instead.” Sources also added that “SoftBank approached Apple to gauge its interest in acquiring Arm,” but “Apple isn’t planning to pursue a bid.”
Acquisitions – either officially announced or merely rumored – make up most of our news summary this week. We will then move to some AI chip updates; but first, let’s take a look at one of the EDA announcements that are going to be in the spotlight at this year’s Virtual DAC, running from July 20 to 24.
Early short-circuit fixing with Mentor’s Calibre nmLVS-Recon
Mentor has announced the Calibre nmLVS-Recon technology, aimed at speeding up overall circuit verification turnaround time by helping designers identify and resolve selected systemic errors early in the development phase. As explained in the announcement’s press release, early design versions typically contain many gross systemic violations. For example, a “shorted nets” class of violation generates millions of errors and is very compute-intensive. Circuit verification engineers can use the Calibre nmLVS-Recon short isolation configuration to find and fix these types of violations quickly and efficiently.
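As a rough illustration of why short isolation helps – a conceptual sketch of ours, not a description of Calibre nmLVS-Recon’s actual algorithms – once two nets are electrically merged by a stray shape, every check that touches either net can flag an error, but the root cause is just the connecting path between them, which can be found with a simple graph search over layout connectivity:

```python
from collections import deque

# Toy layout-connectivity graph: nodes are shapes/segments, edges mean the
# shapes touch. 'via7' is the stray shape shorting net VDD to net OUT.
connectivity = {
    "VDD_rail": ["m1_a"], "m1_a": ["VDD_rail", "via7"],
    "via7": ["m1_a", "m2_b"], "m2_b": ["via7", "OUT_pin"],
    "OUT_pin": ["m2_b"],
}

def isolate_short(graph, shape_on_net_a, shape_on_net_b):
    """Breadth-first search for the chain of shapes connecting two nets."""
    queue, parent = deque([shape_on_net_a]), {shape_on_net_a: None}
    while queue:
        node = queue.popleft()
        if node == shape_on_net_b:        # reached the other net: rebuild path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return None

print(isolate_short(connectivity, "VDD_rail", "OUT_pin"))
# ['VDD_rail', 'm1_a', 'via7', 'm2_b', 'OUT_pin'] -> 'via7' is the culprit
```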
Catching up with an announcement from a few days ago, this week’s news summary places FPGAs in the spotlight. Other news includes the launch of a Palo Alto-based robotics startup, adding to a Bay Area scenario that features at least one other innovative robotics company, Covariant (Berkeley, CA). Advancements in discrete and passive components complete this week’s roundup.
Lattice innovates general-purpose FPGAs
Up to twice the I/O density per square millimeter in comparison to similar competing FPGAs: this is what Lattice is claiming for its new family of low-power, general-purpose FPGAs, called Certus-NX. Manufactured using a 28 nm FD-SOI process technology, the new devices boast a much smaller package, greater I/O density, and lower power compared to competing FPGAs of similar gate counts. This compactness makes it possible, for example, to implement a complete PCIe solution in 36 mm². Other features of the new FPGAs include instant-on performance (with individual I/Os able to configure in 3 ms, and full-device startup in 8-14 ms depending on device capacity), support for ECDSA authentication, and better soft-error rate (SER) performance. Notable IP blocks available on Certus-NX include 1.5 Gbps differential I/O, 5 Gbps PCIe, 1.5 Gbps SGMII, and 1066 Mbps DDR3. A five-page white paper from analyst Linley Gwennap provides a detailed description of the Certus-NX and a comparison with similar FPGAs from Intel and Xilinx.
Microsoft has just announced it will permanently close all its retail stores around the world – except for four “Microsoft Experience Centers” in London, NYC, Sydney, and Redmond campus locations. The company’s retail team members will continue to serve customers from Microsoft corporate facilities and remotely. This was the biggest news today, but many other interesting things happened recently; some of them are summarized below.
Samsung launches its Cloud Design Platform
Last week EDACafe briefly reported about TSMC’s initiative aimed at using Microsoft Azure to speed up timing signoff for advanced-node SoC designs, with two separate collaboration agreements involving Cadence and Synopsys respectively. More cloud-based EDA news this week comes from Samsung, which has launched its ‘Samsung Advanced Foundry Ecosystem (SAFE) Cloud Design Platform (CDP)’ for fabless customers, in collaboration with Rescale (San Francisco, CA). The two announcements are different in many respects: according to a press release, the Samsung Foundry initiative is not focused on timing signoff only, but offers a virtual “design environment” where customers can use tools from multiple vendors such as Ansys, Cadence, Mentor and Synopsys. Also, instead of collaborating with a single cloud service provider, Samsung Foundry has chosen Rescale’s multi-cloud platform; Rescale partners with several providers such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, IBM, and Oracle Cloud Infrastructure. Common to both initiatives, obviously, is the goal of using cloud resources to speed up processing. Gaonchips – one of Samsung Foundry’s Design Solution Partners – has already tested the SAFE CDP on its 14nm automotive project using Cadence’s Innovus Implementation System, and claims a 30 percent reduction in design runtime compared to its current on-premise execution times.
Mentor adds UltraSoC monitoring IP to its Tessent suite
Siemens, Mentor’s parent company, has signed an agreement to acquire UltraSoC (Cambridge, UK), a company providing a modular IP platform for creating on-chip monitoring and analytics infrastructures. UltraSoC’s IP is designed to accelerate silicon bring-up, optimize product performance, and confirm that devices are operating “as designed” for functional safety and cybersecurity purposes. Siemens plans to integrate UltraSoC’s technology into the Xcelerator portfolio as part of Mentor’s Tessent software product suite. Together with Tessent’s design-for-test (DFT) solutions, the combined offering is aimed at creating a ‘Design for Lifecycle Management’ solution for system-on-chips.
CMOS device fabrication at 500°C enables 3D monolithic integration
French research institute CEA-Leti is in the news again this week with another paper presented virtually during the 2020 Symposia on VLSI Technology & Circuits. The work, done in collaboration with Samsung, demonstrates the possibility of fabricating FDSOI CMOS devices without exceeding the 500°C temperature threshold. Conventional CMOS manufacturing processes require temperatures higher than 500°C, making it difficult to build 3D monolithic structures, since fabricating the upper-level transistors could damage the metal interconnects and the silicide of the bottom-level transistors. The low-temperature process developed by CEA-Leti for top-level devices prevents deterioration of bottom-level transistors, paving the way to 3D monolithic integration, which promises many benefits over die stacking.
Arm-based Japanese supercomputer is number one in TOP500 list
Called Fugaku, the most powerful supercomputer in the world is installed at the Riken Center for Computational Science in Kobe, Japan. The machine, powered by Fujitsu’s A64FX processors containing forty-eight Arm cores, is number one in the latest TOP500 supercomputer list – the new edition of the ranking compiled twice a year by experts from Lawrence Berkeley National Laboratory, University of Tennessee Knoxville, and ISC Group (Frankfurt, Germany). With a High Performance Linpack (HPL) result of 415.5 petaflops, Fugaku dramatically outperforms number two on the list, an IBM-built supercomputer called Summit that delivers 148.8 petaflops on HPL. According to the Riken Center, Fugaku also swept the competition, taking first place in three other rankings: the HPCG (High-Performance Conjugate Gradient) benchmark, based on real-world applications; HPL-AI, based on tasks typically used in artificial intelligence applications; and Graph 500, based on data-intensive loads. As underlined by the Riken Center, this is the first time in history that the same supercomputer has become number one on these three rankings simultaneously. As usual, the TOP500 list provides many interesting insights. Only 144 systems – out of 500 – use accelerators or coprocessors, the majority of which (135) are equipped with Nvidia GPUs. x86 continues to be the dominant processor architecture, used by 481 of the 500 systems; Intel claims 469 of these, with AMD installed in 11 and Hygon in the remaining one. Arm processors are used by just four TOP500 systems, three of which employ the Fujitsu A64FX processor, with the remaining one powered by Marvell’s ThunderX2 processor. Chinese manufacturers lead the list in terms of number of installations, with Lenovo (180), Sugon (68) and Inspur (64). The breakdown of system interconnects shows that Ethernet is used in 263 systems, InfiniBand in 150, and the remainder employ custom or proprietary networks.
The Fugaku supercomputer. Image credit: Riken Center for Computational Science
Radar-based blood pressure measurement
Infineon’s Silicon Valley Innovation Center (SVIC), based in Milpitas, has entered into a new agreement with startup Blumio (San Mateo, CA) to co-develop a wearable, non-invasive blood pressure sensor based on Infineon’s XENSIV radar chipset by 2021. The key concept is to use a radar sensor to detect the microscopic motions on the surface of the skin caused by the pulse wave traveling along the artery, and then to apply proprietary algorithms that extract blood pressure and other heart-related metrics from the acquired waveform. According to the German chipmaker, the new sensor has the potential to disrupt the USD 45 billion market for wearable cardiovascular monitoring devices by enabling continuous and precise measurement without a cuff. In its incubator role, the SVIC will provide funding and resources to support the sensor’s commercialization.
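To illustrate the measurement principle in a very simplified form – a toy sketch of ours, not Blumio’s or Infineon’s algorithms, and assuming a 60 GHz continuous-wave radar – the arterial pulse shows up as a tiny phase modulation of the reflected signal, from which a pulse waveform and metrics such as heart rate can be recovered:

```python
import numpy as np

# Toy model: skin displacement caused by the arterial pulse modulates the
# phase of a reflected CW radar signal; demodulating that phase recovers a
# pulse waveform. Blood-pressure extraction itself would rely on further,
# proprietary processing of this kind of waveform.
fs = 500.0                                  # sample rate [Hz]
t = np.arange(0, 10, 1 / fs)                # 10 s of data
heart_rate_hz = 1.2                         # ~72 bpm
displacement = 50e-6 * np.sin(2 * np.pi * heart_rate_hz * t)  # ~50 um skin motion

wavelength = 3e8 / 60e9                     # assumed 60 GHz radar -> 5 mm wavelength
phase = 4 * np.pi * displacement / wavelength       # two-way path phase shift
i, q = np.cos(phase), np.sin(phase)         # demodulated I/Q at the receiver

recovered = np.unwrap(np.arctan2(q, i)) * wavelength / (4 * np.pi)  # back to meters
spectrum = np.abs(np.fft.rfft(recovered - recovered.mean()))
freqs = np.fft.rfftfreq(len(recovered), 1 / fs)
print(f"Estimated heart rate: {freqs[spectrum.argmax()] * 60:.0f} bpm")  # ~72 bpm
```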
Acquisitions
Besides the above-mentioned Siemens-UltraSoC deal, two more acquisition announcements are in the news this week. Preannounced last March, the pending acquisition of Adesto (Santa Clara, CA) by UK-based Dialog Semiconductor is expected to close on June 29, 2020, now that the parties have received the green light from the Committee on Foreign Investment in the United States. And Keysight has completed the acquisition of Eggplant – a software test automation platform provider – from The Carlyle Group.
Advanced semiconductor technology is in the spotlight this week, with some significant innovations that have been presented at the 2020 Symposia on VLSI Technology and Circuits – a virtual event this year. More news concerns power devices and embedded software; but first, a few EDA and FPGA updates.
Leveraging Microsoft Azure cloud computing to speed up timing signoff for advanced-node SoC designs meant to be fabricated by TSMC: this is the goal of two separate three-way collaboration agreements, one involving Cadence and the other Synopsys. In the case of Cadence, the collaboration concerns the Tempus Timing Signoff Solution and the Quantus Extraction Solution, which customers will use with the Cadence CloudBurst Platform; for Synopsys, the tools involved in the agreement are PrimeTime static timing analysis and StarRC parasitic extraction. Both Cadence and Synopsys cited massive parallelization and scalability – made possible by cloud computing – as the major benefits enabling a speed-up in timing signoff. White papers providing more details about these cloud-based solutions are available to TSMC customers on the foundry’s website.
Open source suite of development tools for QuickLogic FPGAs
With its QORC initiative (QuickLogic Open Reconfigurable Computing), QuickLogic claims to be the first programmable logic vendor to actively embrace a fully open source suite of development tools for its FPGA devices and eFPGA technology. The initial offering, developed by Antmicro (Sweden/Poland) in collaboration with QuickLogic and Google, supports QuickLogic’s EOS S3 low-power voice and sensor processing MCU with embedded FPGA, and the PolarPro 3E discrete FPGA family. The EOS S3 open source development tool suite includes an FPGA development flow (SymbiFlow); SoC emulation (Renode); the Zephyr real-time operating system, running on the Arm Cortex-M4F; and the QuickFeather development kit.
Image credit: QuickLogic
Seven-layer gate-all-around FET outperforms FinFETs
French research institute CEA-Leti has demonstrated fabrication of a new gate-all-around (GAA) nanosheet device as an alternative to FinFET technology targeting high-performance applications. Researchers have fabricated GAA nanosheet transistors with seven levels of stacked silicon channels, more than twice as many as today’s state of the art. By increasing the number of stacked channels, CEA-Leti increased the effective width of the device for a given layout footprint, thus achieving higher drive current and better DC performance than leading-edge devices. CEA-Leti’s demonstration was based on a “replacement metal-gate” process developed for FinFETs.
First demonstration of GAA NanoSheet transistors with 7 stacked channels from tall and straight (SiGe/Si) fins (15nm≤W≤85nm). Copyright CEA-Leti
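The benefit of stacking can be captured with a simple first-order relation – our own back-of-the-envelope estimate with illustrative sheet dimensions, not CEA-Leti’s figures: each gate-all-around sheet conducts along its whole perimeter, so the effective width per footprint grows roughly linearly with the number of stacked sheets, and drive current scales with it.

```python
# First-order estimate of effective device width for stacked GAA nanosheets.
# Illustrative dimensions only; drive current scales roughly with W_eff.
def effective_width(n_sheets, sheet_width_nm, sheet_thickness_nm):
    # Each gate-all-around sheet conducts along its whole perimeter.
    return n_sheets * 2 * (sheet_width_nm + sheet_thickness_nm)

w, t = 30, 5  # assumed nanosheet width and thickness [nm]
for n in (2, 7):
    print(f"{n} stacked sheets -> W_eff ~ {effective_width(n, w, t)} nm")
# 2 sheets -> ~140 nm, 7 sheets -> ~490 nm: same footprint, 3.5x the width
```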
Tungsten buried power rails improve SRAM performance at 3 nanometers
Belgian research institute Imec has demonstrated a tungsten buried power rail (BPR) integration scheme in a FinFET CMOS test vehicle which does not adversely impact the CMOS device characteristics, showing excellent resistance values and electromigration behavior. A complementary study shows the system-level advantages of implementing BPRs as a scaling booster in 3 nanometer SRAM designs. As explained in a press release, buried power rails have recently emerged as an attractive structural scaling booster allowing a further reduction of standard cell height. Power rails are traditionally implemented in the chip’s back-end-of-line (BEOL); BPRs, on the contrary, are buried in the chip’s front-end-of-line (FEOL) to help free up routing resources for the interconnects. Integrating BPRs within the front-end module is however challenging, as BPR processing may induce stress in the conduction channel or cause metal contamination issues. Imec researchers avoided these problems by burying the W-BPR below the fin, deep into the shallow trench isolation (STI) module, and by capping the BPR metal with a dielectric until the end of the processing. In an SRAM, moving the VDD and VSS power lines below the device allows more space for wordlines and bitlines, offering a significant performance boost at system level. Imec simulations showed a 28.2% performance improvement for a server processor using BPR-SRAMs in its L2 and L3 caches, with respect to conventional SRAM bit cells.
Imec integrated tungsten Buried Power Rails. Copyright Imec
Voltage-controlled MRAMs gain higher write speed and better manufacturability
Imec has also solved two fundamental operational challenges that have so far limited the write speed and manufacturability of voltage-controlled magnetic anisotropy (VCMA) magnetic random access memories (MRAMs): the need to pre-read the device before writing, and the need for an external magnetic field during switching. Briefly summarizing the explanation provided in this press release, the pre-read step has been avoided thanks to differentiated voltage thresholds for the two memory states; and the external magnetic field is no longer necessary thanks to a magnetic hardmask embedded on top of the magnetic tunnel junction. With these innovations, VCMA MRAMs fabricated using a state-of-the-art 300mm CMOS infrastructure can achieve nanosecond-scale speed and 20 femtojoule write energy, outperforming STT-MRAMs. According to Imec, VCMA MRAMs are now ideal candidates for high-performance, low-power and high-density memory applications.
Rohm lowers ON resistance for SiC MOSFETs
Rohm Semiconductor has announced the fourth generation of its 1200V SiC MOSFETs, claiming 40% lower ON resistance – while still maintaining short circuit withstand time – and 50% lower switching loss. The company has achieved these advancements over its previous-generation SiC MOSFETs by improving their double trench structure and reducing their gate-drain capacitance. Bare chip samples of the new devices have been available since June 2020, with discrete packages to be offered in the future.
Reduction of ON resistance in Rohm’s SiC MOSFETs. Copyright Rohm
“Mission critical edge” software bundles from Lynx
“Mission critical edge” is the expression coined by Lynx Software Technologies (San Jose, CA) to designate edge computing solutions that require robust system-safety mechanisms, state-of-the-art security, and real-time determinism with sub-microsecond latency – such as the ones needed by industrial automation, drones, satellites, and avionics. Now Lynx is addressing this emerging market – which, according to the company, will have a $16 billion SAM in 2023 – with three new bundles based on MOSA.ic, its framework for development and integration of complex multicore safety or security systems. The three new bundles are targeted at industrial, UAV/satellite, and avionics applications, respectively. Built on the LynxSecure separation kernel hypervisor, MOSA.ic supports a variety of operating systems and runs on Intel, Arm and PowerPC processors.
Foundry roadmaps made news headlines over the past few days, with TSMC reportedly working on an intermediate 4-nanometer node before moving to 3 nanometers. Besides other semiconductor-related updates, interesting news this week also concerns the IT industry in general.
EDA/IP updates: Synopsys, Real Intent, Moortec
Synopsys’ DesignWare True Random Number Generator IP has received validation by the NIST Cryptographic Algorithm Validation Program, making it easier for customer end products to obtain Federal Information Processing Standards (FIPS) 140-3 certification.
Verix DFT, a full-chip, multimode DFT static sign-off tool recently unveiled by Real Intent, promises to reduce static sign-off time by several weeks. The new tool is deployed throughout the design process: during RTL design, to address asynchronous set/reset, clock and connectivity issues early; after scan synthesis, to check for scan chain rule compliance; and following place & route, to assess and correct issues with scan-chain reordering or netlist modification. Time savings come from reduced setup time, faster runtimes, and less engineering effort spent on debug and violation fixing thanks to consolidated reporting.
Autonomous vehicles obviously continue to be a hot theme, both in terms of business – with Amazon reportedly in advanced talks to buy self-driving car tech company Zoox (Foster City, CA) – and in terms of technology. Recent AV tech updates include a new standard and improved simulation solutions. More news this week comes from academic research on new materials, both for IT and for power applications; lastly, one more FPGA vendor is offering simplified solutions for non-expert designers.
Autonomous vehicles updates: UL 4600 standard, news from AVSimulation and Foretellix
The number of technical standards specifically addressing autonomous vehicles is growing: last week we briefly reported on IEEE 2846, a “formal model for safety considerations in automated vehicle decision making”; and on April 1 Underwriters Laboratories announced the publication of UL 4600, a “standard for safety for the evaluation of autonomous products”.
Geopolitical tensions keep influencing the semiconductor industry. Huawei is reportedly trying to convince Samsung and TSMC to build an advanced chip fab without using U.S. equipment; and the former RISC-V Foundation – now called “RISC-V International” – has recently incorporated in Switzerland, a move preannounced by Chief Executive Calista Redmond in this interview. But now, back to technology.
Deploying autonomous vehicles at scale will require systems redundancies
A recent blog post from Amnon Shashua, CEO of Mobileye, provides several important concepts about the future of autonomous vehicles and his company’s strategy. The key point stressed by Shashua is that “safety must dictate the software and hardware architecture in ways that are not obvious.” Mobileye has already addressed the safety issues linked to the AV decision-making process: the possibility of careless driving has been ruled out by clarifying, in a formal manner, what it means to be “careful” (e.g. when merging into traffic); and the need to make predictions about the behavior of other road users has also been ruled out, by always assuming the worst-case scenario. This approach is also being used as a basis for the new IEEE 2846 standard. With the decision-making process thus fixed, the only other possible cause of accidents is a glitch in the perception system, whose minimum MTBF requirement depends on the maximum acceptable accident frequency. This is where scale comes into play, as the absolute number of accidents involving autonomous vehicles obviously depends on how many of them are circulating on the roads. In the example provided by Shashua, for a fleet of 100,000 robotic shuttles, achieving a maximum frequency of one accident every quarter would require a perception system with an MTBF of 50 million hours of driving – one thousand times better than the error rate of a human driver. In Mobileye’s view, such an ambitious MTBF can only be obtained by introducing system redundancies, as opposed to sensor redundancies within the system. This means equipping the vehicle with two independent and different perception systems: one based on cameras only, and the other on radars/lidars only. The probability of both systems failing at the same time is extremely low. This is why Mobileye is not pursuing the sensor fusion approach; instead, the company has developed a camera-only perception system which, by the way, works very well, as shown by this new 40-minute unedited video shot in Jerusalem.
Amnon Shashua. Image credit: Intel
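For readers who want to see how those numbers fit together, here is a back-of-the-envelope reconstruction – the per-vehicle utilization is our own assumption, not a figure from Shashua’s post:

```python
# Rough reconstruction of the MTBF figure quoted above.
# Assumption (not stated in the blog post): each shuttle operates ~5.5 h/day.
fleet_size = 100_000      # robotic shuttles
hours_per_day = 5.5       # assumed utilization per vehicle
days_per_quarter = 91

fleet_hours = fleet_size * hours_per_day * days_per_quarter
print(f"Fleet driving hours per quarter: {fleet_hours:.2e}")  # ~5.0e7
# To average at most one perception-induced accident per quarter, the
# perception system therefore needs an MTBF of roughly 50 million hours.
```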
New Arm processor IP
On May 26 Arm announced four new IP products, mostly targeting 5G mobile applications and offering significant advances over the corresponding previous Arm processors. Cortex-A78 is a CPU for smartphones and other mobile devices, offering a 20% increase in sustained performance over Cortex-A77-based devices within a 1-watt power budget. Cortex-X1 – the most powerful Cortex CPU to date, with a 30% peak performance increase over Cortex-A77 – is the first CPU from the new Cortex-X Custom Program, which allows for customization and differentiation beyond the traditional roadmap of Arm Cortex products. The Mali-G78 GPU will deliver a 25% increase in graphics performance relative to Mali-G77, with support for up to 24 cores. The Ethos-N78 neural processing unit (NPU) delivers greater on-device ML capabilities and up to 25% more performance efficiency compared to the previous Ethos-N77 NPU.
Image credit: Arm
Both Synopsys and Cadence are already providing support for the above-mentioned new Arm IP. Synopsys has enabled tapeouts of optimized system-on-chips for early adopters of Cortex-A78, Cortex-X1 and Mali-G78. Synopsys support includes QuickStart Implementation Kits (QIKs) available today. Cadence is supporting Cortex-A78 and Cortex-X1 with a digital full flow Rapid Adoption Kit (RAK); in addition, the Cadence Verification Suite and its engines have been optimized for the creation of designs based on these two new CPUs.
DARPA appoints research teams to develop security-aware EDA tools
Moving from PPA to PPAS – where S stands for security – when exploring trade-offs or setting design constraints for a new SoC: the goal of DARPA’s AISS program (Automatic Implementation of Secure Silicon) could be described this way, even though the DoD agency uses a different acronym (PASS, meaning Power, Area, Speed, and Security). The AISS program aims at providing SoC designers with new EDA tools that will allow them to specify security constraints, which will then be automatically satisfied by generating the optimal implementation. These future “security-aware EDA tools” will combine an advanced security engine developed within the AISS program with commercial off-the-shelf IP from Synopsys, Arm, and UltraSoC. DARPA has recently announced the two research teams selected to develop this initiative: one includes Synopsys, Arm, Boeing, the Florida Institute for Cybersecurity Research at the University of Florida, Texas A&M University, UltraSoC, and the University of California, San Diego; the members of the other team are Northrop Grumman, IBM, the University of Arkansas, and the University of Florida. AISS addresses four fundamental silicon security vulnerabilities: side channel attacks, hardware Trojans, reverse engineering, and supply chain attacks (such as counterfeiting, recycling, re-marking, cloning, and over-production).
The DARPA AISS program. Image credit: DARPA
Backscattering startup gets seed financing
Transmitting IoT data by “hitchhiking” on existing RF signals generated by wireless devices already present in the environment: backscattering could be described this way. This technology is moving from academic research – with works such as the one from the University of California San Diego presented at the ISSCC 2020 conference – to real products. HaiLa Technologies, a Canadian semiconductor startup that has recently raised $5 million in seed financing, plans to provide early access to the first Wi-Fi IP core based on its backscatter technology by the end of 2020. This will enable companies to develop the next generation of ultra-low power chipsets for IP over Ethernet over Wi-Fi in the IoT space. HaiLa uses a proprietary backscattering technique which allows modulation of digital sensor data on top of ambient signals of different protocols, while maintaining the integrity of the signal to the original specific protocol. According to the company, this ensures compatibility of HaiLa sensor tags with various existing wireless protocols, resulting in a drastic reduction in deployment costs and risks. One of the technical advisors of HaiLa is Dinesh Bharadia, a professor at UC San Diego who co-led the above-mentioned research work.
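For readers unfamiliar with the concept, the toy simulation below – a conceptual sketch of ours, not HaiLa’s proprietary technique – shows the basic idea of backscatter communication: the tag generates no carrier of its own, but simply switches its antenna reflection coefficient to impress data bits onto an ambient RF signal, which a nearby receiver can then demodulate.

```python
import numpy as np

# Toy backscatter sketch: a tag modulates an ambient carrier by switching its
# reflection coefficient, and a receiver recovers the bits by envelope
# detection. Purely illustrative; preserving the integrity of the host
# protocol, as HaiLa claims to do, is the hard part and is not modeled here.
fs = 1_000_000                       # sample rate [Hz]
f_carrier = 50_000                   # "ambient" carrier frequency [Hz]
bit_rate = 1_000                     # tag data rate [bit/s]
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # sensor data to transmit
samples_per_bit = fs // bit_rate

t = np.arange(len(bits) * samples_per_bit) / fs
ambient = np.cos(2 * np.pi * f_carrier * t)

# Tag: reflection coefficient toggles between 0.2 and 0.9 per data bit (OOK-like)
reflection = np.repeat(np.where(bits == 1, 0.9, 0.2), samples_per_bit)
backscattered = reflection * ambient

# Receiver: crude envelope detection and per-bit thresholding
levels = np.abs(backscattered).reshape(len(bits), samples_per_bit).mean(axis=1)
decoded = (levels > levels.mean()).astype(int)

print("sent:   ", bits.tolist())
print("decoded:", decoded.tolist())
```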