This week EDACafe takes a quick look at the 2021 edition of the Linley Fall Processor Conference, organized by technology analysis firm The Linley Group at a physical venue in Santa Clara, CA, and followed by a virtual event. Besides updates on deep learning accelerators, the conference also covered ‘conventional’ processing solutions and some other types of IP. This article provides only a general overview of the event; the full content can be accessed from the conference website by downloading the proceedings (presentation slides) for free.
AI trends: bigger training workloads, segmentation of the edge-AI market
In his opening keynote, Linley Group’s Principal Analyst Linley Gwennap reiterated the key concepts from last spring’s processor conference, adding updates on recent AI trends. Among them: the size of NLP models keeps growing, with Google’s Switch Transformer reaching 1.6 trillion parameters. To train ever-larger neural networks, Cerebras and Tesla are turning to wafer-scale technology and other innovations. In the datacenter, Nvidia is facing tougher competition from the Qualcomm AI 100 and the forthcoming Intel Ponte Vecchio. According to Gwennap, Nvidia still leads in performance, but not in efficiency. As for edge AI, this market is fragmenting into high-end chips for camera-based systems and low-power chips for simple sensors. The conference also saw the participation of TechInsights – the Canadian reverse engineering firm that recently acquired The Linley Group – with a presentation on the performance gap between CPUs and main memory. Among other findings, TechInsights analysts concluded that the SRAM cell size scaling trend is worse than that of logic standard cells, because SRAM cells do not have DTCO (Design Technology Co-Optimization) scaling options.