EDACafe Editorial

Roberto Frazzoli is a contributing editor to EDACafe. His interests as a technology journalist focus on the semiconductor ecosystem in all its aspects. Roberto started covering electronics in 1987. His weekly contribution to EDACafe started in early 2019.

TSMC to get CHIPS Act funding; Google's Arm-based CPU; evolutionary algorithms in AI; Huawei's growing importance

April 11th, 2024 by Roberto Frazzoli
Unsurprisingly, most news updates this week concern artificial intelligence in one way or another, with several new processor announcements. The so-called chip war is also in the news, with CHIPS Act updates and an analysis of Huawei.

US CHIPS Act updates: TSMC, Applied Materials

Recent US CHIPS and Science Act updates include some applicants receiving the green light and others a denial, reportedly due to "overwhelming demand". The U.S. Department of Commerce and TSMC Arizona have signed a non-binding preliminary memorandum of terms to provide up to $6.6 billion in support of TSMC's investment of more than $65 billion in three greenfield leading-edge (2-nanometer) fabs in Phoenix, Arizona. On the other hand, the CHIPS Program Office has announced that it will not move forward at this time with its third Notice of Funding Opportunity, which concerned the construction, modernization, or expansion of commercial R&D facilities in the United States. As a consequence, US-headquartered equipment maker Applied Materials may reportedly postpone or abandon its plans to build a $4 billion research and development facility in Silicon Valley.

Datacenter processor update: Google, Meta, Intel

Two hyperscalers have recently announced new homegrown processors. Google has unveiled the Axion processor family, its first custom Arm-based CPUs designed for the data center. Based on the Arm Neoverse V2 core, the new devices will be available to Google Cloud customers later this year. According to the company, Axion processors deliver instances with up to 30% better performance than the fastest general-purpose Arm-based instances available in the cloud today, and up to 50% better performance and up to 60% better energy efficiency than comparable current-generation x86-based instances.
Meta (Facebook) has developed a new generation of its Meta Training and Inference Accelerator (MTIA), which more than doubles the compute and memory bandwidth of its previous solution while maintaining a close tie-in to the company's AI workloads, especially its ranking and recommendation models. According to Meta, by controlling the whole AI stack the company can achieve greater efficiency than with commercially available GPUs. The new Meta chip will reportedly be fabricated by TSMC on a 5-nanometer process. As for "commercial" chipmakers, Intel has recently introduced the Gaudi 3 AI accelerator, claiming on average 50% better inference and 40% better power efficiency than the Nvidia H100, at a fraction of the cost.

Risc-V updates: Semidynamics, Imagination

Spain-based Risc-V IP vendor Semidynamics is proposing a new approach to the architecture of AI SoCs. Instead of assembling IP blocks for different processor types – CPU, GPU, NPU – with different instruction sets and different toolchains, the company has developed an architecture based on multiple instances of a single IP element. Based on a customizable 64-bit Risc-V core, this element includes vector units, a tensor unit, and a special unit to avoid cache misses. As a result, the whole AI SoC can be designed using just one IP supplier, one Risc-V instruction set, and one toolchain.

UK-based Imagination has unveiled a new Risc-V-based CPU in its Catapult CPU IP range. The APXM-6200 is a 64-bit, in-order application processor with an 11-stage, dual-issue pipeline. According to the company, the new CPU delivers a 65% improvement in normalized performance and a 2.5x improvement in normalized performance density compared to equivalent CPUs already on the market. Customers can choose between single-, dual- and quad-core configurations depending on their performance requirements, with per-core power control and cache coherency.
AI capabilities are underpinned by support for the Risc-V vector extensions, along with fast data-coupling for AI accelerators.

Using evolutionary algorithms to automatically merge AI models

Japan-based AI research company Sakana AI has recently raised $30 million in a seed funding round led by Lux Capital, with strong backing from Khosla Ventures. According to market intelligence firm CB Insights, Sakana AI currently has the highest valuation per employee (the ratio between company valuation and number of employees) among AI startups, at $67 million. The company uses evolutionary algorithms (survival of the fittest) to automatically merge existing open-source AI foundation models, with the goal of creating new ones. For example, by merging a language model specialized for Japanese and a language model specialized for math, Sakana AI succeeded in automatically generating a model that excels at solving mathematical problems in Japanese.

Further reading

According to a study from the IGCC and Merics think tanks, Huawei is emerging as the leader of China's national team in semiconductors, dominating chip manufacturing and seeking to integrate the country's entire supply chain. Huawei is quietly expanding its presence across the supply chain, including in lithography and EDA. As "team lead," Huawei often performs the role of integrator, developing into a behemoth similar to a South Korean chaebol, like the massive conglomerate Samsung. The study maintains that analyzing China's progress in semiconductors is becoming more difficult, because U.S. technology restrictions on firms included on the Entity List create an incentive for Chinese companies to hide their achievements and their involvement in various initiatives. For example – according to the study – Huawei is currently building or supporting the construction of five semiconductor fabs in China.
In these projects, however, Huawei does not use its own name, instead relying on other Chinese semiconductor companies such as Fujian Jinhua Integrated Circuit Co. In return for its role, Huawei receives generous support from the Chinese government. Its claims of being a private company like any other are therefore less credible, the study concludes.

The recent launch of the Nvidia Blackwell AI accelerator continues to elicit analysis and comments. In this post, SemiAnalysis explains that Nvidia's claim of 30x higher performance for Blackwell over Hopper is based on a carefully selected set of benchmarking parameters. And in this post, Fabricated Knowledge elaborates on Jensen Huang's motto "the data center is the new unit of compute" to highlight Nvidia's rack-level competitive advantages, based on copper interconnect, liquid cooling and the highest compute density.
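Sakana AI has not published the exact procedure behind the model-merging result described above, but the general idea of an evolutionary search – keep a population of candidate merges, score them, and let the fittest survive and recombine – can be illustrated with a toy sketch. In the hypothetical Python example below, the two "models" are short parameter vectors and fitness is measured against an invented target; every vector, name and parameter here is made up for illustration only:

```python
import random

# Toy "models": parameter vectors standing in for two specialized checkpoints.
model_a = [0.9, 0.1, 0.8, 0.2]   # hypothetical "Japanese-language" model
model_b = [0.2, 0.9, 0.1, 0.7]   # hypothetical "math" model

# Invented target weights, used only to define a toy fitness function.
target = [0.6, 0.5, 0.5, 0.5]

def merge(alphas):
    # Per-parameter linear interpolation between the two parent models.
    return [a * wa + (1 - a) * wb for a, wa, wb in zip(alphas, model_a, model_b)]

def fitness(alphas):
    # Higher is better: negative squared error of the merge vs. the toy target.
    return -sum((m - t) ** 2 for m, t in zip(merge(alphas), target))

def evolve(pop_size=20, generations=50, mutation=0.1, seed=0):
    rng = random.Random(seed)
    # Each individual is a vector of interpolation coefficients in [0, 1].
    pop = [[rng.random() for _ in model_a] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)    # rank by fitness
        survivors = pop[: pop_size // 2]       # survival of the fittest
        children = []
        for _ in range(pop_size - len(survivors)):
            p1, p2 = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(p1, p2)]  # crossover
            # Gaussian mutation, clamped back into [0, 1].
            child = [min(1.0, max(0.0, g + rng.gauss(0, mutation))) for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(merge(best))  # the evolved merge ends up close to the toy target
```

In a real system the "models" would be full foundation-model checkpoints and fitness would come from benchmark scores (e.g. Japanese math problems) rather than a known target, but the select-recombine-mutate loop is the same.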