EDACafe Editorial
OrCAD X; memory tiering; treating a circuit like a neural network
September 15th, 2023 by Roberto Frazzoli
Roberto Frazzoli is a contributing editor to EDACafe. His interests as a technology journalist focus on the semiconductor ecosystem in all its aspects. Roberto started covering electronics in 1987. His weekly contribution to EDACafe started in early 2019.
Arm’s Initial Public Offering is proving successful: the share price increased by almost 25% soon after the company’s Nasdaq listing, which translates into a $65 billion valuation. More themes this week include memory tiering in datacenters and a new way to use AI in chip design.

AI-enhanced, cloud-based PCB design

The AI-in-EDA trend extends to PCB tools. The new Cadence OrCAD X Platform promises up to 5X faster PCB design thanks to generative AI automation that reduces placement time, and to Cadence OnCloud integration. According to Cadence, the solution is optimized for small and medium businesses, offering a new, easy-to-learn and easy-to-use PCB layout canvas.

New MLPerf benchmarks

MLPerf Inference v3.1 introduces two new benchmarks to the suite. The first is a large language model (LLM) benchmark that uses the GPT-J reference model to summarize CNN news articles. The second is an updated recommender, modified to be more representative of industry practices, using the DLRM-DCNv2 reference model and a much larger dataset. The latest MLPerf results also include, for the first time, the MLPerf Storage benchmark, which measures the performance of storage systems in the context of ML training workloads.
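To give a flavor of what the new LLM benchmark actually exercises, the minimal Python sketch below prompts a public GPT-J checkpoint to summarize a short news passage with Hugging Face Transformers. It is only an illustration of the workload, not the MLPerf reference implementation: the checkpoint, prompt format, and generation settings are assumptions (the actual benchmark uses a fine-tuned GPT-J model on CNN/DailyMail articles, driven by MLPerf’s standardized load generator).

```python
# Toy illustration of the MLPerf Inference v3.1 LLM workload:
# summarizing a news passage with GPT-J via Hugging Face Transformers.
# The checkpoint and prompt below are illustrative assumptions, not the
# MLPerf reference setup; the model is large (tens of GB of weights).
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-j-6B")

article = (
    "Arm shares rose almost 25% soon after the company's Nasdaq listing, "
    "valuing the chip designer at about $65 billion."
)
prompt = f"Summarize the following news article in one sentence:\n{article}\nSummary:"

# Greedy decoding keeps the toy example deterministic.
output = generator(prompt, max_new_tokens=64, do_sample=False)
print(output[0]["generated_text"][len(prompt):].strip())
```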
Memory tiering solutions from Enfabrica

Nvidia’s supremacy in artificial intelligence applications has reduced venture funding for startups developing AI chips, with the number of U.S. deals this quarter falling 80% from a year ago, according to Reuters. Some startups, however, are still attracting investors, and – based on two recent cases, d-Matrix and Enfabrica – it seems that attention is now focused on solutions targeting the “memory wall” at the datacenter level: in other words, the problem of GPUs spending too much time waiting for data. Last week EDACafe reported about d-Matrix, which addresses the processing needs of inference in generative AI applications. According to d-Matrix, “While GPUs are incredibly powerful for gaming or mining cryptocurrency, their performance is suboptimal for running generative AI. The unique memory bandwidth demands of running AI inference results in GPUs spending most of the time idle, waiting for more data to transfer in from DRAM. Along with reduced throughput and added latency, moving data in and out of DRAM also requires energy that drives up power and cooling costs. But GPUs have been the best available solution until now.” Hence, d-Matrix has developed a Digital In-Memory Compute (DIMC) architecture to replace GPUs. According to Enfabrica, which is addressing the memory wall in modern data center infrastructure, “The notion that the only usable memory for AI-workload-crunching GPUs is ultra-low latency DRAM integrated in the GPU, or right next to the GPU, isn’t a solution to the memory wall — it’s just an acknowledgment of it.” Enfabrica focuses on “memory tiering” and has developed a switch silicon (called Accelerated Compute Fabric) that blends “CXL.mem disaggregation, performance/capacity layering, and RDMA networking to implement a scalable, high-bandwidth, high-capacity, latency bound memory hierarchy feeding any large-scale AI compute machine.” Enfabrica (Mountain View, CA) has recently closed a $125 million Series B financing round with support from existing and new investors including Nvidia. This white paper from analyst Bob Wheeler provides deeper insights on memory tiering; a toy access-cost model illustrating the general idea is sketched below.
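For readers new to the concept, the short Python sketch below models a generic two-tier memory system: a small pool of fast GPU-local HBM in front of a much larger pool of slower, CXL-attached DRAM. The capacities, latencies, and hit rates are made-up illustrative numbers, and the model is a back-of-the-envelope abstraction of memory tiering in general, not a description of Enfabrica’s Accelerated Compute Fabric.

```python
# Back-of-the-envelope model of a two-tier memory hierarchy:
# a small, fast local tier (e.g. GPU HBM) backed by a large,
# slower remote tier (e.g. CXL-attached DRAM).
# All numbers are illustrative assumptions, not vendor specifications.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    capacity_gb: float
    latency_ns: float  # average access latency

def average_latency(fast: Tier, slow: Tier, hit_rate: float) -> float:
    """Expected access latency when a fraction `hit_rate` of accesses
    is served from the fast tier and the rest spills to the slow tier."""
    return hit_rate * fast.latency_ns + (1.0 - hit_rate) * slow.latency_ns

hbm = Tier("GPU HBM", capacity_gb=80, latency_ns=100)
cxl = Tier("CXL-attached DRAM", capacity_gb=2048, latency_ns=600)

for hit_rate in (0.99, 0.95, 0.90):
    print(f"hit rate {hit_rate:.0%}: "
          f"avg latency {average_latency(hbm, cxl, hit_rate):.0f} ns, "
          f"addressable capacity {hbm.capacity_gb + cxl.capacity_gb:.0f} GB")
```

The arithmetic captures the appeal of tiering: as long as most hot data stays in the fast tier, the blended latency remains close to that of local HBM while the addressable capacity grows by more than an order of magnitude.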
TSMC’s investments

TSMC’s Board of Directors has recently approved the purchase of a 10% equity interest in IMS Nanofabrication from Intel Corporation, for an amount not exceeding US$432.8 million. Austria-based IMS Nanofabrication, owned by Intel, is the global technology leader in multi-beam mask writers. TSMC’s Board of Directors has also approved an investment in Arm, in an amount not exceeding US$100 million, based on Arm’s share price at IPO. More news about TSMC concerns the company’s new fabs around the world. According to a Reuters report, TSMC is frustrated at its new fab in Arizona, where unions oppose the arrival of Taiwanese workers – a measure the company would consider due to difficulties in recruiting people locally. This would be one of the reasons why the Taiwanese foundry has growing confidence in Japan, where it is considering adding a second fab besides the one under construction. According to the sources quoted by Reuters, TSMC sees workers in Japan “as more willing to work a punishing schedule with overtime.” As for the fab that TSMC plans to build in Germany with local firms, the Taiwanese foundry is reportedly “concerned the work culture there, with long vacations and strong unions, will hit output.”

Tower-Fortsense LiDAR imager

After the Short-Wave Infrared sensor developed with TriEye, which EDACafe reported about last week, Tower Semiconductor is now announcing an advanced 3D imager for LiDAR applications based on dToF (direct Time-of-Flight) technology, developed with Fortsense. The new product, FL6031, is based on Tower’s 65nm Stacked BSI CIS platform with pixel-level hybrid bonding.

Events

International Conference on Silicon Carbide and Related Materials, September 17-22, Sorrento, Italy
International Test Conference, October 8-13, Anaheim, CA

Further reading

This blog post from Google DeepMind represents a new contribution to the much-debated theme of artificial intelligence in chip design. The researchers developed an AI-based approach to designing more powerful and efficient circuits by treating a circuit like a neural network. The proposed ‘circuit neural networks’ are “a new type of neural network which turns edges into wires and nodes into logic gates, and learns how to connect them together.” A toy sketch of the underlying idea of learnable logic gates appears at the end of this post.

The Wireless Broadband Alliance (WBA) has released a report on Wi-Fi 7 titled “Get Ready for Wi-Fi 7: Applying New Capabilities to the Key Use Cases.” Based on the IEEE 802.11be (Extremely High Throughput) standard, Wi-Fi 7 offers double the bandwidth and three times the speed of Wi-Fi 6, thanks to channel widths up to 320 MHz (twice Wi-Fi 6’s 160 MHz maximum) and 4K QAM (4096-QAM, i.e. 12 bits per symbol versus 10 bits for Wi-Fi 6’s 1024-QAM). Additionally, Wi-Fi 7 offers advanced support for latency-sensitive use cases, thanks to multi-link operation (MLO) across the 2.4 GHz, 5 GHz, and 6 GHz bands.

Lastly, this blog post from Rene Haas celebrates Arm’s Nasdaq listing.
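As a companion to the DeepMind item above, here is a minimal, self-contained sketch of the basic trick of learning logic with gradient descent: a single node holds a softmax-weighted mixture over a few two-input gates and is trained to behave like XOR. The gate set, the probabilistic relaxation, and the training loop are illustrative assumptions of this sketch, not DeepMind’s circuit neural network method.

```python
# Toy "learnable logic gate": a single node learns which two-input gate
# to apply by gradient descent over a softmax mixture of candidate gates.
# Illustrative only; the gate set and relaxation are assumptions.
import numpy as np

def soft_gates(a, b):
    """Probabilistic relaxations of a few two-input gates for inputs in [0, 1]."""
    return np.stack([
        a * b,              # AND
        a + b - a * b,      # OR
        a + b - 2 * a * b,  # XOR
        1.0 - a * b,        # NAND
    ], axis=1)              # shape: (num_samples, num_gates)

GATE_NAMES = ["AND", "OR", "XOR", "NAND"]

# Truth-table inputs and the target function (XOR).
a = np.array([0.0, 0.0, 1.0, 1.0])
b = np.array([0.0, 1.0, 0.0, 1.0])
target = np.array([0.0, 1.0, 1.0, 0.0])

G = soft_gates(a, b)      # candidate gate outputs for each input pair
w = np.zeros(G.shape[1])  # logits over candidate gates
lr = 1.0

for step in range(200):
    p = np.exp(w - w.max()); p /= p.sum()     # softmax over gates
    y = G @ p                                 # node output
    grad_y = 2.0 * (y - target) / len(target) # d(MSE)/dy
    grad_p = G.T @ grad_y                     # d(MSE)/dp
    grad_w = p * (grad_p - p @ grad_p)        # backprop through softmax
    w -= lr * grad_w

p = np.exp(w - w.max()); p /= p.sum()
print("learned gate mixture:", dict(zip(GATE_NAMES, np.round(p, 3))))
print("node output on truth table:", np.round(G @ p, 3))
```

In a full “circuit neural network” as described in the DeepMind post, many such nodes and the wiring between them would be learned jointly; the sketch only shows how a discrete gate choice can be made differentiable so that gradient descent can optimize it.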