EDA research is alive and well: the DAC 2023 technical program received a record-high number of submissions. For the Research Track, the conference's Technical Program Committee reviewed 1,156 submitted research manuscripts and accepted 263 for presentation and publication. In addition, 269 Engineering Track submissions were reviewed, with 71 accepted for presentation.
Hyperscaler updates: Microsoft, Meta, Google
Microsoft and AMD are reportedly collaborating on artificial-intelligence chips. Contrary to what one might expect, the AI processor design is being provided by Microsoft, not by AMD: it is the Athena chip that made the news a couple of weeks ago. Microsoft is also reportedly providing financial support to bolster AMD's AI efforts.
Meta (Facebook) has reportedly hired an Oslo-based team of at least ten engineers that, until late last year, was building artificial-intelligence networking technology at British AI chip unicorn Graphcore. According to Reuters, Graphcore closed its Oslo office as part of a broader restructuring announced in October of last year.
Among the many innovations introduced at the recent Google I/O event, Google Cloud announced the private-preview launch of its next-generation A3 GPU supercomputer for training and inference of generative AI and large language models. The A3 VMs combine Nvidia H100 Tensor Core GPUs with Google's custom-designed 200 Gbps IPUs; GPU-to-GPU data transfers bypass the CPU host and flow over interfaces separate from other VM networks and data traffic. This enables up to 10x more network bandwidth than Google's A2 VMs. At scale, the A3 supercomputer delivers up to 26 exaflops of AI performance.