Artificial Intelligence keeps creating “circles” – that is, producing consequences that feed back on AI itself, further amplifying its impact. Besides the well-known chip performance “virtuous circle” (powerful processors enabling AI, and AI enabling even more powerful processors), another circle is looming – virtuous or vicious, depending on the point of view. It can be described as follows: chip fab capacity enables the construction of large datacenters, and the success of AI applications fueled by those datacenters generates demand for even more fab capacity. Clearly, this insatiable hunger for GPUs is “virtuous” from the point of view of semiconductor sales, but it can be considered “vicious” from an energy consumption standpoint. According to a CB Insights estimate, in 2024 the three million Nvidia H100 chips operating in datacenters around the world will collectively consume almost 14 TWh – more than the annual electricity consumption of the entire country of Guatemala (about 13 TWh).
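To give a sense of where a figure of that magnitude might come from, here is a rough back-of-envelope sketch in Python. The 700 W figure is the published board power of an SXM-form-factor H100; the fleet size follows the estimate quoted above, while the average utilization factor is an assumption introduced here for illustration, not a number from CB Insights.

```python
# Back-of-envelope estimate of annual energy use for a fleet of H100 GPUs.
# Assumptions (illustrative only, not CB Insights' actual model):
#   - 700 W per GPU (SXM H100 board power)
#   - a hypothetical average utilization factor
#   - no datacenter overhead (PUE = 1)

GPU_COUNT = 3_000_000        # operational H100s, per the estimate quoted above
TDP_WATTS = 700              # peak board power of an SXM H100
UTILIZATION = 0.75           # assumed average load factor (hypothetical)
HOURS_PER_YEAR = 24 * 365

energy_wh = GPU_COUNT * TDP_WATTS * UTILIZATION * HOURS_PER_YEAR
energy_twh = energy_wh / 1e12   # 1 TWh = 1e12 Wh

print(f"Estimated annual consumption: {energy_twh:.1f} TWh")
# ~13.8 TWh with these inputs – the same order of magnitude as Guatemala's
# annual electricity consumption (~13 TWh).
```

With these assumed inputs the estimate lands at roughly 13.8 TWh per year; different utilization or overhead assumptions would shift the result, but the order of magnitude remains the same.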
Sam Altman reportedly looking to spur the construction of new fabs
According to two press reports (here and here), OpenAI’s CEO Sam Altman is trying to launch a network of new fabs to secure an abundant supply of GPUs for his company and reduce its reliance on Nvidia. Reportedly, Altman has been discussing the project with investors from the United Arab Emirates and Japan’s SoftBank Group – as well as with potential foundry partners such as TSMC, Intel and Samsung. OpenAI would be the new fabs’ primary customer. Today, “big tech” companies are the biggest buyers of Nvidia’s H100 GPUs. In a Threads post (quoted by Reuters), Mark Zuckerberg recently wrote: “We’re building a massive amount of infrastructure. At the end of this year, we’ll have ~350k Nvidia H100s — and overall ~600k H100 equivalents of compute if you include other GPUs”.