Early on, the race for AI hardware was largely about securing large quantities of GPUs. The focus is now shifting toward high-bandwidth memory, which is emerging as the real bottleneck as major cloud companies attempt to handle the substantial data loads associated with newer, more efficient models.
In the first phase of the data center buildout undertaken by the big tech hyperscalers—Microsoft (MSFT), Alphabet (GOOGL), Amazon (AMZN), Meta (META), and Oracle (ORCL), among others—the race was to secure GPUs to run large language models ("LLMs") like ChatGPT, Gemini, etc. Hardware tech firms have broadly outperformed since this shift in the market, pulling far ahead of software firms, which have been selling off for the opposite reason: AI threatens to displace them.