The big topic on Wall Street is Palantir and perhaps Nvidia, notwithstanding the whole China issue after the meeting between Trump and Xi failed to bear any fruit. That is stirring up some nerves, as evidenced by the risk selloff so far this week, in particular the one yesterday.
It once again highlights the vulnerability of the AI trade to certain news and/or market developments. But perhaps one of the most thought-provoking headlines actually came from an interview earlier in the week with Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman. This is what Nadella had to say:
“The biggest issue we’re now having is not a compute glut. It’s power. You may actually have a bunch of chips sitting in inventory that you can’t plug in – in fact, that is my problem today. It’s not a supply issue of chips. It is actually the fact that I don’t have warm shells to plug into.”
And by “warm shells” here, he is referring to data center shells. In other words, the whole basis of the AI trade/argument might have to be re-examined. AI isn’t limited by silicon and chips. It is limited by electricity. Put differently, the real constraint is not compute but access to power and data center space.
Nadella’s point is that AI growth has taken on such a rapid pace that even the big guns are struggling to keep up with infrastructure development.
Just imagine Microsoft pouring billions into buying Nvidia chips and processors to “bolster AI investment”. Wall Street will cheer that on as it hopes for more innovative progress over the next six to twelve months.
However, what happens if Microsoft cannot actually find a home for these chips and processors due to the bottleneck above?
Essentially, those chips and processors are just sitting in boxes, not generating the revenue or AI progress that Microsoft would actually like. One can train AI models to better themselves and improve within six to twelve months. But setting up a data center with access to power? That is going to take far longer than that.
Every new data center that these firms, i.e. Microsoft, Google, Meta, etc., are looking to set up would need hundreds of megawatts of power. And setting up that kind of infrastructure and energy resource might take years.
That means the current demand picture and bottleneck point to a narrative in which “AI investment” should reward not those who pour the most money into chips and processors, but rather those who manage to lock in power early and get the right infrastructure set up faster than anyone else.
It’s easy and sexy to talk about progress in terms of AI innovation, with model improvements and flashy algorithms to show for it. However, that might not count for anything if you don’t have sufficient access to power to keep it up and scale bigger as demand continues to grow at such a breathtaking pace.
Besides that, Nadella also makes mention of not wanting to overbuy one particular generation of GPUs, considering the present bottleneck situation. And that is another important thing to note. If the useful life of a particular GPU model is only going to be cut even shorter by power bottlenecks, it makes no sense for tech firms to be pouring in so much capital when Nvidia is just going to release better and faster chips every year.
As such, the negative impact of this power bottleneck is not only hitting the likes of Microsoft but will also tie back to Nvidia at the end of the day.
Why is all this important?
The thing is, Wall Street might still be partying like it’s 2023 and 2024 in cheering on the AI boom. However, the landscape and meaning of that might have just changed completely under our noses. It’s no longer just about who has the best chips and the best AI models. It’s also about who has the most reliable and scalable access to power and electricity to keep the wheels turning.
And so, whichever name takes the lead in that space will be the one that investors reward very handsomely next in the AI trade.