With the rapidly growing demand for artificial intelligence outstripping many technology companies' capacity, a race is underway to build the infrastructure needed to feed ever more sophisticated, power-hungry AI models. Earlier constraints centered on GPU shortages and chip supply limits, but in a recent interview, Microsoft CEO Satya Nadella disclosed a surprising adversary: electricity.
Chips in Storage, Power in Short Supply
In a candid admission, Nadella revealed that Microsoft has "a lot of cutting-edge AI chips, largely GPUs, sitting in warehouses that haven't been installed." The reason is straightforward: "I can't plug them in because I don't have enough warm shells." The problem, in other words, is not chip supply but power capacity and unfinished data centers.
The term "warm shells" refers to data center buildings that are already constructed and equipped with the infrastructure needed to bring hardware online immediately: power, cooling, and networking. Because of the power bottleneck, however, many chips sit idle even though they are ready to deploy. In other words, where companies once competed for a limited supply of GPUs, the challenge now is standing up the supporting infrastructure fast enough to put those GPUs to work.
The Immense Energy Demands of AI
Operational AI data centers are far more power-intensive than standard compute facilities. A massive AI training cluster, for instance, can consume as much energy as a modest city. To meet these power and thermal demands, Microsoft and other technology giants are investing in new energy sources and cooling approaches, including microfluidic cooling, liquid cooling, and high-voltage direct current power distribution.
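The city-scale comparison can be made concrete with a rough estimate. The figures below (cluster size, per-GPU power draw, PUE, and average household load) are illustrative assumptions for the sketch, not numbers from Nadella or Microsoft:

```python
# Back-of-envelope estimate of an AI training cluster's power draw.
# Every figure here is an illustrative assumption.

GPU_COUNT = 100_000   # hypothetical cluster size
GPU_TDP_KW = 0.7      # ~700 W per modern training GPU (assumed)
PUE = 1.3             # power usage effectiveness: facility power / IT power
                      # (assumed; 1.0 would mean zero cooling/overhead cost)

it_load_mw = GPU_COUNT * GPU_TDP_KW / 1000   # IT load in megawatts
facility_mw = it_load_mw * PUE               # total facility draw

# Assuming a household averages roughly 1.2 kW of continuous draw:
households = facility_mw * 1000 / 1.2

print(f"IT load:       {it_load_mw:.0f} MW")
print(f"Facility load: {facility_mw:.0f} MW")
print(f"Equivalent to roughly {households:,.0f} households")
```

Under these assumptions the facility draws on the order of 90 MW continuously, comparable to a small city, which is why grid interconnection rather than chip supply becomes the gating factor.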
Companies are also pursuing new sources of electricity, from renewable generation to small modular nuclear reactors, to keep pace with rising demand. Microsoft has been signing power purchase agreements with utilities and investing in nuclear energy to secure stable, scalable power for its AI infrastructure.
Broader Industry Implications
Nadella’s confession reflects a critical shift in the AI hardware contest: sourcing chips is no longer enough; orchestrating the rapid construction and commissioning of large-capacity data centers has also become vitally important. As AI workloads increase exponentially, bottlenecks have shifted to power grid capacity and to the pace of construction and regulatory approvals.
Google, Amazon, Meta, and other AI leaders face similar constraints, with data center projects delayed or scaled back because the grid cannot support them. This structural imbalance, in which computing demand grows exponentially while infrastructure capacity grows linearly, could set the pace of AI innovation for years to come.
The Future of AI Infrastructure
The need to overcome the power bottleneck is driving heavy investment in energy infrastructure and cooling technologies. Microsoft's plan includes developing its own AI chips optimized for efficiency and building complete data center systems that integrate computing, networking, and cooling to make the best use of available power.
In the end, Nadella’s comments paint a picture of an AI era in which energy, not just silicon, is the most critical resource. No matter how many chips are available, the company that best handles the challenges of power infrastructure, regulatory environments, and construction logistics may beat its competitors.
As demand for AI continues to grow rapidly, this new bottleneck shows the kinds of problems tech giants must now solve as they work to make AI even more capable in the years ahead.