AI-Driven Temperature and Humidity Control in IDC Facilities
The AI boom has intensified scrutiny of data center energy use. IDC facilities face growing pressure to maintain precise temperature and humidity levels while minimizing power consumption. AI-driven environmental control systems analyze real-time sensor data, predict thermal loads, and adjust cooling dynamically, reducing energy waste and preventing overheating during peak computing periods. As AI models grow larger and more complex, public debate increasingly highlights the need for greener data center operations, and intelligent climate control is becoming a defining feature of next-generation IDC infrastructure.
Predicting the future is challenging: much depends on how many power-hungry graphics processing units (GPUs) are deployed to meet the demands of artificial intelligence, and on how much additional air conditioning is needed to keep data centers cool. A report from the International Energy Agency projects that by 2026, global data center power consumption will grow to at least 650 TWh (a 40% increase) and could reach as high as 1,050 TWh (a 128% increase).
1. Data centers support the artificial intelligence trend
Artificial intelligence is an extremely power-intensive technology, requiring data centers with sufficient computing power and power delivery capabilities to support its operation.
A recent study by the Swedish research institute RISE clearly demonstrates the enormous changes brought about by the rapid adoption of this technology. For example, ChatGPT reached 1 million users within just five days of its launch in November 2022 and 100 million users within two months, whereas TikTok took nine months and Instagram two and a half years to reach the same scale.
For reference, a Google search consumes only 0.28 Wh, equivalent to lighting a 60W light bulb for 17 seconds.
In contrast, training GPT-4, a model with roughly 1.7 trillion parameters, on 13 trillion tokens (word fragments) is a completely different proposition. Doing so required servers containing a combined 25,000 NVIDIA A100 GPUs, with each server consuming approximately 6.5 kW. OpenAI states that training took 100 days, consumed approximately 50 GWh of energy, and cost $100 million. Clearly, artificial intelligence will drastically change the game for data centers, requiring computing power and energy consumption far exceeding anything seen so far.
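A quick back-of-envelope check roughly reproduces the ~50 GWh figure from these numbers. The assumption of 8 GPUs per server (as in an NVIDIA DGX A100 chassis) is ours, not the article's:

```python
# Rough energy estimate for GPT-4 training, using the figures above.
# Assumption (not from the source): 8 A100 GPUs per server, as in a DGX A100.
GPUS = 25_000
GPUS_PER_SERVER = 8
SERVER_POWER_KW = 6.5        # per-server draw, from the article
TRAINING_DAYS = 100

servers = GPUS / GPUS_PER_SERVER                          # 3,125 servers
total_power_mw = servers * SERVER_POWER_KW / 1_000        # ~20.3 MW
energy_gwh = total_power_mw * TRAINING_DAYS * 24 / 1_000  # ~48.8 GWh

print(f"{servers:.0f} servers, {total_power_mw:.1f} MW, {energy_gwh:.1f} GWh")
```

The result, just under 49 GWh, is consistent with the "approximately 50 GWh" OpenAI reports.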
2. Data Center 48V Architecture
Early data centers used a centralized power architecture (CPA), which centrally converted the mains power (grid) voltage to 12V (bus voltage), then distributed it to each server, and used relatively simple converters to locally convert it to 5V or 3.3V logic levels.
However, as power demands increased, the current (and the associated I²R losses) on the 12V bus became unacceptably high, forcing system engineers to switch to a 48V bus layout. For the same delivered power, quadrupling the bus voltage cuts the current by a factor of four and the resistive losses by a factor of sixteen (four squared). This configuration is known as a distributed power architecture (DPA).
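The scaling argument can be illustrated numerically. The load power and bus resistance below are illustrative assumptions, not figures from the article:

```python
# I = P/V and P_loss = I^2 * R: moving from 12 V to 48 V cuts the bus
# current by 4x and the resistive distribution loss by 16x.
P_LOAD_W = 30_000     # hypothetical rack load
R_BUS_OHM = 0.0002    # hypothetical distribution-path resistance

def bus_loss(v_bus):
    i = P_LOAD_W / v_bus          # bus current at this voltage
    return i, i**2 * R_BUS_OHM    # resistive loss in the distribution path

i12, loss12 = bus_loss(12)
i48, loss48 = bus_loss(48)
print(f"12 V bus: {i12:.0f} A, {loss12:.0f} W lost")
print(f"48 V bus: {i48:.0f} A, {loss48:.0f} W lost")
print(f"current ratio {i12/i48:.0f}x, loss ratio {loss12/loss48:.0f}x")
```

Whatever the actual resistance, the ratios are fixed by the physics: 4x less current, 16x less loss.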
Meanwhile, processor and component voltages continued to fall, eventually dropping to sub-volt levels and necessitating multiple secondary voltage rails. To address this, two-stage conversion is used: a DC-DC converter known as an intermediate bus converter (IBC) steps the 48V down to a 12V bus, from which the other rails are generated as needed.
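One consequence of the two-stage chain is that the stage efficiencies multiply. A minimal sketch, with per-stage efficiencies and the core-rail load chosen as illustrative assumptions:

```python
# Two-stage (intermediate-bus) conversion: 48 V -> IBC -> 12 V -> point-of-load
# regulator -> sub-1 V rail. Per-stage efficiencies below are assumptions.
ETA_IBC = 0.97   # 48 V -> 12 V intermediate bus converter
ETA_POL = 0.92   # 12 V -> sub-volt point-of-load regulator

eta_total = ETA_IBC * ETA_POL        # overall conversion efficiency
p_rail_w = 500                       # hypothetical processor-rail demand
p_from_48v = p_rail_w / eta_total    # power that must be drawn from the 48 V bus

print(f"overall efficiency {eta_total:.1%}: "
      f"{p_from_48v:.0f} W drawn for a {p_rail_w} W rail")
```

Even with two quite good converters, roughly 11% of the input power is lost before it reaches the processor, which motivates the push for higher-efficiency switching devices discussed next.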
3. Demand for high-efficiency MOSFETs
Power loss within data centers presents two challenges for operators. First, they are paying for electricity that does not contribute to server operation. Second, any wasted energy becomes heat that must be managed. With hyperscale AI servers demanding up to 120 kW (a figure certain to grow over time), even at 50% load a peak conversion efficiency of 97.5% means a 2.5% loss: each server wastes 1.5 kW, equivalent to a constantly running electric heater.
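The arithmetic in that paragraph can be reproduced directly:

```python
# Wasted power per hyperscale AI server, using the article's figures.
RATED_KW = 120
LOAD_FRACTION = 0.5
EFFICIENCY = 0.975          # 97.5% peak conversion efficiency

drawn_kw = RATED_KW * LOAD_FRACTION        # 60 kW actually drawn at 50% load
wasted_kw = drawn_kw * (1 - EFFICIENCY)    # the 2.5% lost as heat

print(f"{wasted_kw:.1f} kW wasted per server")
```

Note that the loss scales with the power actually drawn, so at full load the same server would dissipate 3 kW, and every fraction of a percent of converter efficiency matters at fleet scale.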
Managing this heat may require cooling measures within the power conversion system itself, such as heat sinks or fans. These increase the size of the power supplies, occupying space that could otherwise house more computing capacity, and fans in particular consume additional power and add cost. Since temperatures inside data centers must be strictly controlled, excess loss also raises the ambient temperature, demanding more air conditioning. That is both a capital expenditure and an operating cost, and it, too, takes up space.