DC Thermal Management, Power Kit Is Getting Easier To Find And A Lot More Expensive

Months-long delays for critical datacenter infrastructure, including power and thermal management systems, have become the norm since the pandemic, but a fresh report from Dell'Oro Group shows that reality is giving way to more functional supply chains.

The analyst group's latest datacenter physical infrastructure report showed revenues up 18 percent during the first quarter. Some of that growth was down to larger shipment volumes, but prices are also on the rise.

Even though customers may not have to wait as long to get their kit, they're likely to pay considerably more than this time last year. Depending on the component, customers can expect to pay anywhere from 10-20 percent more, Dell'Oro analyst Lucas Beran told The Register, adding that thermal management and cabinet power distribution equipment have seen some of the largest price hikes.

Still, the situation has improved considerably since last year, when customers could find themselves waiting anywhere from 12-18 months just to get their hands on the UPSes, PDUs, and racks necessary to support additional capacity. Currently Dell'Oro estimates lead times at six to 12 months. For reference, lead times need to drop to three to six months before they're back to pre-pandemic levels.

"Throughout 2023, datacenter physical infrastructure vendors are going to whittle down their backlogs to get back to, more or less, historical norms," he said.

Long term, Beran notes that the trend toward higher TDP components and the hype surrounding generative AI are likely to have an impact on the market.

In particular, easier access to this kind of equipment could help datacenters cope with a new generation of watt-gobbling chips from Intel, AMD, and Nvidia. Today, CPUs can easily consume 400W under full load, up roughly 120W from last generation, and in addition to the challenge of getting all that power to the rack, operators also need to account for more demanding cooling requirements. 

The situation is even more challenging for customers with GPU clusters, which might pack four to eight 700W GPUs into a single chassis. For those training large language models, tens of thousands of GPUs may be required.
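The figures above translate into eye-watering power budgets. As a rough sketch (the GPU count and chassis configuration here are illustrative assumptions based on the ranges quoted, not any vendor's actual deployment):

```python
# Back-of-envelope power math using the figures cited above.
# All numbers are illustrative assumptions, not vendor specs.

GPU_WATTS = 700          # per-accelerator draw quoted in the article
GPUS_PER_CHASSIS = 8     # upper end of the four-to-eight range

# GPUs alone, per chassis -- before CPUs, memory, fans, or conversion losses
chassis_gpu_watts = GPU_WATTS * GPUS_PER_CHASSIS
print(f"GPUs alone per chassis: {chassis_gpu_watts / 1000:.1f} kW")

# A large training cluster: "tens of thousands" of GPUs
cluster_gpus = 20_000
cluster_megawatts = cluster_gpus * GPU_WATTS / 1_000_000
print(f"{cluster_gpus:,} GPUs draw roughly {cluster_megawatts:.0f} MW "
      "before cooling overhead")
```

Eight 700W GPUs already put a single chassis at 5.6 kW before anything else in the box is counted, and a hypothetical 20,000-GPU cluster lands in the tens of megawatts — which is why the cooling and power distribution kit Dell'Oro tracks matters so much here.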

There are technologies available to handle the thermal output of these systems — rear-door heat exchangers, direct-to-chip liquid cooling, and immersion cooling — however, all of them require substantial facilities investments to deploy and operationalize.

Despite the AI hype, Beran doesn't expect these trends to directly impact revenues until 2024 or 2025 at the earliest. Still, he remains optimistic about the future of the datacenter physical infrastructure market, and predicts revenues will grow by as much as 12 percent in 2023. ®
