The growing adoption of artificial intelligence (AI) is reshaping the design of datacentres to tackle the power, cooling and networking challenges posed by increasingly powerful AI models, according to a panel of industry experts at Gitex Asia 2025.
For one, the emergence of reasoning models has dramatically increased computational needs, said Luke Mackinnon, senior vice-president and managing director for Asia at Australia’s NextDC, noting that such models can generate 50 times more tokens and require 150 times more compute power than conventional one-shot inference models.
“It’s really those reasoning models that are the catalyst for a lot of the sovereign and enterprise AI clouds that we expect to see,” he said. “And of course, it’s all about cooling and power density.”
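Those multipliers translate into a steep jump in raw demand. The back-of-the-envelope sketch below, in Python, takes only the 50x token and 150x compute figures from Mackinnon’s remarks; the baseline per-query numbers are illustrative assumptions, not figures from the panel.

```python
# Back-of-the-envelope scaling of reasoning-model demand.
# Only the 50x token and 150x compute multipliers come from the panel;
# the baseline figures below are illustrative assumptions.

BASELINE_TOKENS_PER_QUERY = 500   # assumed one-shot response length
BASELINE_FLOPS_PER_QUERY = 1e12   # assumed compute per one-shot query

TOKEN_MULTIPLIER = 50             # reasoning models: 50x more tokens
COMPUTE_MULTIPLIER = 150          # reasoning models: 150x more compute

reasoning_tokens = BASELINE_TOKENS_PER_QUERY * TOKEN_MULTIPLIER
reasoning_flops = BASELINE_FLOPS_PER_QUERY * COMPUTE_MULTIPLIER

print(f"Tokens per query:  {BASELINE_TOKENS_PER_QUERY:,} -> {reasoning_tokens:,}")
print(f"Compute per query: {BASELINE_FLOPS_PER_QUERY:.0e} -> {reasoning_flops:.1e} FLOPs")
```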
The surge in density has been a concern for datacentres for some time. Sunil Gupta, co-founder and CEO of India’s Yotta Data Services, noted that while traditional central processing unit (CPU) workloads might require 6-10kW per rack, AI workloads that run on graphics processing units (GPUs) demand considerably more power, even with rear door heat exchanger cooling systems.
“I’m looking at about 50kW per rack, which is eight to 10 times more than what you would put into a normal rack,” he said, adding that liquid cooling would be necessary from the outset to support future GPU designs that will require as much as 250kW per rack.
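Gupta’s numbers make the density jump easy to quantify. The sketch below works through the arithmetic for a hypothetical 100-rack hall; the rack count is an illustrative assumption, while the per-rack figures come from the quotes above, using the low end of the CPU range.

```python
# Rack power density arithmetic using the figures quoted by Gupta.
# The 100-rack hall size is an illustrative assumption.

CPU_RACK_KW = 6            # low end of the 6-10kW CPU range quoted
GPU_RACK_KW = 50           # current GPU racks, per Gupta
FUTURE_GPU_RACK_KW = 250   # future GPU designs, per Gupta
RACKS = 100                # assumed hall size for illustration

for label, kw in [("CPU", CPU_RACK_KW), ("GPU today", GPU_RACK_KW),
                  ("GPU future", FUTURE_GPU_RACK_KW)]:
    total_mw = kw * RACKS / 1000
    print(f"{label:10s}: {kw:>3}kW/rack -> {total_mw:5.1f}MW for {RACKS} racks "
          f"({kw / CPU_RACK_KW:.1f}x the CPU baseline)")
```

At 50kW per rack, even a modest 100-rack hall draws 5MW, which is why power distribution and liquid cooling dominate the conversation before a single GPU is installed.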
Retrofitting existing datacentres to accommodate AI workloads brings challenges of its own. Eugene Seo, managing director for datacentres at CapitaLand, explained that while the conversion is technically feasible, it is far from straightforward.
“To convert a cloud facility into an AI datacentre can technically be done. What’s more challenging is from a financial management standpoint,” he said, citing potential customer churn and the capital expenditure needed for upgrades such as coolant distribution units. “There’s a lot more piping and it gets operationally intensive.”
Networking in AI datacentres also differs significantly. Miles Tang, vice-president for AI datacentres at China Unicom Global, pointed out the high-speed interconnect requirements for AI clusters, as well as the need for multiple power supply units to be active simultaneously to meet the demands of power-hungry AI servers.
Asher Ling, chief technology officer and managing director for Singapore and Malaysia at Princeton Digital Group, stressed the need for reliable access to renewable energy.
“Is there unlimited access to cheap renewable energy with regulations that will enable us to go on that brown-to-green journey? That’s on the minds of a lot of the largest tech companies,” he said, adding that India and Australia have regulatory frameworks that have helped to facilitate renewable energy adoption by datacentres.
Seo concurred, noting that the datacentre is an extension of the energy distribution system. “It’s literally a substation, so how we think about the business is that renewable energy for datacentres and energy distribution are really two sides of the same coin,” he said.
Datacentre design
The distinction between AI training and inference workloads further complicates datacentre design. Training requires massive east-west traffic between servers within the datacentre, whereas inference generates more north-south traffic out to end users, demanding low latency and proximity.
“Inference will need to be closer to the consumption,” said Mackinnon, comparing its future proliferation to content delivery networks. Gupta added that inference is like any other web application where latency matters to users, potentially driving these workloads to edge locations.
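The proximity argument ultimately comes down to the speed of light. The sketch below estimates the best-case round trip over fibre, assuming the commonly cited propagation speed of roughly 200,000km per second (about two-thirds of light speed in a vacuum); the distances are illustrative and ignore routing and queuing delays.

```python
# Why inference moves to the edge: fibre round-trip latency vs distance.
# Assumes light travels at ~200,000km/s in fibre (a standard approximation);
# the example distances are illustrative and exclude routing/queuing delays.

FIBRE_KM_PER_SEC = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay over fibre, in milliseconds."""
    return 2 * distance_km / FIBRE_KM_PER_SEC * 1000

for km in (50, 500, 5000):   # edge site, regional hub, intercontinental
    print(f"{km:>5}km away -> {round_trip_ms(km):6.1f}ms best-case RTT")
```

Physics alone puts an intercontinental round trip at about 50ms before any processing happens, which is why latency-sensitive inference tends to gravitate towards edge locations.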
Looking ahead, operators anticipate further innovation but also face uncertainty. Mackinnon pointed to emerging approaches such as “GPU shards” that can handle smaller chunks of AI workloads in parallel, as well as liquid cooling as a service to manage the high costs and variable usage patterns of AI infrastructure.
However, Gupta warned of the risk of obsolescence and challenging economics. The rapid pace of development of GPU technology means infrastructure built today might struggle to support future chips, while short customer contract terms for GPU capacity – often less than a year compared with long-term colocation deals – make returns on investment uncertain. “It is going to be a very uncertain market for some time,” said Gupta.
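A simple payback sketch shows why sub-one-year contracts strain the economics; all figures here are hypothetical assumptions for illustration, not numbers from the panel.

```python
# Illustrative payback arithmetic for GPU capacity sold on short contracts.
# All figures are hypothetical assumptions, not numbers from the panel.

CAPEX_PER_RACK = 3_000_000   # assumed all-in cost of one GPU rack, USD
MONTHLY_REVENUE = 100_000    # assumed revenue per rack while under contract
CONTRACT_MONTHS = 10         # a sub-one-year GPU deal, as cited by Gupta

# Only the contracted months are guaranteed; everything after depends on
# re-letting hardware that newer GPU generations may have made obsolete.
guaranteed_recovery = CONTRACT_MONTHS * MONTHLY_REVENUE / CAPEX_PER_RACK
print(f"One contract recovers {guaranteed_recovery:.0%} of capex; "
      f"the remaining {1 - guaranteed_recovery:.0%} rides on future demand.")
```

Under these assumptions, a single short contract recovers only about a third of the outlay, leaving the balance exposed to exactly the obsolescence risk Gupta describes.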
Despite the challenges, the panellists agreed that the industry is at the forefront of a major technological shift. “We’re just at the cusp of this AI revolution,” said Ling. “We are definitely in the right industry at the right time of human history.”