ARTICLE

Powering AI at Scale: What Your Data Centre Should Be Delivering

Mar 16, 2026
STT GDC

Artificial intelligence (AI) is fast becoming a vital business tool, transforming organisations across industries from healthcare and finance to manufacturing and mobility. This shift has been especially pronounced in Asia Pacific, where research indicates that countries such as Singapore, Australia, New Zealand and South Korea are outpacing most North American and European markets in enterprise AI adoption.
 

Yet as more organisations embrace AI to boost operational efficiencies and reduce costs, many are finding that their data centres aren’t keeping pace. That’s not surprising: AI workloads are unlike anything traditional IT infrastructure was built to handle. 
 

AI workloads typically encompass highly compute-intensive processes such as model training and inference – where the model puts its training into action to respond to questions or make predictions. They also involve large-scale data processing and fine-tuning for specific applications in areas such as natural language processing.
 

Training a single advanced AI model can involve processing petabytes of data and executing quintillions of calculations, while inference workloads can also entail trillions of operations. These are processing challenges that would overwhelm standard facilities. 
 

To handle these workloads, organisations require AI-ready data centres that can deliver the massive power density, ultra-low-latency networking and advanced cooling systems necessary to sustain such large-scale processes. 
 

In turn, facilities that offer these capabilities are fast becoming critical business enablers. Without them, organisations risk falling behind competitors that can innovate and deploy AI at scale with greater speed and efficiency. But what are the core characteristics, design principles and infrastructure features that define an AI-ready data centre, and what should organisations prioritise when selecting a facility?
 

Putting performance and scale first

To handle AI’s higher power and performance demands, data centres will typically incorporate the following features.

  • High rack power density – Rack power requirements are rising rapidly in AI-enabled environments, driven largely by high-performance, GPU-based systems. While many inference and smaller-scale use cases continue to operate within the 20–40kW per rack range on CPU-based architectures, the growth of large-scale AI training environments places significantly greater demands on infrastructure – pushing leading operators to provision for much higher rack power densities than those required by legacy CPU-centric systems.

  • Dense compute clusters – Each cluster may comprise hundreds of GPU-dense racks, interconnected by high-speed networking and supported by advanced cooling. This can deliver equivalent deep-learning compute in roughly one-fortieth of the footprint of a conventional CPU-focused data hall.

  • Low-latency, high-speed interconnects – These ultra-fast network fabrics can deliver markedly lower latency (often approaching sub-microsecond levels) and far higher bandwidth than the traditional networks used in conventional data centres, enabling large datasets to be shared almost instantly for model training and real-time AI tasks.

  • Scalability – Infrastructure must be able to grow with increasing AI model complexity and deployment size.
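As a rough illustration of the arithmetic behind these density figures, the sketch below estimates the total electrical load of a GPU cluster from its rack count and per-rack power. All figures are illustrative assumptions, not measurements from any specific facility.

```python
# Illustrative sketch: estimating cluster-level power draw from rack density.
# Rack counts and per-rack wattages below are assumptions, not vendor specs.

def cluster_power_kw(num_racks: int, kw_per_rack: float) -> float:
    """Total electrical load of a cluster, in kilowatts."""
    return num_racks * kw_per_rack

# A hypothetical training cluster: 200 GPU-dense racks at 100 kW each.
training_kw = cluster_power_kw(num_racks=200, kw_per_rack=100)

# Delivering comparable throughput with legacy ~30 kW CPU racks would need
# far more racks, and correspondingly more floor space.
print(f"Training cluster load: {training_kw / 1000:.1f} MW")  # → 20.0 MW
```

Even this modest hypothetical cluster draws tens of megawatts, which is why provisioning for power density, rather than floor area alone, dominates AI-ready facility design.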
     

Designed to integrate compute, power and cooling

Organisations should also bear in mind the extraordinary power density, heat output and spatial requirements of modern AI hardware. In AI-ready facilities, these factors have already been considered. Most are designed holistically, with compute, power and cooling integrated from the ground up.
 

An effective approach includes:

  • Sizable, flexible floorspace – Advanced data centres need to support large numbers of dense GPU clusters, high-speed networking, effective power distribution, and advanced cooling systems. One example is the upcoming US$27 billion Hyperion campus being built by Meta, which will span more than 4 million square feet.

  • Clear ceiling heights and floor loadings – AI racks are not only taller than conventional racks but also depend on additional layers of services routed above them. Their greater weight, together with the denser services around and above them, requires greater loading capacity in both floor slabs and ceilings.

  • Optimised airflow and thermal management – Integrating liquid or hybrid cooling into the physical layout ensures that the intense heat from dense GPU or tensor processing unit (TPU) clusters is efficiently dissipated. While traditional racks often draw under 20 kilowatts (kW), AI-optimised racks already exceed 100 kW and are expected to exceed 1,000 kW within a few years, making cooling demands far greater.

  • High-capacity, resilient power and cooling delivery – Redundant, resilient power systems are needed to support energy-intensive AI workloads without bottlenecks. In data centres that are running AI training workloads, compute can operate as a single system, and downtime on even a single GPU can impact the current workload for the entire facility.   

  • Modular infrastructure – Pre-planned modular designs enable the rapid deployment of additional racks and clusters as AI models grow in complexity, reducing downtime and accelerating time to value.
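To give a sense of what "far greater cooling demands" means in practice, the back-of-the-envelope sketch below applies the standard heat-transfer relation P = ṁ·c_p·ΔT to estimate the water flow needed to carry away a rack's heat. The rack power and coolant temperature rise are illustrative assumptions.

```python
# Back-of-the-envelope sketch: coolant flow needed to remove a rack's heat load,
# using P = m_dot * c_p * delta_T. Figures are assumptions for illustration.

WATER_CP = 4186.0  # specific heat of water, J/(kg*K)

def coolant_flow_kg_per_s(rack_power_w: float, delta_t_k: float) -> float:
    """Mass flow of water needed to absorb rack_power_w with a delta_t_k rise."""
    return rack_power_w / (WATER_CP * delta_t_k)

# A hypothetical 100 kW rack with a 10 K coolant temperature rise:
flow = coolant_flow_kg_per_s(rack_power_w=100_000, delta_t_k=10)
print(f"Required flow: {flow:.2f} kg/s")  # roughly 2.4 kg/s (~2.4 L/s)
```

Roughly 2.4 litres of water per second, per rack, continuously – which is why liquid distribution has to be designed into the floor layout from the start rather than retrofitted.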
     

A robust infrastructure for high-performance workloads

Building on these design considerations, AI-ready data centres replace traditional cooling and power systems with infrastructure robust enough to support sustained, high-performance workloads. 

Most include:

  • Advanced cooling systems – Liquid or hybrid solutions can efficiently address the thermal demands of dense, high-performance compute deployments. ST Telemedia Global Data Centres’ (STT GDC) facilities, for example, support advanced cooling solutions such as direct-to-chip, immersion cooling and rear-door heat exchangers to deliver consistent reliability and maintain efficiency for high-performance computing (HPC) workloads.

  • Compatible power distribution – Certain AI racks can only accept specific power distribution topologies. AI-ready data centres are designed to provide not only these topologies, but also the flexibility to pivot to other technologies such as high-voltage direct current (HVDC) in future IT equipment refresh cycles.

  • Robust power distribution – AI training workloads can act as a single group that continually fluctuates between high and low power draw within milliseconds. These rapid shifts can lead to premature battery failures and create instability in other supporting systems, such as diesel generators – thus necessitating purpose-built, resilient infrastructure. 
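The scale of the load-step problem described above can be sketched with simple arithmetic: when a synchronised training job swings every rack between high and low draw at once, the upstream plant sees the sum of those swings as a single step. The rack counts and power levels below are illustrative assumptions.

```python
# Illustrative sketch of the synchronised load-step problem: an AI training job
# swings all racks between compute-heavy and communication-heavy phases at once.
# All figures below are assumptions for illustration.

def load_step_kw(num_racks: int, high_kw: float, low_kw: float) -> float:
    """Instantaneous power swing the upstream plant must absorb, in kW."""
    return num_racks * (high_kw - low_kw)

# 200 racks oscillating between 100 kW (compute phase) and 40 kW
# (communication/checkpoint phase) present a multi-megawatt step:
step = load_step_kw(num_racks=200, high_kw=100, low_kw=40)
print(f"Load step: {step / 1000:.1f} MW")  # → 12.0 MW
```

A multi-megawatt swing repeating within milliseconds is precisely the stress pattern that wears batteries prematurely and destabilises generators, hence the need for purpose-built power infrastructure.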
     

Operational readiness: Tools, talent and sustainability

Even with the right physical infrastructure, operational gaps in monitoring, skills and planning can undermine performance. AI-ready data centres counter these risks through AI-enhanced monitoring and planning. 

 

These approaches can dynamically adjust cooling and power consumption based on real-time sensor data to optimise performance and efficiency. Research indicates that Google has reduced energy usage for cooling by 40% across its data centres by deploying AI, while Microsoft has achieved a 30% reduction.  
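In spirit, sensor-driven cooling adjustment works like a feedback loop. The sketch below shows a deliberately simple proportional controller; real facilities (including the Google and Microsoft deployments cited above) use far more sophisticated, often ML-based, control, and every name and figure here is hypothetical.

```python
# Minimal sketch of sensor-driven cooling adjustment, assuming a simple
# proportional controller. Real systems are far more sophisticated;
# all names and figures here are hypothetical.

def adjust_cooling(current_temp_c: float, target_temp_c: float,
                   output_pct: float, gain: float = 5.0) -> float:
    """Nudge cooling output up or down in proportion to the temperature error,
    clamped to the 0-100% range."""
    error = current_temp_c - target_temp_c
    return max(0.0, min(100.0, output_pct + gain * error))

# Inlet air runs 2 degrees hot, so cooling output rises from 60% to 70%:
print(adjust_cooling(current_temp_c=27.0, target_temp_c=25.0, output_pct=60.0))  # → 70.0
```

The efficiency gains come from the other direction of the same loop: when sensors show racks running cool, the controller trims cooling output instead of running fans and pumps at a fixed worst-case setting.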


AI-ready data centres typically rely on specialised talent to manage these high-density, AI-optimised environments and ensure alignment with environmental, social and governance (ESG) requirements, embedding processes that maintain regulatory compliance and sustainability goals. 
 

Why AI-ready means future-ready

As all this indicates, AI-ready data centres empower organisations to harness the potential of this transformative technology by providing the robust, high-performance infrastructure required to support compute-intensive workloads. 
 

Crucially, AI-ready data centres are future-ready. Built to adapt to next-generation workloads and the evolving demands of AI, they can deliver the flexibility, scalability and resilience organisations need to grow and innovate with confidence, while optimising efficiency and sustainability. 
 

This is especially significant in Asia Pacific, where AI momentum is accelerating. According to IDC, AI and generative AI investments in the region are projected to reach US$175 billion by 2028, growing at a compound annual growth rate (CAGR) of 33.6% between 2023 and 2028.
 

Powering the AI-driven future

At STT GDC, we believe AI-ready infrastructure separates industry leaders from followers. That is why all our data centres are built to be AI-ready from the ground up, so customers can confidently deploy advanced platforms that unlock the technology's full potential. 
 

In addition, our STT Singapore 6 and STT Bangkok 1 data centres – the first of our facilities to be certified under the NVIDIA DGX-Ready Data Center program – offer optimised environments for DGX-based clusters and other accelerated computing platforms, streamlining deployment for advanced AI workloads. 
 

Built for resilience, sustainability and performance, our infrastructure empowers enterprises to innovate, scale and lead in an ever-evolving digital landscape.
 

Want to find out more about how STT GDC is powering the future with AI-ready digital infrastructure? Explore our solutions today.