Data center excellence in the age of machine learning, Big Data

Facility managers must balance fluctuating workloads and make sure cooling performance is optimized.

The age of the high-density data center is upon us, and it brings with it just as many risks as rewards.

According to Data Center Knowledge, the average rack density was 3kW to 5kW at the close of the last decade. Today, the averages are generally higher, with facility managers documenting ranges from 5kW to 13kW and, in some extreme cases, 25kW. For context, 25kW is enough electricity to power a large home or mid-sized business, only it's condensed into a single data center rack.

This colossal amount of energy is necessitated by the volume of data people generate every day (2.5 quintillion bytes by some estimates). Advanced analytics such as machine learning and artificial intelligence that make sense of Big Data also force the limits of rack server densities.

Data center managers, in an attempt to minimize footprint and improve efficiency while sustaining these new workloads, naturally gravitate to higher density rack servers. They take up less space, but their energy toll is significant, and it makes electrical load balancing a precarious act.

What's more, higher density racks generate more heat. Depending on the ambient temperature, a cooling failure could induce overheating in as little as one minute. It raises the all-important question: How do facility managers balance these massive, often fluctuating workloads and ensure they optimize cooling performance to handle the spike in heat exhaust?


First things first: Intelligent power distribution is a must

High-density racks need power distribution units (PDUs) that support real-time power monitoring and intuitive load distribution features. This is important for many reasons:

  1. Even load distribution: A well-designed PDU color-codes its sub-circuits so the most energy-hungry equipment in a given rack is distributed across sub-circuits, reducing the likelihood of a short. 
  2. Real-time load balancing: Making enough power capacity available without stranding power is nothing short of a data center trapeze act. Over-provisioning and under-utilizing capacity wastes money. Conversely, under-provisioning power capacity and then exceeding available limits will spell disaster during peak utilization times. The only way to walk this fine line between durability and efficiency in high-density data centers with fluctuating workloads is to monitor power utilization in real time and continuously compare that against existing capacities. Over time, the data trends will help you identify opportunities to add more equipment without risking shorts. 
  3. Switched power: Another way to improve efficiency without sacrificing durability is with remote switching. Intelligent PDUs with this function enable authorized staff to remotely cut the power on individual outlets of any given power strip. This means servers that are eating energy but are not currently in use can be remotely powered down so they don't needlessly consume electricity. 
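The real-time monitoring and balancing described above boils down to continuously comparing each sub-circuit's measured load against its usable capacity. The sketch below illustrates that comparison; the breaker ratings, derating factor, thresholds, and readings are all illustrative assumptions (real values would come from an intelligent PDU's monitoring feed), not a specific vendor's API.

```python
# Hypothetical sketch: classify each PDU sub-circuit's utilization against
# its derated capacity, flagging both overload risk and stranded capacity.
# In practice the load readings would come from the PDU's monitoring feed.

BREAKER_DERATE = 0.8  # common continuous-load derating of the breaker rating

def circuit_status(load_kw: float, breaker_kw: float,
                   low_util: float = 0.4) -> str:
    """Classify a sub-circuit's load against its derated capacity."""
    usable = breaker_kw * BREAKER_DERATE
    util = load_kw / usable
    if util >= 1.0:
        return "OVERLOAD: shed load or rebalance across sub-circuits"
    if util >= 0.9:
        return "WARNING: approaching derated capacity"
    if util <= low_util:
        return "UNDERUTILIZED: stranded capacity, room for more equipment"
    return "OK"

# Example: three sub-circuits on a hypothetical 5kW-per-branch rack PDU
for name, load_kw in [("A", 4.6), ("B", 3.2), ("C", 1.1)]:
    print(f"circuit {name}: {circuit_status(load_kw, breaker_kw=5.0)}")
```

Logging these classifications over time is what surfaces the trends mentioned above: circuits that sit in the underutilized band for weeks are candidates for additional equipment without risking a trip.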

Intelligent power is by far the most critical component of the high-density data center. Attempting to run power-hungry workloads without it puts you at risk of increased overhead in the form of wasted energy or, worse, disruptions borne of improper load balancing. 

As for the heat problem ...

More rack equipment means more heat is generated - heat that needs to be expelled.

In the past, rising temperatures have been taken as cues to dial up the cooling capacity. That won't work in high-density racks. Maintaining safe ambient temperatures requires consistently expelling hot exhaust back into return plenums, where the air can be treated. Cranking up cooling capacity will just waste more money in the long run because it won't get to the root of your hotspots. 

Accordingly, managers with high-density workloads need to use active containment chambers to save money on cooling capacity without putting their equipment at risk (remember, it only takes a minute in some cases before a high-density server begins to overheat).

Active containment chambers are installed above high-density racks. They use embedded, pressure sensor-powered fans to intelligently control the rate at which exhaust air is drawn into the return plenum - RPM adjusts according to current airflow conditions. This drastically reduces the chances that hot air will accumulate in or around high-density racks, and it's far more efficient than just increasing the cooling capacity. 
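The fan behavior described above amounts to a simple feedback loop: when the pressure sensors read above a setpoint (hot air accumulating faster than it is drawn off), fan speed rises; as pressure falls back toward the setpoint, speed settles. The sketch below is a minimal proportional-control illustration of that idea; the setpoint, gain, RPM limits, and pressure readings are assumed values for demonstration, not figures from any particular containment product.

```python
# Hypothetical sketch of pressure-driven exhaust fan control: a proportional
# controller adjusts fan RPM so the pressure measured in the containment
# chamber stays near a setpoint, pulling exhaust into the return plenum.

def next_fan_rpm(current_rpm: float, pressure_pa: float,
                 setpoint_pa: float = 2.5, gain: float = 400.0,
                 min_rpm: float = 600.0, max_rpm: float = 3000.0) -> float:
    """One control step: pressure above the setpoint (hot air building up)
    speeds the fan up; pressure below it slows the fan down."""
    error = pressure_pa - setpoint_pa          # Pa above/below target
    rpm = current_rpm + gain * error           # proportional adjustment
    return max(min_rpm, min(max_rpm, rpm))     # clamp to the fan's range

# Example: pressure falls toward the setpoint as exhaust is drawn off
rpm = 1200.0
for reading in [4.0, 3.1, 2.6, 2.5]:
    rpm = next_fan_rpm(rpm, reading)
    print(f"pressure {reading} Pa -> fan {rpm:.0f} RPM")
```

Because the fans only spin as fast as current airflow conditions demand, this approach consumes far less energy than running the room's cooling plant harder, which is the efficiency argument the article makes.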

The high-density data center is facilitating a brave new world, one where massive quantities of data can be parsed by machine-learning engines to glean extraordinary insights. It will take every trick in the book for data center managers to protect these new capabilities. Our recommendation: Start with intelligent power and cooling. 
