Data center cooling simplified: Protect the cold from the hot

Balancing the efficacy and efficiency of data center cooling isn't necessarily easy, but it doesn't have to be overly complex, either. At its core, data center cooling is a matter of protecting cold air from hot air. 

Granted, data center operators often discover that despite taking care to seal every possible hole in their racks (around the edges, beneath them, etc.), hot spots continue to form. The reflexive response is to increase cooling capacity, which negatively affects power usage effectiveness (PUE). 
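To see why overcooling hurts PUE, recall that PUE is simply total facility power divided by IT equipment power. Here's a minimal sketch; the kilowatt figures are illustrative assumptions, not measured values:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: 1.0 is ideal; higher means more overhead."""
    return total_facility_kw / it_load_kw

# Illustrative numbers: the IT load stays the same, but dialing up the
# CRACs adds facility overhead, so PUE rises.
before = pue(total_facility_kw=1500, it_load_kw=1000)
after = pue(total_facility_kw=1650, it_load_kw=1000)
print(before, after)  # 1.5 vs. 1.65
```

Every extra kilowatt spent on cooling raises the numerator without adding a single watt of useful compute.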

In practice, though, dialing up cooling capacity and further dwelling on insulation and rack blanking panels isn't the best move. Both are important aspects of proper thermal control, but neither can fully address bypass airflow and air recirculation, which, according to TechTarget contributor Vali Sorrel, comprise the dastardly duo most likely responsible for foiling your data center cooling efforts – especially in high-density facilities. 

Active containment for high-density racks

"Actively expel exhaust that accumulates in the tops and rears of racks."

Yes, blanking panels are important for segregating the hot and cold aisles. However, they only work if hot air is actually kept out of the cold aisle in the first place. In other words, you might have near-perfect insulation of your rack, but you're not actually protecting your cool air if, the moment it reaches the rack, it mixes with warm exhaust that never made it into the return plenum. Your IT load will then imbibe this mix of treated air and exhaust. 

When this happens, increasing cooling capacity can help to an extent. Specifically, it can lower the temperature of the recirculated air mixture enough to keep it safe for servers. However, as Sorrel pointed out, that means your airflow intake temperatures at the front of the rack will have to be far lower than necessary, perhaps in the mid-50s degrees Fahrenheit, which is incredibly inefficient. 
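The arithmetic behind that inefficiency is just a weighted average: rack intake temperature is a mix of supply air and recirculated exhaust. A minimal sketch, with illustrative temperatures and recirculation fraction (not figures from the article):

```python
def mixed_intake_f(supply_f: float, exhaust_f: float, recirc: float) -> float:
    """Rack intake temperature as a weighted mix of supply air and
    recirculated exhaust. recirc = fraction of intake that is exhaust."""
    return (1.0 - recirc) * supply_f + recirc * exhaust_f

def required_supply_f(target_intake_f: float, exhaust_f: float, recirc: float) -> float:
    """Supply temperature needed to hold a target intake despite recirculation.
    Solves the mixing equation above for supply_f."""
    return (target_intake_f - recirc * exhaust_f) / (1.0 - recirc)

# Illustrative: 100°F exhaust, 45% of intake recirculated, 75°F intake target.
print(round(required_supply_f(75, 100, 0.45), 1))  # 54.5 — mid-50s supply air
```

With no recirculation, supply air could simply match the 75°F target; it's the exhaust mixing in that forces supply temperatures down into the 50s.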

Thus, what appeared at first glance to be a data center cooling problem is revealed for what it really is: a data center containment problem. Making CRACs work harder may treat the symptom (higher rack temperatures), but it won't address the actual problem, and it will invariably have a negative impact on PUE. 

Solving the problem at its source requires two courses of action:

  1. Make sure that enough cool air actually reaches the rack in question. 
  2. Actively expel exhaust that may otherwise begin to accumulate in the tops and rears of racks. 

These can be achieved using active rack containment, which entails positioning containment chambers above individual racks. The "active" element comes into play through pressure-sensor-driven fans built into the containment chamber. Blade RPM automatically adjusts based on real-time air-pressure readings, so that exhaust continues to flow into return plenums. This drastically reduces recirculation airflow by making sure hot air is always being pulled up and away from the IT load. Thus, there is no need to overcompensate by increasing the amount of supply air.
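Conceptually, that pressure-driven fan adjustment is a simple feedback loop. Here's a hypothetical sketch of one control step; the setpoint, gain, and RPM limits are illustrative assumptions, not vendor specifications:

```python
def next_fan_rpm(current_rpm: float, pressure_pa: float,
                 setpoint_pa: float = 0.0, gain: float = 200.0,
                 min_rpm: float = 600.0, max_rpm: float = 3000.0) -> float:
    """One step of proportional control for a containment-chamber fan.

    Positive chamber pressure (exhaust backing up under the chamber)
    speeds the fans to push more air into the return plenum; negative
    pressure slows them. RPM is clamped to the fan's operating range.
    """
    error = pressure_pa - setpoint_pa
    return max(min_rpm, min(max_rpm, current_rpm + gain * error))

# Exhaust accumulating (+2 Pa above setpoint): fans speed up.
print(next_fan_rpm(current_rpm=1500, pressure_pa=2.0))  # 1900.0
```

A real controller would smooth the sensor signal and add integral terms, but the principle is the same: the fans chase a pressure setpoint so exhaust never lingers at the top of the rack.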

And as an added bonus, hot-aisle temperatures are significantly reduced since more exhaust is expelled directly into return plenums. This makes maintenance at the rear of cabinets significantly less grueling for staff. 

[Image: thermometer on a server rack. Caption: You need active containment for maximum hot air capture.]

Front-to-back airflow for ToR switches

The system described above, like most data center rack setups, facilitates front-to-back airflow, which is the norm for the majority of facilities. However, as more data center managers shift to using top-of-rack (ToR) switches, they've encountered a dilemma.

Many switch models facilitate side-to-side airflow. Meanwhile, other popular models have air intake fans on the same side as the switch ports, which data center operators usually orient toward the rear of the cabinet for ease of access. The latter setup means that intake air will be drawn from the hot aisle. Considering ToR switches are already positioned at the hottest part of the rack – top and rear – the threat of overheating is significantly compounded. If that switch fails due to overheating, it can induce an outage of the entire rack. 

To alleviate this potentially damaging scenario, data center managers need to redirect side-to-side or back-to-front airflow so that it moves front to back. This can be achieved by installing a relatively affordable piece of hardware in front of the ToR network switch that redirects airflow, regardless of its current orientation, into a front-to-back pattern. This option is far more cost-effective than replacing a perfectly functional, existing network switch. At a fraction of the cost, treated air will continue to be drawn in from the cold aisle while exhaust is expelled into the hot aisle – just the way it should be. 
