Network switch cooling: Challenges and solutions

Network switches sit in a dense tangle of cables, which makes them hard to keep cool.

Server room cooling is a constant question for data center managers, as there are quite a few cooling systems to choose from. Containerized cooling, liquid cooling and free air cooling are all viable ways to move heat out of a facility while maintaining energy efficiency and server health. One crucial part of the server rack that needs attention is the network switch, and keeping it cool is one of the biggest challenges facing today's data centers.

A step back: What's cool?
To make sure a data center achieves maximum efficiency and a strong power usage effectiveness (PUE) ratio, managers need to follow strict guidelines. According to the 2014 guidelines set forth by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), the maximum allowable temperature for data centers is 89.6 degrees Fahrenheit and the minimum is 59 degrees F, while relative humidity should stay between 20 percent and 80 percent. Data center managers who keep their facilities toward the higher end of the temperature envelope and the lower end of the humidity envelope can achieve optimum energy efficiency, which leads to long-term cost savings and a smaller carbon footprint.
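As a rough illustration of how those numbers might be applied, here is a minimal monitoring sketch that flags sensor readings falling outside the envelope quoted above. The threshold values are the ASHRAE figures from this article; the reading format, rack names and function are hypothetical assumptions, not part of any particular product.

```python
# Minimal sketch: check rack sensor readings against the ASHRAE envelope
# quoted above (59-89.6 F, 20-80% relative humidity). The reading format
# and example values below are illustrative assumptions.

TEMP_MIN_F = 59.0     # allowable minimum temperature, degrees Fahrenheit
TEMP_MAX_F = 89.6     # allowable maximum temperature, degrees Fahrenheit
RH_MIN_PCT = 20.0     # relative humidity lower bound, percent
RH_MAX_PCT = 80.0     # relative humidity upper bound, percent


def check_reading(rack_id: str, temp_f: float, rh_pct: float) -> list[str]:
    """Return a list of warnings for a single temperature/humidity reading."""
    warnings = []
    if not TEMP_MIN_F <= temp_f <= TEMP_MAX_F:
        warnings.append(f"{rack_id}: temperature {temp_f:.1f} F is outside "
                        f"the {TEMP_MIN_F}-{TEMP_MAX_F} F envelope")
    if not RH_MIN_PCT <= rh_pct <= RH_MAX_PCT:
        warnings.append(f"{rack_id}: relative humidity {rh_pct:.0f}% is outside "
                        f"the {RH_MIN_PCT:.0f}-{RH_MAX_PCT:.0f}% envelope")
    return warnings


if __name__ == "__main__":
    # Example readings: a switch mounted at the rear of a rack often sees
    # warmer exhaust air than the front-of-rack intake sensor does.
    readings = [
        ("rack-01-front", 75.2, 45.0),
        ("rack-01-rear-switch", 91.3, 38.0),
    ]
    for rack, temp, rh in readings:
        for warning in check_reading(rack, temp, rh):
            print(warning)
```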

But the question remains: What's the most useful and business-savvy way to cool the server room? Data center managers and executives need to choose among several kinds of cooling systems, and it can be a difficult choice without all the necessary information. This brings us to why network switch cooling is a challenge and what can be done about it.

Rack hygiene and network switches
Rack hygiene refers to the way a rack is designed and maintained throughout its lifespan. Poor rack design can lead to uneven distribution of cool air. For instance, network switches are usually mounted at the back of the cabinet to provide easier access to cabling. This presents a problem for cooling systems, however, because cool air can be wasted trying to reach that far back in the rack: the air ends up moving through space that has already been heated by the servers, sometimes severely diminishing its cooling capability.

In the technology industry, companies are beginning to notice the problems with switch cooling as servers become more physically dense. Microsoft and networking vendors such as Cisco formed the Consortium for On-Board Optics (COBO) in March 2015 to address the issues surrounding network equipment cooling. The basic issue is that the switches where servers' Ethernet cables terminate have become so densely packed that they are difficult to cool. The consortium's goal is to create a set of standards for switch design that supports optimum cooling efficiency across the rack.

"At this scale, a change that may seem insignificant when you look at one switch or one network interface card gets magnified by the million or more devices we have in our networks," said Brad Booth, the principal service engineer of networking at Microsoft, in a blog post about the event.

Booth's comment speaks to the data center cooling question as a whole. The consortium is essentially asking: What small changes to network switch design can make a big difference in cooling efficiency over the long run?

Solutions from Geist
When network switches don't receive enough cool air, the result can be hot spots and, potentially, overheated servers or fires. This dangerous situation can be prevented with Geist's SwitchAir cooling system. SwitchAir delivers cool air directly to network devices regardless of where they are mounted, including at the back of the rack. Instead of cool air becoming contaminated by hot exhaust from the servers, the switches receive cooling directly, even in the extremely dense environments Booth described.