High Performance Computing (HPC)

Date: 2015-01-19 22:42:46
Author: 10Gtek
No data center can afford to run out of cooling capacity. But just because temperatures are running too high doesn’t mean that the data center is nearly out of cooling capacity. Experts advise facility managers to take a closer look at airflow management before adding cooling capacity.


A recent Building Operating Management survey showed that 27 percent of respondents expected to run out of cooling capacity in their data centers within the following 24 months. Nearly three quarters of them planned to add cooling capacity to solve the problem.


To evaluate the need for additional cooling capacity, facility managers need to know three things, says Chris Wade, national technical services program manager, Newmark Grubb Knight Frank. “First they need to know what they have in the data center. They need to have comprehensive visibility of the data center assets. Then they need to know what the capacity of their cooling units is. And then they need to know the power capacity of their UPS — what their UPS can deliver. If these things are known, and the cooling capacity covers the load plus the required redundancy, then adding more cooling may not be required.”


The importance of the first two points is obvious, but the third is also critical for evaluating cooling capacity. “If I only have X amount of kW, I need to be able to cool X amount of kW,” says Wade. “If I’m already at that level, plus the required level of redundancy, adding cooling isn’t going to resolve the real issue — airflow.”
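Wade’s check comes down to a few lines of arithmetic. The sketch below is illustrative only (the function name, the N+x redundancy model, and every number are assumptions, not figures from the article), but it captures the comparison he describes: installed cooling, less the redundant units, against the UPS-deliverable IT load.

    # Minimal sketch of the capacity check Wade describes. All names and
    # numbers are illustrative assumptions, not figures from the article.

    def cooling_headroom(it_load_kw: float,
                         cooling_units_kw: list[float],
                         redundancy_units: int = 1) -> float:
        """Return spare cooling (kW) with `redundancy_units` units out of service.

        If the result is >= 0, the room can carry its load even after losing
        the redundant unit(s), so adding cooling is likely not the fix.
        """
        # Pessimistic N+x model: assume the largest unit(s) are the ones lost.
        units = sorted(cooling_units_kw, reverse=True)
        usable_kw = sum(units[redundancy_units:])
        return usable_kw - it_load_kw

    # Example: 400 kW of UPS-deliverable IT load, four 150 kW cooling units, N+1.
    spare = cooling_headroom(it_load_kw=400.0,
                             cooling_units_kw=[150.0, 150.0, 150.0, 150.0],
                             redundancy_units=1)
    print(f"Spare cooling with one unit down: {spare:.0f} kW")

A positive result here, paired with persistent hot spots, points to airflow rather than tonnage as the real problem.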


If a facility manager is having problems keeping the servers cool, the issue is often not a shortage of capacity but that the data center isn’t using the cooling capacity it already has. “Usually there are significant losses throughout the data center, most often the result of poor airflow management,” says Jason Clemente, design engineer, Integrated Design Group.

 

Those airflow problems fall into two categories: recirculated air, with warm air coming off the servers mixing with cold air coming from the cooling units; and bypass air, with cold air never reaching the servers.
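A first-order flow balance shows how the two categories arise. The sketch below is a deliberate simplification (in a real room, bypass and recirculation coexist locally, and all numbers here are made up for illustration): it simply splits the net imbalance between what the cooling units supply and what the server fans demand.

    # First-order airflow balance. In practice both failure modes occur at
    # once in different parts of the room; the net imbalance shows which one
    # dominates. All numbers below are illustrative.

    def airflow_balance(crac_supply_cfm: float, server_demand_cfm: float):
        """Split the flow imbalance into bypass and recirculation components."""
        bypass_cfm = max(0.0, crac_supply_cfm - server_demand_cfm)
        recirculation_cfm = max(0.0, server_demand_cfm - crac_supply_cfm)
        return bypass_cfm, recirculation_cfm

    # Example: cooling units push 60,000 CFM but server fans pull 70,000 CFM.
    bypass, recirc = airflow_balance(crac_supply_cfm=60_000,
                                     server_demand_cfm=70_000)
    print(f"Bypass air: {bypass:,.0f} CFM, recirculated air: {recirc:,.0f} CFM")

A 10,000 CFM shortfall means the servers make up the difference by ingesting their own exhaust, which shows up as hot spots even when chiller tonnage is adequate.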
In some cases, the airflow problems can be solved by low-cost measures like moving around perforated tiles or adding blanking panels.


But low-cost measures aren’t always enough. Investing in containment — using hot-aisle, cold-aisle, chimney, or in-rack approaches — is one option for data centers that haven’t already taken that step. For example, says Wade, if a data center has a lot of high-density servers, one option is to put those servers into a hot-aisle or cold-aisle containment system to further isolate warm air from cool air. “The goal is to cool the server by removing the heat. We know it’s hot, but we’re in a containment unit where the air inlets all face the cold aisle, and the heat exhaust is directed to a hot aisle and removed,” says Wade.
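The physics behind containment can be sketched with the standard sensible-heat relationship for sea-level air, heat in BTU/hr = 1.08 × CFM × temperature rise in degrees F (a textbook formula, not one from the article; the rack sizes below are illustrative). The smaller the temperature difference the cooling units can work with, the more air they must move for the same heat, and that difference is exactly what mixing erodes.

    # Sensible-heat airflow estimate: BTU/hr = 1.08 * CFM * delta_T_F
    # (sea-level air; 1 kW = 3,412 BTU/hr). Rack sizes are illustrative.

    def required_cfm(heat_load_kw: float, delta_t_f: float) -> float:
        """Airflow (CFM) needed to carry a heat load at a given temperature rise."""
        btu_per_hr = heat_load_kw * 3_412
        return btu_per_hr / (1.08 * delta_t_f)

    # A 20 kW rack with a 20 degF rise across the servers:
    print(f"{required_cfm(20, 20):,.0f} CFM")   # about 3,160 CFM

    # If bypass air dilutes the return stream and the usable delta-T at the
    # cooling unit drops to 12 degF, removing the same 20 kW takes far more air:
    print(f"{required_cfm(20, 12):,.0f} CFM")   # about 5,265 CFM

By keeping exhaust out of the inlets and cold air out of the return stream, containment preserves that temperature difference, which is why it can recover capacity that the room, on paper, already has.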