Department of Energy Using Warm Water to Cool New Data Center

By Sean Michael Kerner | Sep 5, 2012
http://www.enterprisenetworkingplanet.com/datacenter/department-of-energy-using-warm-water-to-cool-data-center.html

The U.S. government is building one of the most efficient data centers in the world.

The data center is being built by the U.S. Department of Energy's National Renewable Energy Laboratory (NREL) in partnership with HP and Intel and will house a new High-Performance Computing (HPC) system. The facility is known as the Energy Systems Integration Facility and is located in Golden, Colorado.

Steve Hammond, NREL Computational Science Director, told Enterprise Networking Planet that the goal of the new data center is to have a Power Usage Effectiveness (PUE) of 1.06, which is substantially better than the industry average of 1.91.
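PUE is the ratio of total facility power to the power delivered to the IT equipment itself, so a value of 1.0 would mean every watt goes to computing rather than to cooling and other overhead. A minimal sketch of the calculation, using purely illustrative wattage figures rather than NREL's actual loads:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only -- not actual NREL figures.
it_load = 1000.0                      # kW drawn by servers, storage, and networking
print(pue(it_load * 1.06, it_load))   # NREL's target: 1.06
print(pue(it_load * 1.91, it_load))   # industry average cited in the article: 1.91
```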

"The compute resources will support the breadth of research at NREL with increased efficiency and lower cost for research into cleaner energy technologies," Hammond said.

When it comes to NREL's own data center, Hammond stressed that his organization has taken a holistic approach.

"We have taken a chips to bricks approach measuring both the bytes and the btu's," Hammond said.

Warm Water Cooling

The PUE reduction for the NREL data center is being achieved by way of a number of innovations. One of the primary ones is the use of warm water to cool the data center and the server rows.

Ed Turkel, Manager of Worldwide HPC Marketing at HP, told Enterprise Networking Planet that when looking at power usage in the data center, much of it goes to the infrastructure used to cool the facility itself, typically large air conditioning units.

HP's new warm water approach is more efficient than air conditioning for a number of reasons. For one, the water pumps use less power than typical air conditioning unit fans. Turkel added that water carries heat far better than air, so a data center needs to move much less of it to achieve similar levels of cooling.
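One way to quantify that difference is volumetric heat capacity: at room temperature, a cubic meter of water stores roughly 3,500 times as much heat per degree as a cubic meter of air. A back-of-the-envelope comparison using standard textbook constants, not figures from HP or NREL:

```python
# Approximate properties at ~25 C (standard textbook values, not vendor data).
water_density = 997.0    # kg/m^3
water_cp      = 4186.0   # J/(kg*K), specific heat of water
air_density   = 1.18     # kg/m^3
air_cp        = 1005.0   # J/(kg*K), specific heat of air

# Heat absorbed per cubic meter of fluid per degree of temperature rise.
water_j_per_m3_k = water_density * water_cp   # ~4.2 MJ per m^3 per K
air_j_per_m3_k   = air_density * air_cp       # ~1.2 kJ per m^3 per K

print(f"Water stores ~{water_j_per_m3_k / air_j_per_m3_k:.0f}x more heat "
      "per unit volume than air for the same temperature change.")
```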

The warm water system runs through the floor of the data center as well as through the server racks.

"The thermal exchange is directly to water inside the rack so it's not exchanging heat with air inside the rack or anything like that, we're bringing the warm water to the servers themselves," Turkel said.

Going a step further, the new NREL data center will then reclaim the heat from the data center servers for other purposes, including heating the building.

"The classic data center has lots of cold air that is approximately 60 degrees supplied to the front of the racks, in an effort to help keep your chips from getting hotter than 150 degrees," Hammond said. "Then you get 80 degree hot air out of the back of the racks and you try to eject that heat and declare victory."

Hammond explained that, in contrast, the NREL approach will supply water at approximately 75 degrees and, after it runs through the servers, return water at 95 degrees. That return water will then be the primary heating source for the building.
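With a 75-degree supply and a 95-degree return, each kilogram of water carries away about 20 degrees Fahrenheit (roughly 11 degrees Celsius) of heat that can be redirected into the building. A rough estimate of the recoverable heat, where the flow rate below is a hypothetical figure for illustration rather than an NREL specification:

```python
# Supply/return temperatures from the article (Fahrenheit), converted to a Kelvin delta.
supply_f, return_f = 75.0, 95.0
delta_k = (return_f - supply_f) * 5.0 / 9.0   # 20 F difference ~= 11.1 K

water_cp = 4186.0        # J/(kg*K), specific heat of water
flow_kg_per_s = 10.0     # hypothetical flow rate -- purely illustrative

# Heat carried by the return water: Q = m_dot * c_p * delta_T
heat_watts = flow_kg_per_s * water_cp * delta_k
print(f"~{heat_watts / 1000:.0f} kW of heat available for building reuse")  # ~465 kW
```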

Intel Xeon Phi

While server cooling is a key contributor to power efficiency, NREL is going a step further by taking advantage of a new generation of Intel HPC chips, the Xeon Phi.

Steven Wheat, General Manager of High Performance Computing at Intel, explained to Enterprise Networking Planet that the Xeon Phi has an instruction set similar to Intel's Xeon E5, though there are a few key differences. For one, whereas an Intel Xeon E5 typically packs about 8 CPU cores, the Xeon Phi can have 50 or more.

"We have been able to demonstrate a teraflop of sustained performance on a single Xeon Phi processor," Wheat said.

From the networking interface perspective, NREL is using a flexible LAN on Motherboard (LOM) design in which Network Interface Cards (NICs) are not placed on every board. The flexible LOM design also saves power.

Linux

Sitting at the core of NREL's new HPC data center deployment is the open source Linux operating system. HP's Turkel explained that his company has gone to great lengths over the years to optimize Linux for power usage.

"With our latest generation systems we've taken many of the Linux process daemons and offloaded them from the system and we have them running in a management processor that is run on a node," Turkel said.

Sean Michael Kerner is a senior editor at InternetNews.com, the news service of the IT Business Edge Network, the network for technology professionals. Follow him on Twitter @TechJournalist.