Novel Approaches to Server Cooling

When it comes to lowering power consumption in the data center, it only makes sense to devote most of the attention to the chief energy culprit: the server.

By Arthur Cole | Posted Aug 20, 2010
While great strides have been made in reducing the amount of energy a typical server draws directly, dissipating the heat its components generate can produce equally impressive savings.

The question is how to do it. Power systems companies like APC have long advocated a facilities approach built around heat exchangers, hot- and cold-aisle containment and other techniques. The company's latest system, InRow OA, is an overhead unit that draws in the hot air rising from the server racks, cools it, and then recirculates it downward over hot components. The system features a newly designed Refrigerant Distribution Unit (RDU) with a non-toxic cooling substance that the company says makes it up to 50 percent more efficient than standard air- or water-cooled systems.
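For a rough sense of what that efficiency claim could mean in practice, here is a minimal back-of-the-envelope sketch in Python. It assumes a hypothetical 100 kW IT load, a baseline in which cooling draws about 0.5 watts for every watt of IT load (a commonly cited figure for traditional computer-room air conditioning), and reads "up to 50 percent more efficient" as half the cooling energy for the same heat removed; none of these inputs come from APC.

    # Back-of-the-envelope estimate of cooling-energy savings.
    # All inputs below are illustrative assumptions, not figures from APC.
    it_load_kw = 100.0        # hypothetical IT load in the room
    baseline_ratio = 0.5      # assumed watts of cooling per watt of IT load
    improvement = 0.5         # "up to 50 percent more efficient" read as half the energy

    baseline_cooling_kw = it_load_kw * baseline_ratio
    improved_cooling_kw = baseline_cooling_kw * (1 - improvement)
    saved_kw = baseline_cooling_kw - improved_cooling_kw

    print(f"Baseline cooling load: {baseline_cooling_kw:.0f} kW")
    print(f"Improved cooling load: {improved_cooling_kw:.0f} kW")
    print(f"Savings: {saved_kw:.0f} kW, ~{saved_kw * 8760:,.0f} kWh per year")

Even under these rough assumptions, the takeaway is that cooling overhead is large enough for a halving to rival the savings from more efficient servers themselves.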

Re-using cooled air to lessen the heat in the server room is one of the more effective means of conservation. But what if you could capitalize on all that hot air for an even more productive use, say, to generate additional electricity in-house?

That's the plan at Applied Methodologies Inc., which has been developing thermoelectric technology to the point where it is ready to deliver a working enterprise solution, provided it can line up financing. The company says that for about $20 it can outfit a standard server with a semiconductor module that converts differences in air temperature into usable electricity. The system produces about 10 volts and five amps in typical operating environments, which can then be used to offset consumption in the server itself or in other devices.
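For a sense of scale, here is a minimal sketch that takes the quoted 10-volt, five-amp rating at face value and assumes continuous, year-round output at an electricity price of about $0.10 per kWh; the price and duty cycle are assumptions, not figures from the company.

    # Electrical output implied by the quoted rating (10 V at 5 A), plus a rough
    # annual value. Electricity price and continuous operation are assumptions.
    volts, amps = 10.0, 5.0
    power_w = volts * amps                # P = V * I = 50 W
    annual_kwh = power_w / 1000 * 8760    # kWh per year if running continuously
    price_per_kwh = 0.10                  # assumed average price, $/kWh
    print(f"Output: {power_w:.0f} W, ~{annual_kwh:.0f} kWh/year, "
          f"~${annual_kwh * price_per_kwh:.0f}/year")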

It's an intriguing idea, but one that still has some hurdles ahead of it. First off, 10 volts at five amps works out to only about 50 watts, a small fraction of the several hundred watts a typical server draws, so the energy being generated is minuscule by comparison. And traditional thermoelectric technology is only truly efficient at temperatures much higher than the 80-115 degrees F found in most data centers. So to harvest more energy, you would have to let the server run hotter, which won't be terribly helpful to the processors.
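The temperature point can be made concrete with the Carnot limit, which caps the efficiency of any device converting heat to work, thermoelectric generators included; real modules capture only a fraction of it. Here is a minimal sketch using the article's 80-115 degrees F range as the cold and hot sides:

    # Carnot efficiency limit between the quoted data-center temperatures.
    def f_to_kelvin(f):
        return (f - 32) * 5 / 9 + 273.15

    t_cold = f_to_kelvin(80)     # ~300 K
    t_hot = f_to_kelvin(115)     # ~319 K
    carnot_limit = 1 - t_cold / t_hot
    print(f"Carnot limit: {carnot_limit:.1%}")   # roughly 6%

A ceiling of roughly 6 percent, of which practical thermoelectric modules convert only a fraction, is why the technology favors much hotter heat sources.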

Then again, what if someone were to devise a whole new class of server processors that dramatically reduce energy consumption and make heat dissipation a non-concern to begin with? That's the thinking at Smooth-Stone, which aims to repurpose the low-power chips used in mobile phones for data-intensive enterprise workloads. The company has the backing of Texas Instruments, Arm Holdings and other chip suppliers, although it faces the daunting hurdles of delivering 64-bit-class performance and of software that would have to be rewritten for the new architecture. But the company says it has chips in development that will overcome both the performance and software issues.
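The software hurdle is easy to see from inside a running program. The short Python sketch below (purely illustrative; it is not Smooth-Stone tooling) reports the CPU architecture and pointer width of the machine it runs on; anything that bakes in x86 or 64-bit assumptions, such as native extensions, hand-tuned assembly or binary data formats, has to be ported or recompiled before it will run on an ARM-based server.

    # Report the CPU architecture and pointer width of the current machine.
    # Software with architecture-specific pieces (native extensions, inline
    # assembly, word-size assumptions) must be ported or recompiled for ARM.
    import platform
    import struct

    print("Architecture:", platform.machine())           # e.g. 'x86_64' or 'armv7l'
    print("Pointer width:", struct.calcsize("P") * 8, "bits")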

Clearly, then, there is more than one way to draw down the heat load in the server farm, which is just as well considering that an industry as diverse as IT will need as many options as possible to make a meaningful dent in overall power consumption. And it's helpful to know that solutions available today are not the end of the line but will be complemented by newer designs still on the drawing board.
