Power Efficiency in Large and Small Packages

They say necessity breeds invention. If so, it's small wonder that so many energy efficiency and power management solutions are hitting the enterprise channel right now.

By Arthur Cole | Posted Aug 23, 2011

High energy prices have a way of focusing attention on the bottom line. And now that virtualization and consolidation have permeated data infrastructures to what many consider their practical limits, at least for the moment, there is a concerted effort under way to remake power architectures to meet more stringent demands for energy efficiency.

"As an industry, we have just begun to focus on power in the data center," says Andrew Feldman, CEO of SeaMicro, "so opportunities for improvement exist at the component, memory, CPU, system and data center levels. Going green is not just a feel-good slogan ? it produces significant cost savings. We are seeing large enterprise customers making concerted efforts to match their compute workload to the most power efficient server architecture."

At one level, this is reflected in the increasing demand for low-power server, storage and networking components. To that end, newly devised micro servers from SeaMicro and others are responding to specific needs in the enterprise: high-density, low-cost tasks like Web hosting and cloud services. Increasingly, though, micro architectures are finding their way into more general-purpose roles, such as front-end workload processing and even support for virtual environments.

The problem lies in determining the point at which the aggregate power draw of many low-power devices starts to exceed that of fewer high-end models. This is the classic dilemma between scale-up and scale-out architectures. Do you pack more cores into a single chassis and then parcel them out to multiple virtual environments? Or do you load up on separate devices and move the data handling and I/O responsibilities onto network resources? Or both?
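
A rough way to frame that break-even point is to compare aggregate power draw as load grows. The sketch below uses made-up illustrative figures for a hypothetical micro server node and a dual-socket box (not vendor specs), and assumes the larger machine draws about half its peak power even at idle, a common rule of thumb for non-energy-proportional servers:

```python
import math

# Hedged sketch: scale-out vs. scale-up power at a given load.
# All numbers are illustrative assumptions, not vendor specifications.
MICRO_WATTS = 45        # assumed draw of one micro server node
MICRO_THROUGHPUT = 1.0  # normalized work units per node

BIG_WATTS = 400         # assumed peak draw of a dual-socket server
BIG_THROUGHPUT = 10.0   # normalized work units at full utilization

def watts_for_load(load_units: float) -> tuple[float, float]:
    """Watts each architecture needs to serve a given load."""
    micro_nodes = math.ceil(load_units / MICRO_THROUGHPUT)
    scale_out = micro_nodes * MICRO_WATTS
    utilization = min(load_units / BIG_THROUGHPUT, 1.0)
    # Assume the big box draws ~50% of peak even when idle.
    scale_up = BIG_WATTS * (0.5 + 0.5 * utilization)
    return scale_out, scale_up

for load in (1, 3, 5, 8, 10):
    out, up = watts_for_load(load)
    print(f"load={load:>2}  scale-out={out:4.0f} W  scale-up={up:4.0f} W")
```

Under these assumptions, the micro servers win handily at light load, while the crossover arrives only as the big machine approaches full utilization; change the idle fraction and the answer changes with it.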

In all likelihood, the answer will come down to the data environment. There is some question as to whether the new low-power processors (Intel's Atom, AMD's Phenom and ARM Holdings' designs) are suitable for higher-order enterprise tasks. These chips were originally developed for mobile and cellular devices, and at best can provide the kind of multinode architectures suited to Memcached and Hadoop applications. But the heavy number crunching needed for database and other enterprise functions? That's a tougher call. Perhaps that's why we have yet to see a server-ready Atom release from Intel, which means enterprise users will have to make do with low-power Xeons like the new E3 line.

Low-power hardware is destined for the data center regardless of whether the powers that be make a conscious decision to deploy it. The simple fact is that, going forward, just about every piece of hardware in the channel will hold a data-per-watt advantage over previous generations. Normal refresh cycles, then, should steadily reduce overall energy consumption.
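
To see why refresh alone bends the curve, consider a simple compounding estimate. The per-generation gain below is an assumption for illustration, not a measured figure:

```python
# Hedged sketch: energy saved by normal refresh cycles, assuming each
# generation improves data-per-watt by ~25% (an illustrative figure).
BASELINE_KW = 100.0     # assumed IT load of the current fleet
GEN_IMPROVEMENT = 0.25  # assumed efficiency gain per generation

kw = BASELINE_KW
for gen in range(1, 4):
    # The same work on more efficient gear draws proportionally less.
    kw /= 1 + GEN_IMPROVEMENT
    print(f"after refresh {gen}: {kw:.1f} kW for the same workload")
```

Three refreshes at that rate would cut the draw by roughly half, with no deliberate efficiency program at all.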

Organizations looking to pursue a more active program will likely turn toward the growing legion of power management systems hitting the market. Vendors like Schneider Electric are quickly outfitting their power and cooling platforms with advanced management stacks, often with support for broader data management systems, providing a single view of how shifting data loads and various hardware/software configurations will affect consumption.

"Moving compute loads around alters the power and cooling picture significantly and may lead to lower utilization of power within the data center," says Henrik Leerberg, product line director for Enterprise Software at Schneider Electric. "However, by enabling managers of the physical layer to gain insight and transparency into where cooling is being consumed in real time and thereby distributing the capacity to where it is needed, power usage will be optimized significantly."

Schneider has developed a new InfrastruXure suite designed to integrate into the VMware vSphere platform, giving admins the ability to keep an eye on power and cooling requirements as data loads are migrated to and from underlying hardware components.

"By connecting the physical layer management and virtualization software, we can inform the IT manager where to place virtual loads or where not to place them," Leerburg says. "Power and cooling should follow the load, and because this is not practical in real time, the virtual environments must ensure that required capacity is available in the racks to which loads are being migrated. If the power and cooling infrastructures are interrupted or otherwise compromised, they can easily run out of capacity when most needed, leading to broken service level agreements."

Translation: The last thing you want is to suddenly discover that the resources you thought were available are not, because power and cooling can't handle the load.
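
In practice, that check amounts to verifying power and cooling headroom in the destination rack before a migration proceeds. Here is a minimal sketch of such a pre-flight check; the rack model, the figures and the 90 percent safety threshold are hypothetical, not Schneider's actual InfrastruXure or vSphere interface:

```python
# Hedged sketch of a pre-migration headroom check. The Rack model and
# the 90% safety threshold are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    power_capacity_w: float    # breaker/PDU limit for the rack
    power_draw_w: float        # current measured draw
    cooling_capacity_w: float  # heat the row cooling can remove
    heat_load_w: float         # current heat output

SAFETY = 0.90  # never plan past 90% of capacity (assumed policy)

def can_accept(rack: Rack, vm_est_watts: float) -> bool:
    """True if the rack has both power and cooling headroom for the load."""
    power_ok = rack.power_draw_w + vm_est_watts <= SAFETY * rack.power_capacity_w
    cooling_ok = rack.heat_load_w + vm_est_watts <= SAFETY * rack.cooling_capacity_w
    return power_ok and cooling_ok

racks = [
    Rack("rack-a1", 8000, 7400, 9000, 7600),
    Rack("rack-b2", 8000, 5200, 9000, 5600),
]
vm_watts = 350  # estimated marginal draw of the migrating load
targets = [r.name for r in racks if can_accept(r, vm_watts)]
print("eligible racks:", targets)  # -> ['rack-b2']
```

The nearly full rack is rejected even though it technically has a few hundred watts left, which is exactly the kind of margin a broken SLA hides in.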

One challenge in integrating power management into the data infrastructure is that it requires a blending not just of technologies but of personnel, too. Power and cooling generally fall under facilities management, which may not be attuned to the needs of the data side, and vice versa.

Emerson Network Power is looking to break down these barriers through its recent acquisition of Avocent. The idea is to provide a single framework for both power and data management, to at least get the two groups working in the same environment.

"Data center managers (need to) establish a daily working relationship with the facility managers," says Peggy Canale, government segment manager at Avocent ? Emerson Network Power. "Historically, the contact between these two teams has been contained to infrequent meetings on long-term requirements and planning. Emerson Network Power is about giving these two groups the common set of tools and a holistic view that bridges what IT needs and what facilities can deliver. In the power area, monitoring from a facilities perspective all the way down to the outlet level is already here."

In fact, the need to integrate data and facilities management has spawned a new class of management software: Data Center Infrastructure Management (DCIM). Over the past year, a number of power management firms like Server Technology Inc. and iTRACS have rolled out full DCIM solutions, designed for active monitoring and load control right down to the server level. In this way, admins are able to track data patterns and system interdependencies to ensure that minimal resources are being utilized while still maintaining service level guarantees.
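
The consolidation logic behind that promise can be sketched simply: pack loads onto as few servers as possible without pushing any machine past a utilization ceiling that would threaten its service levels. The greedy first-fit heuristic and the 70 percent ceiling below are illustrative assumptions, not any vendor's algorithm:

```python
# Hedged sketch: greedy consolidation under an SLA utilization ceiling.
# The 70% ceiling and first-fit-decreasing heuristic are assumptions.
SLA_CEILING = 70  # assumed max utilization (%) that still meets SLAs

def consolidate(loads: list[int]) -> list[list[int]]:
    """Pack CPU loads (%) onto the fewest servers, largest first."""
    servers: list[list[int]] = []
    for load in sorted(loads, reverse=True):
        for server in servers:
            if sum(server) + load <= SLA_CEILING:
                server.append(load)
                break
        else:
            servers.append([load])  # bring up (or keep) another server
    return servers

placement = consolidate([30, 25, 20, 15, 10, 5])
print(f"{len(placement)} active servers:", placement)
# -> 2 active servers: [[30, 25, 15], [20, 10, 5]]
```

Everything that doesn't make the cut can be powered down, which is where the watts actually come back.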

Before long, this ability could extend right down to the virtual layer. Microsoft and Duke University recently launched a research project aimed at tracking the energy consumption of individual virtual machines. With that knowledge in hand, power can be distributed according to application needs rather than system hardware, and then automatically adjusted as demand rises and falls.
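
A common approach in that line of research is to apportion a host's measured power to its VMs in proportion to resource use. The sketch below splits an assumed static draw evenly and the dynamic remainder by CPU share; the VM names and the CPU-only model are illustrative (real models also weight memory and I/O):

```python
# Hedged sketch: apportioning measured host power to individual VMs by
# CPU share. Names and the idle/dynamic split are illustrative.
HOST_IDLE_W = 150.0  # assumed static draw of the host

def vm_power(host_total_w: float, cpu_shares: dict[str, float]) -> dict[str, float]:
    """Split host power: idle draw evenly, dynamic draw by CPU share."""
    dynamic = max(host_total_w - HOST_IDLE_W, 0.0)
    total_share = sum(cpu_shares.values()) or 1.0
    idle_each = HOST_IDLE_W / len(cpu_shares)
    return {vm: idle_each + dynamic * share / total_share
            for vm, share in cpu_shares.items()}

readings = vm_power(320.0, {"web-01": 0.50, "db-01": 0.35, "batch-01": 0.15})
for vm, watts in readings.items():
    print(f"{vm}: {watts:.1f} W")  # sums back to the host's 320 W
```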

Data management is already complicated enough without the added burden of coordinating power and cooling resources. But modern enterprises are quickly evolving from a set of discrete environments to a more cohesive, organic whole in which changes in one area can substantially affect others.

This holds true for the building infrastructure as much as the data infrastructure.
