Interop: What Are Your Datacenter Metrics?

An analyst argues that the datacenter has become an exciting area of IT innovation.

By Sean Michael Kerner | Posted Sep 18, 2008

NEW YORK -- Datacenters. Every big enterprise has them, but how many actually have solid metrics to determine the value of their datacenter?

In a session at Interop, Andreas Antonopoulos, senior vice president and founding partner at Nemertes Research, asked participants how they measure their datacenter metrics. He noted that the metrics people use tell you a lot about their role or how they think of the datacenter: it can be measured in terms of servers, square footage, CPUs and the number of CPU cores. Yet there is another key metric that must always be put into the equation and measured against all the others -- power.

"97 percent of people we surveyed had no clue about how much power they used in terms of cost," he said. "The problem is power was almost free until five years ago, but that's not the case anymore. Now power costs."

So what's a datacenter manager to do? Antonopoulos argued that we should all follow Google, Yahoo and Microsoft (GYM) and build datacenters far away from dense urban areas, which tend to have higher energy costs, though the availability of IT staff can sometimes be an issue.

"Why is Google in South Carolina?" Antonopoulos asked. "Chinese T-shirts. South Carolina used to be a world center for cotton mills, but China devastated that industry, and so South Carolina has lots of power stations with spare capacity."

The other problem in measuring datacenter metrics is that most current datacenters were built for peak demand. It's a design that Antonopoulos argued makes the datacenter inflexible and inefficient, since demand is not predictable.

The solution is to move from a design architecture to a runtime architecture using provisioning tools and virtualization where servers can be repurposed and reallocated as needed.

Antonopoulos noted that he's seen datacenters waste power with servers idling while waiting for peak loads. As a rough estimate, he said that by spooling up servers and resources as required, instead of merely provisioning for peak capacity, datacenters could cut their power requirements by as much as 30 percent.
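A rough sketch of where a figure in that range can come from: compare a fleet that keeps every server powered on for peak demand against one that powers servers up and down to track actual load. The hourly utilization profile, server counts and wattages below are invented for illustration, not data from Nemertes.

```python
# Illustrative comparison of always-on (peak-provisioned) vs demand-based provisioning.
# The demand profile and power figures are invented for illustration.

PEAK_SERVERS = 100   # fleet sized for peak demand
WATTS_ACTIVE = 300   # draw of a busy server
WATTS_IDLE = 200     # draw of an idle but powered-on server

# Fraction of the peak-sized fleet actually needed in each hour of a day.
hourly_demand = [0.3] * 8 + [0.7] * 4 + [1.0] * 4 + [0.7] * 4 + [0.4] * 4


def daily_kwh_peak_provisioned() -> float:
    """Every server stays powered on; the unused ones sit idle."""
    total_wh = 0.0
    for demand in hourly_demand:
        busy = demand * PEAK_SERVERS
        idle = PEAK_SERVERS - busy
        total_wh += busy * WATTS_ACTIVE + idle * WATTS_IDLE
    return total_wh / 1000


def daily_kwh_demand_based() -> float:
    """Only the servers needed for the current load are powered on."""
    total_wh = 0.0
    for demand in hourly_demand:
        total_wh += demand * PEAK_SERVERS * WATTS_ACTIVE
    return total_wh / 1000


if __name__ == "__main__":
    peak, dynamic = daily_kwh_peak_provisioned(), daily_kwh_demand_based()
    savings = 1 - dynamic / peak
    print(f"Peak-provisioned: {peak:.0f} kWh/day, demand-based: {dynamic:.0f} kWh/day")
    print(f"Savings: {savings:.0%}")  # about 34% with these made-up numbers
```

With this made-up load profile the savings come out to roughly a third, in the same ballpark as the 30 percent Antonopoulos cited.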

Instead of tiered datacenter structures where Web, application and database servers all exist in separate silos, Antonopoulos strongly advocates a more flexible approach there as well. He argued that network architecture should be flat and simple: fewer layers, fewer hops, lower latency and higher capacity.

The key is virtualization, which enables better utilization and the ability to move servers around as needed. Virtualization is a particularly hot topic this week as VMware, Cisco (NASDAQ: CSCO) and others roll out new initiatives.

Problems With Virtualization

Overworked networking professionals face some difficulties with virtualization. "Virtualization is a virus," Antonopoulos said. "You bring it in, and it works, then you become a victim of its success and people think it'll work in other areas too."

When virtualization spreads, it also crosses organizational silos, since it traverses both networking and application disciplines. Still, it's a trend that Antonopoulos argues changes datacenter metrics, providing greater density, better power management and higher capacity utilization per square inch of datacenter space.

"Datacenters have become an exciting area of IT innovation," Antonopoulos said. "Datacenter doesn't just spew out bits. It's a building that sucks in electricity and shoots out processing," he explained. "It's a converter -- it converts electricity into computation."

Article courtesy of InternetNews.com
