Keeping Server Utilization in Check
For several years now, CIOs have been receiving plaudits for boosting the typical 20 percent utilization rate to 80 percent or more. As the mantra "do more with less" takes hold in the executive suite, however, many find themselves under pressure to keep producing gains, pressure driven largely by people who don't fully understand the mechanics of enterprise computing or the consequences of overdoing it.
The goal, then, is to find that sweet spot for every computing environment where maximum utilization is weighed against maximum reliability. Unfortunately, this is a moving target in these days of data fluidity and massive resource scalability.
But as with most things digital, there are many paths to this data center nirvana. Among the more promising are new data intelligence techniques that provide a more accurate picture of what is happening out there at any given time. CiRBA, for example, has a new set of metrics said to drill even deeper into server infrastructure for real-time insight into workloads and available resources. Rather than simply looking at CPU, memory and I/O utilization, the Fully-Loaded Utilization (UFL) metric takes into account service level agreements, risk tolerance and even disaster recovery (DR) policies to gauge exactly how many server cores are needed to handle actual workloads, reclaiming idle capacity in pursuit of maximum efficiency.
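The idea behind such a policy-weighted metric can be sketched in a few lines. This is a minimal illustration, not CiRBA's actual formula: every name and factor below (SLA headroom, risk buffer, DR reserve) is a hypothetical stand-in for the kinds of inputs the article describes.

```python
# Hypothetical sketch of a "fully loaded" core estimate: beyond raw CPU
# demand, reserve headroom for SLA targets, risk tolerance, and DR policy.
# All parameter names and factors are illustrative assumptions.
import math

def cores_required(peak_cpu_demand_cores: float,
                   sla_headroom: float = 0.25,    # spare capacity to absorb bursts
                   risk_tolerance: float = 0.10,  # extra buffer for a low risk appetite
                   dr_reserve: float = 0.50):     # capacity held back for DR failover
    """Estimate cores needed once operational policies are factored in."""
    loaded = peak_cpu_demand_cores * (1 + sla_headroom + risk_tolerance + dr_reserve)
    return math.ceil(loaded)

# A workload peaking at 16 cores of raw demand:
print(cores_required(16))                   # -> 30 with all policy reserves
print(cores_required(16, dr_reserve=0.0))   # -> 22 if no DR reserve is required
```

The point of the sketch is that "utilization" measured against the loaded figure, rather than raw CPU demand, is what tells you how much idle capacity is genuinely reclaimable.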
As resources scale onto grid architectures and the cloud, however, keeping tabs on efficiency becomes even more problematic. Platform Computing says it has hit on a solution by extending the Apache Hadoop MapReduce model to enterprise-class distributed architectures. Platform's implementation in its LSF and Symphony products allows MapReduce to support more than 1,000 simultaneous applications across more than 40,000 cores, improving utilization while maintaining high reliability and central management.
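For readers unfamiliar with the programming model being extended here, a word-count example shows MapReduce in miniature. This is the generic model itself, not Platform's LSF/Symphony implementation: map emits key/value pairs, a shuffle groups them by key, and reduce aggregates each group, which is what lets the work spread across thousands of cores.

```python
# Minimal single-process illustration of the MapReduce model:
# map -> shuffle (group by key) -> reduce.
from collections import defaultdict

def map_phase(doc):
    """Emit a (word, 1) pair for every word in the document."""
    for word in doc.split():
        yield word, 1

def shuffle(pairs):
    """Group all emitted values by key, as the framework does between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Aggregate each key's values; here, sum the counts."""
    return {key: sum(values) for key, values in grouped.items()}

docs = ["high utilization", "high reliability"]
pairs = [kv for d in docs for kv in map_phase(d)]
print(reduce_phase(shuffle(pairs)))  # {'high': 2, 'utilization': 1, 'reliability': 1}
```

Because the map and reduce steps are independent per document and per key, a scheduler can farm them out to any free core, which is exactly the property that makes the model attractive for driving up utilization on large clusters.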
Improved utilization can also take place at the micro level, according to Diskeeper. Known primarily for its disk defragmentation technology, the company now applies the same idea to servers in its 2011 Enterprise Server product, whose IntelliWrite fragmentation prevention system cuts 85 percent of fragmentation before it even reaches the disk. By preventing fragmentation at the server, the company says it can improve file access and boost server/workload ratios. It has also released the Titan Defrag Engine for high-volume workloads.
No matter how you boost utilization, however, it is crucial that automation systems be kept fully up-to-date on a given resource set's acceptable limits, according to DatacenterDynamics' Yevgeniy Sverdlik. As more responsibility is transferred from human managers to machines, the chance of catastrophic failure increases due to technology's inability to make intuitive decisions beyond the scope of its predetermined mandate. New techniques, like the ability to gather utilization, temperature and other data directly from the servers themselves, should go a long way toward improving automated operations.
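The "predetermined mandate" idea can be made concrete with a small sketch: the automation acts only within explicitly configured limits and hands anything else to a human. The thresholds, field names and actions below are all hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical sketch: an automation loop that acts only within
# explicitly configured limits and escalates to a human otherwise.
# Limits, telemetry fields, and actions are illustrative assumptions.

LIMITS = {"cpu_util": 0.85, "inlet_temp_c": 27.0}  # acceptable ceilings

def decide(server):
    """Return an action the automation is allowed to take, or escalate."""
    if server["inlet_temp_c"] > LIMITS["inlet_temp_c"]:
        return "escalate"            # outside the mandate: a human decides
    if server["cpu_util"] > LIMITS["cpu_util"]:
        return "migrate_workload"    # within the mandate: rebalance load
    return "no_action"

print(decide({"cpu_util": 0.92, "inlet_temp_c": 24.0}))  # migrate_workload
print(decide({"cpu_util": 0.60, "inlet_temp_c": 31.0}))  # escalate
```

Keeping `LIMITS` current as hardware and policies change is precisely the maintenance burden the article warns about: stale limits are how an automated system drives a resource past its real safe operating range.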
Now that the utilization genie is out of the bottle, there's no putting it back in. Nor is there any reason to, considering the tremendous cost and efficiency benefits that high utilization provides. Just be careful how it is implemented: a simple misstep can cause a big-time headache.