Everyone likes to get new stuff. Heck, that’s what Christmas is all about, and why it has emerged as a primary driver of the world economy.
In the data center, new stuff comes in the form of hardware and/or software, which lately have formed the underpinnings of entirely new data architectures. But while capital spending decisions are almost always focused on improving performance, reducing costs or both, how successful has the IT industry been in achieving these goals over the years?
Is bare metal the solution to poor data center performance?
According to infrastructure consulting firm Bigstep, the answer is: not very. The group recently released an admittedly controversial study that claims most organizations would see a 60 percent performance boost by running their data centers on bare metal infrastructure. Using common benchmarks like Linpack, SysBench and TPC-DS, the group contends that stacking multiple layers of hardware and software abstraction actually hampers system performance and diminishes the investment that enterprises make in raw server, storage and network resources. Even such basic choices as the operating system and dual-core vs. single-core processing can affect performance by as much as 20 percent. The problem is compounded by advanced techniques like hyperthreading and shared memory access.
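The methodology behind such comparisons is straightforward: run an identical, deterministic CPU-bound workload on bare metal and on a virtualized host, then compare wall-clock times. The sketch below is a minimal, hypothetical harness in that spirit (it is not Bigstep's test suite), loosely mimicking SysBench's prime-counting CPU test:

```python
import time

def cpu_workload(n: int = 20_000) -> int:
    """Count primes below n by trial division -- a fixed, deterministic
    CPU-bound task, similar in spirit to sysbench's cpu-max-prime test."""
    count = 0
    for candidate in range(2, n):
        is_prime = True
        d = 2
        while d * d <= candidate:
            if candidate % d == 0:
                is_prime = False
                break
            d += 1
        count += is_prime
    return count

def benchmark(runs: int = 5) -> float:
    """Return the best (lowest) wall-clock time across several runs,
    which filters out transient noise from other processes."""
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        cpu_workload()
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    # Run this same script on bare metal and inside a VM or container,
    # then compare the reported times to estimate virtualization overhead.
    print(f"best of 5 runs: {benchmark():.3f}s")
```

Running the same script across environments isolates the overhead of the virtualization layers themselves, since the workload and interpreter are held constant.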
Even supposedly high-performance solutions like Flash memory can have a dark side when the surrounding infrastructure is over-engineered. At the recent Flash Memory Summit, Wu Peng, Alibaba’s chief technologist, said that performance degradation is a significant problem, but highly complex architectures make it difficult for applications to assess the health of the underlying hardware. The company is currently trying to streamline its hardware and software builds, concentrating first on error correction at the RAID level and looking at new programming models in the long term. Alibaba, by the way, is committed to an all-Flash architecture and is responsible for about 1 percent of worldwide Flash consumption.
How poor management affects data center performance
The real cause of over-architected data environments isn’t the technology, however. It’s the people in charge, says tech consultant and Enterprise Networking Planet contributor Joe Stanganelli. Performance shortfalls are usually the product of poor management rather than a lack of resources. New administrators often inherit the mess left by their predecessors and resort to patchwork problem-solving rather than reordering the underlying infrastructure. C-level executives should keep in mind that the answer to poor resource management isn’t always new resources, but new management.
To hear some tell it, though, humans won’t be of much use anyway once data center automation really kicks in. Sun Microsystems co-founder and former CEO Vinod Khosla told the recent Gigaom Structure conference in San Francisco that with data architectures on the verge of truly dynamic, software-driven configuration, human operators will be more of a hindrance than a help. Automated processes are certainly not foolproof, but the history of IT is rife with tales of major system failures and data losses due to human error.
Automation is not without fault either, of course. And without a human operator to keep an eye on things, a failure in an automated system could be even more devastating.
But clearly, the enterprise industry sees value beyond mere performance when it comes to virtual and cloud-based architectures, as well as Flash, multicore, SDN and all the other developments that have come along.
Resource flexibility, application- and user-centric functionality and distributed data operations all help to drive business models into the 21st century – even if they don’t eke every last byte per second out of the available silicon.