New Data Center Will Be Leaner, Meaner and Fluffier

Convergence, cost and cloud are driving the design of the next gen data center.

By Herman Mehling | Posted Jan 10, 2012

The days of massive silos of information are coming to an end, as enterprises strive to tie them together by leveraging key technologies. The overarching objectives: to cut costs and improve efficiencies. The new data center is smaller, smarter, more virtualized, more converged, greener and more open. 

Depending on who you talk to and what you read, you’ll find there are four to six major elements shaping the new data center. In a typical list of those elements, you’ll see some or all of the following: storage, virtualization, the cloud, consolidation, open computing, and the pursuit of green energy efficiencies. 

Analyst firm Gartner sees cloud computing, smarter physical designs, power saving, and big data workloads as the biggest factors in new data center builds over the next five years.

In a 2011 report entitled "Shrinking Data Centers: Your Next Data Center Will Be Smaller Than You Think," Gartner analyst David Cappuccio noted that implicit in the list above "is the continuing trend toward convergence in data center hardware, in which more functionality is contained in smaller containers." Often, those smaller containers are all-in-one appliances: server, storage, and networking functionality can now be obtained in a single appliance, available from vendors such as IBM, NetApp, and Oracle.

Traffic increasingly cloud-based

Cisco recently predicted that cloud computing activity will account for one-third of data center traffic by 2015, up from roughly 11 percent now. That is one of the key results of the vendor’s new Global Cloud Index, which forecasts that cloud computing traffic will grow 12-fold through 2015, climbing from 130 exabytes to a whopping 1.6 zettabytes, the equivalent of 5 trillion hours of Web conferencing.

In addition, Cisco estimated that about 51 percent of data center workloads will be cloud-based by 2014, a level the vendor suggested would mark a watershed in cloud computing’s growth because, for the first time, cloud workloads would exceed those running on traditional IT infrastructure. Cisco compiled the report by analyzing 30 terabytes of data generated each month over the past year from data centers worldwide, measurements from more than 45 million broadband speed tests, and third-party market forecasts.
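
Assuming the Global Cloud Index forecast spans the five years from 2010 through 2015 (the article gives only the end date), those figures imply roughly 65 percent compound annual growth. The short Python sketch below simply works through that arithmetic; the starting year is an assumption, not something stated in the article.

```python
# Back-of-the-envelope check on Cisco's cloud-traffic forecast.
# Assumptions (not from the article): the 12-fold growth spans the
# five years from 2010 to 2015, and 1 zettabyte = 1,000 exabytes.

start_eb = 130        # annual cloud traffic at the start of the forecast, in exabytes
end_zb = 1.6          # forecast annual cloud traffic in 2015, in zettabytes
years = 5

end_eb = end_zb * 1000
growth_factor = end_eb / start_eb           # ~12.3x, matching the "12-fold" claim
cagr = growth_factor ** (1 / years) - 1     # implied compound annual growth rate

print(f"Total growth: {growth_factor:.1f}x")
print(f"Implied annual growth rate: {cagr:.0%}")   # roughly 65 percent per year
```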

Outside forces impact design

"In the world of IT, everything has cascading effects, and in data centers the traditional methods of design no longer work without understanding the outside forces that will have an impact on data center costs, size and longevity," Gartner's Cappuccio said.

In the report, Cappuccio pointed out key differentiators between traditional data center design of the mainframe era and new-gen data centers.

"Today's data centers have many different demands on mechanical/electrical systems, depending on workload mix, function and age of equipment. New designs have taken this into account by adding different density zones for different workload types."

Most older data centers are woefully under-utilized from a space perspective.

"The physical floor space may be nearing capacity, but in many cases, the actual compute space within racks and servers is very poorly used, with average rack densities approaching just 60 percent worldwide," he added. "Newer designs focus on this issue and are developed to allow optimal rack density, often approaching 85 to 90 percent, on average, thus increasing the compute-per-square-foot ratio dramatically.

"The advent of private cloud environments and resource pooling will provide methods to enhance vertical scalability in the data center, while at the same time improving the productivity-per-kilowatt ratio.”

More and more data centers feature cloud computing, virtualization, green energy products, enhanced hypervisors with built-in management stacks, and converged network adapters. Increasingly, these separate technologies are mixed and matched by individual companies on a best-of-breed basis.

And don’t forget storage in this picture -- it’s the overweight gorilla that somehow manages to double or treble its size each year. Gartner estimated last year that storage capacity in the data center would grow 800 percent in 2011 alone. This explosive growth is driven by organizations’ desire to virtualize, which requires that storage be networked to enable live migration.

At the same time, the average data center has virtualized less than 30 percent of its servers, according to Forrester Research. 

The attractions of the cloud and consolidation are immense. CIOs and companies are implementing or at least evaluating cloud initiatives to simplify data center management, reduce costs, improve efficiencies, and drive growth with stagnant budgets. In concert with those initiatives, many organizations are consolidating their IT infrastructure to achieve the same objectives.


Green concerns are also paramount as companies now evaluate the "total cost per unit of work" of the entire data center solution (servers, storage, networks) and how they can leverage the efficiencies of footprint, power, and cooling to cut costs and improve ROI.
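
The "total cost per unit of work" framing lends itself to a back-of-the-envelope model. The sketch below is one illustration of how hardware cost, power draw, and facility efficiency might be rolled into a single figure; every number in it (capital cost, power, PUE, electricity price, workload units) is a hypothetical assumption, not data from the article.

```python
# Hedged sketch of a "total cost per unit of work" comparison.
# All figures are hypothetical and for illustration only.

def cost_per_unit_of_work(capex, power_kw, pue, kwh_price, workload_units, years=3):
    """Annualized cost (hardware plus facility power) divided by work delivered."""
    annual_capex = capex / years
    annual_energy = power_kw * pue * kwh_price * 24 * 365
    return (annual_capex + annual_energy) / workload_units

# Legacy gear: cheaper boxes, but power-hungry and in an inefficient facility.
legacy = cost_per_unit_of_work(capex=500_000, power_kw=120, pue=2.0,
                               kwh_price=0.10, workload_units=10_000)

# Consolidated gear: pricier and denser, in a more efficient facility.
consolidated = cost_per_unit_of_work(capex=650_000, power_kw=70, pue=1.3,
                                     kwh_price=0.10, workload_units=12_000)

print(f"Legacy:       ${legacy:.2f} per unit of work")
print(f"Consolidated: ${consolidated:.2f} per unit of work")
```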

10 GbE

Driven by the mounting costs and difficulties of maintaining separate data and storage networks, many organizations are looking to merge, or converge, their dedicated Fibre Channel storage networks onto a common Ethernet network via 10 Gigabit Ethernet (10 GbE). One device that is making headway in the data center is the converged network adapter (CNA).

This adapter combines the functions of a traditional Fibre Channel HBA (host bus adapter) and a 10 GbE network interface card (NIC). It supports data networking (TCP/IP) and storage networking traffic on a single I/O adapter using Enhanced Ethernet and Fibre Channel over Ethernet (FCoE).
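
One simple way to see the appeal of the CNA is to count adapter ports, and with them cables and switch ports, per server. The sketch below uses a hypothetical server count and redundancy assumptions purely for illustration.

```python
# Rough sketch of the cabling and port consolidation a CNA provides.
# Adapter counts and redundancy assumptions below are hypothetical.

servers = 200

def ports_per_server(fc_hba_ports, eth_nic_ports, cna_ports):
    """Adapter ports per server; each port implies a cable and a switch port."""
    return fc_hba_ports + eth_nic_ports + cna_ports

# Traditional design: redundant Fibre Channel HBAs plus redundant Ethernet NICs.
traditional = ports_per_server(fc_hba_ports=2, eth_nic_ports=2, cna_ports=0) * servers

# Converged design: a redundant pair of 10 GbE CNAs carrying FCoE and TCP/IP together.
converged = ports_per_server(fc_hba_ports=0, eth_nic_ports=0, cna_ports=2) * servers

print(f"Traditional: {traditional} cables and switch ports")
print(f"Converged:   {converged} cables and switch ports")
print(f"Reduction:   {1 - converged / traditional:.0%}")   # 50 percent fewer in this example
```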

To get a handle on what the new data center looks like or might look like, check out what Facebook has done with its open-source-centric data center in Prineville, Oregon. Facebook claims its data center has delivered a 38 percent increase in energy efficiency at 24 percent lower cost.

The company has published technical specifications and mechanical CAD files for the Prineville facility’s servers, power supplies, server racks, battery backup systems and building design.

Facebook says its open-source technology enabled the data center to achieve an initial power usage effectiveness (PUE) ratio of 1.07, compared with 1.5 for its existing facilities, which fall into the "best practice" category defined by the U.S. Environmental Protection Agency. PUE is an indicator of data center energy efficiency, and the lower the number, the better.
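
To see what those PUE figures mean in practice, the sketch below applies the standard definition (total facility power divided by power delivered to IT equipment) to an assumed IT load. Only the two PUE values come from Facebook's claims; the load figure is hypothetical.

```python
# PUE is total facility power divided by power delivered to IT equipment;
# a PUE of 1.0 would mean zero overhead for cooling, power distribution,
# lighting, and so on. The IT load below is an assumed figure used only
# to make the 1.07 vs. 1.5 comparison concrete.

def total_facility_power(it_load_kw, pue):
    """Total facility draw implied by a given IT load and PUE."""
    return it_load_kw * pue

it_load_kw = 1000   # hypothetical IT load

existing = total_facility_power(it_load_kw, 1.5)      # Facebook's existing facilities
prineville = total_facility_power(it_load_kw, 1.07)   # the Prineville data center

print(f"Overhead at PUE 1.5:  {existing - it_load_kw:.0f} kW")
print(f"Overhead at PUE 1.07: {prineville - it_load_kw:.0f} kW")
# For the same IT load, facility overhead falls from 500 kW to 70 kW.
```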

Herman Mehling has been writing about technology for more than 25 years, and has written hundreds of articles for leading technology publications and websites. He was an editor and reporter at Computer Reseller News, and a PR executive at a number of PR agencies in the San Francisco area. Mehling has edited three books, including How To Select A Vendor For Web Development (written by Salim Lakhani), and has written numerous articles, press releases, and white papers for corporations. Currently, he contributes regularly to www.devx.com, www.ecrmguide.com and www.esecurityplanet.com. Before working in technology, Mehling was the editor of a grocery trade magazine in Dublin, Ireland, and a reporter for a Dublin weekly newspaper. He currently resides outside of Paris.
