Rapid Data Center Evolution Forces Chip Makers to Adopt New Strategies

Intel, AMD and ARM are looking to address new workloads fueled by such trends as cloud computing, mobility, big data and social networks.

By Jeffrey Burt

"I don't think I've ever seen more disruptive things going on than now," Greg Scherer, vice president of server and storage strategy in Broadcom's Infrastructure and Networking Group, told eWEEK, pointing not only to such trends as cloud and big data, but also to software-defined networking (SDN) and storage along with the migration from 10 Gigabit Ethernet networks to 40GbE and eventually 100GbE. "We've been anticipating it happening. … Data centers have typically been pretty stodgy places. With what we're seeing now with cloud, data centers are anything but stodgy."

Diane Bryant, senior vice president and general manager of Intel's Data Center and Connected Systems Group, said during a recent two-day workshop with analysts and journalists that the chip maker is working hard to address the changes going on in the data center.

"We're going through a fundamental transformation in the way that IT is used," Bryant said. "Today, we look at IT as the service. IT is no longer supporting the business; rather, IT is the business. … Our goal is that all data center workloads, regardless of what they are, run best on Intel Architecture."

Chip makers are taking a more application-centric approach. Traditionally, Intel and AMD would roll out general-purpose server processors, which OEMs like Hewlett-Packard, Dell and IBM would put into their systems and then sell to businesses, which would take these general-purpose servers and fit them into their data centers.

With the rise of cloud computing, mobility and other trends, that's changing. Web 2.0 companies like Facebook and Google run massive data centers with huge numbers of small servers processing an increasing number of small workloads. Microsoft officials recently announced that their data centers are running more than 1 million servers.

These organizations are demanding infrastructures that are dense, high-performing, flexible, on-demand and highly energy-efficient. And, as illustrated by the Facebook-led Open Compute Project, they are willing to build their own systems if they can't find what they want on the market.

Systems makers like Hewlett-Packard (with Project Moonshot) and Dell (with its Copper servers) are responding with initiatives related to building small, energy-efficient microservers that use systems-on-a-chip (SoCs) from multiple vendors—not only Intel and AMD but also ARM partners like Calxeda and Marvell Technology. Chip makers are looking to meet that demand, not only with new processors optimized for particular workloads but also with broader architectural approaches, new partnerships and custom-chip businesses that can tailor their silicon to fit specific customer needs.

Much of the change and competition is happening at the lower end of the server spectrum. There will always be RISC-based systems and servers powered by Intel Xeons to handle high-end, heavy-duty business applications. Where the rapid changes are taking place—and where the competition is heating up—is in the space for smaller, denser and more energy-efficient systems.

Originally published on eWEEK on Aug. 12, 2013.