Intel Has Its Eyes on the Unified Network

In a webcast/teleconference last month, Intel Corporation announced its long-awaited new server platform—the Xeon 5500—along with a newly designed 10Gbps network controller, the 82599, suitable for high-speed NICs as well as mezzanine cards and LoM (LAN on Motherboard) deployments.

Intel has high expectations for this dynamic duo, which Pat Gelsinger, senior vice president and general manager of Intel’s Digital Enterprise Group, called in a blog post “the most important Intel server product since 1995’s Intel Pentium Pro processor.”

“The Intel Xeon processor 5500 series is the foundation for the next decade of innovation,” Gelsinger continued. “These chips showcase groundbreaking advances in performance, virtualization, and workload management, which will create opportunities to solve the world’s most complex challenges and push the limits of science and technology.”

Indeed, the new processor (formerly code-named ‘Nehalem-EP’) and network controller form the foundation for a whole new vision of data center function and functionality that Intel calls the Dynamic Data Center (DDC).

Steve Schultz, director of marketing for Intel’s LAN Access Division, told Enterprise Networking Planet that he envisions a three-phase transition to this new networking model. Many organizations are already in the initial, “server consolidation” phase, thanks to virtualization technology.

“Today, people are beginning to realize that virtualization brings more to the table, however,” Schultz continued. “It’s also a dynamic resource manager. Now I can run these virtual machines, move virtual resources to another machine, allocate resources appropriately – replicate and relocate a copy at a remote site for disaster recovery. Virtualization makes these things much easier.”

Virtualization as resource management, phase two, is a relatively new concept that’s beginning to take hold in many data centers, and will eventually culminate in the third phase, which the company sometimes characterizes as “private cloud computing,” but which Schultz more concretely identified as “automated, scalable, on-demand computing.”

“What we’re talking about is a very flexible set of servers, that are very intelligent and allow you to move resources from one group of servers to another group of servers, and back and forth,” Schultz said. “This is the vision that we see happening in the data center, and Intel brings some fundamental things to the table to deliver this vision.”

First and foremost is what Schultz called “this very powerful, balanced platform.”

“You need to have great performance, because you’re running lots of virtual machines on one physical machine, and you need to be able to scale your memory and I/O capabilities.”

In the move from traditional servers to virtualized servers, IT people typically just add more GbE ports to the physical server. “But today, we’re bringing more than double the processing power, so I can run even more virtual machines – and it’s just not practical to go on adding physical ports to that server; it becomes a cabling nightmare,” Schultz said.

“With this new platform, we can just condense all those GbE ports into 10GbE ports. Ten Gigabit Ethernet also gives me the extra bandwidth that I can run my storage traffic over with just one 10GbE link.” We’ll come back to this idea – the “unified network” – a bit later.
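
For a rough sense of the arithmetic behind that consolidation argument, here is a small illustrative sketch; the port counts and the storage figure are hypothetical examples, not Intel benchmarks.

```python
# Back-of-the-envelope consolidation math. All figures below are hypothetical
# examples, not Intel numbers: several legacy GbE ports plus a few gigabits of
# storage traffic folded onto a pair of 10GbE links.

GBE_PORT_GBPS = 1.0     # one legacy Gigabit Ethernet port
TEN_GBE_GBPS = 10.0     # one 10 Gigabit Ethernet port


def spare_capacity(gbe_ports: int, ten_gbe_ports: int, storage_gbps: float) -> float:
    """Headroom (Gbps) left after folding LAN and storage onto 10GbE links."""
    lan_demand = gbe_ports * GBE_PORT_GBPS
    capacity = ten_gbe_ports * TEN_GBE_GBPS
    return capacity - (lan_demand + storage_gbps)


# Hypothetical virtualized server: eight GbE NICs and ~4 Gbps of storage I/O
# collapsed onto two 10GbE ports leaves about 8 Gbps of headroom.
print(spare_capacity(gbe_ports=8, ten_gbe_ports=2, storage_gbps=4.0))
```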

Sunil Ahluwalia, 10 Gigabit Ethernet product line manager in Intel’s LAN Access Division, drilled down into some of the fundamental architectural innovations that underlie these performance improvements.

Gone is the front-side bus, through which the CPU communicated with the memory controller hub (the chipset) in earlier generations. In the new architecture, it is replaced by the QuickPath Interconnect (QPI) – “a really high-speed, high-bandwidth, point-to-point connection between the CPUs and the chipset,” according to Ahluwalia. “That gives a huge performance boost.”
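
To put rough numbers on that change, the sketch below compares the two interconnects, assuming a 1333 MT/s front-side bus and a top-speed 6.4 GT/s QPI link; neither figure comes from the briefing itself.

```python
# Rough, illustrative bandwidth comparison. Assumes a 1333 MT/s front-side bus
# and a 6.4 GT/s QPI link; both figures are assumptions for the sake of the math.

fsb_transfers_per_s = 1333e6   # 1333 MT/s, 64-bit wide, shared by all traffic
fsb_width_bytes = 8

qpi_transfers_per_s = 6.4e9    # 6.4 GT/s QuickPath Interconnect
qpi_width_bytes = 2            # 16 data bits per direction

fsb_gbs = fsb_transfers_per_s * fsb_width_bytes / 1e9   # ~10.7 GB/s, one shared bus
qpi_gbs = qpi_transfers_per_s * qpi_width_bytes / 1e9   # ~12.8 GB/s per direction, per link

print(f"FSB (shared): {fsb_gbs:.1f} GB/s")
print(f"QPI (per direction, per link): {qpi_gbs:.1f} GB/s")
```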

Moreover, the memory controller, previously a component of the chipset, has been integrated into the microprocessor itself. “So now you have three wide-bandwidth memory channels coming directly out of the CPU. That also gives a huge performance boost,” he said.
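
Assuming DDR3-1333 DIMMs (an assumption made here for the sake of the math, not a figure quoted in the briefing), three integrated channels work out to roughly 32 GB/s of theoretical peak bandwidth per socket.

```python
# Illustrative peak memory bandwidth for three integrated channels per socket.
# DDR3-1333 is an assumption for illustration; real-world throughput depends on
# DIMM population and is lower than these theoretical peaks.

ddr3_mts = 1333e6          # DDR3-1333: mega-transfers per second
bytes_per_transfer = 8     # each channel is 64 bits wide
channels = 3               # channels now integrated into the CPU

per_channel_gbs = ddr3_mts * bytes_per_transfer / 1e9     # ~10.7 GB/s
per_socket_gbs = per_channel_gbs * channels               # ~32 GB/s

print(f"Per channel: {per_channel_gbs:.1f} GB/s, per socket: {per_socket_gbs:.1f} GB/s")
```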

The new platform boasts a revamped I/O subsystem as well. “We have introduced PCI Express 2.0, which has twice the bandwidth of PCIe Gen 1,” said Ahluwalia. “That gives the connection to the 82599 controller twice the bandwidth of the previous generation.”
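
The doubling follows directly from the standard PCIe line rates; the quick calculation below assumes a x8 link to the 82599 and uses the published per-lane rates rather than any Intel test data.

```python
# Standard PCIe line-rate arithmetic (8b/10b encoding), shown only to illustrate
# the Gen 1 -> Gen 2 doubling. A x8 link width is assumed for the 82599 here.

def pcie_gbs(gt_per_s_per_lane: float, lanes: int) -> float:
    """Usable one-direction bandwidth in GB/s for an 8b/10b-encoded PCIe link."""
    payload_gbps = gt_per_s_per_lane * lanes * (8 / 10)   # strip encoding overhead
    return payload_gbps / 8                               # bits -> bytes

gen1_x8 = pcie_gbs(2.5, 8)   # PCIe 1.x: 2.5 GT/s per lane -> 2.0 GB/s per direction
gen2_x8 = pcie_gbs(5.0, 8)   # PCIe 2.0: 5.0 GT/s per lane -> 4.0 GB/s per direction

print(f"Gen 1 x8: {gen1_x8:.1f} GB/s, Gen 2 x8: {gen2_x8:.1f} GB/s per direction")
```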

So, what is the magnitude of these performance improvements? It’s big. With the previous platform (Xeon 5300 processor and 82598 network controller), bidirectional throughput tops out at something around 16 or 17Gbps. “As you add more 10Gb ports, the [platform is] not scaling with the number of ports,” Ahluwalia told Enterprise Networking Planet. The fundamental reason for throughput essentially ‘sticking’ at this point, despite adding I/O ports, is “saturation of the aforementioned front-side bus,” he explained.

“But with the Xeon 5500 platform, we are able to scale performance to over 50 gigs. As we add more 10Gb ports, we are able to see more throughput.”

The 50+ Gbps result was achieved using four 10Gb ports, which is as far as Intel has gone in its testing. “That’s two and a half times the bandwidth that we could have achieved with the previous generation,” Ahluwalia said, “quite significant in terms of the throughput.”
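
For context, the small sketch below relates those reported figures to the theoretical ceiling of four full-duplex 10GbE ports, using the round numbers cited above.

```python
# Context for the reported figures, using the round numbers cited above.
# Four 10GbE ports running bidirectionally have an 80 Gbps theoretical ceiling.

ports = 4
line_rate_gbps = 10
ceiling_gbps = ports * line_rate_gbps * 2    # 80 Gbps, full duplex on all ports

previous_platform_gbps = 17    # roughly where Xeon 5300 + 82598 topped out (FSB-bound)
new_platform_gbps = 50         # the "over 50 gigs" reported for Xeon 5500 + 82599

print(f"Ceiling: {ceiling_gbps} Gbps")
print(f"Previous platform: ~{previous_platform_gbps / ceiling_gbps:.0%} of ceiling")
print(f"Xeon 5500 platform: >{new_platform_gbps / ceiling_gbps:.0%} of ceiling")
```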

Significant enough, Intel feels, to help catapult the Ethernet network into a whole new role in the overall IT infrastructure. Read more on Intel’s plans for the Unified Network in our follow-up.
