PCIe 4.0 and the Data Center

PCIe 4.0—or perhaps Nvidia's PCIe alternative—holds the potential to transform data infrastructure on a micro level.

By Arthur Cole | Posted May 30, 2014

Big changes are coming to the macro networks that unite data center resources, but important things are happening on the micro level as well, some with the potential to redefine the entire concept of data infrastructure as we know it.

A case in point is PCI Express (PCIe). Long the domain of server- or perhaps rack-layer solutions, the high-speed specification is getting faster and more powerful. Some people think it is time for it to branch into new territory.

PCIe 4.0 breaks new ground

A company called Synopsys, for example, recently came out with the first PCIe 4.0 IP solution. The 4.0 spec doubles the transfer rate of today's 3.0, from 8 GT/s to 16 GT/s. The solution features a DesignWare PHY with new equalization and signal-integrity capabilities, plus a controller that provides port logic for endpoint, root complex, dual-mode and switching applications. It also includes verification IP for speedier test and development of software and firmware. Look for the system to make its debut next week at the PCI-SIG Developers Conference in Santa Clara, CA.

With backward compatibility to the 3.0 standard built in, there's no reason for developers not to start on their 4.0 designs right away, says Synopsys's Scott Knowlton. At 16 GT/s, a single lane can move roughly 2 GB/s in each direction, which works out to about 64 GB/s of aggregate bandwidth on a standard x16 link. And this transition should be much smoother than the jump from 2.0 to 3.0 considering that, at least with the DesignWare platform, there is no need to change the interfaces or the underlying architecture.
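
To see where those figures come from, here is a back-of-the-envelope calculation in Python. It assumes PCIe 4.0 keeps the 128b/130b line encoding introduced with 3.0 and ignores packet and protocol overhead, so the results are raw upper bounds rather than measured throughput.

```python
# Back-of-the-envelope PCIe 4.0 bandwidth estimate.
# Assumes 128b/130b line encoding (as in PCIe 3.0) and ignores
# TLP/DLLP protocol overhead, so these are raw upper bounds.

transfer_rate_gt_s = 16            # PCIe 4.0 signaling rate per lane (GT/s)
encoding_efficiency = 128 / 130    # 128b/130b: 128 payload bits per 130 line bits

# Per-lane, per-direction bandwidth in GB/s (each transfer carries one bit per lane)
per_lane_gb_s = transfer_rate_gt_s * encoding_efficiency / 8
print(f"Per lane, per direction: {per_lane_gb_s:.2f} GB/s")          # ~1.97 GB/s

# A x16 link, counting both directions of the full-duplex link
x16_aggregate_gb_s = per_lane_gb_s * 16 * 2
print(f"x16 link, both directions: {x16_aggregate_gb_s:.1f} GB/s")   # ~63 GB/s
```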

With this kind of power, maybe it's time to start thinking "outside the box" when it comes to PCIe, says the company's Navraj Nandra. At the very least, a box-to-box PCIe connection is starting to look very reasonable for new modular infrastructure. One key stumbling block, though, has been the need to carry a shared reference clock alongside the data signals, which requires heavier, more expensive cable to shield it from EMI and other hazards. However, the new SRIS (Separate Refclk Independent SSC) clocking architecture lets each end of a link run its own reference clock, enabling the kind of lightweight cabling already familiar from SATA and USB.
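
As a rough illustration of what SRIS asks of the link, the sketch below totals the worst-case frequency offset the receiver's elastic buffer has to absorb when the two ends run independent clocks. The ±300 ppm refclk tolerance and 0.5% down-spread SSC figures are the commonly cited PCIe values, used here as assumptions rather than numbers taken from the 4.0 draft.

```python
# Rough SRIS clocking budget: how far apart can the two ends of a link drift
# when each uses its own reference clock with independent spread-spectrum?
# Figures are the commonly cited PCIe values, assumed here for illustration.

refclk_tolerance_ppm = 300     # each refclk may be off by up to +/-300 ppm
ssc_downspread_ppm = 5000      # 0.5% down-spread SSC, applied independently per end

# Worst case: one end sits at +300 ppm with no spreading while the other sits
# at -300 ppm at the bottom of its SSC excursion.
worst_case_offset_ppm = 2 * refclk_tolerance_ppm + ssc_downspread_ppm
print(f"Worst-case SRIS clock offset: {worst_case_offset_ppm} ppm")   # 5600 ppm

# Compare with a shared-refclk (common clock) design, where the SSC modulation
# tracks at both ends and only the +/-300 ppm tolerance remains.
print(f"Common-clock offset: {2 * refclk_tolerance_ppm} ppm")         # 600 ppm
```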

Nvidia offers PCIe alternative

But even as PCIe strives to break new ground, other solutions are starting to invade its home turf. Nvidia, naturally a strong supporter of high-speed interconnects, offers a bus technology within its new Pascal architecture that is said to push 20 GT/s already. NVLink is built to tighter specifications than PCIe but uses similar point-to-point communications, albeit with a different signaling method. The platform is at the prototype stage right now, with multiple cards connected to a motherboard via an NVLink mezzanine connector. The design also provides a convenient means of cooling both the GPU and the CPU within the server rack.
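
For a sense of scale, the snippet below compares raw per-lane signaling rates only; it deliberately ignores encoding overhead and lane counts, since NVLink's link widths and line coding had not been made public at the time.

```python
# Raw per-lane signaling rates only; encoding overhead and lane counts differ
# between the two interconnects and are ignored here.

pcie4_rate_gt_s = 16    # PCIe 4.0 per-lane signaling rate
nvlink_rate_gt_s = 20   # figure cited for Nvidia's NVLink prototype

speedup = nvlink_rate_gt_s / pcie4_rate_gt_s
print(f"NVLink per-lane rate is {speedup:.2f}x that of PCIe 4.0")   # 1.25x
```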

Whether PCIe remains a board-level solution or works its way up to bigger things, it seems likely to remain a key facet of the enterprise data infrastructure for some time. The mere fact that flash and other high-speed memory devices are now a common sight within server and storage farms suggests that demand for rapid, scale-out interconnects will be heavy, particularly among web-facing, hyperscale organizations.

At the same time, though, it doesn't seem likely that we'll see a PCIe SAN any time soon. But with the advent of converged infrastructure, there will likely be an entirely new generation of data centers that won't bother with a SAN anyway.
