Putting More Fiber in Your Datacenter Diet

Data centers are beginning to feel the need for fiber in a big way. Find out what's being done to enable light speed in the DC.

By Arthur Cole | Posted Oct 16, 2015

For datacenters that must be upgraded for both speed and scale in order to meet emerging application requirements, there is no better solution than fiber optics.

Fiber has traditionally played a limited role on the local area network, serving mainly as a trunking mechanism in large facilities and campus networks. But with the rise of Big Data and the Internet of Things, not to mention social networking, collaboration and a plethora of other initiatives, the need to make the leap to light speed is growing.

Obviously, fiber infrastructure needs to follow the same guidelines as coax and even twisted-pair copper: namely, simplicity, efficiency and low cost. To that end, standards bodies like the IEEE are working on a number of new solutions aimed at driving bandwidth and connectivity without breaking budgets or creating overly complex physical infrastructure. A case in point is the new 802.3bm standard, which defines a 100GBASE-SR4 interface using only four lanes of 25 Gbps rather than ten lanes of 10 Gbps. The standard is already showing up in commercial products like the Opt-X Unity 24-fiber MTP system, which offers the advantage of migrating from 10 to 40 to 100 GbE on a single fiber installation.
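The appeal of the four-lane design comes down to simple multiplication: fewer, faster lanes deliver the same aggregate bandwidth over fewer fibers. A minimal sketch of that arithmetic (the lane counts are the ones discussed above; the helper names are illustrative, not from any standard):

```python
def aggregate_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Total throughput of a parallel optic: lanes times per-lane rate."""
    return lanes * lane_rate_gbps

def fiber_count(lanes: int) -> int:
    """Parallel optics use one fiber per lane in each direction."""
    return lanes * 2

# 100GBASE-SR10-style link: ten lanes of 10 Gbps
print(aggregate_gbps(10, 10), "Gbps over", fiber_count(10), "fibers")
# 100GBASE-SR4 (802.3bm): four lanes of 25 Gbps
print(aggregate_gbps(4, 25), "Gbps over", fiber_count(4), "fibers")
```

Both configurations land at 100 Gbps, but the four-lane version needs 8 fibers instead of 20, which is why a 24-fiber MTP trunk can carry the same capacity with room to spare.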

Meanwhile, the SWDM Alliance is pushing a shortwave wavelength division multiplexing scheme as a high-capacity multimode fiber solution for the datacenter. The group says the format will let the enterprise maximize its legacy duplex infrastructure by allowing four lasers to operate at different wavelengths on a single strand of fiber, so that one strand transmits and one receives instead of the eight-fiber runs that parallel optics require. This again provides an easier upgrade path from 10 to 100 GbE while reducing power dissipation and improving transmission distance. The group is backed by tech luminaries like Dell, Juniper, Corning and H3C.
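The fiber savings from stacking lanes as wavelengths can be sketched in a few lines. This is a back-of-the-envelope model, not an SWDM Alliance formula; the 25 Gbps lane rate and four-wavelength count are the figures cited above:

```python
import math

def duplex_fibers(target_gbps: int, lane_rate_gbps: int,
                  wavelengths_per_fiber: int) -> int:
    """Fibers needed when lanes can be stacked as wavelengths on one strand."""
    lanes = math.ceil(target_gbps / lane_rate_gbps)
    strands_per_direction = math.ceil(lanes / wavelengths_per_fiber)
    return strands_per_direction * 2  # one strand to transmit, one to receive

# Parallel optics: one wavelength per fiber -> 8 fibers for 100 Gbps
print(duplex_fibers(100, 25, 1))
# SWDM with four wavelengths per fiber -> a standard duplex pair
print(duplex_fibers(100, 25, 4))
```

Dropping from eight fibers to a duplex pair is what lets SWDM reuse the LC-terminated duplex cabling already in the ground.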

Some are looking at fiber as the ideal underlay for advanced abstract networks. Plexxi CEO Rich Napolitano told Fierce Communications recently that fast photonics communications offers the ability to construct a truly fungible network architecture that will be much more “definable” than current solutions. By implementing this kind of flexible underlay, enterprises will be able to define networks once per day or once per second if they choose, and it can be used to support any kind of overlay that applications and data require, not just those that are supported by a particular cabling and wiring scheme. It can also integrate easily into existing environments like VMware to enable a wide variety of node and cluster configurations.

The key piece of any fiber network is the optical module, of course, and as fabrics scale into Big Data territory, both the cost and the functionality of the modules will become increasingly important, says Kohichi Tamura, director of engineering and marketing at optics component developer Oclaro. When datacenters reach hundreds of thousands of servers, each with its own optical link, pricing will have to drop to $10 per gigabit, and eventually to as low as $1 per gigabit, for a commercially viable fabric. Form factor will also be a key consideration as bandwidth increases and the length of the cable run varies anywhere from a few meters to several kilometers.
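It is worth seeing why those price points matter at scale. A rough, hypothetical calculation (the 200,000-server fleet size and 100 Gbps per-server link are assumptions for illustration, not figures from Tamura):

```python
def fabric_optics_cost(servers: int, link_gbps: int, usd_per_gbps: float) -> float:
    """Total optics spend: one link per server, priced per gigabit of capacity."""
    return servers * link_gbps * usd_per_gbps

# Hypothetical fleet: 200,000 servers, each with a 100 Gbps optical link
print(f"${fabric_optics_cost(200_000, 100, 10):,.0f}")  # at $10 per gigabit
print(f"${fabric_optics_cost(200_000, 100, 1):,.0f}")   # at $1 per gigabit
```

At $10 per gigabit the optics alone run to $200 million for such a fleet; hitting the $1 mark trims that to $20 million, which is the difference between a line item and a non-starter.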

Few organizations are prepared for a full rip-and-replace of legacy networks, but it is fair to say that newly built datacenters will feature more fiber than their earlier brethren. This puts many organizations in a tough spot because conversion of legacy infrastructure into private clouds and other advanced virtual constructs is a key component of their upgrade strategies.

At some point, however, the virtual world must meet the physical, and if the network itself does not lend itself to high-speed, dynamic fabric switching, then the rest of the local data environment will fail to provide the level of service that users are getting elsewhere. It's at that point that the enterprise will either make the investment in fiber-based local networking or push its infrastructure needs onto the public cloud.

