Ethernet speeds are on a solid trajectory to 100 Gbps and beyond, and with advances in fiber optics and physical-layer switching already hitting the channel, the enterprise is under the gun to put a migration plan in place soon.
But while the goal is clear, the process is not. Multiple technological and architectural decisions will have to be made, and the ultimate solution will be customized to a fair degree to accommodate the vagaries of legacy infrastructure.
According to the Dell’Oro Group, the Ethernet market will soar to $26 billion by the end of the decade, led primarily by the deployment of 25 and 100 Gbps solutions in the data center and 2.5 and 5 Gbps across the campus. A key driver, of course, is the insatiable demand for data by an increasingly digital-savvy population, compounded by the rapid rise of connected IoT devices that will produce a dramatic increase in machine-to-machine (M2M) traffic.
Interestingly, though, Dell’Oro says deployment of both 10 and 40 GbE solutions will wane as the decade progresses, which presents a conundrum for organizations as they ponder the upgrade path to 100 G. Previously, the accepted path went from single-lane 10 GbE to four-lane solutions that could be aggregated to provide 40 G, adding 10 G lanes as needed until the cost of converting to full 100 G started to make sense. But now that the IEEE has ratified a slew of 25 GbE standards, the enterprise has the option of converting 10 GbE ports to 25 GbE now and then pursuing a more streamlined approach to 100 GbE going forward.
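The arithmetic behind that streamlining is simple: both generations aggregate lanes the same way, but 25 G lanes reach 100 G with the same four-lane layout that 10 G lanes need just to hit 40 G. A minimal Python sketch (the per-lane rates follow the IEEE signaling classes; the function itself is purely illustrative):

```python
# A minimal sketch of the lane arithmetic behind the two upgrade paths.
# Per-lane rates follow the IEEE signaling classes; everything else here
# is illustrative.

def aggregate_gbps(rate_per_lane_gbps, lanes):
    """Total throughput of a multi-lane Ethernet port."""
    return rate_per_lane_gbps * lanes

# Legacy path: 10 G lanes, aggregated four-wide, top out at 40 G.
print(aggregate_gbps(10, 4))   # 40  -> 40GbE (4 x 10G)

# The 25 G generation reaches 100 G with the same four-lane layout.
print(aggregate_gbps(25, 2))   # 50  -> 50GbE (2 x 25G)
print(aggregate_gbps(25, 4))   # 100 -> 100GbE (4 x 25G)
```

Because the 25 G generation reuses the same four-lane port layouts, much of the parallel cabling deployed for 40 G can, in principle, carry over on the road to 100 G.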
The question is, which is the better approach? Either way, at some point you have to convert ports and cabling to support higher data rates, but the exact when and how will be as unique as the legacy network architectures that populate the data center.
At best, says CommScope’s James Donovan, there are some basic design considerations that can make the 10/40/100 transition easier (and presumably the 10/25/50/100 one as well). These include minimizing the cable polarity variants in existing infrastructure while ensuring that the polarity you settle on does not limit your future deployment options. Adopt a uniform MPO pinning scheme for both patch cords and trunks, and make sure the trunk connectors are the same on both ends. And once the migration plan has been set, diagram each variation and double-check the physical model to ensure everything remains in sync.
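To make those rules concrete, here is a hypothetical audit in Python. The record layout (link IDs, polarity codes, pinned flags per end) is invented for this sketch; an actual check would run against your DCIM or cable-management database rather than a hardcoded list.

```python
# A hypothetical audit over cable-plant records, illustrating the design
# rules above. The record layout is invented for this sketch; a real check
# would read from your DCIM or cable-management database.

links = [
    {"id": "trunk-01", "polarity": "B", "end_a_pinned": True, "end_b_pinned": True},
    {"id": "trunk-02", "polarity": "A", "end_a_pinned": True, "end_b_pinned": False},
]

def audit(records):
    """Flag records that violate the migration design rules."""
    # Rule 1: minimize polarity variants across the plant.
    polarities = sorted({r["polarity"] for r in records})
    if len(polarities) > 1:
        print(f"WARNING: {len(polarities)} polarity variants in use: {polarities}")
    # Rule 2: connectors should match on both ends of each trunk.
    for r in records:
        if r["end_a_pinned"] != r["end_b_pinned"]:
            print(f"WARNING: {r['id']} has mismatched MPO pinning between ends")

audit(links)
```

Running a check like this after each change to the migration diagram is one way to keep the physical model and the documented plan in sync.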
Complicating matters even further, though, is the possibility of converting legacy copper cabling to fiber once data needs push into 100 G territory and beyond. Intel is about to start shipping 100 G modules that use silicon photonics to transmit data as light pulses rather than electrical signals. As reported in eWeek, the first two devices are a four-lane, single-mode device and a four-lane coarse WDM unit aimed at switch-to-switch optical interconnects in the data center. Future versions will feature pluggable and embedded designs with the ability to transmit over distances of two kilometers, ushering in the real possibility of tightly integrated, high-speed geo-distributed data environments. All the enterprise needs to do is swap out existing copper cabling in the data center for fiber.
Naturally, upgrading network bandwidth is trickier in larger enterprises that have more diverse traffic patterns. But the alternative is to do nothing, and that simply will not work as the rest of the world gravitates toward digital services that can only function properly on high-speed, highly flexible network architectures.