Connectivity in a Geo-Distributed Data Environment

The need to push connectivity across greater distances is becoming critical.

By Arthur Cole | Posted Sep 29, 2016

With both the cloud and the Internet of Things coming to dominate the enterprise data environment, the need to push connectivity across greater distances is becoming paramount.

Numerous platforms and service offerings have emerged in recent months that aim to forge tighter links not only between remote data centers, but between individual server and storage components within those data centers. Ultimately, the aim is to produce a single federated ecosystem that spans local, colocated and cloud-based infrastructure, all of it defined on an abstract, virtual layer to achieve limitless flexibility and scalability.

Digital Realty recently upped its game in the interconnect arena with its new Service Exchange platform, which coordinates connectivity between the enterprise, its cloud providers and the telecommunications infrastructure that lies between them. The service is built on an elastic SDN fabric from interconnect specialist Megaport that enables multiple virtual private connections (Virtual Cross Connects, or VCCs) to be accessed through a single port using Digital Realty’s MarketplacePORTAL management interface. In this way, organizations can easily scale their bandwidth requirements up and down as workloads dictate while maintaining connectivity to leading cloud providers like AWS, Google and Microsoft.
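To make the "many virtual circuits over one physical port" idea concrete, the toy model below sketches how several independently sized cross connects might share a single interconnect port, with each one resized on demand. The class and method names are hypothetical illustrations, not Digital Realty's or Megaport's actual API.

```python
# Illustrative sketch only: a toy model of multiple virtual cross connects
# sharing one physical port. Names are hypothetical, not a vendor API.

class PhysicalPort:
    """A single interconnect port with a fixed capacity in Mbps."""

    def __init__(self, capacity_mbps: int):
        self.capacity_mbps = capacity_mbps
        self.vccs: dict[str, int] = {}  # destination -> provisioned Mbps

    def provision_vcc(self, destination: str, bandwidth_mbps: int) -> None:
        """Add or resize a virtual cross connect, enforcing the port ceiling."""
        in_use = sum(bw for dest, bw in self.vccs.items() if dest != destination)
        if in_use + bandwidth_mbps > self.capacity_mbps:
            raise ValueError("Requested bandwidth exceeds port capacity")
        self.vccs[destination] = bandwidth_mbps

    def release_vcc(self, destination: str) -> None:
        self.vccs.pop(destination, None)


# One 10 Gbps port carrying independently sized circuits to several clouds.
port = PhysicalPort(capacity_mbps=10_000)
port.provision_vcc("aws-us-east", 2_000)
port.provision_vcc("azure-east", 1_000)
port.provision_vcc("gcp-central", 500)
port.provision_vcc("aws-us-east", 4_000)   # scale up as the workload dictates
print(port.vccs)
```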

Meanwhile, Infinera is out with a new version of the Cloud Xpress platform that provides 100 GbE data center interconnect (DCI) capabilities using multi-terabit optical links. Cloud Xpress 2 utilizes the company’s Infinite Capacity Engine – which itself incorporates the latest version of the company’s photonic integrated circuit (PIC) – that provides both rack-level and DCI-scale connectivity at 4.8 times the density of the original platform. This enables rack throughput of 1.2 Tbps that can be bumped up to 27.6 Tbps over single-pair fiber for long-haul applications. As well, the platform provides software-activated bandwidth allocation, built-in encryption and streamlined provisioning suitable for fully automated data operations.
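A quick back-of-the-envelope check, using only the figures quoted above, shows how those numbers relate: 1.2 Tbps per chassis corresponds to a dozen 100 GbE client ports, and 23 such chassis worth of traffic can be multiplexed onto one fiber pair to reach 27.6 Tbps.

```python
# Illustrative arithmetic based solely on the figures cited in the article.

RACK_THROUGHPUT_TBPS = 1.2     # per Cloud Xpress 2 chassis, as cited
LONG_HAUL_TBPS = 27.6          # per single fiber pair, as cited
CLIENT_PORT_GBPS = 100         # 100 GbE client interfaces

ports_per_chassis = RACK_THROUGHPUT_TBPS * 1000 / CLIENT_PORT_GBPS
chassis_per_fiber_pair = LONG_HAUL_TBPS / RACK_THROUGHPUT_TBPS

print(f"100 GbE ports per chassis: {ports_per_chassis:.0f}")              # 12
print(f"Chassis muxed onto one fiber pair: {chassis_per_fiber_pair:.0f}") # 23
```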

Also making waves in the DCI industry is a company called Inphi Corp., which utilizes a four-level pulse amplitude modulation (PAM4) technique to build 400 GbE connections within and between data centers. By bundling its PAM4 DSP IC with a quad-channel linear driver and transimpedance amplifier (TIA), the company says it has developed an integrated solution that conforms to the emerging CFP8 form factor and could conceivably push wide area 400G connectivity directly to the server. This would allow organizations to build dense networking fabrics across multiple locations without the multitude of transceivers, routers and other hardware that is currently required. One drawback, however: the technology is limited to 10 km, so it cannot provide direct connections on a global scale, yet.
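The appeal of PAM4 is simple signal arithmetic: four amplitude levels carry two bits per symbol, versus one bit for conventional NRZ signaling, so the same lane rate can be achieved at half the symbol rate. The sketch below illustrates this for a nominal 400 Gbps link split across eight electrical lanes; the figures are raw payload rates and ignore FEC and line-coding overhead, and the lane count is an assumption for illustration rather than an Inphi specification.

```python
import math

# Rough illustration of why PAM4 matters for 400 GbE. Raw payload rates only;
# real links add FEC and line-coding overhead, so actual symbol rates differ.

def bits_per_symbol(levels: int) -> int:
    return int(math.log2(levels))

TOTAL_GBPS = 400
LANES = 8                                    # assumed lane count for CFP8-era 400G
per_lane_gbps = TOTAL_GBPS / LANES           # 50 Gbps per lane

baud_nrz  = per_lane_gbps / bits_per_symbol(2)   # NRZ: 1 bit/symbol  -> 50 GBd
baud_pam4 = per_lane_gbps / bits_per_symbol(4)   # PAM4: 2 bits/symbol -> 25 GBd

print(f"Per-lane rate: {per_lane_gbps:.0f} Gbps")
print(f"NRZ symbol rate:  {baud_nrz:.0f} GBd")
print(f"PAM4 symbol rate: {baud_pam4:.0f} GBd")
```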

One technology that is pushing the distance factor, albeit at 100G rates, is 4-wavelength WDM (4WDM), which recently saw the creation of its own industry group aiming to foster specifications for distances up to 40 km. Longer runs of 20 to 40 km will be based on the LAN-WDM wavelength grid (IEEE 802.3ba), while 10 km connectivity will be met by extending the Coarse WDM (CWDM4) wavelengths past their traditional 2 km reach. Both solutions will use duplex single-mode fiber, with primary applications being data center interconnects and mobile backhaul. The group, which includes Broadcom, Brocade, Dell and Juniper as members, says its new specifications will allow for longer pipelines in smaller form factors and lower operational costs compared to current standards.
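The math behind the 4WDM approach is straightforward: four wavelengths, each carrying 25 Gbps, are multiplexed onto a single duplex single-mode fiber pair to deliver 100 Gbps. The short sketch below runs that calculation; the wavelength values shown are the commonly used CWDM4 grid and are included only for illustration.

```python
# Sanity check on the 4WDM aggregate rate: four 25 Gbps wavelengths on one
# duplex single-mode fiber pair. Grid values are the typical CWDM4 wavelengths.

CWDM4_GRID_NM = [1271, 1291, 1311, 1331]   # 20 nm spacing
PER_LAMBDA_GBPS = 25

total_gbps = PER_LAMBDA_GBPS * len(CWDM4_GRID_NM)
print(f"Aggregate over one duplex fiber pair: {total_gbps} Gbps")   # 100 Gbps
```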

With all of these developments occurring simultaneously, it can be difficult for the time being to gauge the right solution for any given use case. Connectivity is not merely a matter of cost and bandwidth consumption but of overall performance, which can vary greatly according to application and workload requirements.

And while it is true that much of the networking nitty-gritty is being masked by the rise of abstract networking, the enterprise should still pay close attention to what is happening on both local and wide area infrastructure in order to make informed choices when deploying the next-generation data ecosystem.

Arthur Cole covers networking and the data center for Enterprise Networking Planet and IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming.
