LAN vs. WAN: What Does the Cloud Demand?
Virtualization is breaking down the architectural silos within the datacenter, improving the free flow of information and providing much greater flexibility when it comes to managing data loads. The same thing is happening in the cloud, except this time, the silo isn’t within the datacenter but is, in fact, the datacenter itself. What does this mean for datacenter architecture?
As both internal and external resources become integrated under an overall cloud-based architecture, the question of where data and applications reside becomes moot to the user and, to a large extent, to the IT manager.
This has an interesting effect on enterprise networking: it demands the dissolution of barriers between local and wide-area networks. When applications and data requested from a desktop, laptop, or mobile device must traverse both long-haul and short-haul infrastructure, the attributes that currently define one network or the other become problematic. The LAN, for example, is not designed for the large-packet, bulk transfers at which carrier networks excel, while the WAN struggles with rapid exchanges of small packet loads between diverse and rapidly changing endpoints.
It’s no surprise, then, that research is shifting toward overcoming the limitations of current network topologies. On the plus side, both enterprise and carrier networks are largely based on the Ethernet protocol. Unfortunately, the many flavors of Ethernet often make it difficult to smoothly and seamlessly traverse the network architectures that separate users from data.
Consider the growing call for variable-rate Ethernet protocols on carrier networks. The IEEE disappointed a number of leading network engineers recently when it called for a single-rate format for the new 400 Gbps standard. Cisco Systems' Ron Johnson was among the first to note that greater flexibility on both the client side and the line side would allow organizations to establish a single 400 Gbps link to external resources and then utilize it as 25, 50, or 100 Gbps (or some other division) as the need arises.
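The usage model Johnson describes can be sketched in a few lines of code. This is purely illustrative (the `FlexLink` class and its methods are invented for this example, not a real API): a single 400 Gbps trunk is carved into sub-rate channels on demand instead of being locked to one fixed rate.

```python
# Illustrative sketch, not a real networking API: a flexible-rate
# 400 Gbps link carved into sub-rate channels as demand arises.

ALLOWED_RATES = {25, 50, 100}  # sub-rates cited in the article, in Gbps

class FlexLink:
    def __init__(self, capacity_gbps=400):
        self.capacity = capacity_gbps
        self.allocations = []  # granted sub-rate channels, in Gbps

    def allocate(self, rate_gbps):
        """Grant a sub-rate channel if it fits in the remaining capacity."""
        if rate_gbps not in ALLOWED_RATES:
            raise ValueError(f"unsupported sub-rate: {rate_gbps} Gbps")
        if sum(self.allocations) + rate_gbps > self.capacity:
            return False  # would exceed the 400 Gbps trunk
        self.allocations.append(rate_gbps)
        return True

    def free_capacity(self):
        return self.capacity - sum(self.allocations)

link = FlexLink()
link.allocate(100)
link.allocate(50)
print(link.free_capacity())  # prints 250: headroom left on the 400G trunk
```

The point of the model is that capacity is committed in increments the client actually needs, rather than in one take-it-or-leave-it line rate.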
The call for flexibility has since been taken up by Google's Bikash Koley, who noted that a 135 Gbps link to a client enterprise can only be provisioned as a single 200 Gbps trunk or as fifteen aggregated 10 Gbps links. Both solutions are inefficient and represent an over-provisioning of network resources. Koley also noted that optical transports already use variable-speed transceivers, but current Ethernet standards lack a media access control (MAC) layer that can take advantage of them.
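Koley's arithmetic is worth making explicit. The quick calculation below (function name is illustrative) shows how much capacity each fixed-rate option strands against the article's 135 Gbps demand:

```python
def stranded_gbps(demand_gbps, provisioned_gbps):
    """Capacity paid for but left idle by a provisioning choice."""
    return provisioned_gbps - demand_gbps

DEMAND = 135  # Koley's example client link, in Gbps

# Option 1: a single fixed 200 Gbps trunk.
print(stranded_gbps(DEMAND, 200))  # prints 65: Gbps sitting idle

# Option 2: aggregated links totaling 150 Gbps.
print(stranded_gbps(DEMAND, 150))  # prints 15: Gbps sitting idle

# A variable-rate link sized exactly to demand strands nothing.
print(stranded_gbps(DEMAND, 135))  # prints 0
```

Either fixed-rate choice leaves double-digit gigabits of paid-for capacity unused, which is precisely the inefficiency a variable-rate MAC layer would eliminate.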
On a positive note, however, recent developments are bringing more LAN-like functionality to the WAN in other ways. Aspera recently demoed a new high-speed transport fabric, dubbed "fasp," that works with Intel's Data Direct I/O (DDIO) technology to provide 10 Gbps performance or better between local and wide-area sources and the on-die cache of the Xeon E5-2600 processor. The idea is to provide full end-to-end high-speed support across virtual and cloud environments using Single-Root I/O Virtualization (SR-IOV), Non-Uniform Memory Access (NUMA), and other techniques. Aspera says fasp has no theoretical bandwidth limit because, unlike TCP, its transfer rate does not degrade with network latency or packet loss.
At the same time, Broadcom is out with a new line of Optical Transport Network (OTN) devices, the BCM845xx series, suitable for Ethernet LAN, WAN, and fiber carrier network applications. The line consists of 20, 40, and 120G versions, all of which include advanced features such as electronic dispersion compensation (EDC), clock synthesis, and jitter-filtering phase-locked loops (PLLs), which should provide a high degree of interoperability across varied network infrastructures as well as data systems and equipment.
Ultimately, all this activity points to one conclusion: the WAN is the new LAN. As the physical locations of users, data, and applications become less important in the anywhere/anytime knowledge industry, today's comfortably separated spheres of networking are rapidly merging into a flattened, unified, software-driven architecture.
As this future unfolds, issues like bottlenecks, bandwidth limitations, and network protocol mismatches will no longer be valid excuses for failing to move data to its destination on time.