Pressure Mounts for 100G in the Data Center
The pressure is on for high-bandwidth networking in the data center, but is 100G worth it to the enterprise?
It may seem like Big Data is the primary threat to rapid and orderly data center networking. Larger volumes mean more traffic, which means bigger headaches for the network manager. But in reality, the pressure to blow past 10G into 40G and 100G and beyond comes from many sources.
A case in point is flash storage, says Silicon Angle’s Mike Wheatley. Solving data latency and bottlenecks in the SAN, after all, merely pushes them onto the LAN. And with the storage farm finally able to deliver at the speed the typical knowledge worker requires and then some, it won’t be long before a piddling 10 Gbps on the network is viewed as the greatest inhibitor of data productivity. SDN, of course, will help in this regard by affording the flexibility that modern applications demand, but even this will be a tall order once data loads move beyond mere human-generated traffic and into terabits of machine-to-machine interactivity.
And as the cloud becomes more prevalent in today’s business environment, expect performance outside the enterprise data center to improve steadily. Remember, data infrastructure is not a cost center for the cloud provider but a strategic asset, so expect providers of all sizes to maintain steady investment in their data capabilities, including networking. Many are already signing up for 100 Gbps service on the wide area, so it isn’t hard to imagine they will be among the first to extend it into the data center. That means users will soon be receiving better performance from distant resources on the cloud than they get from a local data center still hampered by 10 Gbps or less.
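The arithmetic behind that bottleneck is straightforward. A quick back-of-the-envelope sketch (the 1 TB dataset and the ideal, overhead-free link are illustrative assumptions, not figures from the article) shows how much longer bulk data sits on the wire at 10 Gbps than at 40 or 100 Gbps:

```python
def transfer_time_seconds(data_bytes: float, link_gbps: float) -> float:
    """Time to move data_bytes over an ideal link of link_gbps (decimal gigabits/s).

    Assumes the link is fully dedicated to the transfer with no protocol
    overhead -- a best-case figure, purely for comparison.
    """
    bits = data_bytes * 8
    return bits / (link_gbps * 1e9)

if __name__ == "__main__":
    one_tb = 1e12  # 1 TB (decimal), a hypothetical bulk data set
    for gbps in (10, 40, 100):
        print(f"{gbps:>3} Gbps: {transfer_time_seconds(one_tb, gbps):.0f} s")
    # A transfer that ties up a 10G link for over 13 minutes finishes
    # in well under a minute and a half at 100G.
```

The same ratio holds at any volume: every step from 10G to 40G to 100G cuts transfer time proportionally, which is why the storage farm's gains are invisible until the network catches up.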
Fortunately, upgrading to 100G, while daunting, is not as complicated as it once was. And there are a host of tools available to ensure that the extra throughput is only utilized by the traffic that truly needs it. Napatech, for example, is out with a new 100G network accelerator, the NT100E3-1-PTP, that provides real-time traffic analysis and rapid acceleration using only a modicum of CPU capacity. The single-port device is PCI-SIG compliant and enables 100 Gbps capture with zero loss. Through nanosecond resolution in the time-stamping process and support for multiple time synchronization schemes like PTP, it enables the seamless integration of high-speed, high-performance traffic into the overall data environment.
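Nanosecond time-stamping matters at these speeds because individual frames simply don't last very long on the wire. The sketch below is not Napatech's API; it is a generic illustration, using standard Ethernet framing constants, of how little time a minimum-size frame occupies a 100G link, and hence why microsecond-resolution timestamps cannot reliably order captured traffic:

```python
# Minimum-size Ethernet frame as it appears on the wire:
# 64 B frame + 8 B preamble/SFD + 12 B inter-frame gap = 84 B
WIRE_BYTES = 64 + 8 + 12

def frame_duration_ns(link_gbps: float, wire_bytes: int = WIRE_BYTES) -> float:
    """Time one frame occupies the link, in nanoseconds.

    bits / (gigabits per second) conveniently yields nanoseconds directly.
    """
    return wire_bytes * 8 / link_gbps

for gbps in (10, 100):
    print(f"{gbps:>3} Gbps: {frame_duration_ns(gbps):.2f} ns per minimum-size frame")
```

At 10 Gbps a minimum-size frame lasts about 67 ns; at 100 Gbps, under 7 ns. With many frames arriving inside a single microsecond, only nanosecond-resolution stamps, kept honest across devices by a synchronization scheme like PTP, preserve the true ordering of traffic for analysis.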
High-speed networking no longer has the impact it once did on the physical plant either. New high-density patch panels outfitted for fiber optics, like Tripp Lite’s N482 Series, are helping to reduce the number of fan-out cables required for 40G and 100G, as well as the multiple legs and connectors needed to accommodate varying data speeds. The N482 features a modular plug-and-play cassette design that allows high-bandwidth connections to be consolidated at a single point that then runs traffic to and from the rack. As data loads increase, expansion is accommodated by slotting in another cassette rather than running multiple cables.
Clearly, though, expansion into high-bandwidth networking will not come without cost, so it will be up to the individual enterprise to determine whether bringing that traffic in-house is worth the effort. As I mentioned, the cloud will most likely offer end-to-end 40/100G service before all but the largest enterprises do, which means the CIO will have to make the ultimate call between building high-speed infrastructure and leasing it.
Arthur Cole covers networking and the data center for IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming.