Big Volumes Call for Big Networking
No matter how they're defined, all networks share the need for greater bandwidth to deal with increasing traffic.
The flexibility gained from software defined networking and other virtual architectures is all well and good, but at the end of the day, more data requires more bandwidth.
That’s why spending on networking hardware is moving at such a steady clip these days. According to Infonetics Research, network port revenue for 1, 10, 40, and 100 Gbps devices increased 5 percent from 2012 to 2013, topping a cool $39 billion, with much of that growth coming from the higher-bandwidth devices. While the vast majority of overall revenue still lies with the 1G and 10G segments, researchers note that 40G shipments nearly tripled to 1.5 million ports, and 100G shipments nearly quadrupled, due largely to surging demand from service providers.
It seems almost natural that core switching would gravitate toward higher-bandwidth solutions, given both the data volumes the enterprise is dealing with and the diversity of traffic as new mobile and web-facing applications take root. But the drive toward 40, 100 and even 400G architectures is starting to make its presence known across all facets of the networking stack, particularly on the processor level.
MoSys, for example, brought a new range of PHY technologies to Interop. Among them were serial interface memory modules for 400 Gbps networking gear and a new LineSpeed SerDes and Bandwidth Engine packet processing accelerator that allow high-density 10 GbE configurations to scale all the way to 1 Tbps or more. The company now has the ability to produce single-chip solutions that overcome the memory access bottlenecks that typically accompany network scale into the three- and four-digit Gbps range. At the same time, they support optical transport networking applications and compute line cards ranging from long-reach backplanes to short-reach connectivity for modular architectures.
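The scaling math behind that kind of aggregation is straightforward. As a rough illustration (the lane count and the `lanes_needed` helper are assumptions for the sketch, not anything MoSys has published), here is how many 10 GbE lanes it takes to reach a terabit:

```python
def lanes_needed(target_gbps, lane_gbps=10):
    """Ceiling division: how many lanes of a given speed reach the target."""
    return -(-target_gbps // lane_gbps)

# Scaling a high-density 10 GbE configuration to 1 Tbps, as described above,
# implies aggregating on the order of a hundred lanes.
print(lanes_needed(1000))       # 100 lanes of 10 GbE
print(lanes_needed(1000, 25))   # 40 lanes at 25 Gbps per lane
```

The point of a single-chip SerDes/memory solution is precisely that lane counts of this order stress memory access far more than raw signaling does.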
Meanwhile, Credo Semiconductor has developed a new 55 Gbps SerDes PHY with NRZ (non-return to zero) signaling aimed at next-generation 40/100/400G data center networks. The design offers improved performance on a number of parameters that tend to wreak havoc with signal integrity at high speeds, including jitter, and strengthens both continuous-time linear equalization (CTLE) and decision feedback equalization (DFE). NRZ signaling provides for a more robust signal as well, although most current solutions top out at 28 Gbps.
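Part of what makes 55 Gbps NRZ notable is that NRZ carries exactly one bit per symbol, so the symbol (baud) rate must equal the bit rate — there is no multi-level encoding to lean on. A minimal sketch of that relation (the function name is illustrative, not a vendor API):

```python
def nrz_baud_rate(bit_rate_gbps):
    """NRZ uses two signal levels, i.e. one bit per symbol,
    so the baud rate equals the bit rate."""
    bits_per_symbol = 1
    return bit_rate_gbps / bits_per_symbol

# A 55 Gbps NRZ SerDes must therefore signal at 55 GBd,
# roughly double the ~28 GBd of most current solutions.
print(nrz_baud_rate(55))
```

Doubling the symbol rate rather than the levels-per-symbol is what keeps the signal robust, but it is also why jitter and equalization (CTLE, DFE) become the binding constraints.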
The need for ever-higher bandwidth solutions is also causing the enterprise to take a hard look at optical transport, which until recently was deployed primarily by telecommunications providers. Arista Networks and Intel recently proposed a new 100 GbE optical spec aimed squarely at hyperscale data environments. The goal of the 100G CLR4 standard is to build a common platform for relatively short distances – 100 meters to 2 km or so – using a coarse wavelength division multiplexing (CWDM) format across dual single-mode fibers. This provides a duplex architecture that can keep power consumption at or below 3.5 watts while maintaining the same QSFP form factor that supports today’s 40G designs. The companies are already working toward a 36-port, 100 Gbps design that fits inside a single rack unit.
Virtual networks are indeed eminently scalable and offer a wide range of dynamic configuration options, but only if they run on a physical framework that provides broad connectivity and room to grow. So while server and storage hardware may have the ability to accommodate numerous virtual solutions, the network must grow in order to handle the new data environment.
The front office may not want to hear that, but they’ll have to if they hope to take advantage of software-defined flexibility and efficiency.
Arthur Cole covers networking and the data center for IT Business Edge. He has served as editor of numerous publications covering everything from audio/video production and distribution, multimedia and the Internet to video gaming.