Flexibility Rules, But Don't Neglect Network Capacity
Much of the development activity surrounding IT networking of late has focused on increased flexibility. Software-defined networking (SDN), advanced fabric topologies, increased edge intelligence and the like all aim to link users to resources through increasingly diverse network configurations.
But what about raw bandwidth? With all this increased flexibility, has anyone stopped to consider the sheer amount of data about to hit the enterprise?
According to Cisco, IT networking could be in for a world of hurt in the next few years. Global data center traffic is on pace to hit 7.7 zettabytes annually by 2017, nearly triple the amount for 2012. This will force enterprise executives to contend with perhaps 25 percent annual data growth for the next three years. The vast majority of this traffic consists of standard data center functions, such as the delivery of information to processing and storage points for normal production and development purposes. Nearly 17 percent, though, comes from newer cloud-oriented functions like collaboration, streaming video and mobile traffic, and it's here where we can expect much of the growth to take place.
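The arithmetic behind that forecast is easy to check: growing to 7.7 ZB in 2017 from a baseline of roughly 2.6 ZB in 2012 (the baseline is not stated above; it is assumed here from the "nearly triple" figure) implies a compound annual growth rate right around 25 percent. A minimal sketch of that calculation:

```python
# Sanity-check Cisco's forecast: a "nearly triple" jump over five years
# implies roughly 25 percent compound annual growth.
traffic_2017_zb = 7.7   # Cisco forecast cited above, in zettabytes
traffic_2012_zb = 2.6   # assumed baseline, inferred from "nearly triple"
years = 5

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (traffic_2017_zb / traffic_2012_zb) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 24-25 percent
```

The same formula works in reverse for capacity planning: multiplying current traffic by 1.25 per year shows how quickly today's headroom disappears.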
To be sure, new switch architectures are standing by in the channel. Most large organizations—and even many SMBs—have come to expect adoption of 40G and even 100G devices as the decade unfolds. According to IDC, though, 10 GbE has only permeated about half of all server ports out there thus far, which means that much of the IT universe is just barely able to cope with even the basics of virtualization and cloud computing, let alone the next-generation network environments needed to accommodate the coming data deluge.
Increasing data traffic must also be addressed in light of the ongoing trend toward data center consolidation and distribution. As the industry gravitates toward fewer but larger facilities accessed over longer distances, the wide area network will naturally take on more of the characteristics of the local area network: primarily increased application delivery and small-batch data transmission rather than the typical bulk data dump to branch offices and remote storage centers. This will place greater emphasis on WAN optimization, not only to minimize bottlenecks and traffic congestion but also to reduce bandwidth requirements, if only so existing links can handle the increased loads without the purchase of additional capacity from telecom carriers.
Of course, the network is only as strong as its weakest link, so data center managers will have to drill down to the component level to ensure that all systems can handle heavier loads. Storage devices, for example, will benefit from the increased throughput of the new 12 Gbps SAS protocol, implemented on devices like LSI's MegaRAID controller and expander cards. The company claims a 60 percent increase in database performance through the controller, plus a 25 percent gain in key Hadoop workloads with the expander, which not only boosts productivity in the short term but also enables the enterprise to scale storage-node resources without worrying about overloading the controller.
The challenge facing the enterprise network, then, is two-fold: increase flexibility and increase capacity. SDN will help with the first, but the second will require a good, old-fashioned hardware upgrade to a wider-band data infrastructure.