The True Challenges of Big Data Networking

Technology isn't the only thing that enterprises must change in order to prepare for Big Data. Attitudes must evolve, too.

By Arthur Cole | Posted Apr 1, 2016

Big Data is all about distributing workloads across hundreds, if not thousands, of processors so that unstructured bits and bytes can be turned into understandable, actionable intelligence. Distribution, of course, requires networking, and the way most experts envision Big Data networks offers both good news and bad news for the enterprise.

On the plus side, much of the processing for sensor-driven IoT applications will remain on the edge, so not every piece of information generated in the field has to find its way back to the data center. On the other hand, this will require a fair amount of lateral connectivity between endpoints, spurring demand for more mesh-type fabric networking across distributed infrastructure.

Exactly how legacy networks will respond to this new environment is not entirely clear. Some problems are inevitable. According to Ixia’s Jeff Harris, the best thing the enterprise can do now is to break its own network before Big Data does. Today’s networks may be well suited to cloud computing, SaaS, social networking and even video conferencing, but they still lack much of the visibility, resiliency and self-management that will be required when reams of machine-to-machine (M2M) traffic start to pour in. Remember, if your network grinds to a halt, customers will simply go elsewhere for what they need.

The very term “Big Data,” while not incorrect, is something of a misnomer: the biggest challenge is not the volume so much as the diversity and distribution of the underlying infrastructure. According to Ovum, more than 20 billion devices will be online by 2020, which will fuel a complete re-imagining of how data is collected, managed and stored. With some connected devices containing as little as 512 bytes of RAM, linking them in a cohesive manner will require not only advanced networking but new forms of distributed databases and other applications. And with bandwidth over wide area networks expected to remain at a premium, the enterprise will need networking technologies that are not only bigger and faster, but smarter.

Some may wonder whether all this talk about the network is jumping the gun. After all, few organizations have the budget to launch hyperscale, high-speed storage infrastructure, so as long as the network can out-perform the fastest disk drive, it won’t be the weakest link in the chain.
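The "network only has to outrun the disk" argument can be sanity-checked with simple arithmetic. The sketch below compares nominal link speeds against rough drive throughputs; the figures are illustrative ballpark numbers chosen for this example, not vendor specifications from the article:

```python
# Back-of-envelope comparison: network link speed vs. storage throughput.
# All throughput figures are rough, illustrative numbers.

GBIT = 1e9 / 8  # bytes per second in one gigabit per second

links = {
    "1 GbE": 1 * GBIT,
    "10 GbE": 10 * GBIT,
    "40 GbE": 40 * GBIT,
}

drives = {
    "7200 RPM HDD (~200 MB/s)": 200e6,
    "SATA SSD (~550 MB/s)": 550e6,
    "NVMe flash (~3 GB/s)": 3e9,
}

for link_name, link_bw in links.items():
    for drive_name, drive_bw in drives.items():
        bottleneck = "network" if link_bw < drive_bw else "storage"
        print(f"{link_name} vs {drive_name}: {bottleneck} is the bottleneck")
```

Even this crude comparison shows the shift: a 10 GbE link (about 1.25 GB/s) comfortably outruns spinning disk, but a single NVMe flash device can saturate it, which is why scale-out flash changes the equation.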

This may have made sense even a few years ago, but the emergence of scale-out Flash arrays is quickly changing the equation. Pure Storage’s new FlashBlade scales out to Big Data proportions at less than $1 per usable gigabyte, says NewsFactor Network’s Jef Cozza, and it is due to hit the channel by the end of the year. The platform is suitable for block, file and object storage and can support primary and secondary applications, meaning it will be well-suited to the diverse data types that will characterize enterprise environments in short order.

Network security will also require an entirely new outlook, says Darktrace’s Sam Alderman-Miller. With the perimeter no longer clearly defined, the network will have to get much better at examining itself, assessing threats that have already gained access and circumventing them before they do real damage. Conveniently, the very same Big Data analytics that can bring clarity to complex workloads can do the same for the network, but it will take a thorough understanding on the part of the enterprise as to what “normal” operations look like and how to spot deviations. This will become increasingly difficult as the network attains higher levels of autonomy and self-governance.
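Spotting deviations from "normal" operations can be illustrated with a simple statistical baseline. The following is a minimal sketch, not Darktrace's method: it assumes a single traffic metric, computes the mean and standard deviation over a trailing window, and flags samples that stray more than a chosen number of standard deviations from that baseline.

```python
from statistics import mean, stdev

def flag_anomalies(samples, window=20, k=3.0):
    """Flag samples deviating more than k standard deviations from the
    mean of the preceding window. A toy baseline model for illustration,
    not a production anomaly detector."""
    flags = []
    for i, x in enumerate(samples):
        history = samples[max(0, i - window):i]
        if len(history) < 2:
            flags.append(False)  # not enough history to judge "normal"
            continue
        mu, sigma = mean(history), stdev(history)
        flags.append(sigma > 0 and abs(x - mu) > k * sigma)
    return flags

# Steady traffic hovering around 100 Mbps, then one sudden spike.
traffic = [100 + (i % 5) for i in range(30)] + [900]
print([i for i, f in enumerate(flag_anomalies(traffic)) if f])  # → [30]
```

The hard part, as the article notes, is not the math but knowing which metrics define "normal" for a given network; a real system would baseline many signals at once and adapt the model as the network changes.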

Unfortunately for the unprepared enterprise, all of this is going to come to a head sooner rather than later. Most experts agree that these developments will enter mainstream production environments by the end of the decade, which isn’t a lot of time given the scope of the change being contemplated. Since most of these capabilities will reside on or above the virtual network layer, however, there shouldn’t be a lot of hardware rip-and-replace beyond the normal lifecycle refresh.

Changing attitudes is a much more complicated endeavor. Knowledge workers across the board need to understand that this is an entirely new approach to network operations and infrastructure. Moving bits from place to place is still important, but it is not the sole function of the network anymore.
