Big Data Networking: Sharpen the Edge to Protect the Core

Big Data and the Internet of Things will have a transformative effect on the network edge.

By Arthur Cole | Posted Jun 12, 2015

The nightmare Big Data scenario that haunts many CIOs these days is the specter of unfathomable waves of data crashing into the enterprise network edge. Without a way to ingest this data for processing and analysis on centralized resources, bottlenecks will bring workflows to a halt and enterprise productivity and competitiveness will suffer.

Probably the only comfort in all of this is that your rivals are sure to be encountering the same problem.

There is no question that the network edge will have to improve if organizations hope to capitalize on Big Data, but increased capacity is not the only challenge. A more fundamental problem lies in re-architecting the edge and even redefining exactly what it is, where it lies and what it is supposed to do.

The light at the end of the tunnel is a vastly improved data ecosystem that both lightens the load on network resources and provides access to the unique capabilities that Big Data and the Internet of Things present.

Updating the network edge for Big Data, the Internet of Things, and beyond

First off, the traditional network edge will likely get speed and capacity boosts whether you plan to implement Big Data or not. 40G endpoint solutions like Algo-Logic’s fifth-generation IP-Core device offer sub-nanosecond latency at 80 Gbps full duplex, and with the Altera Stratix V FPGA on board, it is a versatile solution for everything from data center devices to embedded industrial systems. The wireless side of the house is also seeing steady improvement through new IoT-facing platforms like the Xirrus XD4 modular access point. Based on the 802.11ac Wave 1 standard, the device offers increased density and integrated analytics to optimize traffic flow.

But the new edge will also require agility and a great deal of intelligence to address the diversity of challenges on the Internet of Things. Pacific Controls recently opened a new front in this war with a protocol optimized for the machine-to-machine (M2M) workflows that will arise. The Edge Device Communication Protocol (EDCP) is designed to let edge devices handle those workloads directly, rather than shunt everything back to the data center for mass processing. In this way, Big Data can be leveraged at, say, the local supermarket or department store, where it can quickly analyze the top purchases among current shoppers and then offer special deals to move product off the shelves, no data center required. At the same time, EDCP integrates some data with web services and higher-level systems that may tap centralized resources for more far-reaching analysis, but this will represent only a fraction of the data that devices encounter on a daily basis.
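To make the edge-first pattern concrete, here is a minimal Python sketch of the supermarket example. It is not Pacific Controls' EDCP API; the class and method names are hypothetical, and the point is simply that raw events are processed on the edge device while only a small aggregate is handed off to central analytics.

```python
from collections import Counter

# Hypothetical sketch of edge-local processing; not Pacific Controls' EDCP API.
class StoreEdgeNode:
    """Processes point-of-sale events locally and ships only small summaries upstream."""

    def __init__(self, promo_threshold=50):
        self.purchase_counts = Counter()     # raw events never leave the edge device
        self.promo_threshold = promo_threshold

    def ingest_purchase(self, sku: str) -> None:
        """Handle a single purchase event at the edge -- no round trip to the data center."""
        self.purchase_counts[sku] += 1
        if self.purchase_counts[sku] == self.promo_threshold:
            self.trigger_local_promotion(sku)

    def trigger_local_promotion(self, sku: str) -> None:
        # The decision is made on the edge device itself (e.g., in-store signage).
        print(f"Offering in-store deal on fast-moving item {sku}")

    def summary_for_central_analytics(self, top_n=10) -> dict:
        """Only this small aggregate, not the raw event stream, goes to central systems."""
        return dict(self.purchase_counts.most_common(top_n))


node = StoreEdgeNode()
for sku in ["milk", "bread", "milk", "eggs", "milk"]:
    node.ingest_purchase(sku)
print(node.summary_for_central_analytics())
```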

Part of this trend will require analytics and other applications to reside on the edge, of course. This fact has not been lost on the development community. Predixion Software recently released its Insight 4.5 platform, optimized for the streaming data flows that will occupy the IoT edge. The system can embed analytics in the cloud, on the cloud gateway or on the edge device itself, leveraging the company’s patented Machine Learning Semantic Model (MLSM) to reside within applications, databases, processing engines or wherever else it may be needed. The platform specializes in “data in motion” analytics, as opposed to the “data at rest” functionality of traditional systems, and supports Java, SQL, JavaScript and other web-app development toolkits to provide a high degree of customization and fast analytics turnaround.
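The sketch below illustrates what “data in motion” analytics means in practice, assuming a generic sliding-window anomaly check over a sensor stream. It is not Predixion's MLSM or Insight 4.5 API; the idea is that each event is scored as it arrives, using only a bounded window of recent history, rather than being queried later as data at rest.

```python
from collections import deque

# Generic "data in motion" example: score each reading in-stream against a
# bounded window of recent history. Illustrative only; not Predixion's MLSM.
class StreamingAnomalyDetector:
    def __init__(self, window_size=100, threshold=3.0):
        self.window = deque(maxlen=window_size)  # only a bounded window stays in memory
        self.threshold = threshold

    def score(self, value: float) -> bool:
        """Return True if the new reading deviates sharply from the recent window."""
        if len(self.window) >= 10:
            mean = sum(self.window) / len(self.window)
            variance = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = variance ** 0.5 or 1.0
            is_anomaly = abs(value - mean) > self.threshold * std
        else:
            is_anomaly = False  # not enough history yet to judge
        self.window.append(value)
        return is_anomaly


detector = StreamingAnomalyDetector()
for reading in [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 19.8, 20.0, 20.3, 20.1, 35.7]:
    if detector.score(reading):
        print(f"Anomalous reading flagged in-stream: {reading}")
```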

Indeed, this is where we get into debates about “what is the edge” and “where does it exist.” With processing, storage and applications out on the edge, it is becoming increasingly difficult to draw such clear distinctions. HP, in fact, has been using network-connected automobiles to help determine whether the IoT is better served by a traditional centralized or cloud-based infrastructure or by a mesh fabric model that allows endpoints to communicate more freely. According to V3’s Roland Moore-Colyer, the mesh fabric is winning. With the cars serving as network nodes, greater communication between nodes allows for a much more flexible and responsive system, as each node can learn from the others about issues like roadblocks, bottlenecks and other factors. This also overcomes many of the security and privacy issues that organizations face when pushing mass data sets onto the cloud, because each node only knows enough to manage its own small task, and it would be virtually impossible to hack them all.
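A toy sketch of that mesh model follows, under the assumption that each vehicle node holds only its own small hazard map and relays updates to peers in range; the class and identifiers are hypothetical and do not reflect HP's actual trial.

```python
# Toy mesh-fabric sketch: nodes share small updates peer-to-peer instead of
# uploading everything to a central cloud. Names here are illustrative only.
class VehicleNode:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.known_hazards = {}   # road segment -> hazard description (small local state)
        self.neighbors = []       # other nodes currently in radio range

    def observe(self, segment: str, hazard: str) -> None:
        """Record a locally observed condition (e.g., a roadblock ahead) and relay it."""
        self.known_hazards[segment] = hazard
        self.gossip()

    def gossip(self) -> None:
        """Push only the small hazard map to peers; no central collection point."""
        for peer in self.neighbors:
            peer.receive(dict(self.known_hazards))

    def receive(self, hazards: dict) -> None:
        new = {seg: h for seg, h in hazards.items() if seg not in self.known_hazards}
        if new:
            self.known_hazards.update(new)
            self.gossip()  # relay only newly learned information onward


a, b, c = VehicleNode("car-a"), VehicleNode("car-b"), VehicleNode("car-c")
a.neighbors, b.neighbors = [b], [c]      # a -> b -> c relay chain
a.observe("exit 12", "roadblock")
print(c.known_hazards)                   # car-c learned of the hazard via the mesh
```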

It seems, then, that Big Data and the IoT appear to be insurmountable challenges only when viewed from the perspective of existing network infrastructure. By decentralizing core compute and storage resources and pushing them out to the edge, the new enterprise network can keep massive data sets from crashing into the data center, much the way a coral reef protects the shoreline.

Despite all the recent gains in virtual networking and software-defined architectures, we are only just beginning to tap into these capabilities. But the transition is inevitable, because existing network infrastructure simply won’t be able to cope with the load much longer.
