Lack of Monitoring Still Hampers Scale-Out Networking

Cloud visibility limitations create challenges for enterprises and CSPs alike, especially on open platforms.

By Arthur Cole | Posted May 13, 2016

There are many reasons for the enterprise to adopt multi-cloud architectures, but there are also many reasons to hesitate. None is more crucial than the need to maintain adequate monitoring and control over distributed networks.

The problem is that while virtual networks are evolving at a steady clip, visibility is not. Many organizations, or their individual business units, are comfortable putting low-level data in the cloud without the kind of thorough monitoring that exists within the data center, but this is a major stumbling block for the critical workloads that cloud providers are angling for. Lack of visibility, then, is just as much a drawback for providers as for the enterprise, since the critical services are the ones that draw the highest revenues.

The problem is particularly acute on open platforms. At a recent gathering of the Open Networking User Group (ONUG), a number of leading network experts decried the lack of network monitoring tools capable of functioning across multiple platforms, according to Light Reading. GE CTO Joe Farrell, for instance, noted that applications that can span Oracle, SAP, Microsoft and Amazon clouds will be available within six months, but the ability to monitor them is still pending. Key capabilities, such as real-time performance monitoring and telemetry that does not itself consume excessive bandwidth, are also lacking.

Proprietary platforms have an easier time with end-to-end visibility, but even then it usually takes a fair bit of industry cooperation to make it happen. SD-WAN provider Viptela, for instance, recently teamed up with Zenoss to deliver a coordinated visibility stack for customers who deploy the two platforms in tandem. The partnership provides a unified view of health status by embedding the Viptela vManage application within the Zenoss management stack. In this way, organizations can keep tabs on their global Viptela footprints from the Zenoss dashboard, supplemented by policy-based routing, application management and a full suite of metrics to detect and resolve issues before they inhibit performance.
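To give a sense of what this kind of integration involves under the hood, here is a minimal sketch of polling an SD-WAN controller's REST interface for device health and flagging unreachable nodes. The endpoint path, JSON field names and hostname are illustrative assumptions, not the documented Viptela vManage or Zenoss APIs.

```python
# Hypothetical sketch: poll an SD-WAN controller's REST interface for
# device health and flag unreachable nodes. The endpoint path, JSON field
# names and hostname are assumptions for illustration only.
import requests

CONTROLLER_URL = "https://vmanage.example.com"   # assumed controller address

def fetch_devices(session):
    """Pull the device inventory, including each node's reachability state."""
    resp = session.get(CONTROLLER_URL + "/dataservice/device")
    resp.raise_for_status()
    return resp.json().get("data", [])

def unreachable(devices):
    """Return hostnames whose reachability field is anything but 'reachable'."""
    return [d.get("host-name", "unknown")
            for d in devices
            if d.get("reachability") != "reachable"]

if __name__ == "__main__":
    with requests.Session() as s:
        # Authentication is product-specific and omitted from this sketch.
        for host in unreachable(fetch_devices(s)):
            print("ALERT: %s is unreachable" % host)
```

A management dashboard like the one described above would run this kind of poll on a schedule and feed the results into its alerting pipeline rather than printing them.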

Monitoring the Distributed Enterprise

In most cases, enabling network-wide monitoring across distributed architectures will require a thorough strategy rather than a simple software deployment or two. According to Ipswitch executive VP Jeff Loeb, key considerations include real-time monitoring of both network and server infrastructure, and as much commonality across disparate infrastructure as you can muster using standard protocols and methods like SNMP, WMI, SSH, SQL queries and scripting. You'll also want to avoid alert storms by implementing automated dependency awareness, so that a failure produces a single alarm from the failed device rather than one from every downstream device that suddenly experiences problems. Scalability should also be treated as a core competency, given the massive swings in volume that emerging networks are experiencing.
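The dependency-awareness point bears illustrating. The sketch below, using an invented topology and device names, shows the core idea: walk each failed device's dependency chain and raise an alarm only for failures that no upstream failure already explains.

```python
# Illustrative sketch of dependency-aware alert suppression: when a device
# fails, alarm only if none of its upstream dependencies have also failed,
# so a dead core router produces one alert rather than an alert storm.
# The topology and device names are invented for this example.

UPSTREAM = {                 # device -> the device it depends on
    "access-sw-1": "dist-rtr-1",
    "access-sw-2": "dist-rtr-1",
    "dist-rtr-1": "core-rtr-1",
    "core-rtr-1": None,      # top of the dependency chain
}

def root_failures(failed):
    """Return only the devices whose failure is not explained upstream."""
    failed = set(failed)
    roots = []
    for device in failed:
        parent = UPSTREAM.get(device)
        suppressed = False
        # Walk up the chain; if any ancestor also failed, suppress this alarm.
        while parent is not None:
            if parent in failed:
                suppressed = True
                break
            parent = UPSTREAM.get(parent)
        if not suppressed:
            roots.append(device)
    return roots

# core-rtr-1 dying takes everything behind it down, but yields one alarm:
print(root_failures(["access-sw-1", "access-sw-2", "dist-rtr-1", "core-rtr-1"]))
# -> ['core-rtr-1']
```

Real monitoring suites discover this dependency map automatically from the topology; the traversal logic, though, is essentially what is shown here.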

Monitoring should also be proactive if the enterprise hopes to maintain the most cost-effective network performance, says Information Age's Chloe Green. A recent study by Veeam and DevOps put the cost of unplanned downtime at between $1.25 million and $2.5 million across a range of industries, while infrastructure failure can run as high as $100,000 per hour. Clearly, preventing failures is far more effective than repairing them, and that is before you count the goodwill earned among users by providing a reliable and secure data environment. Network sensors and advanced analytics are key weapons in this fight, as they allow the enterprise to improve networking with the same Big Data capabilities being applied to business processes. Implemented correctly, these techniques can dramatically reduce both the frequency and the duration of downtime.
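As a rough illustration of what "proactive" can mean in practice, the sketch below flags a metric sample that strays far from its recent rolling baseline, the kind of early-warning check a sensor-and-analytics pipeline might run. The window size, threshold and latency figures are arbitrary choices for demonstration, not drawn from the studies cited above.

```python
# Minimal sketch of proactive anomaly detection on a network metric:
# flag a sample when it strays more than N standard deviations from the
# rolling baseline, so operators can act before a hard failure.
# Window size, threshold and sample values are arbitrary illustrative choices.
from collections import deque
from statistics import mean, stdev

class BaselineDetector:
    def __init__(self, window=30, threshold=3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        """Return True if value is anomalous against the rolling baseline."""
        anomalous = False
        if len(self.samples) >= 5:          # need a minimal baseline first
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(value)
        return anomalous

detector = BaselineDetector()
latencies = [10, 11, 9, 10, 12, 10, 11, 10, 9, 48]   # ms; final spike is abnormal
for ms in latencies:
    if detector.observe(ms):
        print("Possible degradation: latency sample %d ms" % ms)
```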

With the emergence of containers, edge processing and storage, and everything else the lab is cooking up right now, development of network monitoring needs to kick into high gear if the wonders of virtual, distributed computing are to take hold. A data environment is only as good as its ability to be controlled, and the most fundamental aspect of control is knowing what is really going on across complex systems.

Ideally, the enterprise would have a single view of the entire distributed data stack, but in this age of self-provisioning and increasing data autonomy, that isn't likely to happen. At best, the enterprise can define its own network environment and deploy the appropriate monitoring solution at every step, hopefully without producing too much confusion across an increasingly disparate data ecosystem.
