Network analytics is emerging as a key growth market in the era of software-defined architectures. But enterprise executives should be aware that simple number-crunching will not provide the kind of optimization that modern data loads require. Rather, the challenge will be to determine exactly what to analyze and how it should be done.
According to Envision Intelligence, the global network analytics market is expected to top $32 billion by 2024, a compound annual growth rate of more than 26 percent. This incorporates the entire network spectrum, of course, from data center and cloud infrastructure to carrier and industrial applications. One of the key drivers is the need to support real-time applications in banking and other sectors, which requires broad visibility into current and future conditions coupled with extensive automation and dynamic machine-to-machine (M2M) connectivity.
This is naturally leading to a fair amount of hype as to the state of current analytics, particularly the intelligent platforms that are purported to push IT management to new levels of functionality. While these technologies are indeed impressive, says Datacenter Knowledge's Drew Robb, the fact remains that they are still at a very nascent stage. Most of the truly revolutionary advances, such as full network autonomy, are some years away. For the moment, look to AI-based analytics to provide smart troubleshooting and diagnostic tools to support things like trend analysis, forecasting and resource scheduling. At the same time, machine learning algorithms are starting to make their way into control layers for improved traffic management.
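To make the forecasting and resource-scheduling idea concrete, the core of such a tool can be as simple as projecting link utilization forward and flagging when the projection approaches capacity. The sketch below is purely illustrative; the function names and the 80-percent threshold are my own assumptions, and production analytics platforms use far more sophisticated models:

```python
def moving_average_forecast(samples, window=3):
    """Predict the next utilization value as the mean of the last `window` samples."""
    if len(samples) < window:
        raise ValueError("need at least `window` samples")
    return sum(samples[-window:]) / window

def schedule_alert(samples, capacity, threshold=0.8, window=3):
    """Flag a link for attention when forecast utilization crosses a
    fraction of total capacity (thresholds here are illustrative)."""
    return moving_average_forecast(samples, window) > threshold * capacity

# Utilization samples in Mbps on a 1000 Mbps link:
# the last three samples average ~793, still under the 800 Mbps threshold.
print(schedule_alert([500, 620, 750, 780, 850], capacity=1000))
```

Even a toy like this shows why visibility matters: the quality of the forecast depends entirely on the breadth and freshness of the samples feeding it.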
Security is also expected to benefit from improved analytics. Cisco’s new Encrypted Traffic Analytics system offers the ability to check traffic for malicious code and other anomalies without having to decrypt it. This is crucial for highly regulated industries, says Dark Reading’s Jai Vijayan, which must often tread carefully around customer data in transit. The system works by examining non-encrypted characteristics of the data stream, such as data in the initial connection packets, as well as packet length, sequences and timing, all of which can provide crucial clues to potential trouble. In addition, things like byte distribution and differences in the use of TLS, DNS and HTTP can also send up red flags.
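One of the signals mentioned above, byte distribution, can be illustrated in a few lines: well-formed encrypted payloads tend toward a near-uniform byte distribution (high Shannon entropy), so deviations in that distribution, especially combined with unusual packet-length patterns, can merit closer inspection. This is a minimal sketch of the general principle, not Cisco's implementation; the function names and thresholds are hypothetical:

```python
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy of the payload's byte distribution, in bits per byte.
    Properly encrypted data approaches the maximum of 8.0."""
    if not payload:
        return 0.0
    n = len(payload)
    return -sum((c / n) * math.log2(c / n) for c in Counter(payload).values())

def flag_for_review(packet_lengths, payload, entropy_floor=7.5):
    """Toy heuristic: flag flows that claim to be encrypted but show
    low byte entropy, or that send many identically sized packets
    (a pattern sometimes associated with beaconing). Thresholds are
    illustrative only."""
    low_entropy = byte_entropy(payload) < entropy_floor
    uniform_sizes = len(packet_lengths) > 3 and len(set(packet_lengths)) == 1
    return low_entropy or uniform_sizes
```

Note that nothing here requires decrypting the payload's contents; the analysis operates entirely on observable metadata and statistical shape, which is exactly what makes the approach attractive for regulated industries.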
Does this mean that traditional monitoring tools, such as the trusty network probe, are headed for the dustbin? Not entirely, says Dynatrace's Gary Kaiser. While it is true that increasing network scale and complexity are making it difficult to track performance using the tried-and-true method of tapping into key aggregation points, this is taking place largely within the data center. On the WAN and the network edge, the number of access points is still relatively small, so things like TCP flow control still impact application performance and availability. Many of these points, in fact, are already wired for IDS and network packet broker (NPB) solutions, providing ready access to network monitoring probes.
Analytics experts will say that the technology shines in its ability to sift through extremely large data sets to glean the hidden patterns that would otherwise go unnoticed. This is true as far as it goes, but the corollary is that too much data can produce conflicting versions of the truth, and decisions that remain colored by the biases inherent in the data mining process. These issues are only magnified when the policies guiding the analysis are not anchored to clearly defined outcomes.
In the end, analytics is just like any other technology: it is only as valuable as the skill and experience of its human operator allows it to be.
Arthur Cole is a freelance journalist with more than 25 years’ experience covering enterprise IT, telecommunications and other high-tech industries.