Unified Communications Demand Thorough Network Testing

By Carl Weinschenk, IT Business Edge

Nobody disputes that any telecommunications network or IT infrastructure element should be monitored continually so that a developing issue is caught before it blossoms into a problem that disrupts services to end users. This is vital across the entire network, but nowhere more critical than in networks that support unified communications.

A couple of weeks ago, IT Business Edge posted a feature story looking at the importance of testing, measuring and monitoring UC networks during the pre-planning and deployment phase. The story concluded that it is especially important to ensure that these networks are built on a firm foundation. If they are not, they are likely to become victims of their own success. A well-thought-out UC initiative will be so enticing that a higher-than-expected volume of impromptu meetings will be held, each using multiple communications channels. Such high-volume and unpredictable use will stress systems early and often. Evolve IP’s Scott Kinka has a great podcast on growth and testing at Unified Communications Edge.

Observers agree on the importance of monitoring and testing systems in an ongoing manner. “I can’t emphasize enough that this is of equal or greater importance than [testing and monitoring at the point of] the initial deployment,” says Bob Hockman, the director of Product Marketing at Empirix.

Indeed, pre-deployment and production — operational — testing are two sides of the same coin. “My first assertion is that given the dynamic changes [production networks will undergo], the line between deployment and ongoing operations is fading,” says Gurmeet Lamba, the senior vice president for Product Development for Clarus Systems. “There is no such thing as saying that a deployment is completed and now the company can move onto ongoing operations. You have to treat every day as an important day, and test and monitor just as you do during deployment.”

Lamba suggests what he calls a “four pillar” approach to ongoing testing and monitoring: automation, performance management, configuration management and the use of business intelligence data to avoid potential problems in network operations. Each of these areas has its unique intricacies and each changes in emphasis from the planning/deployment to production network phase, which Lamba shorthands as “day one” and “day two” operations.

In a vast majority of cases, things will be purring along when the vendor, value-added resellers and network integrators shake hands and drive out of the parking lot. “Here’s the problem: That is a point in time,” says Jim Melvin, the president and CEO of Apparent Networks. “You have no idea what will happen the minute they walk out the door. The only constant in life is change. The company can hire 20 people, sell off a division or want an upgrade for the Christmas rush…[Some] companies are blind to that. They rely on the wall plug for power, and think that it is no different for IP. But it couldn’t be more different.”

Of course, testing and monitoring are vital elements of any telecom or IT system. They become even more important when a UC layer is placed atop the core Internet protocol (IP) or time-division multiplex (TDM) infrastructure. The fact that virtually all of the organization’s ability to communicate is wrapped up in the UC mesh makes a particular problem potentially far more devastating. In other words, in a siloed world, an individual problem may take down the phones or e-mail. As problematic as that is, the same problem in a UC infrastructure could potentially take down the entire network, from voice to e-mail and beyond.

In that context, the dramatic changes that most companies undergo on a regular basis — adding and subtracting employees, adding new applications and communications conduits, deploying new software revs and otherwise drastically changing the innards of their communications systems — must be managed, tested and monitored proactively.

Though the need for testing itself is a constant from the pre-assessment and deployment phase to the production phase, the precise emphasis can change.

The installed testing regimen likely will combine hardware- and software-based tools, according to Dave Schneider, the senior marketing evangelist for testing vendor Ixia. The two approaches have different goals, and the tools have different price points. For this reason, he says, organizations must take a savvy approach: doing enough to protect themselves without breaking the bank or dominating engineers’ days.

Schneider says that hardware tools, which are pricier, are best positioned to measure overall loads and provide testers with cumulative statistics that may point to potential problems when read by experienced network engineers. They cannot, however, provide drill-down details on the performance of specific applications. They are meta tools, best used on the larger elements and flows of data. For instance, hardware monitors may be positioned to oversee links between the main office and branch locations or between the corporate LAN and the carrier’s point of presence.

Software tools, on the other hand, are far less costly — they are available, Schneider says, on a per-license basis — and can be deployed more flexibly to find specific problems and issues.

Each can provide important pieces to the puzzle, he says: “[Hardware tools] can look at aggregated traffic and tell you that a trunk line at 2 PM approached saturation from video, but it can’t tell you from where to where [the traffic was going]. Software tools can say, ‘Okay, the trunk is saturated and here’s why.'”
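Schneider’s trunk-saturation example boils down to simple counter arithmetic. Here is a minimal sketch, assuming SNMP-style interface octet counters are polled periodically; the function name and sample values are hypothetical, not from the article:

```python
def link_utilization(octets_prev: int, octets_now: int,
                     interval_s: float, link_bps: int) -> float:
    """Fraction of link capacity used between two counter readings."""
    bits_sent = (octets_now - octets_prev) * 8
    return bits_sent / (interval_s * link_bps)

# Hypothetical readings: 12.5 MB transferred in 10 s on a 100 Mbps trunk
util = link_utilization(0, 12_500_000, 10.0, 100_000_000)
print(f"{util:.0%}")  # 10% of capacity — well below saturation
```

A hardware probe watching the trunk can flag the aggregate figure approaching 100 percent; identifying which flows caused it is where the software tools come in.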

Watching the Detectives

How to get the mix of hardware and software monitoring right — with the ability to know something is about to go wrong, but without busting the budget to do so — is not the only area where the organization must be savvy. Increasingly, organizations are outsourcing their telecommunications and IT needs to multiple vendors. This runs counter to the interdependency of UC systems and makes monitoring more challenging. In short, organizations must be able to effectively monitor outside watchers who may not be communicating with each other.

Companies that are outsourcing their telecom and IT services must make sure that everything is fitting together correctly. “IT is not doing it [all] themselves anymore,” says Apparent’s Melvin. “But they must make sure that each provider handling each piece of the pie is doing it correctly.”

Empirix’s Hockman refers to another link between pre-production network testing and assessment and what comes after the switch is flipped. He says that much depends on creating a good baseline before the network is turned on. Hockman adds that voice is still king. If it is running smoothly, things probably are in good shape.
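Hockman’s baseline idea can be illustrated with a small sketch: capture a metric (say, one-way latency in milliseconds) while the network is known to be healthy, then flag production readings that drift well outside that baseline. The function name, threshold and sample values below are illustrative assumptions, not from the article:

```python
import statistics

def deviates_from_baseline(baseline: list[float], reading: float,
                           n_sigma: float = 3.0) -> bool:
    """True if a reading falls more than n_sigma standard deviations
    from the mean of the pre-deployment baseline samples."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(reading - mean) > n_sigma * stdev

latency_ms = [20.0, 21.0, 19.0, 20.0, 22.0]  # captured before go-live
print(deviates_from_baseline(latency_ms, 20.5))  # False: within normal range
print(deviates_from_baseline(latency_ms, 60.0))  # True: investigate
```

Without the pre-production samples there is nothing to compare against, which is why the baseline must be built before the switch is flipped.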

There also is a technical reason to pay the most attention to real-time services as barometers of network health. These services are more demanding and sensitive than e-mail and other latency-insensitive applications. If folks are hearing each other without problems — as measured by purely objective measures such as loss, jitter and latency, as well as the more subjective mean opinion score (MOS) — chances are that the network is in reasonably good shape. “If the voice piece is working, chances are that the other parts — IM, streaming and other best-effort applications — are probably working well,” he says.
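The relationship between loss, jitter, latency and MOS can be sketched with a commonly cited simplification of the ITU-T G.107 E-model. The constants below come from that approximation, not from the article, and a production monitoring tool would use the full model:

```python
def estimate_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """Rough MOS estimate (1.0-5.0 scale) from a simplified E-model R-factor."""
    # Jitter is weighted roughly twice as heavily as fixed latency,
    # plus ~10 ms of codec processing delay.
    effective_latency = latency_ms + 2 * jitter_ms + 10
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40
    else:
        r = 93.2 - (effective_latency - 120) / 10
    r -= 2.5 * loss_pct          # each 1% packet loss costs ~2.5 R points
    r = max(0.0, min(100.0, r))
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

print(round(estimate_mos(20, 2, 0.0), 2))    # healthy LAN: ~4.4 (toll quality)
print(round(estimate_mos(300, 40, 5.0), 2))  # congested WAN: ~2.8 (users complain)
```

This is why voice works as a barometer: latency and jitter that barely dent a MOS score would be invisible to e-mail users, so a voice path scoring well implies ample headroom for best-effort traffic.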

The bottom line is that testing is important at every phase, and is becoming more so as network elements and applications grow more interdependent. Clarus’ Lamba suggests that too many companies only pay attention after a problem occurs. “We need to build comprehensive solutions into the planning process itself, as companies start scaling up UC deployments,” he says.
