Changes in Orientation - Page 2

Though the need for testing itself is a constant from the pre-assessment and deployment phase to the production phase, the precise emphasis can change.

The installed testing regimen will likely combine hardware- and software-based tools, according to Dave Schneider, the senior marketing evangelist for testing vendor IXIA. The two categories of tools have different goals and different price points. For this reason, he says, organizations must take a savvy approach: do enough testing to protect themselves, but without breaking the bank or dominating engineers' days.

Schneider says that hardware tools, which are pricier, are best positioned to measure overall loads and provide testers with cumulative statistics that may point to potential problems when read by experienced network engineers. They cannot, however, provide drill-down details on the performance of specific applications. They are meta tools, best used on the larger elements and flows of data. For instance, hardware monitors may be positioned to oversee links between the main office and branch locations, or between the corporate LAN and the carrier's point of presence.

Software tools, on the other hand, are far less costly -- they are available, Schneider says, on a per-license basis -- and can be deployed more flexibly to find specific problems and issues.

Each can provide important pieces to the puzzle, he says: "[Hardware tools] can look at aggregated traffic and tell you that a trunk line at 2 PM approached saturation from video, but it can't tell you from where to where [the traffic was going]. Software tools can say, 'Okay, the trunk is saturated and here's why.'"
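The aggregated view Schneider describes can be sketched in a few lines. The example below is illustrative only, not any vendor's API: it derives per-interval utilization from SNMP-style cumulative interface byte counters and flags intervals that approach saturation. The link capacity, the 90% threshold, and the sample poll data are all assumptions.

```python
# Illustrative sketch (not a vendor tool): flag polling intervals where a
# trunk's utilization, computed from cumulative octet counters, nears
# saturation. Capacity, threshold, and sample data are assumed values.

LINK_CAPACITY_BPS = 100_000_000  # assumed 100 Mb/s trunk
SATURATION_THRESHOLD = 0.90      # flag anything at or above 90% utilization

def utilization(prev_octets: int, curr_octets: int, interval_s: float) -> float:
    """Utilization over one polling interval from cumulative octet counters."""
    bits = (curr_octets - prev_octets) * 8
    return bits / (LINK_CAPACITY_BPS * interval_s)

def saturated_intervals(samples, interval_s=60.0):
    """Return (timestamp, utilization) pairs that cross the threshold.

    `samples` is a list of (timestamp, cumulative_octets) tuples,
    polled at a fixed interval of `interval_s` seconds.
    """
    flagged = []
    for (_, o0), (t1, o1) in zip(samples, samples[1:]):
        u = utilization(o0, o1, interval_s)
        if u >= SATURATION_THRESHOLD:
            flagged.append((t1, u))
    return flagged

# Example: counters polled once a minute; the 14:00 interval spikes.
polls = [("13:58", 0),
         ("13:59", 200_000_000),   # ~27% utilization
         ("14:00", 900_000_000)]   # ~93% utilization -> flagged
hot = saturated_intervals(polls)
```

As Schneider notes, a counter-based view like this can say *that* the trunk neared saturation at 2 PM, but not which endpoints or applications caused it; answering "from where to where" requires flow- or application-level (software) inspection.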

Watching the Detectives

Getting the mix of hardware and software monitoring right -- knowing when something is about to go wrong without busting the budget to find out -- is not the only area where the organization must be savvy. Increasingly, organizations are outsourcing their telecommunications and IT needs to multiple vendors. This fragmentation runs counter to the interdependency of UC systems and makes monitoring harder. In short, organizations must be able to effectively monitor outside watchers who may not be communicating with each other.

Companies that outsource their telecom and IT services must make sure that everything fits together correctly. "IT is not doing it [all] themselves anymore," says Apparent's Melvin. "But they must make sure that each provider handling each piece of the pie is doing it correctly."

Empirix's Hockman points to another link between pre-production testing and assessment and what comes after the switch is flipped: much depends on establishing a good baseline before the network goes live. Hockman adds that voice is still king. If it is running smoothly, things probably are in good shape.

There also is a technical reason to pay the most attention to real-time services as barometers of network health: they are more demanding and sensitive than e-mail and other latency-insensitive applications. If people are hearing each other without problems -- as gauged by such purely objective measures as packet loss, jitter, and latency, as well as the more subjective mean opinion score (MOS) -- chances are that the network is in reasonably good shape. "If the voice piece is working, chances are that the other parts -- IM, streaming and other best-effort applications -- are probably working well," he says.
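The relationship between those objective measures and MOS is often approximated with a simplified E-model (in the spirit of ITU-T G.107): impairments for delay and loss are subtracted from a base "R-factor," which is then mapped to a 1-to-5 MOS scale. The sketch below uses the standard R-to-MOS mapping, but the impairment coefficients are rough, uncalibrated approximations for illustration only.

```python
# Simplified E-model sketch: estimate a MOS score from one-way latency
# and packet loss. The delay/loss impairment coefficients below are
# rough illustrative approximations, not calibrated G.107 values.

def r_factor(latency_ms: float, loss_pct: float) -> float:
    """Rough R-factor: default base minus delay and loss impairments."""
    r = 93.2
    # Delay impairment: mild below ~177 ms one-way, steeper above it.
    r -= 0.024 * latency_ms
    if latency_ms > 177.3:
        r -= 0.11 * (latency_ms - 177.3)
    # Loss impairment: a common rough rule of thumb per percent lost.
    r -= 2.5 * loss_pct
    return max(0.0, min(100.0, r))

def mos_from_r(r: float) -> float:
    """Standard R-factor-to-MOS mapping (MOS ranges from 1.0 to 4.5)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# A clean link (40 ms latency, 0.5% loss) scores well above 4.0,
# while a congested one (300 ms, 4% loss) drops noticeably.
good = mos_from_r(r_factor(40, 0.5))
bad = mos_from_r(r_factor(300, 4.0))
```

This illustrates Hockman's point: because loss, jitter, and latency hit the MOS curve hard and early, a voice path that still scores well is decent evidence that less demanding best-effort traffic is also healthy.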

The bottom line is that testing is important at every phase, and is becoming more so as network elements and applications grow more interdependent. Clarus' Lamba suggests that too many companies only pay attention after a problem occurs. "We need to build comprehensive solutions into the planning process itself, as companies start scaling up UC deployments," he says.

This article was originally published on May 10, 2010