Frame Relay Applications: Novell IPX & IBM SNA

By Cisco Press | Posted Feb 1, 2002

SNA and DLSw Traffic Management
Following is an example of an access list enabling SNA traffic to be passed across a DLSw link:

  ! Permit SNA LSAPs matching 0x0d0d (wildcard mask 0x0101); deny all other SAPs
  access-list 200 permit 0x0d0d 0x0101
  access-list 200 deny 0x0000 0xffff
  ! Apply the LSAP filter to frames sent to the remote DLSw peer
  dlsw remote-peer 0 tcp 1.1.1.1 lsap-output-list 200
If non-SNA traffic is to be blocked, it is better to filter that traffic before it enters the router and is classified, because classification itself consumes router resources. The source-bridge input-lsap-list command applies the same LSAP filter at the inbound interface:

  source-bridge input-lsap-list 200
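
The following fragment is a minimal sketch of where the inbound filter might sit; the ring-group, ring and bridge numbers, and the interface name are assumptions for illustration only:

  source-bridge ring-group 100
  !
  interface TokenRing0
   ! Bridge ring 1 to virtual ring 100, filtering non-SNA LSAPs on input
   source-bridge 1 1 100
   source-bridge input-lsap-list 200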

Custom Versus Priority Queuing
To ensure that SNA traffic is managed (that is, sessions do not time out), Cisco recommends the use of either custom or priority queuing.

Priority queuing is easier to configure than custom queuing, but priority queuing can potentially "break" the Frame Relay network. Priority queuing always checks the higher-priority queues before checking the lower-priority ones. Therefore, if IP is configured in a high-priority queue and IPX in a normal-priority queue, IPX traffic can be completely choked out whenever an IP packet is always waiting in the high queue (a condition known as infinite preemption). The result is lost IPX sessions, which creates problems for network users. Placing only known low-bandwidth protocols in the high queue eliminates this possibility. For example, a small number of SNA users running interactive 3270 traffic on a LAN, or SNA users reached over a slow SDLC line, cannot keep the high queue full constantly. The same applies to any other protocol that is bandwidth constrained on the inbound side. This is the ideal situation in which to use priority queuing, as the sketch that follows illustrates.
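The following configuration is a minimal sketch of that approach, assuming DLSw-encapsulated SNA traffic is identified by DLSw's default TCP port (2065); the interface name is illustrative only:

  ! SNA over DLSw goes to the high queue; it is low volume and cannot starve other traffic
  priority-list 1 protocol ip high tcp 2065
  ! IPX and everything else share the normal queue
  priority-list 1 protocol ipx normal
  priority-list 1 default normal
  !
  interface Serial0
   priority-group 1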

Custom queuing removes the possibility of infinite preemption by permitting the administrator to control how the various queues are serviced.

The following example demonstrates the process of queue servicing:

  • Assume 10 queues are configured. The router polls all of the queues continuously, in round-robin order.
  • If queue 1 is configured to contain IP traffic and queue 2 to contain IPX traffic, the router services X bytes from queue 1, then moves on to queue 2 and services X bytes there. (The administrator can configure the value of X for each queue.)
  • After servicing queue 1, if queue 2 has no packets, the router immediately moves on to the next queue with traffic waiting (in this case, back to queue 1), allowing queue 1 to use all available bandwidth when no other protocol requires it.
When the serial interface is saturated, the queues can be tuned to the proper byte-count values if the average packet size for each protocol is known. This essentially allocates bandwidth on a per-protocol basis, as the example that follows shows. In this scenario, some "tweaking" will likely be required.
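
As a minimal sketch, again assuming DLSw traffic rides its default TCP port 2065, with byte counts and the interface name as illustrative values only:

  ! Queue 1: SNA over DLSw; queue 2: IPX; queue 3: everything else
  queue-list 1 protocol ip 1 tcp 2065
  queue-list 1 protocol ipx 2
  queue-list 1 default 3
  ! Byte counts set how much each queue is serviced per pass,
  ! which effectively allocates bandwidth per protocol
  queue-list 1 queue 1 byte-count 4000
  queue-list 1 queue 2 byte-count 3000
  queue-list 1 queue 3 byte-count 1500
  !
  interface Serial0
   custom-queue-list 1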


NOTE:   As described here, per-protocol bandwidth allocation is a powerful feature that is not easy to implement. Care should be taken to review all configurations prior to implementing this strategy.

--
Our next segment from Cisco Press' Network Consultants Handbook will deal with the fourth common Frame Relay Application, Voice over Frame Relay (VoFr).
