Network Consultants Handbook – Frame Relay
by Matthew Castelli
Frame Relay and the Novell IPX Suite
Novell IPX implementations over Frame Relay are similar to IP network implementations. Whereas a TCP/IP implementation requires the mapping of Layer 3 IP addresses to a DLCI, a Novell IPX implementation requires the mapping of the Layer 3 IPX address to a DLCI. Special consideration must be given in IPX over Frame Relay implementations to the impact of Novell RIP (distance-vector algorithm) or NLSP (NetWare Link Services Protocol, link-state algorithm) and SAP (Service Advertising Protocol) message traffic on a Frame Relay internetwork.
Frame Relay IPX Bandwidth Guidelines
IPX can consume large amounts of bandwidth very quickly by virtue of its broadcast announcement-based design. The following guidelines describe methods to manage IPX traffic and minimize its impact on a Frame Relay WAN.
To reduce overhead in a Frame Relay network, implement the Burst Mode NetWare Loadable Module (NLM). Burst Mode opens the IPX window to avoid waiting for one acknowledgement (ACK) per IPX packet, and allows a maximum window of 128.
Another consideration is the implementation of the Large Internet Packet EXchange (LIPX) NLM if the NetWare version is earlier than 4.x. LIPX allows for larger packets between client and server. (In the case of Frame Relay WANs, the client and server are often connected via a Frame Relay VC.) Native IPX without LIPX allows a maximum payload frame size of 512 bytes; LIPX extends the packet size to between 1000 and 4000 bytes. The larger packet size consumes less processing power on the Frame Relay access devices, in turn increasing throughput.
NOTE: Because Ethernet and Token Ring LANs support larger frame sizes, the native IPX 512-byte frame limitation has an adverse effect on network throughput across WAN routers.
If you are working with an older version of Novell NetWare (v3.11), implement the NLSP NLM for network routing. NLSP sends routing information only when an event occurs (such as a link failure) or every two hours. The standard RIP routing protocol sends its entire routing table to all other routers every 60 seconds. NLSP uses less bandwidth over the WAN, ensuring more bandwidth is available for user data traffic.
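If the Frame Relay routers themselves are to participate in NLSP rather than IPX RIP, a minimal Cisco IOS sketch might resemble the following (the internal network number, IPX network number, and interface are illustrative assumptions, not values taken from this chapter):
ipx routing
ipx internal-network 123
!
ipx router nlsp
 area-address 0 0
!
interface Serial0
 ipx network 100
 ipx nlsp enable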
NOTE: SAP utilizes IPX packets to broadcast or advertise available services on a NetWare LAN. NetWare servers use these SAP packets to advertise their address, services available, and name to all clients every 60 seconds. All servers on a NetWare LAN listen for these SAP messages and store them in their own server information table. Because most Novell clients utilize local resources, the resources should be advertised on a local basis and not broadcast across the Frame Relay WAN.
Novell SAP
Novell SAP traffic can consume an excessive amount of WAN bandwidth. The router will send, without delay, a SAP update when a change is detected. It is recommended that you modify the SAP delay timer to “slow down” these updates, enabling more user traffic to get through the WAN.
The Cisco IOS command ipx output-sap-delay 55 will send SAP packets with a 55 ms delay between packets. Without a delay, all packets in the update are sent immediately, and the Frame Relay router consumes all available buffer space. With no bandwidth or buffer space available, user traffic will be dropped, requiring retransmission.
The ipx output-sap-delay command causes the router to grab only one buffer at a time, leaving the remaining buffer space available to queue user traffic for transmission across the WAN.
The ipx sap-interval and ipx update-interval IOS commands can be used to change the frequency of the updates between IPX-enabled devices. All IPX-enabled devices (routers) interconnected across the Frame Relay WAN must be set to the same update interval. Otherwise, updates will not be synchronized, resulting in phantom routes–routes that appear and disappear with each update.
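As a rough sketch, assuming IPX network 100 on interface Serial0, the commands discussed above might be applied as follows (the delay and interval values are illustrative and must be tuned for the specific WAN; the interval must match on all routers attached to the Frame Relay cloud):
interface Serial0
 ipx network 100
 ipx output-sap-delay 55
 ipx sap-interval 5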
In a multipoint Frame Relay “broadcast” environment, in which message traffic is propagated to all sites and subinterfaces are not employed, SAP advertisements will be propagated to all sites as well.
NOTE: The absence of the broadcast parameter in the Frame Relay map configuration will prevent both IPX RIP and SAP advertisements from being propagated.
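For example, a multipoint configuration that does propagate RIP and SAP updates might include a map statement with the broadcast keyword (the IPX network, node address, and DLCI shown here are illustrative); omitting the broadcast keyword suppresses both RIP and SAP advertisements across that VC:
interface Serial0
 encapsulation frame-relay
 ipx network 100
 frame-relay map ipx 100.0000.0c01.2345 110 broadcast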
Because point-to-point Frame Relay WAN links do not employ a map statement, IPX RIP and SAP updates will be propagated freely between each site. It is recommended that you use IPX RIP and SAP filters in this configuration to minimize Frame Relay WAN traffic.
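A minimal sketch of such a SAP filter, assuming a single file server (SAP service type 4) on an illustrative IPX network AA1 is the only service that needs to be advertised across the WAN:
access-list 1000 permit AA1 4
access-list 1000 deny -1
!
interface Serial0.1 point-to-point
 ipx network 200
 ipx output-sap-filter 1000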
Frame Relay and the IBM SNA Suite
IBM mainframes and SNA were the de facto standard of the networking community for many years, predominantly in the 1970s and 1980s. IP has since replaced SNA as the dominant internetworking protocol suite. SNA is still very much “at large,” especially in large legacy networking systems, such as those found within the banking and financial industries.
These IBM SNA networking environments were ideally suited to the internetworking environment enabled by Frame Relay implementations because of the lower cost and cleaner architecture compared to traditional point-to-point private-line interconnections. Whereas a point-to-point network architecture requires several lines and interfaces, a Frame Relay network enables a single line (serial interface) with multiple subinterfaces, one for each SNA communication session (Frame Relay VC).
Migration of a legacy SNA network from a point-to-point infrastructure to a more economical and manageable Frame Relay infrastructure is attractive; however, some challenges exist when SNA traffic is sent across Frame Relay connections. IBM SNA was designed to operate across reliable communication links that supported predictable response times. The challenge that arises with Frame Relay network implementations is that Frame Relay service tends to have unpredictable and variable response times, which SNA was not designed to tolerate or manage.
NOTE: Migration from SDLC to Frame Relay networking environments will require an upgrade to the communications software packages in both the FEPs and SNA controllers.
Typically, SNA controllers, routers, and FRADs encapsulate SNA traffic as multiprotocol data, as described in the Frame Relay Forum’s FRF 3.1 Implementation Agreement.
Traditional IBM SNA Network Configuration
Figure 15-22 illustrates a traditional IBM SNA network configuration.
Figure 15-22: Traditional IBM SNA Network Configuration
An SNA network has two primary components:
- Front-end processors (FEPs)–FEPs offload the coordination effort required to enable and support communication between IBM hosts and (potentially) thousands of remote devices.
- Remote controllers–Remote controllers are located at remote sites and are used to interconnect LANs and low-bandwidth (typically 9.6-kbps or 56-kbps) leased lines. Remote controllers concentrate several sources of remote traffic onto one high-bandwidth connection to the front-end processor.
IBM’s SNA environment supports multidrop technology, making multipoint leased-line configurations more cost effective than giving each SNA drop its own leased line. In a multidrop environment, several devices share the same leased line; the front-end processor or remote controller polls these devices, allowing each device a turn to communicate with the mainframe.
The IBM SNA environment relies heavily upon the FEP’s polling mechanisms because the FEP controls when each of its connected remote devices can send and receive data. The SNA infrastructure is based on this polling methodology.
When the FEP polls the remote device, it expects to see a response within a preconfigured timeout period. This timeout threshold is typically a fairly small period of time, generally a few seconds. If the timeout period expires, the poll is retransmitted. Frame discards and late frame arrivals (usually caused by network congestion) can disrupt SNA communication.
Figure 15-23 illustrates a Frame Relay implementation, replacing point-to-point leased lines, supporting an IBM SNA infrastructure.
Figure 15-23: IBM SNA Implementation over Frame Relay
SNA Data Link Protocols
Two reliable data link protocols are used for FEP/controller communication in the IBM SNA environment: Synchronous Data Link Control (SDLC) and Logical Link Control, type 2 (LLC2).
Modern SNA networks also support end-to-end sessions set up by the Advanced Peer-to-Peer Networking (APPN) protocol. Figure 15-24 illustrates an APPN infrastructure supporting communication between a mainframe, AS/400 hosts, and LAN systems.
NOTE: APPN relies on LLC2 links.
IBM offers an extension to APPN that can optionally operate without LLC2: High Performance Routing (HPR). HPR can operate without an underlying reliable data link protocol. Retransmission and flow control are performed end-to-end by a higher-layer protocol, similar to TCP within the TCP/IP protocol suite.
HPR traffic that does not operate on top of an LLC2 link can be transmitted across Frame Relay links without encountering the issues associated with reliable links such as SDLC or LLC2; an SDLC or LLC2 poll timeout is one example of a reliable-link issue that does not occur with HPR.
Figure 15-24: APPN Network
SDLC and LLC2
The IBM SDLC protocol was designed for SNA-based networks and has a number of features that must be addressed when leased-lines are replaced by Frame Relay circuits. These features include the following:
- SDLC is a master/slave polling protocol–An FEP or controller polls remote devices to ascertain whether they have data to send or receive. SDLC polling traffic is heavy and consumes bandwidth. In addition, the FEP or controller must receive poll responses within a strictly predictable time limit, usually just a few seconds.
- SDLC makes liberal use of control frames for flow control–A Frame Relay circuit that is carrying raw SDLC traffic will be congested with frequent SDLC polls and other control traffic.
- Each SDLC information frame is numbered in sequence and contains frame acknowledgements–After a preset number of frames have been sent, data transmission will not proceed unless the sender receives an acknowledgement from the terminating partner (receiver).
- SDLC is not used for LAN peer-to-peer communications–SNA LAN frames contain an LLC2 header that contains both the frame sequence and the acknowledgement numbers.
- LLC2 does not have the polling overhead attributed to SDLC–LLC2 does have the overhead associated with reliable, ordered, flow-controlled delivery of data across a communications link.
Data-Link Switching (DLSw)
Data-link switching (DLSw) is a means of transporting SNA and NetBIOS traffic across a network using many different protocols. The original RFC 1434 described DLSw, but that RFC has been superseded by RFC 1795, which describes DLSw version 1. More recently, scalability enhancements have been introduced in DLSw version 2. Cisco has introduced some enhancements in its DLSw+ implementation that are backward compatible with both version 1 and version 2.
DLSw has the following advantages over SRB:
- DLSw gets around the SRB 7-hop limit.
- DLSw allows multiple connections across a network.
- DLSw improves session response times.
- DLSw provides flow control.
- DLSw reroutes traffic around broken links.
- DLSw removes the SRB heavy broadcast traffic.
Additionally, DLSw implementations provide SDLC to LLC2 conversion, eliminating the need for many Front End Processor (FEP) ports. DLSw supports RFC 1490, enabling LLC2 over Frame Relay and DLSw prioritization.
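As a sketch of SDLC-to-LLC2 conversion on a DLSw router (assuming DLSw local and remote peers are already defined, and using illustrative SDLC and MAC addresses), the serial interface toward an SDLC-attached controller might be configured along these lines:
interface Serial1
 encapsulation sdlc
 sdlc role primary
 sdlc vmac 4000.3174.0000
 sdlc address C1
 sdlc partner 4000.3745.0001 C1
 sdlc dlsw C1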
DLSw uses the Switch-to-Switch Protocol (SSP) in place of source route bridging (SRB) between routers. SSP is used to create DLSw peer connections, locate resources, forward data, and handle flow control and error recovery. TCP is used for DLSw encapsulation. A newer, standard version of DLSw is not restricted to TCP for encapsulation services.
The routers are called data-link switches. The data-link connections (DLCs) are terminated at the router, or data-link switch, so that the Routing Information Field (RIF) ends at a virtual ring within the router. Because DLCs are locally terminated, they can be locally acknowledged. This local acknowledgement means that link-layer acknowledgements and keepalive messages do not need to traverse the WAN, minimizing session timeouts. Because the RIF ends at the peer router at each end, six hops can be added on each side of the virtual ring, thereby extending the network. With remote source-route bridging (RSRB), the RIF is carried all the way through the virtual ring, thereby limiting the number of hops. With DLSw, the virtual ring can be different in each peer because of the RIF termination.
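A minimal DLSw+ peer definition on a Cisco router, assuming Token Ring-attached SNA devices and using illustrative IP addresses and ring numbers, might look like this:
source-bridge ring-group 100
dlsw local-peer peer-id 10.1.1.1
dlsw remote-peer 0 tcp 10.2.2.2
!
interface TokenRing0
 source-bridge 1 1 100
 source-bridge spanning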
Frame Relay circuits that carry reliable link traffic incur substantial additional overhead. One Frame Relay circuit has the potential to carry several separate reliable links, and each link requires acknowledgement and flow-control messages, which in turn require available bandwidth to carry the additional traffic.
The carrying of LLC2 links across a Frame Relay circuit can be avoided with the use of DLSw, as illustrated in Figure 15-25.
Figure 15-25: Data Link Switching (DLSw)
When DLSw is implemented, the LLC2 links are terminated at each router. Incoming data is transmitted across the Frame Relay WAN via a TCP session and is then forwarded across a new LLC2 link.
NOTE: DLSw is not constrained to Frame Relay WANs; DLSw interoperates with any WAN technology.
The SNA traffic is preserved by the TCP sessions that support reliable data transfer. The TCP protocol, by its nature and design, adjusts well to sporadic transmission delays, efficiently manages acknowledgements, and carries out flow control without adding overhead to the communications flow.
Implementing DLSw has a disadvantage in that the TCP/IP headers add extra overhead to the transmitted data. This is generally worth the tradeoff compared to the overhead involved with the management of multiple independent LLC2 links.
SNA and DLSw Traffic Management
Following is an example of an access list enabling SNA traffic to be passed across a DLSw link:
access-list 200 permit 0x0d0d 0x0101
access-list 200 deny 0x0000 0xffff
dlsw remote-peer 0 tcp 1.1.1.1 lsap-output-list 200
If non-SNA traffic is to be blocked, it is recommended that you prevent that traffic from entering the router and being classified; once traffic has been classified, it has already begun to consume the router's resources. The following command, applied to the source-bridged LAN interface, filters traffic on input using the same LSAP list:
source-bridge input-lsap-list 200
Custom Versus Priority Queuing
To ensure that SNA traffic is managed (that is, sessions do not time out), Cisco recommends the use of either custom or priority queuing.
Priority queuing is easier to configure than custom queuing, but priority queuing can potentially “break” the Frame Relay network. Priority queuing always checks the higher-priority queues before checking the lower-priority ones. Therefore, if IP is configured in a high-priority queue and IPX in a normal-priority queue, IPX traffic can be completely choked out if an IP packet is always ready in the high queue (a condition known as infinite preemption). This results in lost IPX sessions, which creates problems for network users. This possibility can be eliminated by placing only known low-bandwidth protocols in the high queue. For example, a small number of SNA users running interactive 3270 traffic on a LAN, or SNA users residing off a slow SDLC line, cannot keep the high queue constantly full. The same applies to other protocols that are bandwidth constrained on the inbound side. This is the ideal situation in which to use priority queuing, as shown in the sketch that follows.
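A sketch of such a priority-queuing configuration, assuming SNA is carried in DLSw TCP sessions (TCP port 2065) and using illustrative queue assignments:
priority-list 1 protocol ip high tcp 2065
priority-list 1 protocol ip medium
priority-list 1 protocol ipx normal
priority-list 1 default normal
!
interface Serial0
 priority-group 1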
Custom-queuing removes the possibility of infinite preemption by permitting the administrator to customize how the various queues are serviced.
The following example demonstrates the process of queue servicing:
- For example, assume the router starts with 10 possible queues; the router cycles through all of the queues continuously.
- If queue 1 is configured to contain IP traffic and queue 2 to contain IPX traffic, the router services X number of bytes on queue 1, then moves on to queue 2 and services X number of bytes there. (The router administrator can configure the value of X.)
- If queue 2 has no packets after queue 1 has been serviced, the router immediately moves on to the next queue, which in this case is queue 1 again, allowing traffic on queue 1 to use all available bandwidth if no other protocols require it.
When the serial line (interface) is saturated, the queues can be configured with the proper byte-count values if the average size of the packets is known. This essentially configures bandwidth allocation on a per-protocol basis. In this scenario, some “tweaking” will likely be required.
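The following sketch shows one way this might be configured, assuming IP in queue 1, IPX in queue 2, and everything else in queue 3; the byte counts are illustrative and would be derived from the average packet sizes and desired bandwidth ratios:
queue-list 1 protocol ip 1
queue-list 1 protocol ipx 2
queue-list 1 default 3
queue-list 1 queue 1 byte-count 4500
queue-list 1 queue 2 byte-count 3000
!
interface Serial0
 custom-queue-list 1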
NOTE: As described here, per-protocol bandwidth allocation is a powerful feature that is not easy to implement. Care should be taken to review all configurations prior to implementing this strategy.
—
Our next segment from Cisco Press’ Network Consultants Handbook will deal with the fourth common Frame Relay Application, Voice over Frame Relay (VoFr).