Choosing the Cluster Type That's Right for You

By Brien M. Posey | Oct 7, 2000
http://www.enterprisenetworkingplanet.com/netsysm/article.php/624381/Choosing-the-Cluster-Type-thats-Right-For-You.htm

In part 1 of this series, Is a Server Cluster Right for Your Organization?, I introduced you to the concept behind server clustering and explained some of the basic hardware and software requirements for establishing a server-clustering environment. In this article, I'll continue with server cluster planning and explain how the different types of cluster environments suit various business needs.

"Knowing what you're trying to accomplish before you start building a server cluster is a requirement from a technical standpoint, because of the various types of server clusters."

What Do You Want To Accomplish?

As I mentioned in the introduction, there are a couple of different types of cluster environments. Determining which one is right for you depends heavily on what you're trying to accomplish. For example, suppose your company has a Web server that has been getting an extremely high number of hits. It might make sense to cluster the Web server to reduce the workload placed on any single server. Normally, this involves nothing more than a simple network load-balancing (NLB) cluster model, in which each server is loaded with a copy of your Web site. A network load-balancing service directs incoming traffic to the server that's currently carrying the lightest load.

A load-balancing cluster model is the least complex and least expensive type of clustering, because each server maintains its own independent copy of the files that contain the Web page. With the possible exception of a database that resides on a separate server outside of the cluster, no files are shared. If a server drops offline, the other servers in the cluster pick up the slack. Because there are no shared files, there's no danger of a failed server making critical data unavailable. As the demand on your Web server array grows, you can easily add more servers to the cluster. Each new server will have its own copy of your Web site and be linked into the network load-balancing service.

Or your company might need a critical business application to be available 24 hours a day, 7 days a week. In a situation like this, you'd need much more complicated and expensive hardware than you would in the Web server scenario I described earlier. You'd have to set up your cluster servers to share a common hard disk array. Because the servers share a disk array, this implementation can't use the technique in which each server keeps its own copy of the files. Instead, you'd set up the servers so that the server a particular user reaches for an application depends on the groups that user belongs to. If a user's assigned server were to fail, another server in the cluster could pick up the slack and respond to the user's requests. Because all the servers in the cluster use the same hard disk array, there's no danger of a user losing access to a particular database record they're working on just because a server goes down.
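To make the shared-disk arrangement a bit more concrete, here's a rough sketch in Python of the assignment-and-failover idea. It isn't how Windows 2000 actually implements clustering (the Cluster service handles this internally), and the node and group names are invented for the example; it only shows why a user keeps working when their assigned server dies.

    # Rough illustration only: group-based server assignment with failover.
    # Node and group names are invented; in a real Windows 2000 cluster this
    # logic lives in the Cluster service, not in your own code.
    preferred_server = {"Accounting": "NODE-A", "Sales": "NODE-B"}
    node_online = {"NODE-A": True, "NODE-B": True}

    def server_for(group):
        # Use the group's assigned server if it's up; otherwise fail over to
        # any surviving node. Both nodes read the same shared disk array, so
        # either one can answer the request.
        first_choice = preferred_server[group]
        if node_online[first_choice]:
            return first_choice
        survivors = [node for node, up in node_online.items() if up]
        if not survivors:
            raise RuntimeError("no cluster node is available")
        return survivors[0]

    node_online["NODE-A"] = False       # simulate a crashed server
    print(server_for("Accounting"))     # the request is now answered by NODE-B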

Knowing what you're trying to accomplish before you start building a server cluster is a requirement from a technical standpoint, because of the various types of server clusters. Advance planning is also a requirement from a business standpoint. In most of the companies I've worked for, upper management tends to be a little afraid of computers. They often see new technology as expensive, unproven, and a great opportunity to lose critical data and productivity during a crash. If you want to install a server cluster in your organization, you'll usually end up having to convince upper management that the cluster isn't simply an expensive new toy for the IT department. To win that argument, you'll have to decide which type of server cluster is right for your environment so that you can explain to management how that type of clustering can benefit your organization.

Now that you know what you're up against from a political and a technological standpoint, let's take a more technical look at the two major types of server clusters: the network load-balancing cluster and the full-blown server cluster. For the sake of clarity, I'll refer to these as NLB clusters and server clusters for the rest of the article.

NLB Cluster Models

NLB clustering is the method I used in my first example with the Web server that was experiencing high traffic volumes. As I mentioned earlier, in an NLB environment, each server contains its own hard disks and, therefore, its own copy of the application you're clustering. Each server in an NLB environment functions as an independent entity. The cluster can contain between 2 and 32 servers. The extent of the actual clustering process is that each server communicates its present workload and other status information to other servers in the cluster. This is done in such a way that as more clients try to use the clustered application, they are sent to the server with the least current workload.
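If it helps to see that routing decision spelled out, here's a minimal sketch in Python. It's purely conceptual -- the real work happens in a network driver, not in application code -- and the server names and load figures are made up. All it shows is what "send the client to the server with the least current workload" amounts to.

    # Conceptual sketch of least-load routing. In a real NLB cluster the load
    # information comes from the status messages the servers exchange; here
    # the figures are simply hard-coded for illustration.
    reported_load = {
        "web1": 0.62,   # fraction of capacity in use, as last reported
        "web2": 0.35,
        "web3": 0.80,
    }

    def pick_server(loads):
        # Every server holds its own copy of the site, so any of them can
        # answer; choose the one that reported the lightest load.
        return min(loads, key=loads.get)

    print(pick_server(reported_load))   # -> web2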

Obviously, this functionality doesn't happen by magic. Even though NLB represents clustering in its simplest form, NLB still has some very specific hardware and software requirements. The first requirement for running NLB is that all of the servers involved in the cluster must be running TCP/IP, because the NLB driver resides between the TCP/IP stack and the physical layer (the network card). In addition, the server must be running an NLB control program to perform the actual distribution of labor. In the case of Windows 2000, this task is performed by the Windows Load Balancing Service, which exists in the WLBS.EXE file. This service generally consumes between 250 KB and 4 MB of RAM and about 1 MB of hard disk space.
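WLBS.EXE also doubles as a command-line control program, so you can check on the service from a script. The snippet below is only an illustration in Python: it shells out to wlbs query, and it assumes it's running on a Windows 2000 host that already has the Windows Load Balancing Service installed, with wlbs.exe available on the path.

    # Illustration only: ask the WLBS control program for the cluster's status.
    # Assumes a Windows 2000 host with the Windows Load Balancing Service
    # installed, so that wlbs.exe can be found on the path.
    import subprocess

    result = subprocess.run(["wlbs", "query"], capture_output=True, text=True)
    print(result.stdout)   # reports the state of the local host and the cluster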

NLB has rigid requirements for the physical layer, as well. For starters, NLB only works on networks running Fiber Distributed Data Interface (FDDI) or Ethernet. The network's medium must be capable of sustained speeds ranging between 10 Mbps and 1 Gbps.

Although it's not a strict requirement, NLB works best if you have two network cards in each server. One network card serves as a link to the clients, whereas the other card becomes part of a dedicated backbone between the servers in the cluster. This technique is particularly effective when the network cards are run in unicast mode as opposed to multicast mode.

Server Clusters

Unlike NLB clusters, a server cluster is made up of multiple machines that function as a single machine. According to my sources, current technology limits this type of cluster to two servers, although down the road it may be possible to add more. Therefore, this particular configuration is usually better suited to providing fault tolerance than to improving network performance.

Usually, both machines in a server cluster are linked to a common hard disk array through a SCSI interface. Therefore, if either server were to crash, everything on the hard disk would still be accessible from the other server.

As with the NLB cluster configuration, server clusters require each machine to have two network cards. One card is used as a dedicated backbone between servers; the other card is used to service the clients. Technically, one of the cards that services clients doesn't even have to be connected to the network. However, if fault tolerance is your goal, not connecting both cards would be very foolish.

Server clusters are only available through Windows 2000 Advanced Server. Another requirement is that the servers in the cluster must be running TCP/IP.

Server clusters, like most other things in Windows 2000, are modular in nature. They are made up of nodes, groups, and resources. As you might expect, a node is simply a server that is part of the cluster. A group is a unit of failover. Each group contains a collection of resources, which are objects that can be brought online or taken offline. A group is owned by a node, and all of the resources within a group run on the node that owns the group. If any one resource within a group fails, all of the resources in the group are temporarily moved to a different node until the cause of the failure is resolved.
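The relationships are easier to see as a small data model. The Python sketch below is only a thumbnail -- the Cluster service is not written this way, and all of the names are invented -- but it shows why one failed resource drags its entire group to another node.

    # Thumbnail of the node/group/resource relationship. Names are invented.
    class Resource:
        def __init__(self, name):
            self.name = name
            self.online = True

    class Group:
        # A group is the unit of failover: its resources always share one owner.
        def __init__(self, name, resources, owner):
            self.name = name
            self.resources = resources
            self.owner = owner              # the node that currently owns the group

        def fail_over(self, new_owner):
            # One failed resource moves every resource in the group together.
            self.owner = new_owner
            for resource in self.resources:
                resource.online = True      # brought back online on the new node

    sql = Group("SQL", [Resource("Disk"), Resource("IP address"), Resource("SQL service")],
                owner="NODE-A")
    sql.resources[2].online = False         # one resource in the group fails...
    sql.fail_over("NODE-B")                 # ...so the whole group moves
    print(sql.owner)                        # -> NODE-B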

You might wonder how Windows 2000 knows how and when to move groups between nodes. It does so using something called the Quorum Resource. The Quorum Resource exists on an NTFS partition within the shared hard disk array. It is basically a collection of the cluster's configuration information, failover policies, and recovery logs.

As with NLB clusters, there are several different ways to configure server clusters. I'll dedicate an upcoming article to configuring NLB clusters. Then, I'll continue the series with an article on configuring server clusters. These articles will touch on such issues as the various server configuration models available and how to do capacity planning for each type of server.

Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the director of information systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it's impossible for him to respond to every message, although he does read them all.