Recently, I’ve written several articles discussing various aspects of working with Active Directory. However, while working on those articles, I realized that although I knew quite a bit about working with Active Directory, I knew very little about how Active Directory actually works with Windows 2000. Needless to say, I did some research. In this article, I’ll discuss the global catalog and the issue of replication between Active Directory servers.
Global Catalog
If you have ever managed a Windows NT network, you’re familiar with the concept of domains. In Windows NT, the first domain controller (DC) you brought online became your primary domain controller (PDC). It held the master copy of all the Windows NT security accounts. If you brought additional DCs online within the domain, they received a copy, or replica, of the information held by the PDC. If any changes were made to the security information, only the copy on the PDC would be changed. The PDC would alert the backup DCs of the change, and they would request the update when they were able to do so.
Right now, you’re probably wondering what my little crash course in Windows NT could possibly have to do with Active Directory. However, because Active Directory functions similarly in some respects to the Windows NT domain model, it seems appropriate to compare it to something that most of us are familiar with.
In Windows NT, the PDC was the central repository of all security account information. In Windows 2000, this role is played by the global catalog. The global catalog is created automatically when you bring the first DC online. The global catalog is actually nothing more than a database; it contains a full copy of every directory service object from the global catalog’s host domain. It also contains a partial copy of every directory service object from every other domain in the forest. This partial copy conserves space by including only the most commonly searched attributes of each object, such as a user’s logon name or first or last name.
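To make the full-copy/partial-copy distinction concrete, here’s a toy model of how a global catalog decides what to store for a given object. The attribute names, domain names, and function names are hypothetical illustrations, not actual Active Directory APIs; this is a sketch of the storage strategy, not an implementation of it.

```python
# Toy model: a global catalog holds every attribute for objects from its own
# domain, but only a partial attribute set for objects from other domains
# in the forest. (Attribute and domain names here are illustrative.)

# Commonly searched attributes kept for objects from other domains.
PARTIAL_ATTRIBUTE_SET = {"sAMAccountName", "givenName", "sn"}

def catalog_entry(obj, home_domain, gc_domain):
    """Return the copy of `obj` a global catalog in `gc_domain` would hold."""
    if home_domain == gc_domain:
        return dict(obj)  # full replica of objects from the host domain
    # Partial replica: keep only the commonly searched attributes.
    return {k: v for k, v in obj.items() if k in PARTIAL_ATTRIBUTE_SET}

user = {
    "sAMAccountName": "jdoe",
    "givenName": "Jane",
    "sn": "Doe",
    "homePhone": "555-0100",
}

# Object from the catalog's own domain: every attribute is kept.
full = catalog_entry(user, home_domain="usa.example.com",
                     gc_domain="usa.example.com")

# Object from another domain in the forest: only the partial set is kept.
partial = catalog_entry(user, home_domain="china.example.com",
                        gc_domain="usa.example.com")
```

A user in the USA site searching for Jane Doe’s name finds her either way; looking up her phone number requires a query against a DC in her home domain.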
In Windows NT, the PDC was responsible for validating security information and for authenticating logins. However, this isn’t the case with the global catalog. As I mentioned earlier, the global catalog is only a database; it doesn’t validate logins. Logins are still authenticated by a DC. However, when a DC receives a login request, it checks with a global catalog server for the user’s universal group information.
Basically, the global catalog has two main functions: It aids network logins by providing universal group information to DCs during the login process, and it allows users to search for directory service objects in a quick and efficient manner regardless of where the objects are located within the forest.
As you can see, the global catalog plays an important part in Windows 2000. If the global catalog server goes down, only members of the Domain Admins group can log in to the network. Fortunately, any DC can be designated as a global catalog server. Therefore, it’s possible to protect yourself by having more than one global catalog server.
Replication
Now that you know something about how the global catalog works, let’s discuss how information stored in the global catalog is replicated among servers within the domain. As I mentioned earlier, in Windows NT, changes to the security information could only be made to the PDC. Backup DCs had to obtain updates from the PDC. In Windows 2000, this isn’t the case. Windows 2000 uses a multimaster domain model. This means that any DC can accept updates and is responsible for replicating those updates to other DCs.
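The multimaster idea can be sketched in a few lines of code. In the simplified model below, any DC accepts a write, stamps it with its own update sequence number (USN), and peers pull only the changes they haven’t seen yet. The class and method names are hypothetical, and real Active Directory replication tracks considerably more state (per-attribute versioning, tombstones, and so on); this is just the core contrast with the NT single-master model.

```python
# Minimal sketch of multimaster replication: any DC accepts originating
# writes (there is no PDC), and peers pull changes past a high-watermark.
# Names are illustrative, not actual Windows APIs.

class DomainController:
    def __init__(self, name):
        self.name = name
        self.usn = 0              # this DC's local update sequence number
        self.objects = {}         # dn -> (value, originating_dc, originating_usn)
        self.high_watermark = {}  # peer name -> highest USN already pulled

    def write(self, dn, value):
        """Accept an originating update -- any DC may do this."""
        self.usn += 1
        self.objects[dn] = (value, self.name, self.usn)

    def pull_from(self, peer):
        """Request every change that originated on `peer` since our last pull."""
        seen = self.high_watermark.get(peer.name, 0)
        for dn, (value, src, usn) in peer.objects.items():
            if src == peer.name and usn > seen:
                self.objects[dn] = (value, src, usn)
                self.high_watermark[peer.name] = max(
                    self.high_watermark.get(peer.name, 0), usn)

dc1, dc2 = DomainController("DC1"), DomainController("DC2")
dc2.write("cn=jdoe", "engineer")  # the update originates on DC2, not a "PDC"
dc1.pull_from(dc2)                # DC1 catches up by pulling, as in NT --
                                  # but the write could have landed anywhere
```

Note that the pull direction is the same as in Windows NT (replicas request updates), but because every DC can originate writes, every DC both pulls from and is pulled from by its replication partners.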
Two types of replication are at work within Windows 2000: Replication occurs within sites and between sites. As you might have guessed, replication within a site takes priority over replication between sites. This is the case for a couple of reasons. First, there’s usually more demand for the information being replicated within the information’s own site. For example, suppose your company has two sites: USA and China. If you’re a member of the USA site, you probably communicate frequently with other members of the USA site and thus need very up-to-date directory information within that site. However, the average worker will seldom have to communicate with the China site; so, although directory updates between the two sites are important, they aren’t as important as directory updates within the sites.
The other reason for the less frequent updates between sites is that sites are often linked by a low-bandwidth connection. For example, in our USA/China scenario, the sites might be linked by a T1 connection, but they certainly wouldn’t have the 100-Mbps links found within each site. Therefore, to keep from hogging that precious bandwidth, replication between sites isn’t constant.
Another fascinating aspect of directory replication is the way it’s performed within a site. As you build your site, Windows 2000 automatically arranges the servers within the site into a ring structure. That way, any one server can fail and replication will still function, because a ring always provides two paths for data to flow from each server. For example, suppose you have servers named A, B, C, and D, and that the normal replication path is A to B to C to D to A. Now, suppose that server B fails. In a linear structure, replication would fail, because A can’t communicate with B. However, in Windows 2000, replication simply flows the other way around the ring, following the path A to D to C. You can see an illustration in Figure 1.
Figure 1: Replication within a site works in a ring structure.
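The A/B/C/D scenario above can be sketched as a short function. This is only an illustration of the ring topology’s fault tolerance, not of how Windows 2000 actually builds its replication topology; the function name is hypothetical.

```python
# Sketch of ring replication: an update starting at one server travels the
# ring in both directions, so it still reaches every surviving server when
# one server fails. Server names follow the A/B/C/D example in the text.

def replication_reach(ring, start, failed=None):
    """Walk the ring from `start` in both directions, stopping at `failed`,
    and return the set of servers the update reaches."""
    n = len(ring)
    i = ring.index(start)
    reached = {start}
    for direction in (1, -1):           # clockwise, then counterclockwise
        for step in range(1, n):
            nxt = ring[(i + direction * step) % n]
            if nxt == failed:
                break                   # this direction is blocked
            reached.add(nxt)
    return reached

ring = ["A", "B", "C", "D"]
print(replication_reach(ring, "A"))              # healthy ring: all four servers
print(replication_reach(ring, "A", failed="B"))  # B down: A still reaches C and D
```

With B down, the clockwise walk from A stops immediately, but the counterclockwise walk still delivers the update to D and then C, which is exactly why the ring survives any single failure.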
Brien M. Posey is an MCSE who works as a freelance writer. His past experience includes working as the Director of Information Systems for a national chain of health care facilities and as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it’s impossible for him to respond to every message, although he does read them all.