Two things come to mind when somebody asks me what they should know about the layout of their Linux or Unix file system. First is the actual structure, or where things are stored. Second is whether to create multiple file systems or keep everything in one or two mondo devices.
The easiest question, about why things are where they are, isn't as easy as it seems. Different operating systems have different ideas about the specifics, but the root-level directories mean nearly the same thing across all platforms. The Linux Standard Base adopted the Filesystem Hierarchy Standard, or FHS, to attempt to formalize some of the stranger ideas out there. It diverges enough from tradition that it's worth mentioning, but first let's look at the traditional viewpoint.
As we all know, the file system is structured like a tree, with / at the top. The most common sub-directories within / are seen across most Unix and Linux flavors. The universal directories are:
- /bin: essential user commands, presumably available when running in single-user mode, when /usr may not exist
- /dev: device files, which provide access to devices on the system
- /etc: host-specific configuration information
- /lib: essential shared libraries, and perhaps even kernel modules on some systems (on others it's in /kern)
- /mnt: mount point for temporarily mounted file systems, usually created manually by administrators
- /opt: add-on or "optional" software, which is rarely used for that purpose
- /sbin: system binaries, where daemons and other things an administrator cares about live
- /tmp: temporary files, created by any application running on the system
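A quick way to see which of these classic top-level directories a given host actually has is to loop over the list and test each one (nothing here is system-specific; missing directories are simply reported):

```shell
#!/bin/sh
# Report which of the traditional root-level directories exist on this host.
for d in /bin /dev /etc /lib /mnt /opt /sbin /tmp; do
  if [ -d "$d" ]; then
    echo "$d: present"
  else
    echo "$d: missing"
  fi
done
```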
The LSB FHS added a few, most notably /srv. The purpose of /srv is to hold "data for services" provided by the system. This includes data for any external service, like a Web server, that the machine provides. Not many people follow this part of the standard, though certain Linux distros have recently begun to conform.
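To make the /srv idea concrete, here is a minimal sketch of what such a hierarchy might look like for a host serving web and FTP content. It is built under a scratch directory so it is safe to run anywhere, and example.org is just a placeholder name:

```shell
#!/bin/sh
# Sketch an FHS-style /srv layout in a throwaway directory.
root=$(mktemp -d)
mkdir -p "$root/srv/www/example.org" "$root/srv/ftp"
layout=$(ls "$root/srv")   # capture the per-service subdirectories
echo "$layout"
rm -rf "$root"
```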
What, exactly, is optional software? When talking about Solaris, optional software is anything but the base operating system. Download Sun's compiler suite and it goes in /opt/SUNWspro. Want more GNU software? Use Blastwave and everything lands in /opt/csw/. The long-held philosophy that you should never muck with operating system-managed files and directories makes good sense. You want to be sure that every program in /usr/bin runs how people expect it to run on a given operating system. Third-party supported software can easily explode if you replace /usr/bin/perl with an unknown version, for example. HP-UX, on the other hand, defies all logic by putting both OS and optional software in /opt.
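The /usr/bin/perl hazard comes down to PATH order. The sketch below demonstrates it with two fake "perl" wrappers standing in for an OS-supplied /usr/bin/perl and an add-on /opt/csw/bin/perl; the directories are created under a scratch path, so nothing real is touched:

```shell
#!/bin/sh
# Two stub interpreters: one mimicking the OS perl, one an add-on perl.
tmp=$(mktemp -d)
mkdir -p "$tmp/usr/bin" "$tmp/opt/csw/bin"
printf '#!/bin/sh\necho OS perl\n'     > "$tmp/usr/bin/perl"
printf '#!/bin/sh\necho add-on perl\n' > "$tmp/opt/csw/bin/perl"
chmod +x "$tmp/usr/bin/perl" "$tmp/opt/csw/bin/perl"

# OS directory first: scripts expecting the stock interpreter are safe.
os_first=$(PATH="$tmp/usr/bin:$tmp/opt/csw/bin"; export PATH; perl)
# Add-on directory first: the replacement silently takes over.
addon_first=$(PATH="$tmp/opt/csw/bin:$tmp/usr/bin"; export PATH; perl)

echo "OS-first resolves to: $os_first"
echo "Add-on-first resolves to: $addon_first"
rm -rf "$tmp"
```

Keeping add-on software in /opt, and only deliberately putting it ahead of the OS directories in PATH, is exactly what makes this failure mode an opt-in rather than a surprise.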
Then there’s Linux. Each variant provides any of a number of package managers, and they all install packages over top of the base system. This is fine, assuming you’re using officially supported packages from the distribution. Frequently, however, people will add repositories to their package manager’s configuration to get a wider variety of software. These are not part of the base system, and are not supported by the vendor. Depending on who prepared the package, they can tromp all over your operating system, leaving you in a completely unknown state.
It’s clear which philosophy I tend to agree with.
One, or One Hundred Filesystems?
The second point of contention, about how many physical filesystems to allocate, is an even more heated debate. I personally don't understand why, because it isn't a one-size-fits-all question; it actually seems easier than the first issue. Different servers will have different layouts, and that shouldn't lead to confusion the way the first issue does.
The debate goes something like this:
If I allocate one physical partition and mount / on it, everything is in a single place; if I create separate /, /var, /tmp, and so on, I run the risk of guessing wrong and having to deal with full filesystems. Indeed, fewer filesystems do provide some leeway if you estimate a size incorrectly. On the other hand, if you do fill up a filesystem that's too encompassing, you may find that your server handles it very poorly.
If /, /var, and /tmp are all on the same filesystem on a Web server, filling any one of them means the others are full as well. Web servers get extremely cranky when they can't write logs to /var (traditionally they are stored there), and likewise other daemons may fail if they cannot write to /tmp.
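This is also why you end up watching usage per mount point rather than for the disk as a whole. A minimal sketch of such a check, assuming POSIX `df -P` column order (the input below is canned sample data so the snippet behaves the same everywhere; on a real host you would pipe in `df -P` instead):

```shell
#!/bin/sh
# Warn about any filesystem at or above a usage threshold (percent).
check() {
  awk -v t="$1" 'NR > 1 {
      use = $5; sub(/%/, "", use)              # strip the % sign
      if (use + 0 >= t) printf "WARNING: %s at %s%%\n", $6, use
  }'
}

# Canned df -P style output: /var is nearly full, / is fine.
check 90 <<'EOF'
Filesystem 1024-blocks    Used Available Capacity Mounted-on
/dev/sda1      8000000 7700000    300000      97% /var
/dev/sda2     20000000 9000000  11000000     45% /
EOF
```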
One must also think about backup strategy when allocating filesystems, as not all backup software can work with only portions of a filesystem. In the end, it’s highly dependent on your server’s purpose. One a Web server, a separate /var is likely required, but on someone’s desktop, the need isn’t as pressing.
It's all a matter of opinion, once the technical issues are resolved. Backups are important, the number of filesystems you need to monitor is extremely important, and swap placement is probably the most overlooked consideration. The first blocks allocated on a disk sit on the outer edge of the platter, where data passes under the heads fastest. If you allocate a 70GB / filesystem at slice 0 and then toss on a 4GB swap as slice 1, you're shooting yourself in the foot. Swap should always be closest to the edge of the disk for optimal performance, and other frequently accessed filesystems should be close by. Aside from swap responding slowly, you also need to make sure the disk heads aren't constantly seeking across the entire platter to serve requests.
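As a sketch, a geometry-conscious slice map for that 70GB disk might look like the following (slice numbers, sizes, and the /var placement are hypothetical, chosen only to illustrate the outer-edge argument):

```
slice 0:   4 GB   swap   (outer edge: fastest sustained transfer)
slice 1:  10 GB   /var   (busy and log-heavy, so keep it near swap)
slice 3:  56 GB   /      (everything else, toward the inner tracks)
```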
I will not purport to know the ideal layout structure. Some Unix variants want just a few hundred MB for /, forcing you to create separate filesystems for everything else. Some middle ground is generally the best solution. As mentioned, I prefer to keep similar classes of machines all the same, and that means maintaining multiple partition maps within system deployment software. Most servers I maintain have a few GB for /var, a few for /, 6GB for /usr, and 10GB for /opt. Desktops generally have a 10GB /usr and a 10GB /. Your layout will depend on your needs, and don't let anyone tell you otherwise; just be prepared to justify your decisions.
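For illustration, a server map like the one above might translate into an /etc/fstab along these lines (device names, the ext3 choice, and the ordering are assumptions for the example, not a recommendation):

```
# device      mount   type   options    dump  pass
/dev/sda1     /       ext3   defaults   1     1
/dev/sda2     /usr    ext3   defaults   1     2
/dev/sda3     /var    ext3   defaults   1     2
/dev/sda5     /opt    ext3   defaults   1     2
/dev/sda6     swap    swap   defaults   0     0
```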