Implement NFSv4: Domains and Authentication
With NFSv4, you get real ACL support, encryption, and much more flexible authentication options. Here's how to start implementing it today.
Last week, we talked about some new features available in NFSv4. This week we'll explain what's required to get NFS servers and clients talking NFSv4, and briefly talk about the components for secure NFS.
New in NFSv4 is the "domain" concept. Before NFSv4 will allow access to a file based on user identity, it first checks that the NFS domains of the client and server match. If the configured domains differ, NFS will deny access.
The first step to using NFSv4 is to configure the domain. By default, most implementations will try to operate based on the DNS domain of the client and server. If they match, everything will work. If not, you have a few choices.
The NFS domain can be configured manually on each client and server. In the Solaris world there's a variable in the file /etc/default/nfs called NFSMAPID_DOMAIN which can be set to the correct domain. However, it is much easier to use DNS for this information.
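For the manual route, a minimal Solaris sketch looks like the following. The domain value z.foo.com is just an example, and the service name assumes a Solaris 10-style SMF setup:

```shell
# Solaris: set the NFSv4 domain explicitly (z.foo.com is an example value).
# /etc/default/nfs ships with a commented-out placeholder; set it like so:
#   NFSMAPID_DOMAIN=z.foo.com
grep -q '^NFSMAPID_DOMAIN=' /etc/default/nfs || \
    echo 'NFSMAPID_DOMAIN=z.foo.com' >> /etc/default/nfs

# Restart the id-mapping daemon so the change takes effect:
svcadm restart svc:/network/nfs/mapid
```

Every client and server in the environment needs the same value, which is exactly why the DNS approach below scales better.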
An NFS client or server will query DNS for this information. If a computer resides in a.foo.com, it will ask a.foo.com about NFS domains. Likewise, if resolv.conf is configured to search multiple domains, say b.foo.com and foo.com, then each of those domains will be queried, too.
The IETF has two proposals thus far: one for a new RR (Resource Record) type and one specifying the format of a TXT record. Most DNS servers haven't implemented the new RR type yet, but in theory you can set the domain by specifying the RR NFS4ID. If our previous example's NFS4ID were configured to be z.foo.com, then all the clients would use that as the NFS domain. Most sites will be stuck using the TXT record instead. The same concept is at work here; only the format differs. For BIND 9, the NFS domain can be set to z.foo.com with a TXT record like so:
_nfsv4idmapdomain IN TXT "z.foo.com"
Just remember, if clients aren't configured to search a common DNS domain for DNS lookups, you'll need to add this record for each DNS domain that contains NFS clients.
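You can sanity-check the records from any client with dig. The record name _nfsv4idmapdomain is the one Solaris' nfsmapid looks up, and the domains below match our running example:

```shell
# Verify the TXT record is visible from a client:
dig +short TXT _nfsv4idmapdomain.a.foo.com

# Repeat for every domain in the clients' resolv.conf search list:
dig +short TXT _nfsv4idmapdomain.b.foo.com
dig +short TXT _nfsv4idmapdomain.foo.com
```

Each query should return the same "z.foo.com" string; a missing answer for any searched domain means some clients will fall back to guessing.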
Once that's taken care of, you can begin enabling NFSv4. We'll start with the easiest case, Solaris. Edit the /etc/default/nfs file and set NFS_SERVER_VERSMAX=4 and NFS_CLIENT_VERSMAX=4. On the server, restart the NFS service with "svcadm restart nfs/server". Clients will probably need to unmount any existing NFS file systems before restarting the nfs/client service. Afterward, everything should be working.
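Put together, the Solaris steps look roughly like this. The mount point and export path are hypothetical, and the SMF service names assume Solaris 10 or later:

```shell
# In /etc/default/nfs, cap the negotiated protocol version at 4:
#   NFS_SERVER_VERSMAX=4
#   NFS_CLIENT_VERSMAX=4

# Server side: restart the NFS server service.
svcadm restart svc:/network/nfs/server

# Client side: unmount existing NFS file systems first, then restart.
umount /mnt/export                         # example mount point
svcadm restart svc:/network/nfs/client
mount -F nfs server:/export /mnt/export    # v4 is negotiated automatically
```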
ACL support can be tested easily enough, but if you're using ZFS, you'll quickly notice that the good old setfacl/getfacl programs no longer work: ZFS uses NFSv4-style ACLs rather than POSIX ACLs, and the chmod and ls programs have been extended to manage them. To verify that NFSv4 ACL support is working properly, try:
chmod A+user:bin:delete:allow testdir
The previous command gave the bin user permission to delete this directory. We can verify that it worked over NFSv4 with:
# ls -V testdir
drwxr-xr-x+  2 charlie  them    4 Nov  1 15:31 testdir
        user:bin:----d---------:-------:allow
Yes, that's a lot of new permissions to learn! The chmod manpage is very helpful. Note above that username bin has 'd' permissions, or "delete" on my testdir directory.
In theory, setting ACLs over NFSv4 will work in Linux too. Getting Linux clients to talk NFSv4 is certainly more difficult. We're sad to report that despite the extensive testing NFSv4 has gone through with Linux, it isn't the holy grail we suggested last week. Enabling NFSv4 on Ubuntu Dapper instantly caused kernel bug messages to show up in syslog, followed by crashes, even after a reboot. Luckily, Edgy (the latest Ubuntu) didn't suffer the same fate.
In Linux, we simply enabled the idmap daemon in /etc/default/nfs-common with NEED_IDMAPD=yes. It may be necessary to configure /etc/idmapd.conf with the NFS domain, depending on your distro and kernel version. Afterward, you should be able to manually mount NFS file systems with the nfs4 option. If all your files are owned by the 'nobody' user, the NFS domain is incorrect. To enable NFSv4 on autofs-mounted file systems, just add -fstype=nfs4 to the mount options.
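A condensed Debian/Ubuntu sketch of those steps follows. The server name, export path, and domain are all example values, and the init-script path reflects distros of this era:

```shell
# In /etc/default/nfs-common, enable the id-mapping daemon:
#   NEED_IDMAPD=yes
# In /etc/idmapd.conf, set the NFS domain to match the server:
#   [General]
#   Domain = z.foo.com

/etc/init.d/nfs-common restart

# Mount manually; note the nfs4 type, and that the path is relative
# to the server's NFSv4 pseudo-root:
mount -t nfs4 server:/export /mnt/export

# Equivalent autofs map entry:
#   export  -fstype=nfs4  server:/export
```

If ls on the mounted file system shows everything owned by nobody, revisit the Domain setting before suspecting anything else.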
Once mount options and user id issues are sorted out, you can begin playing with NFSv4 authentication and encryption. Solaris, AIX, Linux, and others can all use Kerberos, so encrypted NFS is quite feasible. Authentication is easy; encryption, of course, is more difficult to set up. If you're already working in a functional Kerberos environment, 90% of the battle is over.
To authenticate NFS clients, DH (Diffie-Hellman) can be used, and so can krb5 (Kerberos). The general concept with DH is for each user to establish his own secure RPC password, which is used to authenticate against a NIS server running the keyserv daemon. If all the stars align, users with the same network and login password will not have to do anything special. Once all the key issues are worked out (we make this sound wonderfully trivial), adding sec=dh to the share and mount options will enable DH authentication.
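On Solaris, the share and mount side of DH looks something like this sketch; the export path, server name, and mount point are examples, and the secure RPC key setup (NIS publickey map, keyserv) is assumed to already be in place:

```shell
# Server: export with Diffie-Hellman security.
share -F nfs -o sec=dh /export

# Each user establishes (or re-encrypts) a secure RPC key:
chkey -p

# Client: mount with matching security flavor.
mount -F nfs -o sec=dh server:/export /mnt/export
```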
Of course that's a pain, and Kerberos is the answer. The mount options for Kerberos are krb5 (authentication only), krb5i (authentication + integrity checksums), or krb5p (privacy). Kerberos is another topic for another article, but the options are mentioned here for the already-configured Kerberos users out there.
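For those with a working realm, a hedged example of the Kerberos flavors in action (server, export, and mount point are hypothetical; the NFS service principal and keytabs are assumed to exist already):

```shell
# Server: require authentication, integrity, and privacy (encryption).
share -F nfs -o sec=krb5p /export

# Solaris client:
mount -F nfs -o sec=krb5p server:/export /mnt/export

# Linux client, same idea with Linux mount syntax:
mount -t nfs4 -o sec=krb5p server:/export /mnt/export
```

Use sec=krb5 or sec=krb5i instead of krb5p if you want authentication without the CPU cost of encrypting every payload.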
NFSv4: real ACL support, encryption, and much more. What else could you ask for?