Virtualization technologies are gaining momentum, helped along by recent hardware improvements. But hypervisors are not the only option: you can also virtualize your infrastructure with OS-level virtualization.
Zones can be very useful for many tasks, including testing and server consolidation. Virtual servers are an excellent way to implement varying complexities of disaster recovery, as was described in the recent article, “Virtualize, and Streamline Disaster Recovery.” Let’s take a look at how this all works in Solaris 10.
The concept of zones is fairly straightforward. The already-running OS is the global zone, and any zones you add are non-global. Certain administrative functions, such as editing a zone's configuration, can only be performed from the global zone. The configuration holds important data that shapes the way a zone operates, including delegated file systems, resource controls, and device configurations. You can apply patches to the global zone, and if the non-global zones share /usr, /sbin, /platform and /lib with the global zone, they are automatically patched in unison. If you opt to create a zone with its own copy of these file systems, it will have to be patched separately.
Here’s a quick walkthrough of creating a simple zone. A really neat feature of zones is that you can create them on a ZFS file system, and the zone’s root file system will then be ZFS. You can’t actually have a ZFS root file system in Solaris 10 yet, but you can inside a zone. A trivial example of creating a ZFS pool from a single partition, then a file system within it:
# zpool create test c0t0d0s7
# zfs create test/zones
# zfs set quota=3g test/zones
# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
test         104K  64.5G  25.5K  /test
test/zones  24.5K  3.00G  24.5K  /test/zones
Easy as an apple dessert. We’d like to talk about the features more than provide a copy and paste-able “howto,” so the specifics of the commands shown in this article are left as an exercise to the reader.
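That said, if you want to sanity-check the result before building zones on it, a couple of read-only commands will confirm the pool’s health and the quota (assuming the test pool and test/zones file system created above):

```shell
# Verify the pool is ONLINE with no errors
zpool status test

# Confirm the 3GB quota was applied to the zone file system
zfs get quota test/zones
```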
Now let’s create a zone. We’ll want to share some file systems with the global zone. This is the minimum configuration necessary to set up a simple zone:
# zonecfg -z myzone
myzone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:myzone> create
zonecfg:myzone> set autoboot=true
zonecfg:myzone> set zonepath=/test/zones/myzone
zonecfg:myzone> add net
zonecfg:myzone:net> set address=10.1.1.1
zonecfg:myzone:net> set physical=bge0
zonecfg:myzone:net> end
zonecfg:myzone> verify
zonecfg:myzone> commit
zonecfg:myzone> exit
# zoneadm list -cv
  ID NAME    STATUS      PATH
   0 global  running     /
   - myzone  configured  /test/zones/myzone
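It is also worth knowing that a configuration entered this way can be dumped back out in a replayable form. A hedged sketch (the file name and the new zone name are arbitrary):

```shell
# Dump the zone's configuration as a series of zonecfg commands
zonecfg -z myzone export > /tmp/myzone.cfg

# The saved file can later recreate the configuration,
# for example under a new zone name on the same or another host
zonecfg -z newzone -f /tmp/myzone.cfg
```

Note that a configuration replayed this way still points at the original zonepath and IP address, so edit those before installing a second zone from it.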
The configuration is stored in an XML file within /etc/zones. This step simply set up the configuration, but you can’t use it until it is “installed.” This command was run on a fairly slow Sun Blade 1500, and took 26.5 minutes to complete:
# zoneadm -z myzone install
Preparing to install zone <myzone>.
Creating list of files to copy from the global zone.
Copying <59863> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1127> packages on the zone.
Initialized <1127> packages on zone.
Zone <myzone> is initialized.
The entire install took only 84MB of space. Now that the zone is installed, it can be booted by running zoneadm -z myzone boot. You can connect to the console using zlogin -C myzone and answer the usual time zone and directory services questions asked during a Solaris install. After that, the zone is ready to use.
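zlogin is not limited to the console: once the zone is running, you can also fire one-off commands into it from the global zone. A quick sketch, assuming the myzone zone from above:

```shell
# Boot the zone and confirm it reaches the "running" state
zoneadm -z myzone boot
zoneadm list -cv

# Run single commands inside the zone without a console session
zlogin myzone zonename
zlogin myzone uname -a
```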
If you decide that 3GB is too small for the zone, you can either create another ZFS file system in the pool or simply enlarge the existing one by raising its quota: zfs set quota=5g test/zones.
There are a few issues that frequently arise when people begin using zones. Standard concerns include: backups, resource limiting and OS updates.
When an update to Solaris comes out, you can run the “upgrade” install option without worrying about the zones: they get updated at the same time, unless you’ve installed them on ZFS as we did in the example. That is a current bug, which should be fixed in the next release. Many people shun the upgrade option, choosing to always reinstall from scratch. Unfortunately, this means each non-global zone will have to be recovered by restoring its file systems manually.
A ZFS file system can be backed up easily with zfs send and restored with zfs receive. If the global zone has not changed at all, it is possible to reuse an existing zone configuration obtained with zonecfg export. You can move zones between physical servers in this manner, but only if both machines are at the same patch level. Hopefully, zones will eventually be able to move between servers while they are running.
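Putting those pieces together, a migration between hosts might look like the following sketch. It assumes the zone is halted, that the target machine has a pool named tank (a hypothetical name), and that both machines are at the same patch level:

```shell
# On the source host: halt the zone and snapshot its file system
zoneadm -z myzone halt
zfs snapshot test/zones/myzone@migrate
zfs send test/zones/myzone@migrate > /var/tmp/myzone.zfs

# Save the zone configuration alongside it
zonecfg -z myzone export > /var/tmp/myzone.cfg

# On the target host: restore the file system and replay the config
zfs receive tank/zones/myzone < /var/tmp/myzone.zfs
zonecfg -z myzone -f /var/tmp/myzone.cfg
```

If the pool name or mount point differs on the target, remember to adjust zonepath in the saved configuration before booting the zone.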
On the issue of backups, this too is a bit complicated. You don’t want to simply run your normal backup software from the global zone, allowing it to include the zone install directory. Shared file systems are mounted with the loopback file system, and most backup programs will happily backup everything twice. One option is to backup each non-global zone separately, specifying only non-shared file systems. Legato, for example, can be configured to backup the hostname of a non-global zone and ignore non-UFS file systems.
Resource limiting is easily accomplished by using zonecfg to add resource controls to a zone. The available options are quite complex, but in the simplest form you enable the fair-share scheduler and configure a non-global zone to use, say, 60% of the CPU. The prctl command can then be used to adjust the assigned values on the fly, if required.
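As a hedged sketch of that fair-share setup: the zone is granted CPU shares through the zone.cpu-shares resource control, and 60 shares only works out to 60% of the CPU if the other zones hold 40 shares between them, since shares are relative rather than absolute:

```shell
# Make the fair-share scheduler (FSS) the default scheduling class
dispadmin -d FSS

# Grant the zone 60 CPU shares via a resource control
zonecfg -z myzone 'add rctl; set name=zone.cpu-shares; add value (priv=privileged,limit=60,action=none); end; commit'

# Later, adjust the shares on the running zone without a reboot
prctl -n zone.cpu-shares -r -v 30 -i zone myzone
```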
Zones are very flexible and quite easy to use. Since they are OS-level virtualization and share a single kernel, you can’t run anything but another copy of the same operating system. That makes them a perfect way to consolidate servers: zones add very little overhead, yet still maintain a high degree of isolation from the host operating system.