Share iSCSI Volumes With Linux Clients via ZFS


The Sun x4500 series storage servers have been a big hit. You get 48 disks, and ZFS takes advantage of the large number of disk spindles to achieve amazing throughput. Here’s another great use for the x4500: host all your virtual machines on it and serve them to your Linux clients via iSCSI. We’ll show you how.

Before we get started, a quick clarification is in order. The codename for the x4500 is “Thumper.” The Thumper has 48 SATA disks (up to 1TB each), and it shipped with the operating system installed on two of those disks, mirrored. The x4540 is called “Thor,” and it ships with a compact flash drive for the OS.

Sharing iSCSI in Solaris

This couldn’t be simpler. In the basic mode, without configuring authentication, you only need to run two commands to create a volume in ZFS and share it via iSCSI. However, this is not the standard zfs create command you may be used to running, because we don’t actually want to create a ZFS file system.

Instead, we want to create a block device in the pool, which allows us to export it as such. We still get the benefits of living in the ZFS pool, namely RAID-Z and checksumming, but we do not get a ZFS filesystem. By default, a reservation is created at the same time, to ensure that all the space you’ve allocated is available and deducted from the pool size. If you choose, you can employ thin provisioning with the -s (sparse) option, but this is not recommended: if another operating system is using the block device and the pool runs out of space, writes will suddenly fail even though the client OS thinks there’s free space, and bad things can happen.
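If you decide to experiment with thin provisioning anyway, the sparse variant is simply the same create command with -s added; no reservation is made, so the 20GB is not deducted from the pool. (The volume name below is just a placeholder.)

# zfs create -s -V 20g VMs1/sparse-test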

So, let’s create the block device:

# zfs create -V 20g VMs1/xen03

The above command creates a block device (-V) of size 20GB in the zpool named VMs1, and calls it xen03, which is the name of my test VM that will be given this device.

Next, we need to share it via iSCSI:

# zfs set shareiscsi=on VMs1/xen03

This simply sets the shareiscsi property on the volume and starts the required services if they aren’t already running.
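To confirm that the target service actually came online, you can check its SMF status (assuming the stock Solaris 10 iscsitgt service):

# svcs iscsitgt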

We can use zfs list to view the results, but the juicy details are in the output of zfs list -o all, which lists all properties of the file system. I prefer to limit it to the set of values I care about with these options, for example:

# zfs list -o name,type,used,avail,ratio,compression,reserv,volsize,shareiscsi VMs1/xen03
NAME        TYPE    USED   AVAIL  RATIO  COMPRESS  RESERV  VOLSIZE  SHAREISCSI
VMs1/xen03  volume  35.9K  1.75T  1.00x  off       20G     20G      on

This tells us that the type is a volume, it has used 35.9K, the total available space in the pool is 1.75TB, the compression ratio is 1.0x and compression is off, the size of the volume is 20GB and that a reservation of the same amount exists, and finally that iSCSI sharing is enabled.

Mounting iSCSI Volumes in Linux

Finally, we are ready to make an iSCSI connection and start using our new block device. In the following example we’re using Red Hat Enterprise Linux 5.

Begin by installing the iscsi-initiator-utils package and enabling the service:

yum install iscsi-initiator-utils
chkconfig iscsi on
service iscsi start

Next, we have a few options. We need to discover the targets on the remote server, and there are essentially two ways to go about it. If you’ve set up authentication and the only iSCSI targets this server will see are the ones it’s supposed to see, it’s safe to go ahead and discover all targets. Otherwise, you will want to connect only to specific targets.

Discovering all targets on the server:

iscsiadm -m discovery -t sendtargets -p SERVERNAME

This iscsiadm command gets a list of all iSCSI targets on the server SERVERNAME and creates the necessary glue to connect to them. If you restart the iSCSI service (service iscsi restart), the targets will be connected and will show up as block devices.

Now, you’ve connected to the device! You should be able to see it in fdisk -l:

Disk /dev/sdd: 21.4 GB, 21474836480 bytes
64 heads, 32 sectors/track, 20480 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

The next steps are to create a file system, then use it as you would any other. Ideally you’ll use LVM so that it can be resized easily.
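For instance, a minimal LVM setup on the new device might look like this (assuming the device appeared as /dev/sdd as in the fdisk output above; the volume group, logical volume, and mount point names are placeholders):

pvcreate /dev/sdd
vgcreate vg_iscsi /dev/sdd
lvcreate -L 19G -n lv_xen03 vg_iscsi
mkfs.ext3 /dev/vg_iscsi/lv_xen03
mkdir /mnt/xen03
mount /dev/vg_iscsi/lv_xen03 /mnt/xen03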

To avoid connecting to all available targets, you probably want to manage them manually. First, on the Solaris server, run the following command:

# iscsitadm list target
Target: VMs1/xen03.cic.pdx.edu
    iSCSI Name: iqn.1986-03.com.sun:02:c035874a-7224-e2c5-d171-c04251993f0f
    Connections: 0

Note the iSCSI Name, and keep only that target connected. Discovered targets are stored in /var/lib/iscsi/nodes in Red Hat, and this information is used when reconnecting.
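To log in to just that one target manually, you can pass the iSCSI name to iscsiadm in node mode (the IQN below is the one from the listing above; substitute your own server name):

iscsiadm -m node -T iqn.1986-03.com.sun:02:c035874a-7224-e2c5-d171-c04251993f0f -p SERVERNAME --login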

While testing, you will likely want to connect and disconnect manually a few times. The iscsiadm session mode (-m session) will allow you to disconnect all volumes easily with -u. You can list all sessions with:

iscsiadm -m session

Discovery of iSCSI nodes can be done in many ways. The above example used “sendtargets,” which connects to a server and asks for a list of all available targets. If you’re going to use this in production, you probably want an iSNS server to centralize the management of these settings. More information can be found at the linux-iscsi Web site.

Authentication

A great way to secure iSCSI shares, and to limit discovery to only the targets an initiator (client) is supposed to have access to, is authentication. The server setup is quite involved and beyond the scope of this document, but once you have that working you can configure the Linux initiator in /etc/iscsi/iscsid.conf. The settings to fill in for authentication are:

node.session.auth.username =
node.session.auth.password =
discovery.sendtargets.auth.username =
discovery.sendtargets.auth.password =
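For example, with CHAP authentication the relevant entries might look like this (the authmethod lines enable CHAP; the username and password values are placeholders, not real credentials):

node.session.auth.authmethod = CHAP
node.session.auth.username = initiator01
node.session.auth.password = examplesecret123
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = initiator01
discovery.sendtargets.auth.password = examplesecret123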

If you are creating a file system, make sure you add it to /etc/fstab so it is mounted again after a reboot. One thing to note is that you should use the _netdev option in fstab, since this is a network device and requires networking before it can be mounted. The iSCSI block device itself is taken care of already, thanks to the data stored in /var/lib/iscsi/nodes, and will return as soon as the iscsi service is started.
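A corresponding fstab entry might look like this (reusing the placeholder LVM names from the example above):

/dev/vg_iscsi/lv_xen03  /mnt/xen03  ext3  _netdev  0 0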
