Provisioning KVM virtual machines on iSCSI the hard way (Part 2 of 2)
The previous post described how to set up an iSCSI target on Fedora/RHEL the hard way. This post demonstrates how to configure iSCSI on a libvirt KVM host using virsh, and then provision a guest using virt-install.
Defining the storage pool
libvirt manages all storage through an object known as a “storage pool”. There are many types of storage pools: SCSI, NFS, ext4, plain directory and, of particular interest for this article, iSCSI. All libvirt objects are configured via an XML description, and storage pools are no exception. For an iSCSI storage pool there are three pieces of information to provide. The “Target path” determines how libvirt will expose device paths for the pool. Paths like /dev/sda, /dev/sdb, etc. are not a good choice because they are not stable across reboots, or across machines in a cluster; those names are assigned on a first come, first served basis by the kernel. It is strongly recommended that “/dev/disk/by-path” be used unless you know what you’re doing, since this results in a naming scheme that will be stable across all machines. The “Host Name” is simply the fully qualified DNS name of the iSCSI server. Finally, the “Source Path” is that adorable IQN seen earlier when creating the iSCSI target (“iqn.2004-04.fedora:fedora13:iscsi.kvmguests”). This isn’t the place to describe the full XML schema for storage pools; suffice it to say that an iSCSI config looks like this:
<pool type='iscsi'>
  <name>kvmguests</name>
  <source>
    <host name='myiscsi.server.com'/>
    <device path='iqn.2004-04.fedora:fedora13:iscsi.kvmguests'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
Save this XML snippet to a file named ‘kvmguests.xml’ and then load it into libvirt using the “pool-define” command.
# virsh pool-define kvmguests.xml
Pool kvmguests defined from kvmguests.xml
# virsh pool-list --all
Name                 State      Autostart
-----------------------------------------
default              active     yes
kvmguests            inactive   no
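If you provision pools like this often, the XML can be generated from a couple of shell variables rather than edited by hand each time. A minimal sketch, using the example names from this post (the virsh call is left commented out, since it requires a running libvirtd):

```shell
# Hypothetical helper: generate the iSCSI pool XML from variables.
# The values below are the example names used in this post.
pool_name="kvmguests"
iscsi_host="myiscsi.server.com"
iqn="iqn.2004-04.fedora:fedora13:iscsi.kvmguests"

cat > "${pool_name}.xml" <<EOF
<pool type='iscsi'>
  <name>${pool_name}</name>
  <source>
    <host name='${iscsi_host}'/>
    <device path='${iqn}'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
EOF

# Then load it into libvirt (needs a running libvirtd):
# virsh pool-define "${pool_name}.xml"
```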
Starting the storage pool
This has saved the configuration, but has not actually logged into the iSCSI target, so no LUNs are yet visible on the virtualization host. Logging in requires running the “pool-start” command, at which point the LUNs should be visible using the “vol-list” command:
# virsh pool-start kvmguests
Pool kvmguests started
# virsh vol-list kvmguests
Name                 Path
-----------------------------------------
10.0.0.1             /dev/disk/by-path/ip-192.168.122.2:3260-iscsi-iqn.2004-04.fedora:fedora13:iscsi.kvmguests-lun-1
10.0.0.2             /dev/disk/by-path/ip-192.168.122.2:3260-iscsi-iqn.2004-04.fedora:fedora13:iscsi.kvmguests-lun-2
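Those long by-path names encode everything needed to identify a LUN: `ip-<server:port>-iscsi-<IQN>-lun-<N>`. A small sketch that pulls the pieces apart with bash parameter expansion, using the first path shown above:

```shell
# Split an iSCSI /dev/disk/by-path device name into its components.
dev="ip-192.168.122.2:3260-iscsi-iqn.2004-04.fedora:fedora13:iscsi.kvmguests-lun-1"

portal="${dev#ip-}"                # strip the leading "ip-"
portal="${portal%%-iscsi-*}"       # keep everything before "-iscsi-"
iqn="${dev#*-iscsi-}"              # keep everything after "-iscsi-"
iqn="${iqn%-lun-*}"                # drop the trailing "-lun-<N>"
lun="${dev##*-lun-}"               # keep only the LUN number

echo "portal=$portal iqn=$iqn lun=$lun"
```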
The volume names shown there are what will be needed later when installing a guest with virt-install.
Querying LUN information
Further information about each LUN can be obtained using the “vol-info” and “vol-dumpxml” commands:
# virsh vol-info /dev/disk/by-path/ip-192.168.122.2:3260-iscsi-iqn.2004-04.fedora:fedora13:iscsi.kvmguests-lun-1
Name:           10.0.0.1
Type:           block
Capacity:       10.00 GB
Allocation:     10.00 GB

# virsh vol-dumpxml /dev/disk/by-path/ip-192.168.122.2:3260-iscsi-iqn.2004-04.fedora:fedora13:iscsi.kvmguests-lun-1
<volume>
  <name>10.0.0.1</name>
  <key>/dev/disk/by-path/ip-192.168.122.2:3260-iscsi-iqn.2004-04.fedora:fedora13:iscsi.kvmguests-lun-1</key>
  <source>
  </source>
  <capacity>10737418240</capacity>
  <allocation>10737418240</allocation>
  <target>
    <path>/dev/disk/by-path/ip-192.168.122.2:3260-iscsi-iqn.2004-04.fedora:fedora13:iscsi.kvmguests-lun-1</path>
    <format type='unknown'/>
    <permissions>
      <mode>0660</mode>
      <owner>0</owner>
      <group>6</group>
      <label>system_u:object_r:fixed_disk_device_t:s0</label>
    </permissions>
  </target>
</volume>
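Note that the <capacity> and <allocation> elements are reported in bytes, and the “10.00 GB” printed by vol-info is 10 × 2^30 bytes. A quick shell sanity check of that conversion:

```shell
# The raw capacity from vol-dumpxml, in bytes.
capacity_bytes=10737418240

# Convert bytes to GB as libvirt reports it (1 GB = 2^30 bytes here).
capacity_gb=$(( capacity_bytes / 1024 / 1024 / 1024 ))

echo "${capacity_gb} GB"
```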
Activating the storage at boot time
Finally, if everything looks to be in order, the pool can be set to start automatically when the host boots.
# virsh pool-autostart kvmguests
Pool kvmguests marked as autostarted
Provisioning a guest on iSCSI
The virt-install command is a convenient way to install new guests from the command line. It has support for configuring a guest to use volumes from a storage pool via its --disk argument. This argument takes the name of the storage pool, followed by the name of the volume within it. It is now time to install a guest with two disks: the first for exclusive use as its root filesystem, the second shareable between several guests for data:
# virt-install --accelerate --name rhel6x86_64 --ram 800 --vnc --hvm \
    --disk vol=kvmguests/10.0.0.1 \
    --disk vol=kvmguests/10.0.0.2,perms=sh \
    --pxe
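The vol= syntax is simply “<pool>/<volume>”, with further options appended after a comma. A hypothetical little helper, just to make the structure of the two --disk arguments used above explicit (disk_arg is not a real virt-install feature, only an illustration):

```shell
# Hypothetical helper: build a virt-install --disk argument from a
# pool name, a volume name, and optional extra options.
disk_arg() {
    # $1 = pool, $2 = volume, $3 = optional options (e.g. perms=sh)
    local spec="vol=$1/$2"
    if [ -n "$3" ]; then
        spec="$spec,$3"
    fi
    echo "--disk $spec"
}

root_disk=$(disk_arg kvmguests 10.0.0.1)
data_disk=$(disk_arg kvmguests 10.0.0.2 perms=sh)

# perms=sh marks the data disk as shareable between guests.
echo "$root_disk $data_disk"
```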
Once this is up and running, take a look at the guest XML that virt-install used to associate the guest with the iSCSI LUNs:
# virsh dumpxml rhel6x86_64
<domain type='kvm' id='4'>
  <name>rhel6x86_64</name>
  <uuid>ad8961e9-156f-746f-5a4e-f220bfafd08d</uuid>
  <memory>819200</memory>
  <currentMemory>819200</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.0.0'>hvm</type>
    <boot dev='network'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/disk/by-path/ip-192.168.122.170:3260-iscsi-iqn.2004-04.fedora:fedora13:iscsi.kvmguests-lun-1'/>
      <target dev='hda' bus='ide'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/disk/by-path/ip-192.168.122.170:3260-iscsi-iqn.2004-04.fedora:fedora13:iscsi.kvmguests-lun-2'/>
      <target dev='hdb' bus='ide'/>
      <shareable/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' unit='1'/>
    </disk>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:0a:ca:84'/>
      <source network='default'/>
      <target dev='vnet1'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/28'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/28'>
      <source path='/dev/pts/28'/>
      <target port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5901' autoport='yes' keymap='en-us'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
  </devices>
</domain>
In particular, notice how the guest uses the huge paths under /dev/disk/by-path to refer to the LUNs, and that the second disk has the <shareable/> flag set. This ensures that the SELinux labelling allows multiple guests to access the disk and that all I/O caching is disabled on the host. Both are critically important if you want the disk to be safely shared between guests.
Summing up
To allow migration of guests between hosts, some form of shared storage is required. People often turn to NFS at first, but this has security shortcomings because it does not allow for SELinux labelling, which means there is limited sVirt protection between guests, i.e. one guest can potentially access another guest’s disks. By choosing iSCSI as the shared storage platform, full sVirt isolation between guests is maintained, on a par with non-shared storage setups. Hopefully this series of iSCSI-related blog posts has shown that even provisioning KVM guests on iSCSI the hard way is not actually very hard at all. It also shows that you don’t need an expensive commercial NAS to make use of iSCSI: any server with a bunch of disks running Fedora or RHEL can easily be turned into an iSCSI storage server for your virtual machines, though you will have to be prepared to get your hands dirty!
Why did you not use virtio as the bus for the iSCSI disks in the guest configuration?
No reason, you can use any disk bus for the guest config. I just happened to choose IDE this time.
Hi,
I just discovered the `find-storage-pool-sources-as' subcommand of `virsh', which returns, in XML, the description of a specific storage pool.
It seems very interesting with the iSCSI type, and I think I could delegate the discovery and login steps for a LUN to libvirtd.
I've succeeded in creating an iSCSI pool for only one LUN, but I'd like to generalize this to all the LUNs for a specific initiator.
The `find-storage-pool-sources-as' subcommand displays all the available LUNs:
But I don't know what to do, and what I can do, with such a (very interesting) output :-(
Please help me to use it efficiently.
Thanks.