Writing the Nova file injection code to use libguestfs APIs instead of FUSE

Posted: November 15th, 2012 | Filed under: Fedora, libvirt, OpenStack, Security, Virt Tools | No Comments »

When launching a virtual machine, Nova has the ability to inject various files into the disk image immediately prior to boot up. This is used to perform the following setup operations:

  • Add an authorized SSH key for the root account
  • Configure init to reset SELinux labelling on /root/.ssh
  • Set the login password for the root account
  • Copy data into a number of user-specified files
  • Create the meta.js file
  • Configure network interfaces in the guest

This file injection is handled by the code in the nova.virt.disk.api module. The code which does the actual injection is designed around the assumption that the filesystem in the guest image can be mapped into a location in the host filesystem. There are a number of ways this can be done, so Nova has a pluggable API for mounting guest images in the host, defined by the nova.virt.disk.mount module, with the following implementations:

  • Loop – Use losetup to create a loop device, then use kpartx to map the partitions within the device, and finally mount the designated partition (a rough sketch of this sequence follows the list). Alternatively, on new enough kernels, the loop device’s built-in partition support is used instead of kpartx.
  • NBD – Use qemu-nbd to run an NBD server, attach to it with the kernel NBD client to expose a device, then map partitions as per the Loop module.
  • GuestFS – Use libguestfs to inspect the image and set up a FUSE mount for all partitions or logical volumes inside the image.
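For the curious, the Loop sequence amounts to roughly the following. This is a hand-written illustration, not Nova’s code: the loop_mount helper name is made up, and Nova actually runs such commands via its rootwrap utility rather than calling subprocess directly.

import subprocess

def loop_mount(image, partnum, mountpoint):
    # Attach the raw image file to a free loop device, e.g. /dev/loop0
    device = subprocess.check_output(
        ["losetup", "--find", "--show", image]).decode("utf-8").strip()
    # Map the partitions inside the device as /dev/mapper/loopNpM
    subprocess.check_call(["kpartx", "-a", device])
    mapped = "/dev/mapper/%sp%d" % (device.split("/")[-1], partnum)
    # Mount the designated partition into the host filesystem
    subprocess.check_call(["mount", mapped, mountpoint])
    return device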

The Loop module can only handle raw format files, while the NBD module can handle any format that QEMU supports. While they have the ability to access partitions, the code handling this is very dumb. It requires the Nova global ‘libvirt_inject_partition’ config parameter to specify which partition number to inject, with the result that every image you upload to Glance must be partitioned in exactly the same way. Much better would be to use a metadata parameter associated with the image. The GuestFS module is much more advanced: it inspects the guest OS, so it can cope with arbitrarily partitioned images and even LVM-based images.

Nova has an “img_handlers” configuration parameter which defines the order in which the 3 mount modules above are to be tried. It tries to mount the image with each one in turn, until one succeeds. This is quite crude code really – it has already been hacked to avoid trying the Loop module when Nova knows it is using QCow2, and the Nova admin has to change it when using LXC, otherwise you can end up using KVM with LXC guests, which is probably not what you want. The try-and-fallback paradigm also has the undesirable behaviour of masking errors that you would really rather consider fatal to the boot process.
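The overall shape of that fallback loop, and the error masking problem, looks roughly like this. The names here (MOUNT_IMPLS, mount_image, LoopMount) are hypothetical stand-ins, not the actual Nova code:

class LoopMount(object):
    # Stand-in for a mount module that only handles raw images
    def __init__(self, image, partition):
        self.image = image
        self.partition = partition
        self.error = None

    def do_mount(self):
        if not self.image.endswith(".raw"):
            self.error = "loop only supports raw format images"
            return False
        return True

# Registry keyed by the names listed in the "img_handlers" parameter
MOUNT_IMPLS = {"loop": LoopMount}

def mount_image(image, partition, handlers=("loop",)):
    errors = []
    for name in handlers:
        mounter = MOUNT_IMPLS[name](image, partition)
        if mounter.do_mount():
            return mounter
        # The failure may be something that should be fatal to boot,
        # but the loop masks it and blindly tries the next module
        errors.append("%s: %s" % (name, mounter.error))
    raise RuntimeError("unable to mount image: " + "; ".join(errors))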

As mentioned earlier, the file injection code uses the mount modules to map the guest image filesystem into a temporary directory in the host (such as /tmp/openstack-XXXXXX). It then runs various commands like chmod, chown, mkdir, tee, etc. to manipulate files in the guest image. Of course Nova runs as an unprivileged user, while the guest files to be changed are typically owned by root, so all the file injection commands need to run via Nova’s rootwrap utility to gain root privileges. Needless to say, this has the undesirable consequence that the code injecting files into a guest image in fact has privileges that allow it to write to arbitrary areas of the host filesystem. One mistake in handling symlinks, and a carefully crafted guest image has the potential to compromise the host OS. It should come as little surprise that this has already resulted in a security vulnerability / CVE against Nova.

The solution to this class of security problems is to decouple the file injection code from the host filesystem. This can be done by introducing a “VFS” (Virtual File System) interface which defines a formal API for the various logical operations that need to be performed on a guest filesystem. With that in place, it is possible to provide an implementation that uses the libguestfs native Python API, rather than FUSE mounts. As well as being inherently more secure, avoiding the FUSE layer will improve performance, and allow Nova to use libguestfs APIs that don’t map into FUSE, such as its Augeas support for parsing config files. Nova still needs to work in scenarios where libguestfs is not available though, so a second implementation of the VFS API will be required, based on the existing Loop/NBD device mount approach. The security of the non-libguestfs support is not changed by this refactoring work, but decoupling the file injection code from the host filesystem does make it easier to write unit tests: the file injection code can be tested by mocking out the VFS layer, while the VFS implementations can be tested by mocking out the libguestfs or command execution APIs.
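To make the idea concrete, here is a minimal sketch of what such a VFS split could look like, based on the libguestfs Python bindings. The class names and method set are my own illustration of the concept, not Nova’s final API:

import guestfs

class VFS(object):
    # Formal API for logical operations on a guest filesystem
    def setup(self):
        raise NotImplementedError()
    def make_path(self, path):
        raise NotImplementedError()
    def append_file(self, path, content):
        raise NotImplementedError()
    def set_ownership(self, path, uid, gid):
        raise NotImplementedError()
    def teardown(self):
        raise NotImplementedError()

class VFSGuestFS(VFS):
    # Implementation backed by the libguestfs python API
    def __init__(self, imgfile, imgfmt="raw"):
        self.imgfile = imgfile
        self.imgfmt = imgfmt
        self.handle = None

    def setup(self):
        # python_return_dict needs a reasonably recent libguestfs
        self.handle = guestfs.GuestFS(python_return_dict=True)
        self.handle.add_drive_opts(self.imgfile, format=self.imgfmt)
        self.handle.launch()
        # Inspection copes with arbitrary partition layouts and LVM
        root = self.handle.inspect_os()[0]
        mounts = self.handle.inspect_get_mountpoints(root)
        for mountpoint in sorted(mounts.keys()):
            self.handle.mount(mounts[mountpoint], mountpoint)

    def make_path(self, path):
        self.handle.mkdir_p(path)

    def append_file(self, path, content):
        self.handle.write_append(path, content)

    def set_ownership(self, path, uid, gid):
        self.handle.chown(uid, gid, path)

    def teardown(self):
        self.handle.close()
        self.handle = None

The key property is that every operation goes through the libguestfs handle, so no guest file is ever exposed as a host path that privileged commands could be tricked into following.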

Incidentally, if you’re wondering why libguestfs does all its work inside a KVM appliance, its man page describes the security issues this approach protects against, versus directly mounting guest images on the host.

GPG keysigning made easy with Pius

Posted: February 10th, 2012 | Filed under: Fedora, Security, Virt Tools | 2 Comments »

A few months back the Red Hat KVM team held a mass keysigning party to set up a web of trust between each other’s keys. IIRC, there were approximately 20 people participating, which potentially meant a lot of tedious typing of GPG commands, with the potential for error such tedium implies. Fortunately we had Jim Meyering on hand to give us some tips for facilitating/optimizing the process, the most important of which was to introduce us to the ‘Pius‘ tool. To quote from its website:

pius (PGP Individual UID Signer) helps attendees of PGP keysigning parties. It is the main utility and allows you to quickly and easily sign each UID on a set of PGP keys. It is designed to take the pain out of the sign-all-the-keys part of PGP Keysigning Party while adding security to the process.

That can already be time consuming, but preferably you also want to verify the identity in each UID, which means verifying the email addresses. There are a few ways to do this, but one of them is to sign each UID on the key individually (which requires import-sign-export-delete for each UID), then encrypt-email each signed copy to the email address in that UID. This can be incredibly time consuming.

That’s where pius comes in. Pius will do all the work for you – all you have to do is confirm the fingerprint for each key. It will then take care of signing each UID cleanly, minimizing the key, and using PGP/Mime email to send it, encrypted, to the email address in the UID.

The steps Jim defined for us to follow using Pius were as follows:

  1. Collate a list of everyone’s key IDs. Our list looked like this (cut down to save space)
     # cat > keyids.txt <<EOF
     4096R/000BEEEE 2010-06-14 Jim Meyering
     4096R/E1B768A0 2011-10-11 Richard W.M. Jones
     4096R/15104FDF 2011-10-11 Daniel P. Berrange
     ...
     EOF
  2. Download all the keys from a key server (it is assumed everyone has already uploaded their own key to a server)
     # id_list=$(perl -nle 'm!^\d{4}R/(\S{8}) ! and print $1' keyids.txt)
     # gpg --recv-keys  $(echo $id_list)
  3. Generate a list of fingerprints for all keys that are to be signed
     # gpg --fingerprint $(echo $id_list)
  4. Verify all the fingerprints and their owners’ identities.
    This is the security critical part. You generally want to meet the person face-to-face, and verify their identity via some trusted means (passport, driving license, etc.). They should read their key fingerprint out to you, and you should verify that it matches the fingerprint downloaded from the key server.
  5. Use Pius to sign all the keys whose fingerprints were verified.
     MAIL_HOST=smtp.your.mail.server.com
     me=your@email.address.com       # e.g. dan@berrange.com
     my_id=XXXXXXXXXXX               # your GPG key ID, e.g. 15104FDF
     # pius --mail-host=$MAIL_HOST --no-pgp-mime --mail=$me --signer=$my_id $(echo $id_list)

For each key ID it is given, Pius signs each individual identity (email address) on the key. The signature is ASCII-armoured and then sent, encrypted, to the email address associated with that identity. If a user has multiple email addresses on their key, they will receive one signature email per address. The email contains instructions for what the recipient should do, and will look something like this:

From: eblake@redhat.com
To: berrange@redhat.com
Subject: Your signed PGP key

[-- Attachment #1 --]
[-- Type: text/plain, Encoding: 7bit, Size: 0.7K --]

Hello,

Attached is a copy of your PGP key (0x15104fdf) signed by my key
(0xa7a16b4a2527436a).

If your key has more than one UID, than this key only has the UID associated
with this email address (berrange@redhat.com) signed and you will receive
additional emails containing signatures of the other UIDs at the respective
email addresses.

Please take the attached message and decrypt it and then import it.
Something like this should work:

   gpg -d  | gpg --import

Then, don't forget to send it to a keyserver:

   gpg --keyserver pool.sks-keyservers.net --send-key 15104fdf

If you have any questions, let me know.

Generated by PIUS (http://www.phildev.net/pius/).

[-- Attachment #2: 15104fdf__berrange_at_redhat.com_ENCRYPTED.asc --]
[-- Type: application/octet-stream, Encoding: 7bit, Size: 4.6K --]

The final thing, once everyone has dealt with the emails they received, is to refresh your local key database to pull down all the new signatures:

# gpg --recv-keys  $(echo $id_list)

I should point out that Pius isn’t just for mass keysigning parties. Even if you only have a single key you want to sign, it is still a very convenient tool to use. The simplified set of steps to go through would be:

# gpg --recv-key XXXXXXXX
# gpg --fingerprint XXXXXXXX
# ...verify person's identity & fingerprint
# pius --mail-host=$MAIL_HOST --no-pgp-mime --mail=$me --signer=$my_id XXXXXXX
# ....some time later...
# gpg --recv-key XXXXXXXX

Thanks again to Jim Meyering for pointing out Pius, organizing our keysigning party, and defining the steps I describe above. BTW, Pius is available in Fedora from F16 onwards.

Libvirt sandbox at FOSDEM 2012

Posted: February 5th, 2012 | Filed under: Fedora, libvirt, Security, Virt Tools | 7 Comments »

As mentioned previously, today I presented a talk at FOSDEM 2012, titled “Building application sandboxes on top of LXC and KVM with libvirt”.  As promised I have now uploaded the PDF slides for public access.  For further information about libvirt-sandbox, consult this previous blog post on the subject. Also keep an eye on this site for further blog posts in the future. Thanks to everyone who attended the talk. I look forward to returning again in a year’s time for another update.

Rambling about the pain of dealing with passwords for online services

Posted: January 23rd, 2012 | Filed under: Fedora, Security | 11 Comments »

Over the last 6 months or so, I’ve become increasingly paranoid about password usage for online web services. There have been a number of high profile attacks against both the commercial world and open source project infrastructure, many of which have led to the compromise of password databases. Indeed, it feels like my news feed has at least one article a week covering an attack & user account compromise against some online service or other. And those are only the attacks that are detected and/or reported. Plenty more places don’t even realize they have been attacked yet, and there are likely plenty of cover-ups too. This leads me inescapably to my first axiom of password management:

  • Axiom #1: Your password(s) will be compromised. There is no “if”, only “when”

It follows from this that it is the epitome of foolishness to use the same password for more than one site. Even if you diligently watch for news reports of site compromises & quickly change your passwords across all other sites, you are wasting hours of time, and are still vulnerable for the period between the attack taking place & it being reported in the media (if at all). Out of curiosity, I made a list of every website I could remember where I had registered an account of some kind. I was worried when the list got to 50, I was shocked when it went over 100, and I stopped counting thereafter.

There is a barrage of often conflicting suggestions about how to create strong passwords for accounts. Most websites simply say things like “you must use a mixture of at least 8 letters, numbers and special symbols“. Google have been trying to educate people about how to make up more easily remembered passwords, but XKCD points out the flaws in these commonly suggested approaches. Even if you do decide upon some nice scheme for creating your passwords, you quickly come across many websites which will reject your carefully thought up & remembered password. Compound this with the fact that many websites (typically financial ones) also require you to enter “password hints” based on questions that are supposedly easy to remember, but in fact turn out to be anything but. Now multiply by the number of sites you need passwords for (x100). This leads me inescapably to my second axiom of password management:

  • Axiom #2: It is beyond the capabilities of the human brain to remember enough strong passwords

There have been a great many proposals for shared authentication services, whether owned & managed centrally by a corporation like Microsoft Passport, or completely decentralized & vendor independent like OpenID. Today, out of all the 100+ sites I use, I can count the number that allow OpenID login on the fingers of one hand. More recently the big social networks have been having some success positioning themselves as the managers of your identity & providers of authentication to other sites. I am not happy with the idea of any social network being the controller of not only my online identity, but also of access to every single website I register with. I don’t trust them with all this data, and they are an extremely high value target for any would-be attackers if they control all your website logins. Letting them control all my logins feels akin to just re-using the same password across every website. I know they have marginally stronger login procedures than most sites, by allowing you to authenticate the individual clients used to login, but this isn’t enough to balance the downside. In fact I’m not really convinced that I want any online service to be the manager of all my login details for websites. It is just too big a single point of trust/failure.

A minority of online banking websites now provide some form of hardware key token generator, or PIN entry device, to authenticate with. This is clearly not going to work for most websites, due to the cost & distribution problem. Even within the limited scope of financial websites the practicality is limited – if every financial institution I dealt with handed out key token generators, I’d have a huge pile of hardware devices to look after! I do like hardware authentication devices and now use them for login to any personal SSH servers that I manage, but with a few exceptions like Fedora, they are not a solution for the online password problem today or the foreseeable future. I am depressingly led to my third axiom of password management:

  • Axiom #3: Widespread password authentication is here to stay for many, many years to come

Hmm, perhaps the problem is better described by mapping it to the 5 stages of grief:

  1. Denial – only careless people have their details compromised, I’ll be fine using the same 4-5 passwords across all sites
  2. Anger – how could $WEBSITE have been so badly run / protected as to let itself be compromised
  3. Bargaining – if I just let Facebook handle all my logins, they’ll solve all the hard problems for me
  4. Depression – the industry will never get its act together & solve authentication
  5. Acceptance – passwords are here to stay, what can I do to minimize my risks

Well, I think I am at stage 5 now. I have accepted that passwords are here to stay, that sooner or later one or more of the sites I am registered with will be compromised, and that it is impossible for me to remember enough passwords. My goal is thus to minimise the pain and damage.

My conclusion is that the only viable way to manage passwords today is to do the one thing everyone tells you never to do:

  • Write down all the passwords

Of course this shouldn’t be taken too literally. I am not suggesting a post-it on the monitor with the passwords written on it; rather, I mean store the passwords in some secure location which is in turn protected by a master password, i.e. use a password manager application.

Using KeePassX for managing passwords

After looking at a few options on Linux, I ended up choosing KeePassX as my password manager because it has quite an advanced set of features that appealed to me. Before anyone comments: I had discounted any password manager built into the web browser before even starting. The browser is a directly network-facing process of great complexity and frequent security flaws – it just isn’t the right place to be storing all your valuable secrets. The features in KeePassX that I liked were:

  • Passwords are stored encrypted in a structured database
  • It is possible to attach many different metadata attributes to each password: username, site URL, title, comment, and more.
  • It can copy the password to the clipboard, allowing paste into web browser forms, avoiding the need to manually type in long password sequences
  • It automatically purges passwords from the clipboard after 30 seconds, to minimise the window during which they are exposed
  • The database can be set to automatically lock itself after 30 seconds, requiring the master password to be entered again to access further password entries
  • The password database can be secured using a password, or a keyfile, or both. The keyfile is just a plain file with random bytes stored somewhere (like a USB key)
  • An advanced password generator with many tunable options

I have several laptops and I want the password database to be usable from any of them. At the same time, the password database should not become a single point of failure / data loss, so there need to be multiple copies of it. Using a password database does have the downside that it becomes a nice single point of attack for the bad guys. It is thus desirable to have separate password databases: one for websites used on general day-to-day business, and one for security-critical, seldom-used sites, i.e. bank accounts and other financial institutions. With this in mind, the way I decided to use KeePassX is as follows:

  • I purchased four 4 GB USB sticks for < 5 GBP each, two coloured black and two coloured white
  • All 4 USB sticks were split into 2 partitions, each of 2 GB size.
  • The primary partition holds a Fedora 16 LiveCD image, to facilitate easy access to the passwords should I find myself without one of my own Linux laptops close by
  • The second partition is set up with LUKS full disk encryption and formatted with ext4.
  • The partition with the encrypted filesystem is used to store the KeePassX database files and any other important files (GPG keys, SSH keys, etc)
  • The black coloured USB sticks are used to store a database for financial account details
  • The white coloured USB sticks are used to store a database for any other website logins
  • One USB stick of each colour is the designated backup. The backup sticks are kept in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.’
  • A shell script which synchronizes files between the sticks, to be run periodically to ensure recent-ish backups (a rough sketch follows this list)
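The synchronization step is nothing clever; sketched here in Python for illustration (the real thing is a simple shell script, and the mount point paths in the example are hypothetical):

import os
import shutil

def sync_sticks(primary, backup):
    for name in os.listdir(primary):
        src = os.path.join(primary, name)
        dst = os.path.join(backup, name)
        if not os.path.isfile(src):
            continue
        # Only overwrite the backup copy if the primary copy is newer
        if (not os.path.exists(dst) or
                os.path.getmtime(src) > os.path.getmtime(dst)):
            shutil.copy2(src, dst)   # copy2 preserves timestamps

# e.g. sync_sticks("/media/white-master", "/media/white-backup")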

With that all decided, there merely followed the tedious task of logging into over 100 websites and changing my password on each one. I decided that my default policy would be to let KeePassX generate a new random password for each site, made up of letters, numbers and special characters, with a length of 20 characters. Surprisingly, the vast majority of sites coped just fine with these passwords. Bugzilla turns out to be limited to 16 characters, and a handful of ecommerce sites had even shorter limits, or refused to allow the use of special characters!
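The effect of that policy is roughly equivalent to the following snippet; this is just an illustration of the policy, not KeePassX’s actual generator:

import random
import string

def generate_password(length=20):
    # Letters, numbers and special characters, per the policy above
    alphabet = string.ascii_letters + string.digits + string.punctuation
    rng = random.SystemRandom()   # backed by os.urandom, suitable for secrets
    return "".join(rng.choice(alphabet) for _ in range(length))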

Using E-Mail “plus addressing” for accounts

It is often said that when bad guys have compromised a website’s account database, they will try to reuse the same email and password on a number of other high-value sites. Since many people reuse passwords & many sites allow login based on an email address, the bad guys will trivially gain access to a significant number of accounts on other non-compromised sites. I am already generating unique passwords for each site, but to add just one more roadblock, I decided that while changing passwords, I would also set a unique email address for every single site.

My Exim mail server supports what is known as “plus addressing”, whereby you can append an arbitrary tag to the local part of an email address. For example, given an address “fred@example.com”, you have an infinite number of unique email addresses “fred+TAG@example.com”, where “TAG” is any reasonable string. Sadly, when I tried using plus addressing I immediately hit problems, because many (broken) form data validation checks think “+” is not a valid character to use in email addresses, or worse, they would accept the address but all email they sent would end up in a black hole. Fortunately, it is a trivial matter to reconfigure Exim to allow use of ‘-‘ as the separator for plus addressing, i.e. to allow “fred-TAG@example.com“.

Out of the > 100 websites I updated my account details on, only 1 rejected the use of ‘-‘ in my email address. So now more or less every account I am registered with has both a unique password and a unique email address.

In the end, the main thing I am unhappy about is that using a password manager presents a single point of attack for a local computer virus/trojan. Given the frequency with which websites are being compromised these days, & the number of sites I need to remember passwords for, I think overall this is still clearly a net win. I will remain on the lookout, though, for ways to improve the security of the password manager database itself.