Using command line arg & monitor command passthrough with libvirt and KVM

Posted: December 19th, 2011 | Filed under: Fedora, libvirt, Virt Tools | 1 Comment »

The general goal of the libvirt project is to provide an API definition and XML schema that is independent of any one hypervisor technology. This means that for every feature in KVM, we have to take a little time to carefully design a suitable API or XML schema to expose in libvirt, if none already exists. Sometimes we are lucky and features in KVM can be expressed in almost the same way in libvirt, but more often than not this does not hold true – the design for libvirt may look somewhat different. For example, several monitor commands at the KVM level may be exposed as a single API call in libvirt. Or conversely, several different libvirt APIs may all end up invoking the same underlying KVM monitor command. The need to put careful thought into libvirt API & XML design means that there may be a short delay between a feature appearing in KVM and it appearing in libvirt, though it can also be the other way around, where libvirt has a feature but KVM doesn’t yet provide a way to support it!

For many applications / developers this delay is a non-issue, since they don’t need to be on the absolute bleeding edge of development of KVM or libvirt, but there are always exceptions to the rule. For a long time, we did not want to enable direct use of hypervisor specific features at all via libvirt, because of the support implications of doing so. A little over a year ago though, we decided to change our position on this matter. The key to this change in policy was deciding how to clearly demarcate functionality which is long term supportable vs. that which is not.

KVM custom command line arguments

To support custom command line argument passthrough for KVM, we decided to introduce a new set of XML elements in a different XML namespace. This namespace is defined as:

  xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'

Our policy is that any guest configuration that uses this QEMU XML namespace is not guaranteed to continue working if either libvirt or KVM is upgraded. A change to the way libvirt generates command line arguments may break the arguments the app has passed through. Alternatively KVM itself may drop or change the semantics of an existing command line argument. In other words, applications should not rely on this capability long term; rather, they should raise an RFE against libvirt to support it via the primary XML namespace, and just use the QEMU namespace until the RFE is complete.

The QEMU namespace defines syntax for passing arbitrary command line arguments, along with arbitrary environment variables:

<domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>QEMUGuest1</name>
  <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
  ...
  <devices>
    <emulator>/usr/bin/qemu</emulator>
    <disk type='block' device='disk'>
      <source dev='/dev/HostVG/QEMUGuest1'/>
      <target dev='hda' bus='ide'/>
    </disk>
    ...
  </devices>
  <qemu:commandline>
    <qemu:arg value='-unknown'/>
    <qemu:arg value='parameter'/>
    <qemu:env name='NS' value='ns'/>
    <qemu:env name='BAR'/>
  </qemu:commandline>
</domain>

Remember that when adding the <qemu:commandline> element, you need to always declare the XML namespace ‘xmlns:qemu’ on the top level <domain> element.

KVM monitor command passthrough

To support custom monitor command passthrough for KVM, we decided to introduce a second ELF library, libvirt-qemu.so, and a separate header file /usr/include/libvirt/libvirt-qemu.h. An application that uses APIs in libvirt-qemu.so is not guaranteed to continue working if either libvirt or KVM is upgraded. A change to the way libvirt manages guests may conflict with the monitor commands the app is trying to issue. Alternatively KVM itself may drop or change the semantics of an existing monitor command. In other words, applications should not rely on this capability long term; rather, they should raise an RFE against libvirt to support it via the primary library API, and just use libvirt-qemu.so until the RFE is complete.

Currently libvirt-qemu.h defines two custom APIs:

typedef enum {
  VIR_DOMAIN_QEMU_MONITOR_COMMAND_DEFAULT = 0,
  VIR_DOMAIN_QEMU_MONITOR_COMMAND_HMP     = (1 << 0), /* cmd is in HMP */
} virDomainQemuMonitorCommandFlags;

int virDomainQemuMonitorCommand(virDomainPtr domain, const char *cmd,
                                char **result, unsigned int flags);

virDomainPtr virDomainQemuAttach(virConnectPtr domain,
                                 unsigned int pid,
                                 unsigned int flags);

The first allows passthrough of arbitrary monitor commands, while the second allows attachment to an existing QEMU instance as discussed previously. The monitor command API is quite straightforward: it accepts a string command, and returns a string reply. The data for the command/reply can be either in HMP or QMP syntax, depending on how QEMU was launched by libvirt. The VIR_DOMAIN_QEMU_MONITOR_COMMAND_HMP flag allows an application to force use of the HMP syntax at all times.
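As an illustration, here is a minimal sketch in C of issuing a QMP command through this API. The connection URI and the “vm-vnc” guest name mirror the virsh examples below and are just placeholders; error reporting is kept to the bare minimum:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>
#include <libvirt/libvirt-qemu.h>

int main(void)
{
    char *reply = NULL;
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return 1;

    virDomainPtr dom = virDomainLookupByName(conn, "vm-vnc");
    if (!dom) {
        virConnectClose(conn);
        return 1;
    }

    /* Pass a QMP command straight through to the guest's monitor;
     * the reply string is allocated by libvirt and freed by the caller */
    if (virDomainQemuMonitorCommand(dom, "{ \"execute\": \"query-block\" }",
                                    &reply, 0) == 0) {
        printf("%s\n", reply);
        free(reply);
    }

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}

Compiling needs both libraries linked in, e.g. something along the lines of ‘gcc demo.c -lvirt -lvirt-qemu’. Bear in mind that, as described in the tainting section below, any guest touched this way gets flagged in the logs.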

Using monitor command passthrough from virsh

Not all users will be writing directly to the libvirt API, so the monitor command passthrough is also wired up into virsh via the “qemu-monitor-command” command. First is an example using QMP (JSON syntax):

$ virsh qemu-monitor-command vm-vnc '{ "execute": "query-block"}'
{"return":[{"device":"drive-virtio-disk0","locked":false,"removable":false,"inserted":{"ro":false,"drv":"qcow2","encrypted":false,"file":"/home/berrange/VirtualMachines/plain.qcow"},"type":"unknown"}],"id":"libvirt-9"}

And second is an example demonstrating use of HMP with a guest that runs QMP (libvirt automagically redirects via the ‘human-monitor-command’ command):

$ virsh qemu-monitor-command --hmp vm-vnc  'info block'
drive-virtio-disk0: removable=0 file=/home/berrange/VirtualMachines/plain.qcow ro=0 drv=qcow2 encrypted=0

Tainting of guests

Anyone familiar with the kernel will know that it marks itself as tainted whenever the user does something that is outside the boundaries of normal support. We have borrowed this idea from the kernel and apply it to guests run by libvirt too. Any attempt to use either the command line argument passthrough via XML, or QEMU monitor command passthrough via libvirt-qemu.so, will result in the guest domain being marked as tainted. This shows up in the libvirt log files. For example, after running the last command above, $HOME/.libvirt/qemu/log/vm-vnc.log shows the following:

Domain id=2 is tainted: custom-monitor

This allows OS distro support staff to determine if something unusual has been done to a guest when they see support tickets raised. Depending on the OS distro’s support policy they may decline to support problems arising from tainted guests. In RHEL for example, any usage of QEMU monitor command passthrough, or command line argument passthrough, is outside the bounds of libvirt support, and users would normally be asked to try to reproduce any problem without a tainted guest.

Securing the WordPress admin interface using (Free!) SSL certificates

Posted: December 19th, 2011 | Filed under: Fedora | 1 Comment »

Last year I migrated my website off Blogger to a WordPress installation hosted on my Debian server. Historically my website has only been exposed over plain old HTTP, which was fine since the Blogger publishing UI was running HTTPS. With the migration to WordPress though, the publishing UI is now running on my own webserver and thus the lack of HTTPS on my server becomes a reasonably serious problem. Most people’s first approach to fixing this would be to just generate a self-signed certificate and deploy that for their server, but I rather wanted to have an x509 certificate that would be immediately trusted by any visiting browser.

Getting free x509 certificates from StartSSL

The problem is that the x509 certificate authority system is a bit of a protection racket, with recurring fees that just cannot be justified by the level of integrity they provide. There is one exception to the norm though: StartSSL offer some basic x509 certificates at zero cost. In particular you can get Class1 web server certificates and personal client identity certificates. The web server certificates are restricted in that you can only include 2 domain names in them, your basic domain name & the same domain name with ‘www.’ prefixed. If you want wildcard domains, or multiple different domain names in a single certificate, you’ll have to go for their pay-for offerings. For many people, including myself, this limitation will not be a problem.

StartSSL have a nice self-service web UI for generating the various certificates. The first step is to generate a personal client identity certificate, which the rest of their administrative control panel relies on for authentication. After generation is complete, it automatically gets installed into Firefox’s certificate database. You are wisely reminded to export the certificate to a pkcs12 file and back it up somewhere securely. If you lose this personal client certificate, you will be unable to access their control panel for managing your web server certificates. The validation they do prior to issuing the client certificate is pretty minimal, but fully automated, in so much as they send a message to the email address you provide with a secret URL you need to click on. This “proves” that the email address is yours, so you can’t request certificates for someone else’s email address, unless you can hack their email accounts…

Generating certificates for web servers is not all that much more complicated. There are two ways to go about it though: either you can fill in their interactive web form & let their site generate the private key, or you can generate a private key offline and just provide them with a CSR (Certificate Signing Request). I tried to do the former first of all, but for some reason it didn’t work – it got stuck generating the private key, so I switched to generating a CSR instead. The validation they do prior to issuing a certificate for a web server is also automated. This time they do a whois lookup on the domain name you provide, and send a message with a secret URL to the admin, technical & owner email addresses in the whois record. This “proves” that the domain is yours, so you can’t request certificates for someone else’s domain name, unless you can hack their whois data or admin/tech/owner email accounts…
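For the record, generating the key and CSR offline needs nothing more than a single openssl invocation. A sketch along these lines does the job – the 2048 bit key size, CSR filename and subject are merely illustrative, while the key filename matches what the Apache config below expects:

$ openssl req -new -newkey rsa:2048 -nodes \
    -keyout ssl-cert-berrange.com.key \
    -out ssl-cert-berrange.com.csr \
    -subj '/CN=www.berrange.com'

The resulting .csr file is what you then hand over to StartSSL.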

Setting up Apache to enable SSL

The next step is to configure Apache to enable SSL for the website as a whole. There are four files that need to be installed to provide the certificates to mod_ssl:

  • ssl-cert-berrange.com.pem – this is the actual certificate StartSSL issued for my website, against StartSSL’s Class1 root certificate
  • ssl-cert-berrange.com.key – this is the private key I generated and used with my CSR
  • ssl-ca-start.com.pem – this is the master StartSSL CA certificate
  • ssl-ca-chain-start.com-class1-server.pem – this is the chain of trust between your website’s certificate and StartSSL’s master CA certificate, via their Class1 root certificate

On my Debian Lenny host, they were installed to the following locations

  • /etc/ssl/certs/ssl-cert-berrange.com.pem
  • /etc/ssl/private/ssl-cert-berrange.com.key
  • /etc/ssl/certs/ssl-ca-chain-start.com-class1-server.pem
  • /etc/ssl/certs/ssl-ca-start.com.pem
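As a quick sanity check before touching Apache, it is worth confirming that the installed files actually form a valid chain of trust; something along these lines (using the paths above) should report OK:

$ openssl verify -CAfile /etc/ssl/certs/ssl-ca-start.com.pem \
    -untrusted /etc/ssl/certs/ssl-ca-chain-start.com-class1-server.pem \
    /etc/ssl/certs/ssl-cert-berrange.com.pem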

The only other bit I needed to do was to set up a new virtual host in the Apache config file, listening on port 443:

<VirtualHost *:443>
  ServerName www.berrange.com
  ServerAlias berrange.com

  DocumentRoot /var/www/berrange.com
  ErrorLog /var/log/apache2/berrange.com/error_log
  CustomLog /var/log/apache2/berrange.com/access_log combined

  SSLEngine on

  SSLCertificateFile    /etc/ssl/certs/ssl-cert-berrange.com.pem
  SSLCertificateKeyFile /etc/ssl/private/ssl-cert-berrange.com.key
  SSLCertificateChainFile /etc/ssl/certs/ssl-ca-chain-start.com-class1-server.pem
  SSLCACertificateFile /etc/ssl/certs/ssl-ca-start.com.pem
</VirtualHost>

After restarting Apache, I am now able to connect to https://berrange.com/ and find that my browser trusts the site, with no exceptions required.

Setting up Apache to require SSL client cert for WordPress admin pages

The next phase is to mandate use of a client certificate when accessing any of the WordPress administration pages. Should there be any future security flaws in the WordPress admin UI, this will block any would-be attackers, since they will not have the requisite client SSL certificate. Mandating use of client certificates is done with the “SSLVerifyClient require” directive in Apache. This allows the client to present any client certificate that is signed by the CA configured earlier – that is, potentially any user of StartSSL. My intention is to restrict access exclusively to the certificate that I was issued. This requires specification of some match rules against various fields in the certificate. First let’s see the Apache virtual host configuration additions:

<Location /wp-admin>
  SSLVerifyClient require
  SSLVerifyDepth  3
  SSLRequire %{SSL_CLIENT_I_DN_C} eq "IL" and \
             %{SSL_CLIENT_I_DN_O} eq "StartCom Ltd." and \
             %{SSL_CLIENT_I_DN_OU} eq "Secure Digital Certificate Signing" and \
             %{SSL_CLIENT_I_DN_CN} eq "StartCom Class 1 Primary Intermediate Client CA" and \
             %{SSL_CLIENT_S_DN_CN} eq "dan@berrange.com" and \
             %{SSL_CLIENT_S_DN_Email} eq "dan@berrange.com"
</Location>

The first 4 match rules here are saying that the client certificate must have been issued by the StartSSL Class1 client CA, while the last 2 matches are saying that the client certificate must contain my email address. The security thus relies on StartSSL not issuing anyone else a certificate using my email address. The whole lot appears inside a location match against ‘/wp-admin’, which is the URL prefix all the WordPress administration pages have. The entire block must also be duplicated using a location match against ‘/wp-login.php’ to protect the user login page too:

<Location /wp-login.php>
  SSLVerifyClient require
  SSLVerifyDepth  3
  SSLRequire %{SSL_CLIENT_I_DN_C} eq "IL" and \
             %{SSL_CLIENT_I_DN_O} eq "StartCom Ltd." and \
             %{SSL_CLIENT_I_DN_OU} eq "Secure Digital Certificate Signing" and \
             %{SSL_CLIENT_I_DN_CN} eq "StartCom Class 1 Primary Intermediate Client CA" and \
             %{SSL_CLIENT_S_DN_CN} eq "dan@berrange.com" and \
             %{SSL_CLIENT_S_DN_Email} eq "dan@berrange.com"
</Location>

Preventing access to the WordPress admin pages via non-HTTPS connections

Finally, to ensure the login & admin pages cannot be accessed over plain HTTP, it is necessary to alter the virtual host config for port 80 to include:

RewriteEngine On
RewriteRule ^(/wp-admin/.*) https://www.berrange.com$1 [L,R=permanent]
RewriteRule ^(/wp-login.php.*) https://www.berrange.com$1 [L,R=permanent]

To be honest, I should just put a redirect on ‘/’ to prevent any use of the plain HTTP site at all, but I want to test how well my tiny virtual server copes with the load before enabling HTTPS for everything.
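For completeness, such a blanket redirect would just be a catch-all variant of the rules above, something like:

RewriteRule ^(.*) https://www.berrange.com$1 [L,R=permanent]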

Hopefully this blog post has demonstrated that setting up your personal webserver with certificates that any browser will trust is both easy and cheap (free), so there is no reason to use self-signed certificates unless you need multiple domain names / wildcard addresses in your certificates and you’re unwilling to pay money for them.

Multi-factor SSH authentication using YubiKey and SSH public keys together

Posted: December 18th, 2011 | Filed under: Fedora | 11 Comments »

UPDATE: the setup described here is flawed because it only correctly secures the primary SSH channel. ie if you use port redirection like ‘ssh -L 80:localhost:80 example.com’ then the shell session will require you to enter the yubikey code, but the port redirect will be activated and usable prior to you entering the yubikey. I’d thus strongly recommend NOT FOLLOWING the instructions in this blog post, and instead upgrade to OpenSSH >= 6.2 which has proper built-in support for multi-factor authentication, avoiding the need for this hack.
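For anyone on a new enough OpenSSH, the effect I was originally after can be had natively. A sketch of the sshd_config directives involved, assuming yubico-pam is wired into the sshd PAM stack as described below:

# OpenSSH >= 6.2: require both a public key AND a PAM-based factor
UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive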

A month or two ago I purchased a couple of YubiKey USB tokens, one for authentication with Fedora infrastructure and the other for authentication of my personal servers. The reason I need two separate tokens is that Fedora uses its own YubiKey authentication server, thus requiring that you burn a new secret key into the token. For my personal servers I decided that I would simply authenticate against the central YubiKey authentication server hosted by YubiCo themselves. While some people might not be happy trusting a 3rd party service for authentication of their servers, I decided this was not a big problem since I intend to combine the YubiKey authentication with the existing strong SSH RSA public key authentication which is entirely under my control.

YubiKey authentication via PAM

To start off with I decided to follow a well documented configuration path, enabling YubiKey authentication for SSH via PAM. This was pretty straightforward and worked the first time. The configuration steps were:

  • Build and install the yubico-pam module. You might be lucky and find your distro already ships packages for this, but I was doing this on my Debian Lenny server which did not appear to have any pre-built PAM module.
  • Create a file /etc/yubikey_mappings which contains a list of usernames and their associated YubiKey token IDs. The token ID is the first 12 characters of an OTP generated from a keypress of the token. Multiple token IDs can be listed for each user.
    $ cat > /etc/yubikey_mappings <<EOF
    fred:cccccatsdogs:ccccdogscats
    EOF
  • Get a unique API key and secret for personal use from https://upgrade.yubico.com/getapikey/
  • Add the yubico-pam module to the SSHD PAM configuration file, using the previously obtained API key ID in place of XXXX
    $ cat /etc/pam.d/sshd
    # PAM configuration for the Secure Shell service
    
    # Read environment variables from /etc/environment and
    # /etc/security/pam_env.conf.
    auth       required     pam_env.so # [1]
    # In Debian 4.0 (etch), locale-related environment variables were moved to
    # /etc/default/locale, so read that as well.
    auth       required     pam_env.so envfile=/etc/default/locale
    
    auth sufficient pam_yubico.so id=XXXX authfile=/etc/yubikey_mappings
    
    # Standard Un*x authentication.
    @include common-auth
    
    ...snip...

This all worked fine, with one exception: if I had an authorized SSH public key then SSH would skip straight over the PAM “auth” phase. This is not what I wanted, since my intention was to use YubiKey and SSH public keys together for login. The yubico-pam website has instructions for setting up two-factor authentication, but this only works if both your factors are configured via PAM. SSH public key authentication is completely outside the realm of PAM. AFAICT from a bit of googling, it is not possible to configure OpenSSH to require PAM and public key authentication together; it considers either one of them to be sufficient on their own.

After a little more googling though, I came across an interesting hack utilizing the ForceCommand configuration parameter of SSHD. The gist of the idea is that instead of configuring YubiKey authentication via PAM, you use the ForceCommand parameter to get SSHD to invoke a helper script which performs a YubiKey authentication check and only then executes the real command (ie login shell).

I made a few modifications to Alexandre’s script mentioned in the blog post just linked:

  • Use the same configuration file, /etc/yubikey_mappings, as used for the centralized yubico-pam setup
  • Allow the verbose debugging information to be turned off
  • Load the API key ID from /etc/yubikey_shell instead of requiring editing of the helper script itself

Usage of the script is quite simple:

  • Create /etc/yubikey_shell containing
    $ cat /etc/yubikey_shell
    # Configuration for /sbin/yubikey_shell
    
    # Replace XXXX with your 4 digit API key ID as obtained
    # from https://upgrade.yubico.com/getapikey/
    YUBICO_API_ID="XXXX"
    
    # Change to 1 to enable debug logs for troubleshooting login
    #DEBUG=1
    
    # To override the standard key mapping location. This file
    # should contain 1 or more lines like
    #
    #    USERNAME:YUBI_KEY_ID:YUBI_KEY_ID:...
    #
    # This is the same syntax used for yubico-pam
    #TRUSTED_KEYS_FILE=/etc/yubikey_mappings
  • Create the /etc/yubikey_mappings file, if not already present from a previous yubico-pam setup
    $ cat /etc/yubikey_mappings
    fred:cccccatsdogs:ccccdogscats
  • Append to the /etc/ssh/sshd_config file a directive to enable YubiKey auth for selected users
    Match User fred
      ForceCommand /sbin/yubikey_shell
  • Save the wrapper script itself to /sbin/yubikey_shell
    #!/bin/bash
    DEBUG=0
    TRUSTED_KEYS_FILE=/etc/yubikey_mappings
    # This default works, but you really want to use your
    # own ID for greater security
    YUBICO_API_ID=16
    
    test -f /etc/yubikey_shell && source /etc/yubikey_shell
    
    STD="\\033[0;39m"
    OK="\\033[1;32m[i]$STD"
    ERR="\\033[1;31m[e]$STD"
    
    ##################################################
    ## Disconnect clients trying to exit the script ##
    ##################################################
    trap disconnect INT
    
    disconnect() {
      sleep 1
      kill -9 $PPID
      exit 1
    }
    
    debug() {
      if test "$DEBUG" = 1 ; then
        echo -e "$@"
      fi
    }
    
    if test -z "$USER"
    then
      debug "$ERR USER environment variable is not set" > /dev/stderr
      disconnect
    fi  
    ####################################
    ## Get user-trusted yubikeys list ##
    ####################################
    if [ ! -f $TRUSTED_KEYS_FILE ]
    then
      debug "$ERR Unable to find trusted keys list" > /dev/stderr
      disconnect
    fi
    
    TRUSTED_KEYS=`grep "^${USER}:" $TRUSTED_KEYS_FILE | sed -e "s/^${USER}://" | sed -e 's/:/\n/g'`
    for k in $TRUSTED_KEYS
    do
      debug "$OK Possible key '$k'"
    done
    
    #######################################
    ## Get the actual OTP                ##
    #######################################
    
    echo -n "Please provide Yubi OTP: "
    read -s OTP
    echo
    KEY_ID=${OTP:0:12}
    #######################################
    ## Iterate through trusted keys list ##
    #######################################
    for trusted in ${TRUSTED_KEYS[@]}
    do
      if test "$KEY_ID" = "$trusted"
      then
        debug "$OK Found key in $TRUSTED_KEYS_FILE - validating OTP now ..."
        if wget "https://api.yubico.com/wsapi/verify?id=$YUBICO_API_ID&otp=$OTP" -O - 2> /dev/null | grep "status=OK" > /dev/null
        then
          debug "$OK OTP validated"
          if test -z "$SSH_ORIGINAL_COMMAND"
          then
            exec `grep "^$(whoami):" /etc/passwd | cut -d ":" -f 7`
          else
            exec "$SSH_ORIGINAL_COMMAND"
          fi
          debug "$ERR failed to execute shell / command" > /dev/stderr
          disconnect
        else
          debug "$ERR Unable to validate generated OTP" > /dev/stderr
          disconnect
        fi
      fi
    done
    debug "$ERR Key not trusted" > /dev/stderr
    disconnect

To avoid the need to cut+paste, here are links to the full script and the configuration file.

After restarting the SSHD service, all was working nicely. Authentication now requires a combination of a valid SSH public key and a valid YubiKey token. Alternatively, if SSH public keys are not in use for a user, authentication will require the login password and a valid YubiKey token.

I still feel a little dirty about having to use the ForceCommand hack though, because it means YubiKey auth failures don’t appear in your audit logs – as far as SSHD is concerned everything was successful. It would be nice to be able to figure out how to make OpenSSH properly combine SSH public key and PAM authentication…