Improving QEMU security part 2: generic TLS support

Posted: April 1st, 2016 | Filed under: Coding Tips, Fedora, libvirt, OpenStack, Security, Virt Tools

This blog is part 2 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security-related features.

After the initial consolidation of cryptographic APIs into one area of the QEMU codebase, it was time to move onto step two, which is where things start to directly benefit the users. With the original patches to support TLS in the VNC server, configuration of the TLS credentials was done as part of the -vnc command line argument. The downside of such an approach is that as we add support for TLS to other arguments like -chardev, the user would have to supply the same TLS information in multiple places. So it was necessary to isolate the TLS credential configuration from the TLS session handling code, enabling a single set of TLS credentials to be associated with multiple network servers (they can of course each have a unique set of credentials if desired).

To achieve this, the new code made use of QEMU’s object model framework (QOM) to define a general TLS credentials interface, and then provided implementations for anonymous credentials (totally insecure, but needed for backwards compatibility with existing QEMU features) and for x509 certificates (the preferred & secure option). There are now two QOM types, tls-creds-anon and tls-creds-x509, that can be created on the command line via QEMU’s -object argument, or in the monitor using the ‘object_add’ command. The VNC server was converted to use the new TLS credential objects for its configuration. Whereas in QEMU 2.4 VNC with TLS would be configured using

-vnc 0.0.0.0:0,tls,x509verify=/path/to/certificates/directory

As of QEMU 2.5 the preferred approach is to use the new credential objects

-object tls-creds-x509,id=tls0,endpoint=server,dir=/path/to/certificates/directory
-vnc 0.0.0.0:0,tls-creds=tls0

The old CLI syntax is still supported, but gets translated internally to create the right TLS credential objects. By default the x509 credentials will require that the client provide a certificate, which is equivalent to the traditional ‘x509verify‘ option for VNC. To remove the requirement for client certs, the ‘verify-peer=no‘ option can be given when creating the x509 credentials object.
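
For example, a server which should offer TLS but not insist on client certificates would combine the options just described, reusing the same certificate directory as above:

-object tls-creds-x509,id=tls0,endpoint=server,verify-peer=no,dir=/path/to/certificates/directory
-vnc 0.0.0.0:0,tls-creds=tls0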

Generating correct x509 certificates is something that users often struggle with, and when getting it wrong the failures are pretty hard to debug – usually just resulting in an unhelpful “handshake failed” type error message. To help troubleshoot problems, the new x509 credentials code in QEMU will sanity check all certificates it loads prior to using them. For example, it will check that the CA certificate has basic constraints set to indicate usage as a CA, catching problems where people give a server/client cert instead of a CA cert. Likewise it will check that the server certificate has basic constraints set to indicate usage in a server. It’ll check that the server certificate is actually signed by the CA that is provided and that none of the certs have already expired. These are all things that the client would check when it connects, so we’re not adding or removing security here, just helping administrators to detect misconfiguration of their TLS certificates as early as possible. These same checks have been done in libvirt for several years now and have been very beneficial in reducing the bug reports we get related to misconfiguration of TLS.
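
To illustrate why the basic constraints check matters, this is roughly how a CA certificate with the right constraints would be generated with GnuTLS’ certtool – a sketch only, where the file names and template field values are examples:

# cat > ca.info <<EOF
cn = Example Org CA
ca
cert_signing_key
expiration_days = 3650
EOF
# certtool --generate-privkey > ca-key.pem
# certtool --generate-self-signed --load-privkey ca-key.pem \
           --template ca.info --outfile ca-cert.pem

The ‘ca’ keyword in the template is what sets the basic constraints extension that marks the certificate as a CA.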

With the generic TLS credential objects created, the next step was to create a general purpose API for handling the TLS protocol inside QEMU, especially simplifying the handshake, which requires a non-negligible amount of code. The TLS session APIs were designed to be independent of the underlying data transport, since while the VNC server always runs over TCP/UNIX sockets, other QEMU backends may wish to run TLS over non-socket based transports. Overall the API for dealing with TLS session establishment in QEMU can be used as follows:

  static ssize_t mysock_send(const char *buf, size_t len,
                             void *opaque)
  {
      int fd = GPOINTER_TO_INT(opaque);

      return write(fd, buf, len);
  }

  static ssize_t mysock_recv(char *buf, size_t len,
                             void *opaque)
  {
      int fd = GPOINTER_TO_INT(opaque);

      return read(fd, buf, len);
  }

  static int mysock_run_tls(int sockfd,
                            QCryptoTLSCreds *creds,
                            Error **errp)
  {
      QCryptoTLSSession *sess;

      sess = qcrypto_tls_session_new(creds,
                                     "vnc.example.com",
                                     NULL,
                                     QCRYPTO_TLS_CREDS_ENDPOINT_CLIENT,
                                     errp);
      if (sess == NULL) {
          return -1;
      }

      qcrypto_tls_session_set_callbacks(sess,
                                        mysock_send,
                                        mysock_recv,
                                        GINT_TO_POINTER(sockfd));

      while (1) {
          if (qcrypto_tls_session_handshake(sess, errp) < 0) {
              qcrypto_tls_session_free(sess);
              return -1;
          }

          switch (qcrypto_tls_session_get_handshake_status(sess)) {
          case QCRYPTO_TLS_HANDSHAKE_COMPLETE:
              if (qcrypto_tls_session_check_credentials(sess, errp) < 0) {
                  qcrypto_tls_session_free(sess);
                  return -1;
              }
              goto done;
          case QCRYPTO_TLS_HANDSHAKE_RECVING:
              ...wait for G_IO_IN event on sockfd...
              break;
          case QCRYPTO_TLS_HANDSHAKE_SENDING:
              ...wait for G_IO_OUT event on sockfd...
              break;
          }
      }
    done:

      ....send/recv payload data on sess...

      qcrypto_tls_session_free(sess);
      return 0;
  }

The particularly important thing to note with this example is how the network service (e.g. VNC, NBD, chardev) that is enabling TLS no longer has to have any knowledge of x509 certificates. They are loaded automatically when the user provides the ‘-object tls-creds-x509‘ argument to QEMU, and they are validated automatically by the call to qcrypto_tls_session_handshake(). This makes it easy to add TLS support to other network backends in QEMU with minimal overhead, significantly lowering the risk of getting the security wrong. Since it already had TLS support, the VNC server was converted to use this new TLS session API instead of using the gnutls APIs directly. Once again this had a very positive impact on the maintainability of the VNC code, since it allowed countless #ifdef CONFIG_GNUTLS conditionals to be removed, which clarified the code flow significantly. This work on TLS and the VNC server all merged for the 2.5 release of QEMU, so it is already available to benefit users. There is corresponding libvirt work still to be done to convert over to the new command line syntax for configuring TLS with QEMU.
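
For completeness, since the credentials are regular QOM objects they can also be created at runtime from the monitor – a sketch, assuming the HMP ‘object_add’ command accepts the same property syntax as the -object argument:

(qemu) object_add tls-creds-x509,id=tls0,endpoint=server,dir=/path/to/certificates/directory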

In this blog series:

  • Improving QEMU security part 1: crypto code consolidation
  • Improving QEMU security part 2: generic TLS support

Improving QEMU security part 1: crypto code consolidation

Posted: March 31st, 2016 | Filed under: Coding Tips, Fedora, libvirt, OpenStack, Security, Virt Tools

This blog is part 1 of a series I am writing about work I’ve completed over the past few releases to improve QEMU security-related features.

Many years ago I wrote patches for QEMU to enable use of TLS with the VNC server via the VeNCrypt protocol extension. In those patches I modified the VNC server code to directly call out to gnutls in various places to perform the TLS handshake, validate certificates and encrypt/decrypt data. Fast-forward 8 years and I’m once again looking at QEMU with a view to adding TLS encryption support to many other QEMU network services, in particular character device backends, migration and NBD. The TLS certificate handling code is complex enough that I really didn’t fancy repeating it in multiple different areas of the QEMU codebase, so I started thinking about extracting the TLS code from the VNC server to make it easier to reuse.

Aside from VNC with TLS, QEMU uses cryptographic routines in a number of other areas: AES for qcow2 native encryption (which is horribly broken btw); single DES (yes, really, single DES) in the VNC server for the awful VNC password authentication; SHA256 hashing in the quorum block driver; SHA1 hashing in the VNC websockets handshake; and AES in many of its CPU emulation backends for the various architecture specific AES acceleration instructions. QEMU actually has its own built-in implementations of AES and DES that it uses, rather than calling out to a 3rd party crypto library, since the emulated CPU instructions need to run distinct internal steps of the AES algorithm, not merely consume the final output.

Looking to the future, as well as the expanded use of TLS, it was clear that the use of cryptography will only ever increase in QEMU. For example, support for a LUKS encryption driver in the block layer will need access to countless encryption ciphers and hashes. It would be possible to get access to ciphers and hashes via the gnutls APIs, but sadly it doesn’t expose all the possible algorithms supported by the underlying libraries it uses. For added fun, gnutls can be using either libgcrypt or nettle depending on which version of gnutls you have. So if QEMU wanted to get access to algorithms not exposed by gnutls, it would potentially have to support use of two different libraries. It was clear that QEMU would benefit from a consolidated internal API for dealing with anything related to encryption, to isolate the main bulk of the code from needing to directly deal with whatever 3rd party crypto libraries QEMU links to. Thus I created a new top level directory in the QEMU codebase, crypto/, with associated headers in include/crypto/, which will contain all the code for interfacing with gnutls, libgcrypt, nettle, and whatever other cryptographic libraries we might need in the future. First of all the existing AES and DES implementations were moved into this directory. Then I created APIs for dealing with hash and cipher algorithms.

The cipher APIs are written to preferentially use either nettle or libgcrypt, depending on which one gnutls is linked to, though this can be overridden via arguments to configure to force a particular choice. For those who really want to build without these 3rd party libraries, the APIs can fall back to using the built-in AES and DES implementations. A short example of encrypting data using AES-128 in CBC mode would look like this:

  QCryptoCipher *cipher;
  uint8_t key[16] = ....;
  size_t keylen = 16;
  uint8_t iv[16] = ....;

  if (!qcrypto_cipher_supports(QCRYPTO_CIPHER_ALG_AES_128)) {
     error_setg(errp, "Feature <blah> requires AES cipher support");
     return -1;
  }

  cipher = qcrypto_cipher_new(QCRYPTO_CIPHER_ALG_AES_128,
                              QCRYPTO_CIPHER_MODE_CBC,
                              key, keylen,
                              errp);
  if (!cipher) {
     return -1;
  }

  if (qcrypto_cipher_setiv(cipher, iv, keylen, errp) < 0) {
     qcrypto_cipher_free(cipher);
     return -1;
  }

  if (qcrypto_cipher_encrypt(cipher, rawdata, encdata, datalen, errp) < 0) {
     qcrypto_cipher_free(cipher);
     return -1;
  }

  qcrypto_cipher_free(cipher);

The hash algorithms still use the gnutls APIs, though that will change in the 2.7 series to directly use libgcrypt or nettle. The hash APIs are slightly simpler, since QEMU doesn’t (currently at least) need the ability to incrementally hash data, so the current APIs just support one-shot hashing of buffers:

  char *digest = NULL;

  if (!qcrypto_hash_supports(QCRYPTO_HASH_ALG_SHA256)) {
     error_setg(errp, "Feature <blah> requires sha256 hash support");
     return -1;
  }

  if (qcrypto_hash_digest(QCRYPTO_HASH_ALG_SHA256,
                          buf, len, &digest,
                          errp) < 0) {
     return -1;
  }

The qcrypto_hash_digest() method outputs the hash as printable hex characters. There is also qcrypto_hash_bytes() which returns the raw bytes, and qcrypto_hash_base64() which base64 encodes the result. As well as passing a single buffer, it is possible to provide a list of buffers in a ‘struct iovec’.
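
For example, getting a base64 encoded digest is just a change of method name – a sketch, assuming qcrypto_hash_base64() follows the same calling convention as qcrypto_hash_digest():

  char *b64digest = NULL;

  if (qcrypto_hash_base64(QCRYPTO_HASH_ALG_SHA256,
                          buf, len, &b64digest,
                          errp) < 0) {
     return -1;
  }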

The calls to qcrypto_cipher_supports() and qcrypto_hash_supports() are entirely optional – errors will be raised by the other methods if needed, but they offer the opportunity to emit friendly error messages in the code. For example the VNC server can explicitly say which feature it can’t support due to missing DES support. Just converting the existing code in QEMU to use these new cipher/hash APIs already had significant benefit, because it allowed for many #ifdef CONFIG_GNUTLS statements to be removed from across the codebase, particularly the VNC server. The other benefit is that the internal AES and DES implementations are no longer used by any QEMU code, except for the CPU instruction emulation, which is not even used if running with KVM. So modern KVM accelerated guests will be using well supported, audited & certified cipher & hash implementations, which is often important to enterprise distribution vendors. This first stage of consolidation was completed and merged for the QEMU 2.4 release series, but it has been invisible to users, mostly just benefiting the QEMU & distro maintainers.

In this blog series:

  • Improving QEMU security part 1: crypto code consolidation
  • Improving QEMU security part 2: generic TLS support

Announce: gerrymander 1.5 “some beans and some beans is four!” – a client API and command line tool for gerrit

Posted: February 22nd, 2016 | Filed under: Coding Tips, Fedora, OpenStack, Virt Tools

I’m pleased to announce the availability of a new release of gerrymander, version 1.5. Gerrymander provides a python command line tool and APIs for querying information from the gerrit review system, as used in OpenStack and many other projects. You can get it from PyPI:

# pip install gerrymander

Or straight from GitHub

# git clone git://github.com/berrange/gerrymander.git

If you’re the impatient type, then go to the README file which provides a quick start guide to using the tool.

This release contains a mixture of bug fixes and new features:

  • Honour the ‘files’ parameter in the ‘todo-noones’ command
  • Only match filenames against current patchset
  • Handle pagination with gerrit >= 2.9
  • Avoid looping forever if sort key is missing in results
  • Don’t call encode() on integer types
  • Auto-detect gerrit server from git remote
  • Don’t include your own changes in todo lists
  • Fix type casting of cache lifetime values in config file
  • Optionally show hierarchical relationship between changes via new ‘--deps’ option (see the example below)
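
For instance, the new ‘--deps’ option shows the dependency tree between related changes – a hypothetical invocation, assuming it is used with the ‘changes’ command and with a project name chosen purely as an example:

# gerrymander changes --project openstack/nova --deps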

Thanks to everyone who contributed this release, whether by reporting bugs, requesting features or submitting patches.

Ceph single node deployment on Fedora 23

Posted: December 21st, 2015 | Filed under: Coding Tips, Fedora, OpenStack, Virt Tools

A little while back Cole documented a minimal ceph deployment on Fedora. Unfortunately, since then the ‘mkcephfs’ command has been dropped in favour of the ‘ceph-deploy’ tool. There are various other blog posts talking about ceph-deploy, but none of them had quite the right set of commands to get a working single node deployment – the status would always end up in “HEALTH_WARN”, which is pretty much an error state for ceph. After much trial & error I finally figured out the steps that work on Fedora 23.

Even though we’re doing a single node deployment, the ‘ceph-deploy’ tool expects to be able to ssh into the local host as root, without password prompts. So before starting, make sure to install ssh keys and edit /etc/ssh/sshd_config to set PermitRootLogin to yes. Everything that follows should also be run as root.
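
A minimal way to set that up is sketched below – this assumes the commands are run as root and that no ssh key exists yet:

# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# ssh-copy-id root@`hostname -f`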

First, we need the ‘ceph-deploy’ tool installed

# dnf install ceph-deploy

ceph-deploy will create some config files in the local directory, so it is best to create a directory to hold them and run it from there

# mkdir ceph-deploy
# cd ceph-deploy

Make sure that the hostname for the local machine is resolvable, both fully qualified and unqualified. If it is not, then add entries to /etc/hosts to make it resolve. The first step simply creates the basic config file for ceph-deploy

# export CEPH_HOST=`hostname -f`
# ceph-deploy new $CEPH_HOST

Since this will be a single node deployment, there are two critical additions that must be made to the ceph.conf that was just created in the current directory

# echo "osd crush chooseleaf type = 0" >> ceph.conf
# echo "osd pool default size = 1" >> ceph.conf

Without these two settings, the storage will never achieve a healthy status.

Now tell ceph-deploy to actually install the main ceph software. By default it will try to activate YUM repos hosted on ceph.com, but Fedora has everything needed, so the ‘--no-adjust-repos‘ argument tells it not to add custom repos

# ceph-deploy install --no-adjust-repos $CEPH_HOST

With the software installed, the monitor service can be created and started

# ceph-deploy mon create-initial

Ceph can use storage on a block device, but for single node test deployments it is far easier to just point it to a local directory

# mkdir -p /srv/ceph/osd
# ceph-deploy osd prepare $CEPH_HOST:/srv/ceph/osd
# ceph-deploy osd activate $CEPH_HOST:/srv/ceph/osd

Assuming that completed without error, check that the cluster status shows HEALTH_OK

# ceph status
    cluster 7e7be62d-4c83-4b59-8c11-6b57301e8cb4
     health HEALTH_OK
     monmap e1: 1 mons at {t530wlan=192.168.1.66:6789/0}
            election epoch 2, quorum 0 t530wlan
     osdmap e5: 1 osds: 1 up, 1 in
      pgmap v15: 64 pgs, 1 pools, 0 bytes data, 0 objects
            246 GB used, 181 GB / 450 GB avail
                  64 active+clean

If it displays “HEALTH_WARN” don’t make the mistake of thinking that is merely a warning – chances are it is a fatal error that will prevent anything working. If you did get errors, then purge all trace of ceph before trying again

# ceph-deploy purgedata $CEPH_HOST
# ceph-deploy purge $CEPH_HOST
# ceph-deploy forgetkeys
# rm -rf /srv/ceph/osd

Once everything is working, it should be possible to use the ‘rbd’ command on the local node to setup volumes suitable for use with QEMU/KVM.
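
For example, creating a volume in the default ‘rbd’ pool and then inspecting it with QEMU might look like this – a sketch, with the volume name and size being arbitrary:

# rbd create --size 1024 demo-volume
# rbd ls
# qemu-img info rbd:rbd/demo-volume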

Setting up a local caching proxy for Fedora YUM repositories

Posted: December 9th, 2015 | Filed under: Coding Tips, Fedora, OpenStack, Virt Tools

For my day-to-day development work I currently have four separate physical servers: one old x86_64 server for file storage, two new x86_64 servers and one new aarch64 server. Even with a fast fibre internet connection, downloading the never-ending stream of Fedora RPM updates takes non-negligible time. I also have cause to install distro chroots on a reasonably frequent basis for testing various things related to containers & virtualization, which involves yet more RPM downloads. So I decided it was time to investigate the setup of a local caching proxy for Fedora YUM repositories. I could have figured this out myself, but I fortunately knew that Matthew Booth had already set up exactly the kind of system I wanted, and he shared the necessary config steps that are outlined below.

The general idea is that we will reconfigure the YUM repository location on each machine needing updates to point to a local apache server, instead of the Fedora mirror manager metalink locations. This apache server will be set up using mod_proxy to rewrite requests to point to the offsite upstream download location, but will also be told to use a local squid server to access the remote site, thereby caching the downloads.

Apache setup

Apache needs to be installed, if not already present:

# dnf install httpd

A new drop-in config file addition for apache is created with two mod_proxy directives. The ProxyPass directive tells apache that any requests for http://<our-ip>/fedora/* should be translated into requests to the remote site http://dl.fedoraproject.org/pub/fedora/linux/*. The ProxyRemote directive tells apache that it should not make direct connections to the remote site, but instead use the local proxy server running on port 3128. IOW, requests that would go to dl.fedoraproject.org will instead get sent to the local squid server.

# cat > /etc/httpd/conf.d/yumcache.conf <<EOF
ProxyPass /fedora/ http://dl.fedoraproject.org/pub/fedora/linux/
ProxyPass /fedora-secondary/ http://dl.fedoraproject.org/pub/fedora-secondary/
ProxyRemote * http://localhost:3128/
EOF

The ‘fedora-secondary’ ProxyPass is just there for my aarch64 machine – it is not required if you are x86_64 only.

The out of the box SELinux configuration prevents apache from making network requests, so it is necessary to toggle a SELinux boolean flag before starting apache (the -P flag makes the change persist across reboots)

# setsebool -P httpd_can_network_relay=1

With that done, we can start apache and set it to run on future boots too

# systemctl start httpd.service
# systemctl enable httpd.service

Squid setup

Squid needs to be installed, if not already present:

# dnf install squid

The out of the box configuration for squid needs a few small tweaks to optimize it for YUM repo mirroring. The default cache replacement policy purges the least recently used objects from the cache. This is not ideal for YUM repositories – if a YUM update needs 100 RPMs downloading and only 95 of them fit in the cache, by the time the last package is downloaded we’ll be pushing the first package out of the cache again, which means the next machine will get a cache miss. The LFUDA policy keeps popular objects in the cache regardless of size and optimizes the byte hit rate at the expense of the object hit rate. Some RPMs can be really rather large, so the default maximum object size of 4 MB is totally inadequate; increasing it to 8 GB is probably overkill, but will ensure we always attempt to cache any RPM regardless of its size. The cache_dir directive is there to tell squid to use threads for accessing objects, to give greater concurrency. The last two directives are critical, telling squid not to cache the repomd.xml files, whose contents change frequently – without this you’ll often see YUM trying to fetch outdated repo data files which no longer exist

# cat >> /etc/squid/squid.conf <<EOF
cache_replacement_policy heap LFUDA
maximum_object_size 8192 MB
cache_dir aufs /var/spool/squid 16000 16 256 max-size=8589934592
acl repomd url_regex /repomd\.xml$
cache deny repomd
EOF
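
Before starting squid it is worth validating the modified configuration – squid’s -k parse mode reads squid.conf and reports any syntax errors:

# squid -k parse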

With that configured, squid can be started and set to run on future boots

# systemctl start squid.service
# systemctl enable squid.service
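
At this point the whole chain can be sanity checked from the cache machine itself by fetching a file through apache – the URL below is illustrative, so adjust the release and architecture to match your setup:

# curl -I http://localhost/fedora/updates/23/x86_64/repodata/repomd.xml

A 200 response shows apache relaying the request through squid to the Fedora download site.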

Firewall setup

If a firewall is present on the cache machine, it is necessary to allow remote access to apache. This can be enabled with a couple of firewall-cmd instructions – the reload is needed because the permanent rule does not take effect immediately

# firewall-cmd --add-service=http --permanent
# firewall-cmd --reload

Client setup

With the cache server setup out of the way, all that remains is to update the Fedora YUM config files on each client machine to point to the local server. There is a convenient tool called ‘fedrepos’ which can do this, avoiding the need to open an editor and change the files manually.

# dnf install fedrepos
# fedrepos baseurl http://yumcache.mydomain/fedora --no-metalink

NB on the aarch64 machine, we need to point to fedora-secondary instead

# fedrepos baseurl http://yumcache.mydomain/fedora-secondary --no-metalink

Replace ‘yumcache.mydomain’ with the hostname or IP address of the server running the apache+squid cache, of course. If the cache is working as expected, you should see YUM achieve 100 MB/s download speed when it gets a cache hit.
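
To confirm the cache is really being hit, watch the squid access log while a second machine runs an update – repeat downloads should be logged as TCP_HIT rather than TCP_MISS:

# tail -f /var/log/squid/access.log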