ANNOUNCE: new libvirt console proxy project

Posted: January 26th, 2017 | Filed under: Fedora, libvirt, OpenStack, Security

This post announces a new small project providing a websockets proxy explicitly targeting virtual machine serial consoles and SPICE/VNC graphical displays.

Background

Virtual machines typically expose one or more consoles for the user to interact with, whether plain text consoles or graphical consoles via VNC/SPICE. With KVM, the network server for these consoles runs as part of the QEMU process on the compute node. It has become common practice not to expose these network services directly to the end user. Instead, large-scale deployments will typically run some kind of proxy service sitting between the end user and their VM's console(s). Typically this proxy will also tunnel the connection over the websockets protocol. This has a number of advantages for users and admins alike. By using websockets, it is possible to use an HTTP parameter or cookie to identify which VM to connect to. This means that a single public network port can multiplex access to all the virtual machines, dynamically figuring out which one to connect to. The use of websockets also makes it possible to write in-browser clients (noVNC, SPICE-HTML5) to display the console, avoiding the need to install local desktop apps.
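
To make the websockets flow a little more concrete, here is a minimal sketch in Go (using the gorilla/websocket library) of how a client might attach to such a proxy. The host name, URL path and "token" query parameter are purely illustrative assumptions, not the interface of any particular proxy:

package main

import (
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	// A single public endpoint; the token tells the proxy which VM's
	// console to hook this connection up to on the backend.
	url := "wss://proxy.example.com/console?token=SOME-SECRET-TOKEN"

	conn, _, err := websocket.DefaultDialer.Dial(url, nil)
	if err != nil {
		log.Fatalf("websocket dial failed: %v", err)
	}
	defer conn.Close()

	// From here on the websocket frames carry the raw console data
	// stream (VNC, SPICE or serial), which a console widget would render.
	_, data, err := conn.ReadMessage()
	if err != nil {
		log.Fatalf("read failed: %v", err)
	}
	log.Printf("received %d bytes of console data", len(data))
}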

There are already quite a few implementations of the websockets proxy idea out there used by virt/cloud management apps, so one might wonder why another one is needed. The existing implementations generally all work at the TCP layer, meaning they treat the content being carried as opaque data. This means that while they provide security between the end user & proxy server by using HTTPS for the websockets connection, the internal communication between the proxy and the QEMU process typically still runs in cleartext. For SPICE and serial consoles it is possible to add security between the proxy and QEMU by simply layering TLS over the entire protocol, so again the transported data can be treated as opaque. This isn't possible for VNC though. Getting encryption between the proxy server and QEMU requires interpreting the VNC protocol to intercept the authentication scheme negotiation and turn on TLS support. IOW, the proxy server cannot treat the VNC data stream as opaque.

The libvirt-console-proxy project was started specifically to address this requirement for VNC security. The proxy interprets the VNC protocol handshake, responding to the client's auth request (which is for no auth at the VNC layer, since the websockets layer has already handled auth from the client's point of view). The proxy then starts a new auth scheme handshake with QEMU and requests the VeNCrypt scheme, which enables x509 certificate mutual validation and TLS encryption of the data stream. Once the security handshake is complete, the proxy allows the rest of the VNC protocol to run as a black box once more; it simply adds TLS encryption when talking to QEMU. This VNC MITM support is already implemented in the libvirt-console-proxy code and enabled by default.
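
To illustrate the idea, here is a highly simplified sketch in Go of the QEMU-facing half of such a handshake. This is not the actual libvirt-console-proxy code: it assumes RFB 3.8 on both ends, hard-codes the x509None VeNCrypt sub-type, skips most error handling, and omits the client-facing side, which simply offers the "None" security type and reports success:

package vncmitm // hypothetical package name, for illustration only

import (
	"crypto/tls"
	"encoding/binary"
	"fmt"
	"io"
	"net"
)

const (
	secTypeVeNCrypt  = 19  // RFB security type for VeNCrypt
	vencryptX509None = 260 // VeNCrypt sub-type: x509 certs + TLS, no VNC auth
)

// upgradeQEMUConn negotiates VeNCrypt with the QEMU VNC server and returns
// a TLS-wrapped connection carrying the rest of the (again opaque) VNC stream.
func upgradeQEMUConn(raw net.Conn, tlsConf *tls.Config) (net.Conn, error) {
	// Exchange the RFB protocol version banners.
	banner := make([]byte, 12)
	if _, err := io.ReadFull(raw, banner); err != nil {
		return nil, err
	}
	if _, err := raw.Write([]byte("RFB 003.008\n")); err != nil {
		return nil, err
	}

	// Read the list of security types QEMU offers, then pick VeNCrypt.
	var count [1]byte
	if _, err := io.ReadFull(raw, count[:]); err != nil {
		return nil, err
	}
	types := make([]byte, count[0])
	if _, err := io.ReadFull(raw, types); err != nil {
		return nil, err
	}
	if _, err := raw.Write([]byte{secTypeVeNCrypt}); err != nil {
		return nil, err
	}

	// VeNCrypt sub-negotiation: agree on version 0.2, then choose x509None.
	ver := make([]byte, 2)
	if _, err := io.ReadFull(raw, ver); err != nil {
		return nil, err
	}
	if _, err := raw.Write([]byte{0, 2}); err != nil {
		return nil, err
	}
	var ack [1]byte
	if _, err := io.ReadFull(raw, ack[:]); err != nil || ack[0] != 0 {
		return nil, fmt.Errorf("VeNCrypt version rejected")
	}
	var nsub [1]byte
	if _, err := io.ReadFull(raw, nsub[:]); err != nil {
		return nil, err
	}
	subs := make([]byte, 4*int(nsub[0]))
	if _, err := io.ReadFull(raw, subs); err != nil {
		return nil, err
	}
	choice := make([]byte, 4)
	binary.BigEndian.PutUint32(choice, vencryptX509None)
	if _, err := raw.Write(choice); err != nil {
		return nil, err
	}
	if _, err := io.ReadFull(raw, ack[:]); err != nil || ack[0] != 1 {
		return nil, fmt.Errorf("VeNCrypt sub-type rejected")
	}

	// Everything from here on runs inside TLS; the proxy just relays it
	// between the client and QEMU as opaque data once more.
	conn := tls.Client(raw, tlsConf)
	if err := conn.Handshake(); err != nil {
		return nil, err
	}
	return conn, nil
}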

As noted earlier, SPICE is easier to deal with, since it has separate TCP ports for plain vs TLS traffic. The proxy simply needs to connect to the TLS port instead of the plain TCP port, and does not need to speak the SPICE protocol. There is, however, a different complication with SPICE. A logical SPICE connection from a client to the server actually consists of many network connections, as SPICE uses a separate socket for each type of data (keyboard events, framebuffer updates, audio, usb tunnelling, etc). In fact there can be as many as 10 network connections for a single SPICE session. The websockets proxy will normally expect some kind of secret token to be provided by the client, both as authentication and to identify which SPICE server to connect to. It is highly desirable for such tokens to be single-use only. Since each SPICE session requires multiple TCP connections to be established, the proxy has no way of knowing when it can invalidate further use of the token. To deal with this, it needs to insert itself into the SPICE protocol negotiation. This lets it see the protocol message from the server enumerating which channels are available, and also see the unique session identifier. When the additional SPICE network connections arrive, the proxy can then validate that the session ID matches the previously identified one, and reject re-use of the websockets security token if it doesn't. It can also permanently invalidate the security token once all expected secondary connections are established. At the time of writing, this SPICE MITM support is not yet implemented in the libvirt-console-proxy code, but it is planned.

Usage

The libvirt-console-proxy project is designed to be reusable by any virtualization management project and so is split into two parts. The general-purpose “virtconsoleproxyd” daemon exposes the websockets server and actually runs the connections to the remote QEMU server(s). This daemon is designed to be secure by default, so it mandates configuration of x509 certificates for both the websockets server and the internal VNC, SPICE & serial console connections to QEMU. Running in cleartext requires explicit “--listen-insecure” and “--connect-insecure” flags to make it clear this is a stupid thing to be doing.

The “virtconsoleproxyd” still needs some way to identify which QEMU server(s) it should connect to for a given incoming websockets connection. To address this there is the concept of an external “token resolver” service. This is a daemon running somewhere (either co-located, or on a remote host), providing a trivial REST service. The proxy server does a GET to this external resolver service, passing along the token value. The resolver validates the token, determines what QEMU server it corresponds to and passes this info back to “virtconsoleproxyd“. This way, each project virt management project can deploy the “virtconsoleproxyd” service as is, and simply provide a custom “token resolver” service to integrate with whatever application specific logic is needed to identify QEMU.

There is also a reference implementation of the token resolver interface called “virtconsoleresolverd”. This resolver has a connection to one or more libvirtd daemons and monitors the running virtual machines on each one. When a virtual machine starts, it will query the libvirt XML for that machine and extract a specific piece of metadata from the XML under the namespace http://libvirt.org/schemas/console-proxy/1.0. This metadata identifies a libvirt secret object that provides the token for connecting to that guest's console. As should be expected, “virtconsoleresolverd” requires use of HTTPS by default, refusing to run in cleartext unless the “--listen-insecure” option is given.

For example, on a guest which has a SPICE graphics console, a serial port and two virtio-console devices, the metadata would look like this:

$ virsh metadata demo http://libvirt.org/schemas/console-proxy/1.0
<consoles>
  <console token="999f5742-2fb5-491c-832b-282b3afdfe0c" type="spice" port="0" insecure="no"/>
  <console token="6a92ef00-6f54-4c18-820d-2a2eaf9ac309" type="serial" port="0" insecure="no"/>
  <console token="3d7bbde9-b9eb-4548-a414-d17fa1968aae" type="console" port="0" insecure="no"/>
  <console token="393c6fdd-dbf7-4da9-9ea7-472d2f5ad34c" type="console" port="1" insecure="no"/>
</consoles>
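
For reference, here is a rough sketch of how that metadata element could be fetched programmatically with the libvirt-go bindings mentioned later in this post. It is illustrative rather than the actual virtconsoleresolverd code, and assumes a locally running guest named “demo”:

package main

import (
	"fmt"
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName("demo")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// Fetch just the <consoles> element stored under the console-proxy namespace.
	xml, err := dom.GetMetadata(libvirt.DOMAIN_METADATA_ELEMENT,
		"http://libvirt.org/schemas/console-proxy/1.0",
		libvirt.DOMAIN_AFFECT_CURRENT)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(xml)
}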

Each of those tokens refers to a libvirt secret:

$ virsh -c sys secret-dumpxml 999f5742-2fb5-491c-832b-282b3afdfe0c
<secret ephemeral='no' private='no'>
  <uuid>999f5742-2fb5-491c-832b-282b3afdfe0c</uuid>
  <description>Token for spice console proxy domain d54df46f-1ab5-4a22-8618-4560ef5fac2c</description>
</secret>

And that secret has an associated value, which is the actual security token to be provided when connecting to the websockets proxy:

$ virsh -c sys secret-get-value 999f5742-2fb5-491c-832b-282b3afdfe0c | base64 -d
750d78ed-8837-4e59-91ce-eef6800227fd

So in this example, to access the SPICE console, the remote client would provide the security token string “750d78ed-8837-4e59-91ce-eef6800227fd”. I happen to have used UUIDs as the security tokens, but they can be in any format desired; random UUIDs just happen to be reasonably good high-entropy secrets.
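
For example, a trivial way to generate such a high-entropy token in Go (this is just a sketch; it is not tied to the project's tooling, which stores its generated tokens in libvirt secrets as shown above):

package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"log"
)

func main() {
	// 16 random bytes gives 128 bits of entropy, comparable to a random UUID.
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		log.Fatal(err)
	}
	fmt.Println(hex.EncodeToString(buf))
}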

Setting up this metadata is a little bit tedious, so there is also a helper program that can be used to automatically create the secrets, set a random value for each and write the metadata into the guest XML:

$ virtconsoleresolveradm enable demo
Enabled access to domain 'demo'

Or to stop it again

$ virtconsoleresolveradm disable demo
Disabled access to domain 'demo'

If you noticed my previous announcements of libvirt-go and libvirt-go-xml, it won't come as a surprise to learn that this new project is also written in Go. In fact, the integration of libvirt support in this console proxy project is what triggered my work on those other two projects. Having previously worked on adding the same kind of VNC MITM protocol support to the OpenStack nova-novncproxy server, I find I'm liking Go more and more compared to Python.

Setting up a local caching proxy for Fedora YUM repositories

Posted: December 9th, 2015 | Filed under: Coding Tips, Fedora, OpenStack, Virt Tools

For my day-to-day development work I currently have four separate physical servers: one old x86_64 server for file storage, two new x86_64 servers and one new aarch64 server. Even with a fast fibre internet connection, downloading the never-ending stream of Fedora RPM updates takes non-negligible time. I also have cause to install distro chroots on a reasonably frequent basis for testing various things related to containers & virtualization, which involves yet more RPM downloads. So I decided it was time to investigate setting up a local caching proxy for Fedora YUM repositories. I could have figured this out myself, but I fortunately knew that Matthew Booth had already set up exactly the kind of system I wanted, and he shared the necessary config steps, which are outlined below.

The general idea is that we will reconfigure the YUM repository location on each machine needing updates to point to a local apache server, instead of the Fedora mirror manager metalink locations. This apache server will be set up using mod_proxy to rewrite requests to point to the offsite upstream download location, but will also be told to use a local squid server to access the remote site, thereby caching the downloads.

Apache setup

Apache needs to be installed, if not already present:

# dnf install httpd

A new drop-in config file for apache is created with two mod_proxy directives. The ProxyPass directive tells apache that any requests for http://<our-ip>/fedora/* should be translated into requests to the remote site http://dl.fedoraproject.org/pub/fedora/linux/*. The ProxyRemote directive tells apache that it should not make direct connections to the remote site, but instead use the local proxy server running on port 3128. IOW, requests that would go to dl.fedoraproject.org will instead get sent to the local squid server.

# cat > /etc/httpd/conf.d/yumcache.conf <<EOF
ProxyPass /fedora/ http://dl.fedoraproject.org/pub/fedora/linux/
ProxyPass /fedora-secondary/ http://dl.fedoraproject.org/pub/fedora-secondary/
ProxyRemote * http://localhost:3128/
EOF

The ‘fedora-secondary’ ProxyPass is just there for my aarch64 machine; it is not required if you are x86_64 only.

The out of the box SELinux configuration prevents apache from making network requests, so it is necessary to toggle an SELinux boolean before starting apache (add the -P flag to setsebool if you want the change to persist across reboots):

# setsebool httpd_can_network_relay=1

With that done, we can start apache and set it to run on future boots too

# systemctl start httpd.service
# systemctl enable httpd.service

Squid setup

Squid needs to be installed, if not already present:

# dnf install squid

The out of the box configuration for squid needs a few small tweaks to optimize it for YUM repo mirroring. The default cache replacement policy purges the least recently used objects from the cache. This is not ideal for YUM repositories: if a YUM update needs 100 RPMs downloading and only 95 of them fit in cache, by the time the last package is downloaded we'll be pushing the first package out of cache again, which means the next machine will get a cache miss. The LFUDA policy keeps popular objects in the cache regardless of size, optimizing the byte hit rate at the expense of the object hit rate. Some RPMs can be really rather large, so the default maximum object size of 4 MB is totally inadequate; increasing it to 8 GB is probably overkill, but will ensure we always attempt to cache any RPM regardless of its size. The cache_dir directive is there to tell squid to use the aufs storage type, which accesses objects with threads for greater concurrency. The last two directives are critical: they tell squid not to cache the repomd.xml files, whose contents change frequently. Without this you'll often see YUM trying to fetch outdated repo data files which no longer exist.

# cat >> /etc/squid/squid.conf <<EOF
cache_replacement_policy heap LFUDA
maximum_object_size 8192 MB
cache_dir aufs /var/spool/squid 16000 16 256 max-size=8589934592
acl repomd url_regex /repomd\.xml$
cache deny repomd
EOF

With that configured, squid can be started and set to run on future boots

# systemctl start squid.service
# systemctl enable squid.service

Firewall setup

If a firewall is present on the cache machine, it is necessary to allow remote access to apache. This can be enabled with a simple firewall-cmd instruction (since the rule is added with --permanent, reload the firewall, or run the command again without --permanent, for it to take effect immediately):

# firewall-cmd --add-service=http --permanent

Client setup

With the cache server setup out of the way, all that remains is to update the Fedora YUM config files on each client machine to point to the local server. There is a convenient tool called ‘fedrepos’ which can do this, avoiding the need to open an editor and change the files manually:

# dnf install fedrepos
# fedrepos baseurl http://yumcache.mydomain/fedora --no-metalink

NB on the aarch64 machine, we need to point to fedora-secondary instead

# fedrepos baseurl http://yumcache.mydomain/fedora-secondary --no-metalink

Replace ‘yumcache.mydomain’ with the hostname or IP address of the server running the apache+squid cache, of course. If the cache is working as expected, you should see YUM achieve 100 MB/s download speeds when it gets a cache hit; you can also confirm hits by looking for TCP_HIT entries in /var/log/squid/access.log on the cache server.