Announce: Entangle “Down” release 0.6.0 – An app for tethered camera control & capture

Posted: May 2nd, 2014 | Filed under: Entangle, Fedora, Photography

I am pleased to announce a new release 0.6.0 of Entangle is available for download from the usual location:

  http://entangle-photo.org/download/

This release has a mixture of bug fixes, new features / changes, and new widget styling:

  • Add a demonstration plugin for setting up a captive photo box display mode
  • Switch to use Python3 for plugin engine instead of JavaScript
  • Use GTK dark theme
  • Require GNOME symbolic icon theme to be installed
  • Switch to require lcms2 instead of lcms
  • Move the application icon into the standard icon directory required by the appdata tools
  • Make manual focus work with Canon EOS cameras
  • Disable flickering progress bar in preview mode with Canon EOS cameras
  • Remove use of deprecated GTK methods/classes/constants
  • Remove use of a gexiv2 method which is no longer exported
  • Remove use of deprecated libpeas methods
  • Add GTK-DOC transfer annotations / docs to all methods
  • Avoid losing camera capabilities on disconnect
  • Fix off-by-one error in histogram tables causing memory corruption
  • Mark appdata / desktop files for translation
  • Fix typos in README file
  • Fix inverted tests when checking if range widget changed
  • Avoid storm of expose events due to auto-drawer widget
  • Avoid never ending circular update of controls causing errors in some camera modes
  • Add workaround for crazy D5100 camera serial number
  • Add customizable highlight/background for images
  • Avoid reference leak of windows preventing proper cleanup
  • Remove camera manual/about/driver help windows since they did not contain any info useful to users
  • Filter list of cameras in connect dialog to only those which support capture/preview
  • Don’t auto connect to cameras which don’t support capture or preview
  • Ensure parent window is set on dialogs to prevent them falling behind main window
  • Fix crash with latest GTK due to incorrect overriding of GtkApplication startup method
  • Update to cope with changed GExiv2 API version
  • Refreshed translations from Transifex

Thanks to the great work of the Fedora translation team, Entangle has > 90% translation coverage for Dutch, Polish, Ukrainian, French and Japanese and > 80% coverage for Swedish, Czech, Hungarian, Italian, Spanish, Chinese and German. If you are in a position to help translate Entangle further, consider joining the Fedora translation team.

Improving libvirt firewall performance

Previously I talked about improving the performance of logging in libvirt. This article will consider the second, even bigger, performance problem I’ve tackled over the past month or so: the management of firewall rules.

There are three areas of libvirt which need to interact with the host firewall:

  • Virtual networks – add rules to iptables/ip6tables to control IPv4/IPv6 forwarding and masquerading of traffic from virtual networks (eg the virbr0 device)
  • MAC filtering – adds rules to ebtables to reject spoofed MAC addresses from guest NICs (pretty much obsoleted by general filtering)
  • General network filtering – adds almost arbitrary rules to ebtables/iptables/ip6tables to filter guest NIC traffic

When first written, all of these areas of libvirt code would directly invoke the iptables/ip6tables/ebtables command line tools to add/remove the rules needed. Then along came firewalld. While we could pretend firewalld didn’t exist and continue invoking the CLI tools directly, this would end in tears when firewalld flushes its chains / tables. When we first realized that libvirt needed to add firewalld support, time was short wrt the forthcoming Fedora release schedule, which would include firewalld by default. Thus an expedient decision was made, simply replacing any calls to iptables/ip6tables/ebtables with a call to ‘firewall-cmd --passthrough’ instead. This allowed libvirt to integrate with firewalld, so its rules would not get lost, with the minimum possible number of code changes to libvirt.
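
In effect the change was just a prefix swap on the spawned command line. The sketch below (Python rather than libvirt’s C, with a made-up example rule) illustrates the approach; it is not libvirt’s actual code:

  import subprocess

  # A hypothetical forwarding rule for the virbr0 bridge, purely for illustration;
  # in practice either helper below would be called with it, not both
  RULE = ["-A", "FORWARD", "-i", "virbr0", "-j", "ACCEPT"]

  def add_rule_direct(args):
      # Original approach: spawn the iptables binary itself
      subprocess.check_call(["iptables"] + args)

  def add_rule_via_firewalld(args):
      # Expedient firewalld integration: the very same arguments, but routed
      # through firewall-cmd so the rule is known to, and preserved by, firewalld
      subprocess.check_call(["firewall-cmd", "--passthrough", "ipv4"] + args)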

Unfortunately, it quickly became apparent that using ‘firewall-cmd’ imposes quite a severe performance penalty. Testing with libvirt’s virtual network feature showed that starting 20 virtual networks took 3 seconds with direct iptables usage and 42 seconds with “firewall-cmd --passthrough”. IOW using firewall-cmd is over x10 slower than direct invocation. The network filtering code is similarly badly affected: running our integration test suite went from 28 seconds to 479 seconds, almost x18 slower with firewall-cmd.

We’ve lived with this performance degradation in libvirt for a little while now, but it was clear that this was never going to be acceptable in the long term. This kind of performance bottleneck in firewall manipulation really hurts applications using libvirt, slowing down guest creation in OpenStack noticeably, and even making libvirtd much slower to start up. With a little slack time in my schedule I did some microbenchmarks comparing repeated invocation of firewall-cmd to a script that invoked the underlying firewalld DBus API directly instead. While I don’t have the results anymore, it suffices to say that the tests strongly showed the overhead lay in the firewall-cmd tool, as opposed to the DBus API.
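
Although the original scripts are gone, the comparison was shaped roughly like the following sketch. It assumes the dbus-python module and firewalld’s documented DBus API (bus name org.fedoraproject.FirewallD1, ‘direct’ interface with a passthrough method); double check those names against your firewalld version, and note the query it repeats is just an arbitrary harmless example:

  import subprocess
  import time

  import dbus

  QUERY = ["-L", "FORWARD"]    # harmless read-only iptables query
  ITERATIONS = 100

  def bench_firewall_cmd():
      # Spawn the firewall-cmd binary once per operation, as libvirt used to do
      start = time.time()
      for _ in range(ITERATIONS):
          subprocess.check_output(["firewall-cmd", "--passthrough", "ipv4"] + QUERY)
      return time.time() - start

  def bench_dbus():
      # Talk to the firewalld daemon directly over the system bus
      bus = dbus.SystemBus()
      obj = bus.get_object("org.fedoraproject.FirewallD1",
                           "/org/fedoraproject/FirewallD1")
      direct = dbus.Interface(obj, "org.fedoraproject.FirewallD1.direct")
      start = time.time()
      for _ in range(ITERATIONS):
          direct.passthrough("ipv4", QUERY)
      return time.time() - start

  print("firewall-cmd: %.2fs" % bench_firewall_cmd())
  print("DBus API:     %.2fs" % bench_dbus())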

Based on the test results it was clear that libvirt needed to switch to talking to firewalld directly via DBus, instead of spawning an external program to do this indirectly. Even without the performance overhead of firewall-cmd this would be just good engineering practice, particularly wrt error handling, since parsing errors from stderr is truly horrible. The virtual network code had a lot of complexity in it to cope with the fact that applying any single firewall rule might fail and thus need a set of steps to rollback to the previous state of the firewall. This was nothing compared to the complexity of the network filtering code, which would dynamically generate hairy/scary shell scripts with conditionals to either report or ignore errors that occurred and to dynamically query existing rules to decide what to create next.

What libvirt needed was an internal API for interacting with the firewall which would magically talk to either the DBus API or the iptables/ip6tables/ebtables tools directly, as appropriate for the host being managed. To further reduce the burden on users of the API, it would also need to have the concept of “transactions”. That is, a user of the API can define a set of rules that it wants applied, and if applying any of those rules fails, then a set of “rollback” steps would be applied to clean up. Finally, it would need the ability to register hooks which query the rule state and dynamically change later rules.
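
To make the transaction idea more concrete, here is a small Python sketch of the concept. It is not libvirt’s actual internal API (which is C), and the class and helper names are invented for illustration:

  import subprocess

  class FirewallTransaction:
      """Queue of firewall rules applied as a unit: if any rule fails, the
      rollback commands for the rules already applied are run in reverse."""

      def __init__(self):
          self.pending = []    # list of (apply_argv, rollback_argv or None)
          self.applied = []

      def add_rule(self, argv, rollback=None):
          self.pending.append((argv, rollback))

      def apply(self):
          try:
              for argv, rollback in self.pending:
                  subprocess.check_call(argv)
                  self.applied.append(rollback)
          except subprocess.CalledProcessError:
              self.rollback()
              raise

      def rollback(self):
          for rollback in reversed(self.applied):
              if rollback:
                  subprocess.call(rollback)
          self.applied = []

  # Usage: queue rules together with the commands that undo them
  tx = FirewallTransaction()
  tx.add_rule(["iptables", "-A", "FORWARD", "-i", "virbr0", "-j", "ACCEPT"],
              rollback=["iptables", "-D", "FORWARD", "-i", "virbr0", "-j", "ACCEPT"])
  tx.apply()

The real API additionally lets callers register hooks that inspect the current rule state and adjust the later rules, which is what replaced the dynamically generated shell scripts in the network filtering code.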

It took a while to figure out the right design for this libvirt internal firewall API, but once that was done, the task of converting all the libvirt code still lay ahead. The virtual network and MAC filtering code was fairly easy to convert over the space of a day or so. The same could not be said of the network filtering code. The task of changing it from dynamically writing shell scripts to using the new firewall API lasted days, which turned into weeks, eventually turning into the best part of a month’s work. What saved me from insanity was that the original creator of this firewall code had the good sense to create a comprehensive set of integration tests for it. This made it much easier to identify places where I’d screwed up and dramatically reduced the number of regressions I am likely to have created. Hopefully there are no regressions at all.

At the end of this month’s work, as well as the original integration tests, libvirt now also has a large number of new unit tests which validate the operation of our firewall code. These will demonstrate that future changes in this area do not alter the commands used to build the firewall, complementing the integration tests which validate the final firewall ruleset. Now back to the performance. Creating 20 virtual networks originally took 29 seconds with firewall-cmd and now takes only 3 seconds with the firewalld DBus APIs, so there is no measurable difference from direct invocation of iptables there. Running the network filter integration test suite, which took 479 seconds with firewall-cmd, now only takes 37 seconds with the firewalld DBus APIs. This is slightly slower than direct ebtables invocation, which took 29 seconds, but I think that delta is now at an acceptably low level.
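
To give a flavour of what those new unit tests assert, here is a hypothetical Python sketch (the real tests live in libvirt’s C test suite, and the build_network_rules() helper is invented): the command runner is swapped for one that simply records argv lists, so the generated commands can be compared against the expected sequence without touching the host firewall.

  import unittest

  def build_network_rules(run, bridge):
      # Hypothetical stand-in for the code under test
      run(["iptables", "-A", "FORWARD", "-i", bridge, "-j", "ACCEPT"])
      run(["iptables", "-A", "FORWARD", "-o", bridge, "-j", "ACCEPT"])

  class FirewallCommandTest(unittest.TestCase):
      def test_expected_commands(self):
          recorded = []
          build_network_rules(recorded.append, "virbr0")
          self.assertEqual(recorded, [
              ["iptables", "-A", "FORWARD", "-i", "virbr0", "-j", "ACCEPT"],
              ["iptables", "-A", "FORWARD", "-o", "virbr0", "-j", "ACCEPT"],
          ])

  if __name__ == "__main__":
      unittest.main()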

This code is all merged into current libvirt Git and will be included in the 1.2.4 release due out next week, heading to Fedora rawhide immediately thereafter. So come Fedora 21, anything firewall related in libvirt should be noticeably faster. If any application or script you work on happens to make use of firewall-cmd, then I’d strongly recommend changing it to use the DBus API directly. I don’t know why firewall-cmd is so terribly slow, but as it stands today, I can’t recommend that anyone use it if they have a non-trivial number of rules to define.

Improving libvirt logging performance

Posted: May 2nd, 2014 | Filed under: Coding Tips, Fedora, libvirt, OpenStack

Over the past month I’ve been working on a few different areas of libvirt to address some performance bottlenecks the code has. This article is about improvements to the logging subsystem in libvirt, primarily to reduce the CPU overhead that the libvirtd daemon imposes on virtualization hosts. This might not seem like a big deal, but when you have many hundreds or thousands of containers packed into a machine, libvirtd CPU usage could start to become quite noticeable at times.

Over the years the libvirt codebase has gained more and more debugging statements, which have been worth their weight in gold when it comes to troubleshooting problems in deployments. Through the configuration files or environment variables it is possible to tailor logging to output to stderr, plain files, syslog or, most recently, the systemd journal, and to set filters restricting which files will emit log messages. One less well known feature, though, is that libvirt has an internal logging ring buffer which records every single log message, no matter what logging output / filter settings are configured. It was possible to send libvirt a signal to dump this ring buffer to stderr, and it would also be dumped automatically upon crash.

For some reason, we had never previously thought to measure the impact of our logging framework on the performance of libvirt. Investigating an unrelated performance problem one day, I noticed that oprofile was showing unexpectedly high counts against asprintf/memcpy/malloc functions in an area of code that I wasn’t expecting to be utilizing those functions. I recently had time to investigate this in more detail and discovered that the cause was the internal logging ring buffer. We should have realized long ago that the task of asprintf-formatting the log messages and copying them into the ring buffer would have a non-negligible CPU overhead as the number of log statements in our codebase increased.

To enable the overhead to be more precisely quantified, I wrote a short benchmark program for libvirt which simulated the behaviour of the libvirtd main thread running the event loop with 500 virtual machines / containers all doing simultaneous virtual console I/O. With the existing code, running ~50,000 iterations of this simulated event loop took 1 minute 40 seconds. After a quick hack to turn our logging code into a complete no-op, the demo program completed in only 3 seconds. IOW, the logging code was responsible for 97% of the CPU time consumed by the benchmark. This is somewhat embarrassing, to say the least :-)
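
The benchmark itself is C code inside libvirt, but the effect is easy to reproduce with a toy Python sketch along these lines (the ring buffer size, message format and iteration count here are arbitrary):

  import time
  from collections import deque

  RING = deque(maxlen=1000)    # stand-in for the always-on ring buffer

  def log_always_format(enabled, fmt, *args):
      msg = fmt % args         # formatting cost paid on every call
      RING.append(msg)
      if enabled:
          print(msg)

  def log_check_first(enabled, fmt, *args):
      if not enabled:          # cheap early return in the common case
          return
      print(fmt % args)

  def run(logfn, iterations=500000):
      start = time.time()
      for i in range(iterations):
          logfn(False, "event loop iteration %d handled fd %d", i, i % 64)
      return time.time() - start

  print("always format: %.2fs" % run(log_always_format))
  print("check first:   %.2fs" % run(log_check_first))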

Obviously we cannot leave the logging code as a complete no-op permanently, since it is incredibly valuable to have it. What we need is for it to be as close to a no-op as possible, unless a specific logging output / filter is actually enabled. With this goal, unfortunately, it became clear that the idea of an always-on internal ring buffer for collecting all logs just had to go. There is simply no way to reduce the overhead of this ring buffer, since it inherently has to asprintf the log messages, which is where the majority of the performance hit comes from. At first I just tweaked the way the ring buffer operated so that it would only collect messages that were already generated for an explicitly configured log output. This reduced the runtime of the benchmark program to 4.6 seconds, which is pretty decent. After doing this though, we came to the conclusion that the ring buffer as a concept was now pretty much worthless, since the messages it contained were already being output to another log target. So in the end I just deleted the ring buffer code entirely.

The remaining delta of 1.6 seconds over the baseline 3 second execution time was potentially a result of two things. First, for each log message generated, there would be multiple string comparisons against the filename of the code emitting it, in order to determine if log messages for that file were requested for output to a log target. Second, the logging code is inside a mutex protected block to ensure serialization of outputs. The latter did not affect the benchmark program since it was single threaded, and uncontended mutex locks on Linux have negligible overhead. It would, however, affect real world usage of libvirtd since the daemon is heavily threaded. To address both of these potential performance problems, we introduced the idea of a log “category” object which has to be statically declared in each source file that intends to emit log messages. This category object has internal state which determines whether log messages from that source file will be sent on to a log output or just dropped. Without going into too much detail, the key benefit this brought is that we can determine if any single log message needs to be output with only 2 lockless integer comparisons in the common case. IOW this removed the repeated string comparisons (they only need to be done once now) and removed the mutex locking. With this improvement the logging benchmark takes only 3 seconds. IOW there is no appreciable overhead from logging, except for those files where it is explicitly requested.
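
In rough Python terms the category idea looks like the sketch below. Libvirt’s real implementation is a per-source-file C struct checked with lockless integer comparisons, so this only illustrates the shape of the fast path, and the filter matching helper is invented:

  CONFIGURED_FILTERS = {"virfirewall"}    # hypothetical filter configuration

  def matches_configured_filters(name):
      return name in CONFIGURED_FILTERS

  class LogCategory:
      def __init__(self, name):
          self.name = name          # e.g. the source file emitting messages
          self.decision = None      # cached verdict once filters are consulted

      def log(self, fmt, *args):
          if self.decision is None:
              # Slow path, taken once per category: match against the filters
              self.decision = matches_configured_filters(self.name)
          if not self.decision:     # common case: one cheap comparison
              return
          print(fmt % args)         # only now pay the formatting/output cost

  # One category per source file, declared once at module level
  log = LogCategory("virnetwork")
  log.log("started network %s", "default")    # dropped without ever being formatted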

This performance improvement was released last month in the 1.2.3 release of libvirt, and is thus already in the Fedora rawhide repositories.

EDIT: BTW, if you have an older libvirt you can edit /etc/libvirt/libvirtd.conf, set “log_buffer_size=0” and then restart libvirtd, which will disable the internal ring buffer and thus avoid the worst of the CPU wastage.