Security Enhanced Test-AutoBuild

Posted: July 2nd, 2007 | Filed under: Uncategorized | No Comments »

In the latter half of last year I was mulling over the idea of writing SELinux policy for Test-AutoBuild. I played around a little, but never got the time to make a serious attempt at it. Before I go on, a brief recap of the motivation…

Test-AutoBuild is a framework for providing continuous, unattended, automated software builds. Each run of the build engine checks the latest source out from CVS, calculates a build order based on module dependencies, builds each module, and then publishes the results. The build engine typically runs under a dedicated system user account – builder – to avoid any risk of the module build process compromising a host (either accidentally, or deliberately). This works reasonably well if you are only protecting against accidental damage from a module build – e.g. building apps maintained inside your organization. If building code from source repositories out on the internet, though, there is a real risk of deliberately hostile module build processes. A module may be trojaned so that its build process attempts to scan your internal network, or it may trash the state files of the build engine itself – both the engine & the module being built run under the same user account. There is also the risk that the remote source control server has been trojaned to try and exploit flaws in the client.

And so enter SELinux… The build engine is highly modular in structure, with different tasks in the build workflow pretty well isolated. So the theory was that it ought to be possible to write SELinux policy to guarantee separation of the build engine from the SCM tools doing source code checkout, from the module build processes, and from the other commands being run. As an example, within a build root there are a handful of core directories:

 +- source-root   - dir in which module source is checked out
 +- package-root  - dir in which RPMs/Debs & other packages are generated
 +- install-root  - virtual root dir for installing files in 'make install'
 +- build-archive - archive of previous successful module builds
 +- log-root      - dir for creating log files of build process
 +- public_html   - dir in which results are published

All these dirs are owned by the builder user account. The build engine itself provides all the administrative tasks for the build workflow, so generally requires full access to all of these directories. The SCM tools, however, merely need to be able to check out files into the source-root and create logs in the log-root. The module build process needs to be able to read/write in the source-root, package-root and install-root, as well as create logs in the log-root. So, given suitable SELinux policy, it ought to be possible to lock down the access of the SCM tools and build process quite significantly.
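That separation can be sketched in the SELinux policy language. The fragment below is purely illustrative – the abscm_t and ablog_t type names, and the exact permission sets, are my assumptions rather than the real Test-AutoBuild policy, and a complete policy needs many more declarations and rules:

```
# Illustrative sketch only -- type names and permission sets are
# assumptions, not the actual Test-AutoBuild policy.
type absource_t;        # files under the source-root
type ablog_t;           # files under the log-root
type abscm_t;           # domain for the SCM checkout tools

# SCM tools may populate the source tree...
allow abscm_t absource_t:dir  { search read write add_name remove_name create };
allow abscm_t absource_t:file { create read write getattr };

# ...and write their logs, but touch nothing else in the build root
allow abscm_t ablog_t:file { create append getattr };
```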

Now aside from writing the policy there are a couple of other small issues. The primary one is that the build engine has to run in a confined SELinux context, and has to be able to run SCM tools and build processes in a different context. For the former, I choose to create a ‘auto-build-secure’ command to augment the ‘auto-build’ command. This allows user to easily run the build process in SELinux enforced, or traditional unconfined modes. In the latter cases, most SELinux policy has automated process context transitions based on the binary file labels. This isn’t soo useful for autobuild though, because the script we’re running is being checked out direct from a SCM repo & thus not labelled. The solution for this is easily though – after fork()ing, but before exec()ing the SCM tools / build script we simply write the desired target context into /proc/self/attr/exec.

So with a couple of tiny modifications to the build engine, and many hours of writing suitable policy for Test-AutoBuild, its now possible to run the build engine under a strictly confined policy. There is one horrible troublespot though. Every application has its own build process & set of operations is wishes to perform. Writing a policy which confines the build process as much as possible, while still keeping it secure is very hard indeed. In fact it is effectively unsolveable in the general case.

So what to do? SELinux booleans provide a way to toggle various capabilities on/off, but they apply system-wide; if building multiple applications it may be desirable to run some under a more confined policy than others. The solution, I think, is to define a set of perhaps 4 or 5 different execution contexts with differing levels of privilege. As an example, some contexts may allow outgoing network access, while others deny all network activity. The build admin can then use the most restrictive policy by default, and a less restrictive policy for applications which are more trusted.

This weekend was just the start of experimentation with SELinux policy in regards to Test-AutoBuild, but it was far, far more successful than I ever expected it to be. The level of control afforded by SELinux is awesome, and with the flexibility of modifying the application itself too, the possibilities for fine grained access control are enormous. One idea I’d like to investigate is whether it is possible to define new SELinux execution contexts on-the-fly. E.g., instead of all application sources being checked out under a single ‘absource_t’ file context, it would be desirable to create a new source file context per application. I’m not sure whether SELinux supports this idea, but it is interesting to push the boundaries here nonetheless…

A weekend of IPv6 bug chasing

Posted: March 25th, 2007 | Filed under: Uncategorized | 3 Comments »

To enable me to actually test some ideas for IPv6 in libvirt’s virtual networking APIs, I recently purchased a LinkSys WRT54GL wireless router which was promptly flashed to run OpenWRT. I won’t go into all the details of the setup in this post; it suffices to say that thanks to the folks at SIXXS my home network has a globally routable IPv6 /48 subnet (no stinking NAT required!). That gives me 80 bits of addressing to use on my LAN – enough to run a good sized handful of virtual machines :-) With a little scripting completed on the LinkSys router, full IPv6 connectivity is provided to any machine on the LAN which wants it. Which is more or less where the “fun” begins.

Initially I was just running radvd to advertise a /64 prefix for the LAN & letting devices do autoconfiguration (they basically combine the /64 prefix with their own MAC address to generate a globally unique 128-bit IPv6 address). As part of my exploration for IPv6 support in libvirt though, I wanted to give DHCPv6 a try too.
It was fairly straightforward – much like DHCP on IPv4, you tell the DHCPv6 server what address range it can use (make sure that matches the range that radvd is advertising) and then configure the networking scripts on the client to do DHCP instead of autoconfiguration (basically add DHCPV6C=yes to each interface config). In debugging this though, I came across a fun bug in the DHCPv6 client & server in Fedora – they consistently pass in sizeof(sockaddr_in6.sin6_addr) as the salen parameter to getnameinfo() when it should be sizeof(sockaddr_in6). So all getnameinfo() calls were failing – fortunately this didn’t appear to have any operational ill-effects on the DHCPv6 client/server – it just means that your logs don’t include details of the addresses being handed out / received. So that was IPv6 bug #1.

With globally routable IPv6 addresses now being configured on my laptop, it was time to try browsing some IPv6 enabled websites. If you have a globally routable IPv6 address configured on your interface, then there’s no magic config needed in the web browser – the getaddrinfo() calls will automatically return an IPv6 address for a site if it is available. BTW, if you’re still using the legacy gethostbyname() calls when writing network code you should really read Uli’s doc on modern address resolution APIs. Suffice to say, if you use getaddrinfo() and getnameinfo() correctly in your apps, IPv6 will pretty much ‘just work’. Well, while the hostname lookups & web browsers were working correctly, all outgoing connections would just hang. After much debugging I discovered that while the SYN packet was going out, the default ip6tables firewall rules were not letting the reply back through, so the connection never got established. In the IPv4 world there is a rule using conntrack to match on ‘state = RELATED,ESTABLISHED’ but there was no equivalent added in the IPv6 firewall rules. That gives us IPv6 bug #2.
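For reference, the missing rule is just the IPv6 twin of the usual IPv4 conntrack rule – something along these lines, though the chain name and rule ordering depend on your local firewall setup:

```shell
# Let replies to outbound connections back in, mirroring the IPv4
# conntrack rule; adjust the chain name to match your firewall scripts.
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```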

With that problem temporarily hacked/worked around by allowing all port 80 traffic through the firewall, web browsing was working nicely. For a while at least. I noticed that inexplicably, every now & then, my network device would lose all its IPv6 addresses – even the link local one! This was very odd indeed & I couldn’t figure out what on earth would be causing it. I was about to resort to using SystemTAP when I suddenly realized the loss of addresses coincided with disconnecting from the office VPN. This gave two obvious targets for investigation – NetworkManager and/or VPNC. After yet more debugging it transpired that when a VPN connection is torn down, NetworkManager flushes all addresses & routes from the underlying physical device, but then only re-adds the IPv4 configuration. The fix for this was trivial – during the initial physical device configuration NetworkManager already has code to automatically add an IPv6 link-local address – that code just needed to be invoked from the VPN teardown script to re-add the link-local address after the device was flushed. Finally we have IPv6 bug #3. Only 3 minor, edge-case bugs is pretty good considering how few people actively use this stuff.

Overall it has been a very worthwhile learning exercise. Trying to get your head around IPv6 is non-trivial if you’re doing it merely based on reading HOWTOs & RFCs. As with many things, actually trying it out & making use of IPv6 for real is a far better way to learn what it is all about. A second tip is to get yourself a globally routable IPv6 address & subnet right from the start – site-local addresses are deprecated & there’s not nearly as much you can do if you can’t route to the internet as a whole – remember there’s no NAT in the IPv6 world. I would have been much less likely to have encountered the firewall / NetworkManager bugs if I had only been using site-local addresses, since I would not have been browsing public sites over IPv6. While there are almost certainly more IPv6 bugs lurking in various Fedora applications, on the whole Fedora Core 6 IPv6 support is working really rather well – the biggest problem is lack of documentation & the small userbase – the more people who try it, the more quickly we’ll be able to shake out & resolve the bugs.

BTW, there’s nothing stopping anyone trying out IPv6 themselves. Even if your internet connection is a crappy Verizon DSL service with a single dynamic IPv4 address that changes every time your DSL bounces, the folks at SIXXS have a way to get you a tunnel into the IPv6 internet with a fully routable global IPv6 address & subnet.

How to turn a $900 paperweight back into a usable Intel Mac Mini

Posted: February 4th, 2007 | Filed under: Uncategorized | 1 Comment »

After creating the aforementioned $900 paperweight, I then spent another few hours turning it back into a useful computer again. If anyone should find themselves wanting to do a ‘from scratch’ dual boot install of Mac OS-X and Fedora, it is actually quite simple….

The first step was to get Mac OS-X back onto the system. So I inserted the original install disk, and held down ‘c’ while turning on the mini. A short while later the Mac OS-X installer was up & running. Before beginning the install process, I launched the ‘Disk Utility’ GUI tool. Using its partitioning options I deleted all existing partitions, then told it to create 1 partition of 36 GB marked for a HFS+ filesystem, leaving the remaining 30-something GB as unallocated free space. Quitting out of Disk Utility, I now started the standard Mac OS-X install wizard, letting it install to this 36 GB partition. This all went very smoothly and 30 minutes later I had a fully functional Mac OS-X installed & booting correctly.

The next step was to install Fedora, but before doing this I decided to set up the rEFIt bootloader. The automated setup process for rEFIt installs it onto the main Mac OS-X HFS+ filesystem. I’m not sure whether I’ll keep Mac OS-X installed long term, and don’t fancy losing the bootloader if I re-install that partition. There is a perfect solution to this – install rEFIt onto the hidden 200 MB EFI system partition. The rEFIt install instructions (unhelpfully) decline to tell you how to achieve this, but thanks to Google I came across a guy who has documented the process. I went through that (very simple) process, rebooted and bingo – rEFIt was there, showing a single boot option – for Mac OS-X. So now it was time to try installing Fedora again.

Inserting the Fedora CD and rebooting while holding down ‘c’ starts up the Fedora installer. Since I had taken care to leave a chunk of unpartitioned free space earlier, I could simply let anaconda do its default partitioning – which meant no need to play with lvm tools manually. I had wanted to set up a separate /home, but since anaconda was in text mode there was no way to add/remove LVM volumes. Still, it was possible to shrink down the root filesystem size, leaving the majority of the LVM volume group unallocated for later use. Once past the partitioning step, the remainder of the installer was straightforward – just remember to install grub in the first sector of /boot, and not in the MBR. 30 minutes later I rebooted, and to my delight rEFIt showed options for booting both Mac OS-X and Fedora. Success!

Once Fedora was up and running there was only one remaining oddity to deal with. The Mac Mini has an i945 Intel graphics card in it, which isn’t supported by the i810 driver in the current release of Xorg. Fortunately Fedora ships a pre-release of the ‘intel’ driver which does support the i945 and does automatic mode setting. So it ought to ‘just work’, but it didn’t. I should mention at this point that the Mac Mini is not connected to a regular monitor; it’s actually going to my Samsung LCD HDTV, which has a native resolution of 1360×768. After poking around in the Xorg logs, I discovered that the TV wasn’t returning any EDID info, so the graphics driver didn’t have any info with which to generate suitable modelines. The manual for the TV says it requires a standard VESA 1360×768 resolution. A little googling later I found a suitable modeline, added it to xorg.conf, and X finally started up just fine at the native resolution. For anyone else out there with a Samsung LN-S3241D widescreen HDTV, the xorg.conf sections that did the trick look like this:

 Section "Monitor"
          Identifier "TV0"
          HorizSync  30-60
          VertRefresh  60-75
          ModeLine "1360x768@60" 85.800 1360 1424 1536 1792 768 771 777 795 +HSync +VSync
 EndSection

 Section "Screen"
         Identifier "Screen0"
         Device     "Videocard0"
         Monitor "TV0"
         DefaultDepth     24

         SubSection "Display"
                 Viewport   0 0
                 Modes "1360x768@60"
                 Depth     24
         EndSubSection
 EndSection
So compared to the pain involved in breaking the Mac Mini, bringing it back to life was a quite uneventful affair. And if I had done the install while connected to a regular monitor instead of a TV, it would have been even simpler. Anyway, I’m very happy to have both Mac OS-X & Fedora Core 6 running – the latter even has the desktop effects bling, and Xen with full-virt working.

How to turn a Intel Mac Mini into a $900 paperweight

Posted: February 4th, 2007 | Filed under: Uncategorized | 1 Comment »

This weekend I decided it was way overdue to switch my Intel Mac Mini over to using Fedora. I’d read the Fedora on Mactel notes and although they’re as clear as mud, it didn’t sound like it should be much trouble.

First off I applied all the available Mac OS-X updates, installed Bootcamp and resized the HFS+ partition to allow 36 GB of space for Linux. Bootcamp unfortunately isn’t too smart and assumed I wanted to install Windows. No matter; once I’d resized the partition I could quit out of the Bootcamp wizard and take care of the details myself. So I stuck in the Fedora Core 6 install CD, and used the GUI to change the startup disk to be the CD – this was the first stupid thing I did – I should have simply held down ‘c’ at the next boot instead of changing the default startup disk.

Anyway, it booted into the installer, but Anaconda failed to start an X server, so it took me into the text mode installation process. I figured this was because of the funky i810 graphics card, so didn’t really worry. Until I came to the partitioning stage – I had a single partition available, /dev/sda3, but BootCamp had marked this as a Windows partition – so neither the ‘Remove all Linux partitions’ nor ‘Use unallocated space’ options would do the trick. And because this was text mode, I couldn’t manually partition because none of the LVM UI is available. No problem, I thought, I’ll just switch into the shell and use ‘fdisk’ and ‘lvm’ to set up partitioning in the old school way. I know now this was a huge mistake :-) It’s not explicitly mentioned in the Fedora on Mactel notes, but Mac OS-X uses the GPT partition table format, not the traditional DOS MBR style. It does provide an emulated MBR so legacy OSes can see the partitions, but this should be considered read-only. Unfortunately I wasn’t paying attention, so happily ran fdisk, deleting the Windows partition created by bootcamp, and creating several new ones to use for /boot and the LVM physical volume. There was also a weird 200 MB vfat partition at the start of the disk which I had never asked for, so I decided to repurpose that for /boot.

I then tried to setup LVM, but it kept complaining the device /dev/sda4 didn’t exist – but fdisk clearly said it did exist. Perhaps the kernel hadn’t re-read the partition table, so I played with fdisk a bit more to no avail, and then rebooted and re-entered the installer again. The kernel still only saw the original 3 partitions, but oddly fdisk did see the extra 4th partition.

I googled around and found that the Mac OS-X install disk had a GUI partitioning tool, so decided I’d use that to delete the non-HFS partitions, just leaving free unallocated space, which would let Anaconda do automagic partitioning. This seemed to work – I did manage to get through the complete Fedora install – but after rebooting I discovered something horribly wrong. The BIOS was still configured to boot off the CD. Oops. No matter, I held down the magic key to invoke Bootcamp, but Bootcamp not only wouldn’t see the Fedora install I had just done, it also wouldn’t see my Mac OS-X install. All it offered was the choice to boot Windows – which I had never installed! Of course that failed.

At this point I burnt a bootable CD with rEFIt on it. Success – I could now see my Fedora install and boot it, but still no sign of Mac OS-X :-( Also, I didn’t really want to have to leave a rEFIt CD in the drive for every boot. This is when I discovered that the 200 MB vfat partition I repurposed for /boot was in fact used by the EFI BIOS, and things like rEFIt. Doh. I could reformat it as vfat manually, but installing rEFIt into it requires some weird operation called ‘blessing’ which was only documented using Mac OS-X tools.

I figured my best option now was to boot the Mac OS-X install CD and play with the partitioning tool again – maybe it would be able to repair my Mac OS-X partition such that I could boot it again. No such luck. All I managed to do was break the Fedora install I had just completed. The transformation from functioning Mac Mini with Mac OS-X into $900 paperweight was now complete. I had two OSes installed, both broken to the extent that neither BootCamp nor rEFIt would boot them, and BootCamp offering the option of booting a Windows install which didn’t exist :-)

Your open source civic duty (aka NetworkManager bug fixing)

Posted: November 19th, 2006 | Filed under: Uncategorized | 1 Comment »

I’ve been using NetworkManager and the VPNC plugin ever since Fedora Core 4. One thing that always irritated me about the VPN support though was that the applet’s list of VPN connections was displayed in ‘bogo sort’ order – aka not ordered in the slightest. I upgraded to Fedora Core 6 recently and found this still hadn’t changed and was about to submit a bug report when I wondered – “how hard can a fix really be?”. So I decided that rather than just enter another BZ ticket and wait for someone to have time to look at it, I should do my open source civic duty & write the patch myself. It turned out to be really quite trivial – all that was needed was changing a call to g_slist_append with a call to g_slist_insert_sorted. From itch, to patch in 30 minutes!

Update: I noticed the VPN configuration tool had the same sorting problem as the drop down menu of connections. So I knocked up an additional patch which registers a sorting function against the TreeModel for the VPN connection list. Finally, in testing this I discovered that if you rename a VPN connection, then the list entry with the old name is never removed. Which of course calls for a 3rd patch and another BZ entry. A busy evening!