How I learned to stop worrying and love IPv6

Posted: August 16th, 2007 | Filed under: Uncategorized | 1 Comment »

Any machine running Fedora Core 6 or later has IPv6 networking support enabled out of the box. Most people will never notice or care, since they’re only ever connected to IPv4 networks. A few months back, though, I decided it was time to give IPv6 a try for real….

I’ve got two servers on the Internet, each running in a UserModeLinux guest – one running Debian, the other Fedora Core 6 – plus a home network provided by a LinkSys router running OpenWRT White Russian. My goal was to provide full IPv6 connectivity to all of them.

Home Router

I tackled the problem of the home router first. The OpenWRT wiki has an IPv6 Howto describing various setups. I decided to get a tunnel from the fine folks at SixXS. My Verizon DSL only provides a dynamic IPv4 address, and regular IPv6-over-IPv4 tunnels require the server end to know the IPv4 address of your local endpoint. Obviously this is a bit of a problem with a dynamic IPv4 endpoint. SixXS though have a funky way around this in the form of their AICCU daemon, which sets up a heartbeat from your local endpoint to their server. Thus, should your IPv4 address ever change, it can (securely, over SSL) inform the server of your changed configuration. So I registered with SixXS, requested an IPv6 tunnel, and a short while later they approved me. The service is open to anyone who wants IPv6 connectivity – the approval process is mainly to help avoid abuse & frivolous requests. I was fortunate in that OCCAID are providing an IPv6 tunnel server just a few miles away in Boston – there are other tunnel servers dotted around, but they’re mostly concentrated in America or Europe at this time.

With my IPv6 address allocated and the OpenWRT guide handy, my router was up & running with IPv6 connectivity – I could ping sites over IPv6, e.g.

# ping6 www.kame.net
PING www.kame.net (2001:200:0:8002:203:47ff:fea5:3085): 56 data bytes
64 bytes from 2001:200:0:8002:203:47ff:fea5:3085: icmp6_seq=0 ttl=50 time=513.2 ms
64 bytes from 2001:200:0:8002:203:47ff:fea5:3085: icmp6_seq=1 ttl=50 time=512.5 ms
64 bytes from 2001:200:0:8002:203:47ff:fea5:3085: icmp6_seq=2 ttl=50 time=519.5 ms

OpenWRT only ships with an IPv4 firewall as standard, so I quickly added ip6tables rules to deny all incoming traffic to the router. Port-scanning the entire IPv6 address space may be impractical, but only a tiny portion of it is actually active, and nearly all tunnels end up using addresses ending in :1 and :2, so a firewall is a must no matter what.
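
For reference, the rules I mean look something like this – a minimal sketch only, and note that kernels of this era lack IPv6 connection tracking, so the ‘allow replies’ rule below is a stateless TCP-flags check rather than a conntrack match:

# drop everything inbound by default
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
# but permit loopback, and ICMPv6 (needed for neighbour discovery)
ip6tables -A INPUT -i lo -j ACCEPT
ip6tables -A INPUT -p icmpv6 -j ACCEPT
# allow replies to TCP connections we initiated ourselves
ip6tables -A INPUT -p tcp ! --syn -j ACCEPT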

Home Network

To ensure you are serious about making use of their services, SixXS operate a credit system for admin requests. You start off with enough credits to request an IPv6 tunnel, but not enough to request an IPv6 subnet. To gain credits you have to prove you can keep the tunnel operational 24 hours a day for 7 days in a row – after that you start gaining credits for each day’s uptime. So I had a slight pause before I could move on to setting up the home network.

Fortunately the LinkSys router is very reliable, and so after a week I had enough uptime and thus enough credits to request an IPv6 subnet. In the brave new world of 128 bit addressing there’s no shortage of addresses, so to simplify routing, whenever someone needs a block of addresses they’ll typically be allocated an entire /48. That’s right, a /48 – you’ll be given more global IPv6 addresses for your personal use than there are total IPv4 addresses in existence (a /48 leaves 80 bits for your own use, i.e. 2^80 addresses, versus the 2^32 of the entire IPv4 internet). Another interesting difference is that IPv6 subnets are not technically ‘sold’ – they are merely ‘loaned’ to end users. The upshot is that there’s no question of having to pay your stinkin’ DSL/Cable ISP $$$ per month for one or two extra addresses.

Having got the subnet allocated, the first step was to configure an IP address on the LAN interface of the LinkSys box. With OpenWRT this just required editing /etc/init.d/S40network to add “ip -6 addr add 2001:XXXX:XXXX:XXXX::1/64 dev br0” (where 2001:XXXX:XXXX:XXXX is my subnet’s prefix). When the various IPv6 protocols were specced out, a big deal was made of the fact that there would be no NAT anywhere, and that client configuration would be completely automatic & able to dynamically reconfigure itself on the fly. The key to this is what they call a ‘router advertisement daemon’. On Linux this is the ‘radvd’ program. If you only have a single outgoing net connection and a single local network, then configuring it is incredibly easy. Simply edit the /etc/radvd.conf file, fill in the IPv6 address prefix for your subnet as allocated by SixXS, and start the daemon.
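
As a sketch, assuming the LAN bridge is br0 as above, /etc/radvd.conf ends up looking something like:

interface br0
{
        AdvSendAdvert on;
        prefix 2001:XXXX:XXXX:XXXX::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
        };
};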

Remember I just mentioned that network configuration would be automatic – well, look at any Fedora box plugged into your local network at this point. You’ll see they all just got globally routable IPv6 addresses assigned to their active network interfaces. Pop up a web browser and visit Kame and you’ll see an animated dancing turtle logo! IPv4 users only see a static image…
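
You can verify this from a shell too – alongside the fe80:: link-local address you should now see an address built from your subnet’s prefix (interface name assumed to be eth0 here):

ip -6 addr show dev eth0
# look for an 'inet6 2001:XXXX:XXXX:XXXX:... scope global' line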

Bytemark Server

One of my web servers is running Debian in a User Mode Linux instance at Bytemark in the UK. The good news is that Bytemark have already taken care of getting IPv6 connectivity into their network, so there’s no need for a tunnel on any server hosted by them. Simply ask their helpdesk to allocate you an IPv6 address from their pool, and add it to your primary ethernet interface. Again, don’t forget to set up ip6tables firewall rules before doing this.
For Debian, configuring eth0 was a mere matter of editing /etc/network/interfaces and adding

iface eth0 inet6 static
        address 2001:XXXX:XXXX:XXXX::2
        netmask 64
        up ip route add 2000::/3 via 2001:XXXX:XXXX:XXXX::1

Again, ‘2001:XXXX:XXXX:XXXX’ is the prefix they allocated to your server.
Since SSH listens for IPv6 connections by default, with the interface address configured I could now SSH from my laptop at home to my server over IPv6. Type ‘who’ and you’ll see a big long IPv6 address against your username if it’s working correctly.
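
Something like this, as a quick smoke test (the username and address are purely illustrative):

# from the laptop at home
ssh user@2001:XXXX:XXXX:XXXX::2
# then, on the server - the remote host column should show an
# IPv6 address against your login
who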

Linode Server

My other web server is hosted by Linode. Unfortunately they don’t provide direct IPv6 connectivity, so I had to use a tunnel. Since this machine does have a permanent static IPv4 address, I could use a regular IPv6-over-IPv4 tunnel rather than the dynamic heartbeat one I used at home with SixXS. For the sake of redundancy I decided to get my tunnel from a different provider, this time choosing Hurricane Electric. When registering with them you provide a little contact info and the IPv4 address of your server. A short while later they’ll typically approve the request & activate their end of the tunnel. It is then a matter of configuring your end. This machine was running Fedora Core 6, so creating a tunnel required adding a file /etc/sysconfig/network-scripts/ifcfg-sit1 containing something like

DEVICE=sit1
BOOTPROTO=none
ONBOOT=yes
IPV6INIT=yes
IPV6TUNNELIPV4=YY.YY.YY.YY
IPV6ADDR=2001:XXXX:XXXX:XXXX::2/64

Where YY.YY.YY.YY is the IPv4 address of Hurricane’s tunnel server, and 2001:XXXX:XXXX:XXXX is the IPv6 prefix they allocated for my server. A quick ifup later and this server too had IPv6 connectivity.
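
That is (assuming the config file above):

ifup sit1
# verify the tunnel actually routes traffic
ping6 -c 3 www.kame.net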

The summary

This was all spread out over a couple of weeks, but by the end of it I had both servers and my entire home network operational with fully routable, global IPv6 connectivity. I have three different types of IPv6 connectivity – direct (from Bytemark), a static tunnel (from Hurricane), and a dynamic tunnel (from SixXS – they offer static tunnels too). If you have a static IPv4 address there’s a fourth way to get connected, called 6to4, which maps your IPv4 address into the IPv6 space and uses anycast routing. With so many ways to get IPv6 connectivity it doesn’t matter if your crappy DSL/Cable ISP doesn’t offer IPv6 – simply take them out of the equation.

One of the great things about being rid of NAT is that I can directly SSH into any machine at home from outside my network – no need for VPNs, or special reverse proxy rules through the NAT gateway. IPv6 addresses are crazily long, so the one final thing I did was to set up DNS entries for all my boxes, including a DNS zone for my home network. Remember how all clients on the home network auto-configure themselves? Well, this is done based on the network prefix and their MAC address, so they’ll always auto-configure themselves to the same IPv6 address. That makes it easy to give them permanent DNS mappings, without needing to manually administer a DHCP server.
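
In a BIND zone file that’s just a handful of AAAA records – the hostnames and addresses here are purely illustrative:

; fragment of a hypothetical home.example.org zone
laptop    IN AAAA 2001:XXXX:XXXX:XXXX:211:22ff:fe33:4455
desktop   IN AAAA 2001:XXXX:XXXX:XXXX:2e0:81ff:fe2b:3a4c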

Security Enhanced Test-AutoBuild

Posted: July 2nd, 2007 | Filed under: Uncategorized | No Comments »

In the latter half of last year I was mulling over the idea of writing SELinux policy for Test-AutoBuild. I played around a little bit, but never got the time to really make a serious attempt at it. Before I go on, a brief recap of the motivation…

Test-AutoBuild is a framework for providing continuous, unattended, automated software builds. Each run of the build engine checks the latest source out from CVS, calculates a build order based on module dependencies, builds each module, and then publishes the results. The build engine typically runs under a dedicated system user account – builder – to avoid any risk of the module build process compromising the host (either accidentally, or deliberately). This works reasonably well if you are only protecting against accidental damage from a module build – e.g. building apps maintained inside your organization. If building code from source repositories out on the internet, though, there is a real risk of deliberately hostile module build processes. A module may be trojaned so that its build process attempts to scan your internal network, or it may trash the state files of the build engine itself – both the engine & the module being built run under the same user account. There is also the risk that the remote source control server has been trojaned to try and exploit flaws in the client.

And so enter SELinux… The build engine is highly modular in structure, with the different tasks in the build workflow pretty well isolated from each other. So the theory was that it ought to be possible to write SELinux policy to guarantee separation of the build engine from the SCM tools doing source code checkout, from the module build processes, and from the other commands being run. As an example, within a build root there are a handful of core directories

root
 |
 +- source-root   - dir in which module source is checked out
 +- package-root  - dir in which RPMs/Debs & other packages are generated
 +- install-root  - virtual root dir for installing files in 'make install'
 +- build-archive - archive of previous successful module builds
 +- log-root      - dir for creating log files of build process
 +- public_html   - dir in which results are published

All these dirs are owned by the builder user account. The build engine itself provides all the administrative tasks for the build workflow, so generally requires full access to all of these directories. The SCM tools, however, merely need to be able to check out files into the source-root and create logs in the log-root. The module build process needs to be able to read/write in the source-root, package-root and install-root, as well as creating logs in the log-root. So, given suitable SELinux policy, it ought to be possible to lock down the access of the SCM tools and build process quite significantly.
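
In type-enforcement terms the SCM confinement might look something like the fragment below. This is just a sketch – all the type names besides absource_t are hypothetical, and a real policy needs far more rules (process transitions, library access, etc) than this:

# give each directory tree its own file type
type absource_t;
type ablog_t;
# the domain the SCM tools run in
type abscm_t;
# SCM tools may populate the source-root...
allow abscm_t absource_t:dir { search read write add_name remove_name };
allow abscm_t absource_t:file { create read write unlink };
# ...and create/append logs in the log-root, but touch nothing else
allow abscm_t ablog_t:file { create append };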

Now, aside from writing the policy, there are a couple of other small issues. The primary one is that the build engine has to run in a confined SELinux context, and has to be able to run SCM tools and build processes in a different context. For the former, I chose to create an ‘auto-build-secure’ command to augment the ‘auto-build’ command. This allows users to easily run the build process in either SELinux enforcing or traditional unconfined modes. For the latter, most SELinux policy relies on automatic process context transitions based on the labels of binary files. This isn’t so useful for autobuild though, because the script we’re running has been checked out directly from an SCM repo & is thus not labelled. The solution for this is easy though – after fork()ing, but before exec()ing the SCM tools / build script, we simply write the desired target context into /proc/self/attr/exec.
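
The shell equivalent of what the engine does is roughly this (the security context shown is hypothetical – it’s whatever the policy defines for the SCM tools):

(
  # in the forked child: ask the kernel to switch to the target
  # context at the next exec()
  echo system_u:system_r:abscm_t:s0 > /proc/self/attr/exec
  exec cvs checkout mymodule
)

This is exactly what libselinux’s setexeccon() call does under the covers.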

So with a couple of tiny modifications to the build engine, and many hours of writing suitable policy for Test-AutoBuild, it’s now possible to run the build engine under a strictly confined policy. There is one horrible trouble spot though. Every application has its own build process & set of operations it wishes to perform. Writing a policy which confines the build process as much as possible, while still letting arbitrary builds actually work, is very hard indeed. In fact it is effectively unsolvable in the general case.

So what to do? SELinux booleans provide a way to toggle various capabilities on and off system-wide. If building multiple applications though, it may be desirable to run some under a more confined policy than others – and booleans are system-wide. The solution, I think, is to define a set of perhaps 4 or 5 different execution contexts with differing levels of privilege. As an example, some contexts may allow outgoing network access, while others deny all network activity. The build admin can then use the most restrictive policy by default, and a less restrictive policy for applications which are more trusted.
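
For comparison, a boolean toggle is a one-liner, but it applies to every build (indeed every process) on the host at once; per-level confinement would instead mean a small family of build domains. A sketch, with the domain names being entirely hypothetical:

# booleans are system-wide: this (real) boolean controls writable +
# executable memory mappings for everything at once
setsebool -P allow_execmem off

# per-level privileges would instead mean choosing among domains like:
#   abbuild_min_t  - no network access at all
#   abbuild_net_t  - outgoing network connections permitted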

This weekend was just the start of my experimentation with SELinux policy in regards to Test-AutoBuild, but it was far, far more successful than I ever expected it to be. The level of control afforded by SELinux is awesome, and with the flexibility of modifying the application itself too, the possibilities for fine grained access control are enormous. One idea I’d like to investigate is whether it is possible to define new SELinux execution contexts on-the-fly. E.g., instead of all application sources being checked out under a single ‘absource_t’ file context, it would be desirable to create a new source file context per-application. I’m not sure whether SELinux supports this idea, but it is interesting to push the boundaries here nonetheless…

A weekend of IPv6 bug chasing

Posted: March 25th, 2007 | Filed under: Uncategorized | 3 Comments »

To enable me to actually test some ideas for IPv6 in libvirt’s virtual networking APIs, I recently purchased a LinkSys WRT54GL wireless router, which was promptly flashed to run OpenWRT. I won’t go into all the details of the setup in this post; suffice it to say that thanks to the folks at SixXS my home network has a globally routable IPv6 /48 subnet (no stinking NAT required!). That gives me 80 bits of addressing to use on my LAN – enough to run a good sized handful of virtual machines :-) With a little scripting completed on the LinkSys router, full IPv6 connectivity is provided to any machine on the LAN which wants it. Which is more or less where the “fun” begins.

Initially I was just running radvd to advertise a /64 prefix for the LAN & letting devices do autoconfiguration (they basically combine the /64 prefix with their own MAC address to generate a globally unique 128-bit IPv6 address). As part of my exploration of IPv6 support for libvirt though, I wanted to give DHCPv6 a try too.
It was fairly straightforward – much like DHCP on IPv4, you tell the DHCPv6 server what address range it can use (making sure it matches the range that radvd is advertising), and then configure the networking scripts on the client to do DHCP instead of autoconfiguration (basically, add DHCPV6C=yes to each interface config). In debugging this though, I came across a fun bug in the DHCPv6 client & server in Fedora – they consistently pass sizeof(sockaddr_in6.sin6_addr) as the salen parameter to getnameinfo(), when it should be sizeof(sockaddr_in6). So all the getnameinfo() calls were failing. Fortunately this didn’t appear to have any operational ill-effects on the DHCPv6 client/server – it just means that your logs don’t include details of the addresses being handed out / received. So that was IPv6 bug #1.
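
For reference, the client side of that ends up looking roughly like this (a sketch for eth0):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
IPV6INIT=yes
# request an address via DHCPv6 rather than autoconfiguring
DHCPV6C=yes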

With globally routable IPv6 addresses now configured on my laptop, it was time to try browsing some IPv6 enabled websites. If you have a globally routable IPv6 address configured on your interface, then there’s no magic config needed in the web browser – the getaddrinfo() calls will automatically return an IPv6 address for a site if one is available. BTW, if you’re still using the legacy gethostbyname() calls when writing network code you should really read Uli’s doc on modern address resolution APIs. Suffice to say, if you use getaddrinfo() and getnameinfo() correctly in your apps, IPv6 will pretty much ‘just work’. Well, while the hostname lookups & web browser were working correctly, all outgoing connections would just hang. After much debugging I discovered that while the SYN packet was going out, the default ip6tables firewall rules were not letting the reply back through, so the connection never got established. In the IPv4 world there is a rule using conntrack to match on ‘state = RELATED,ESTABLISHED’, but there was no equivalent added in the IPv6 firewall rules. That gives us IPv6 bug #2.
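
The missing rule would be the IPv6 twin of the IPv4 one – something along these lines, assuming a kernel with IPv6 connection tracking:

# let replies to locally-initiated connections back in
ip6tables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT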

With that problem temporarily hacked/worked around by allowing all port 80 traffic through the firewall, web browsing was working nicely. For a while at least. I noticed that inexplicably, every now & then, my network device would lose all its IPv6 addresses – even the link-local one! This was very odd indeed & I couldn’t figure out what on earth was causing it. I was about to resort to using SystemTAP when I suddenly realized the loss of addresses coincided with disconnecting from the office VPN. This gave two obvious targets for investigation – NetworkManager and/or VPNC. After yet more debugging, it transpired that when a VPN connection is torn down, NetworkManager flushes all addresses & routes from the underlying physical device, but then only re-adds the IPv4 configuration. The fix for this was trivial – during the initial physical device configuration NetworkManager already has code to automatically add an IPv6 link-local address; that code just needed to be invoked from the VPN teardown path to re-add the link-local address after the device was flushed. Finally, we have IPv6 bug #3. Only 3 minor, edge-case bugs is pretty good considering how few people actively use this stuff.

Overall it has been a very worthwhile learning exercise. Trying to get your head around IPv6 is non-trivial if you’re doing it merely based on reading HOWTOs & RFCs. As with many things, actually trying it out & making use of IPv6 for real is a far better way to learn what it is all about. A second tip is to get yourself a globally routable IPv6 address & subnet right from the start – site-local addresses are deprecated, & there’s not nearly as much you can do if you can’t route to the internet as a whole – remember there’s no NAT in the IPv6 world. I would have been much less likely to have encountered the firewall / NetworkManager bugs if I had only been using site-local addresses, since I would not have been browsing public sites over IPv6. While there are almost certainly more IPv6 bugs lurking in various Fedora applications, on the whole Fedora Core 6 IPv6 support is working really rather well – the biggest problems are the lack of documentation & the small userbase. The more people who try it, the more quickly we’ll be able to shake out & resolve the bugs.

BTW, there’s nothing stopping anyone trying out IPv6 themselves. Even if your internet connection is a crappy Verizon DSL service with a single dynamic IPv4 address that changes every time your DSL bounces, the folks at SixXS have a way to get you a tunnel into the IPv6 world with a fully routable global IPv6 address & subnet.

How to turn a $900 paperweight back into a usable Intel Mac Mini

Posted: February 4th, 2007 | Filed under: Uncategorized | 1 Comment »

After creating the aforementioned $900 paperweight, I then spent another few hours turning it back into a useful computer. If anyone should find themselves wanting to do a ‘from scratch’ dual-boot install of Mac OS-X and Fedora, it is actually quite simple….

The first step was to get Mac OS-X back onto the system. So I inserted the original install disk, and held down ‘c’ while turning on the mini. A short while later the Mac OS-X installer was up & running. Before beginning the install process, I launched the ‘Disk Utility’ GUI tool. Using its partitioning options I deleted all existing partitions, then told it to create one 36 GB partition marked for an HFS+ filesystem, leaving the remaining 30-something GB as unallocated free space. Quitting out of Disk Utility, I then went through the standard Mac OS-X install wizard, letting it install to this 36 GB partition. This all went very smoothly, and 30 minutes later I had a fully functional Mac OS-X installed & booting correctly.

The next step was to install Fedora, but before doing this I decided to set up the rEFIt bootloader. The automated setup process for rEFIt installs it onto the main Mac OS-X HFS+ filesystem. I’m not sure whether I’ll keep Mac OS-X installed long term, and I don’t fancy losing the bootloader if I re-install that partition. There is a perfect solution to this – install rEFIt onto the hidden 200 MB EFI system partition. The rEFIt install instructions (unhelpfully) decline to tell you how to achieve this, but thanks to Google I came across a guy who has documented the process. I went through that (very simple) process, rebooted and bingo – rEFIt was there, showing a single boot option, for Mac OS-X. So now it was time to try installing Fedora again.

Inserting the Fedora CD and rebooting while holding down ‘c’ starts up the Fedora installer. Since I had taken care to leave a chunk of unpartitioned free space earlier, I could simply let anaconda do its default partitioning – which meant no need to play with the lvm tools manually. I had wanted to set up a separate /home, but since anaconda was in text mode there was no way to add/remove LVM volumes. Still, it was possible to shrink down the root filesystem size, leaving the majority of the LVM volume group unallocated for later use. Once past the partitioning step, the remainder of the install was straightforward – just remember to install grub in the first sector of /boot, and not in the MBR. 30 minutes later I rebooted, and to my delight rEFIt showed options for booting both Mac OS-X and Fedora. Success!

Once Fedora was up and running there was only one remaining oddity to deal with. The Mac Mini has an Intel i945 graphics chip, which isn’t supported by the i810 driver in the current release of Xorg. Fortunately Fedora ships a pre-release of the ‘intel’ driver, which does support the i945 and does automatic mode setting. So it ought to ‘just work’, but it didn’t. I should mention at this point that the Mac Mini is not connected to a regular monitor – it’s actually driving my Samsung LCD HDTV, which has a native resolution of 1360×768. After poking around in the Xorg logs, I discovered that the TV wasn’t returning any EDID info, so the graphics driver didn’t have any data with which to generate suitable modelines. The manual for the TV says it requires a standard VESA 1360×768 mode. A little googling later I found a suitable modeline, added it to xorg.conf, and X finally started up just fine at the native resolution. For anyone else out there with a Samsung LN-S3241D widescreen HDTV, the xorg.conf sections that did the trick look like this:

Section "Monitor"
         Identifier "TV0"
         HorizSync  30-60
         VertRefresh  60-75
         ModeLine "1360x768@60" 85.800 1360 1424 1536 1792 768 771 777 795 +HSync +VSync
EndSection

Section "Screen"
        Identifier "Screen0"
        Device     "Videocard0"
        Monitor "TV0"
        DefaultDepth     24

        SubSection "Display"
                Viewport   0 0
                Modes "1360x768@60"
                Depth     24
        EndSubSection
EndSection

So compared to the pain involved in breaking the Mac Mini, bringing it back to life was a quite uneventful affair. And if I had done the install while connected to a regular monitor instead of the TV, it would have been even simpler. Anyway, I’m very happy to have both Mac OS-X & Fedora Core 6 running – the latter even has the desktop effects bling, and Xen with full-virt working.

How to turn a Intel Mac Mini into a $900 paperweight

Posted: February 4th, 2007 | Filed under: Uncategorized | 1 Comment »

This weekend I decided it was way overdue to switch my Intel Mac Mini over to using Fedora. I’d read the Fedora on Mactel notes, and although they’re as clear as mud, it didn’t sound like it should be much trouble.

First off, I applied all the available Mac OS-X updates, installed Bootcamp and resized the HFS+ partition to allow 36 GB of space for Linux. Bootcamp unfortunately isn’t too smart, and so assumed I wanted to install Windows. No matter – once I’d resized the partition I could quit out of the Bootcamp wizard and take care of the details myself. So I stuck in the Fedora Core 6 install CD, and used the GUI to change the startup disk to be the CD – this was the first stupid thing I did. I should have simply held down ‘c’ at the next boot instead of changing the default startup disk.

Anyway, so it booted into the installer, but Anaconda failed to start an X server, so it took me into the text mode installation process. I figured this was because of the funky i810 graphics card, so didn’t really worry. Until I came to the partitioning stage – I had a single partition available, /dev/sda3, but BootCamp had marked it as a Windows partition – so neither the ‘Remove all Linux partitions’ nor ‘Use unallocated space’ options would do the trick. And because this was text mode, I couldn’t manually partition, because none of the LVM UI is available. No problem, I thought, I’ll just switch into the shell and use ‘fdisk’ and ‘lvm’ to set up partitioning the old school way. I know now this was a huge mistake :-) It’s not explicitly mentioned in the Fedora on Mactel notes, but Mac OS-X uses the GPT partition table format, not the traditional DOS MBR style. It does provide an emulated MBR so legacy OS’s can see the partitions, but this should be considered read-only. Unfortunately I wasn’t paying attention, so happily ran fdisk, deleting the Windows partition created by Bootcamp, and creating several new ones to use for /boot and the LVM physical volume. There was also a weird 200 MB vfat partition at the start of the disk which I had never asked for, so I decided to repurpose that for /boot.

I then tried to set up LVM, but it kept complaining that the device /dev/sda4 didn’t exist – even though fdisk clearly said it did. Perhaps the kernel hadn’t re-read the partition table, so I played with fdisk a bit more to no avail, and then rebooted and re-entered the installer. The kernel still only saw the original 3 partitions, but oddly fdisk did see the extra 4th partition.

I googled around and found that the Mac OS-X install disk had a GUI partitioning tool, so decided I’d use that to delete the non-HFS partitions, just leaving free unallocated space, which would let Anaconda do automagic partitioning. This seemed to work – I did manage to get through the complete Fedora install – but after rebooting I discovered something horribly wrong. The BIOS was still configured to boot off the CD. Oops. No matter, I held down the magic key to invoke Bootcamp, but Bootcamp not only wouldn’t see the Fedora install I had just done, it also wouldn’t see my Mac OS-X install. All it offered was the choice to boot Windows – which I had never installed! Of course that failed.

At this point I burnt a bootable CD with rEFIt on it. Success – I could now see my Fedora install and boot it, but still no sign of Mac OS-X :-( Also, I didn’t really want to have to leave a rEFIt CD in the drive for every boot. This is when I discovered that the 200 MB vfat partition I had repurposed for /boot was in fact used by the EFI firmware, and by things like rEFIt. Doh. I could reformat it as vfat manually, but installing rEFIt into it requires a weird operation called ‘blessing’, which was only documented using Mac OS-X tools.

I figured my best option now was to boot the Mac OS-X install CD and play with the partitioning tool again – maybe it would be able to repair my Mac OS-X partition such that I could boot it again. No such luck. All I managed to do was break the Fedora install I had just completed. The transformation from functioning Mac Mini with Mac OS-X into $900 paperweight was now complete. I had two OS’s installed, both broken to the extent that neither BootCamp nor rEFIt would boot them, and BootCamp offering to boot a Windows install which didn’t exist :-)