Dilbert + flash: epic fail

Posted: April 18th, 2008 | Filed under: Uncategorized | 1 Comment »

For some incomprehensible reason, it has been decided that presenting Dilbert cartoon strips as a plain old image isn’t sexy enough. You now need a Flash plugin to display the same old static image as before. Seriously, WTF.COM?

The only positive is that there is now an RSS feed for the strips, which does include the static images inline in the feed. So now I can avoid the nauseating website completely and just read the strips from the comfort of Liferea.

The answer is not 42

Posted: March 20th, 2008 | Filed under: Uncategorized | No Comments »

The question is…

How long should police/security services be allowed to hold a suspect prior to charging them with an offence?

In the USA the answer to this question is 2 days; in Russia it is 5 days; in France it is 6 days; in the UK the answer is already an astonishing 28 days. No other democracy comes close. And yet the government’s latest “anti-terror” legislation (which will get a second reading in the Commons on April 1st, with a vote to follow after the May local elections) proposes to extend this period of pre-charge detention to 42 days.

The idea that someone can be held by the police for as long as 42 days, potentially without being told of the grounds for suspicion, let alone being charged with an offence, should remain the province of Kafka and his book “The Trial”. The judiciary serves a key role in English law, providing the counter-balance to the state and allowing independent oversight and review of prosecutions. Yet until you are charged with an offence there are no grounds for the judiciary to intervene. You cannot defend yourself when there is no charge against which to defend.

…the first impression made by the defence will often determine the whole course of the proceedings. Unfortunately, though, he would still have to make it clear to K. that the first documents submitted are sometimes not even read by the court.

…if the court deems it necessary it can be made public but there is no law that says it has to be. As a result, the accused and his defence don’t have access even to the court records, and especially not to the indictment, and that means we generally don’t know – or at least not precisely – what the first documents need to be about, which means that if they do contain anything of relevance to the case it’s only by a lucky coincidence. If anything about the individual charges and the reasons for them comes out clearly or can be guessed at while the accused is being questioned, then it’s possible to work out and submit documents that really direct the issue and present proof, but not before. Conditions like this, of course, place the defence in a very unfavourable and difficult position. But that is what they intend. In fact, defence is not really allowed under the law, it’s only tolerated, and there is even some dispute about whether the relevant parts of the law imply even that. So strictly speaking, there is no such thing as a counsel acknowledged by the court, and anyone who comes before this court as counsel is basically no more than a barrack room lawyer. The effect of all this, of course, is to remove the dignity of the whole procedure…

This is an extract from “The Trial”, yet it is disturbingly close to what can happen in the UK should the government continue to extend pre-charge detention, during which time the (yet to be) accused is allowed no meaningful defence.

What is the motivation of the government in extending this pre-charge period?

The posited need is to allow more time to investigate “terrorism” cases. There is little-to-no evidence that the current limit of 28 days is harming such investigations, and members of the security services, police and judiciary have either directly questioned whether an extension would have any tangible benefit in terror investigations, or failed to provide any supporting evidence for the extension.
Furthermore, there are viable alternatives to this proposal which would extend the powers relating to terrorism investigations. As part of the Charge or Release campaign, Liberty Human Rights have suggested and support a number of alternative powers:

  • Remove the bar on the use of intercept (phone tap) evidence because its inadmissibility is a major factor in being unable to bring charges in terror cases. Liberty welcomes the Government’s proposed Privy Council review into the use of this evidence in terror trials.
  • Allow post-charge questioning in terror cases, provided that the initial charge is legitimate and there is judicial oversight. This will allow for a charge to be replaced with a more appropriate offense at a later stage.
  • Hire more interpreters: Prioritise the hiring of more foreign language interpreters to expedite pre-charge questioning and other procedures.
  • Add resources: Provide more resources for the police and intelligence services.
  • Emergency measures in the Civil Contingencies Act 2004 could be triggered in a genuine emergency in which the police are overwhelmed by multiple terror plots, allowing the Government to temporarily extend pre-charge detention subject to Parliamentary and judicial oversight. Liberty believes that this is preferable to creating a permanent state of emergency.

Crucially these proposals do not undermine the foundations of our justice system.

What can you do about it?

Prototype for a Fedora virtual machine appliance builder

Posted: February 17th, 2008 | Filed under: Uncategorized | 2 Comments »

For the oVirt project the end product distributed to users consists of a LiveCD image to serve as the ‘managed node’ for hosting guests, and a virtual machine appliance to serve as the ‘admin node’ for the web UI. The excellent Fedora LiveCD creator tools obviously already deal with the first use case. For the second, though, we don’t currently have a solution. The way we build the admin node appliance is to boot a virtual machine and run anaconda with a kickstart, and then grab the resulting installed disk image. While this works, it involves a number of error-prone steps. Appliance images are not inherently different from LiveCDs – instead of an ext3 filesystem inside an ISO using syslinux, we want a number of filesystems inside a partitioned disk using grub. The overall OS installation method is the same in both use cases.

After a day’s hacking I’ve managed to re-factor the internals of the LiveCD creator, and add a new installation class able to create virtual machine appliances. As its input it takes a kickstart file, and the names and sizes for one or more output files (which will act as the disks). It reads the ‘part’ entries from the kickstart file and uses parted to create suitable partitions across the disks. It then uses kpartx to map the partitions and mounts them all in the chroot. The regular LiveCD installation process then takes place. Once complete, it writes a grub config and installs the bootloader into the MBR. The result is one or more files representing the appliance’s virtual disks which can be directly booted in KVM / Xen / VMware.
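
To make this concrete, here is a rough shell equivalent of the preparation steps the new installation class performs before the regular install starts. This is only a sketch – the file name, size, partition layout and /dev/mapper path are illustrative and will differ depending on the kickstart and host:

# create a sparse file to act as the appliance's first virtual disk
dd if=/dev/zero of=appliance-os.raw bs=1M count=0 seek=5000

# partition it according to the 'part' entries in the kickstart
parted --script appliance-os.raw mklabel msdos
parted --script appliance-os.raw mkpart primary ext3 1 5000

# map the partitions to device-mapper nodes (exact names vary per host)
kpartx -av appliance-os.raw

# create the filesystem and mount it as the install root
mkfs.ext3 /dev/mapper/loop0p1
mount /dev/mapper/loop0p1 /mnt/appliance-root

# the regular LiveCD-style install then runs in this chroot, after which
# a grub config is written and the bootloader installed into the MBR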

The virt-image tool defines a simple XML format which can be used to describe a virtual appliance. It specifies things like minimum recommended RAM and VCPUs, the disks associated with the appliance, and the hypervisor requirements for booting it (eg Xen paravirt vs bare metal / fullvirt). Given one of these XML files, the virt-image tool can use libvirt to directly deploy a virtual machine without requiring any further user input. So an obvious extra feature for the virtual appliance creator is to output a virt-image XML description. With a demo kickstart file for the oVirt admin node, I end up with 2 disks:

-rwxr-xr-x 1 root     root     5242880001 2008-02-17 14:48 ovirt-wui-os.raw
-rwxr-xr-x 1 root     root     1048576001 2008-02-17 14:48 ovirt-wui-data.raw

And an associated XML file

<image>
  <name>ovirt-wui</name>
  <domain>
    <boot type='hvm'>
      <guest>
        <arch>x86_64</arch>
      </guest>
      <os>
        <loader dev='hd'/>
      </os>
      <drive disk='ovirt-wui-os.raw' target='hda'/>
      <drive disk='ovirt-wui-data.raw' target='hdb'/>
    </boot>
    <devices>
      <vcpu>1</vcpu>
      <memory>262144</memory>
      <interface/>
      <graphics/>
    </devices>
  </domain>
  <storage>
    <disk file='ovirt-wui-os.raw' use='system' format='qcow2'/>
    <disk file='ovirt-wui-data.raw' use='system' format='qcow2'/>
  </storage>
</image>

To deploy the appliance under KVM I run

# virt-image --connect qemu:///system ovirt-wui.xml
# virsh --connect qemu:///system list
 Id Name                 State
----------------------------------
  1 ovirt-wui            running

Now raw disk images are really quite large – in this example I have a 5 GB and a 1 GB image. The LiveCD creator saves space by using resize2fs to shrink the ext3 filesystem, but this won’t help disk images since the partitions are a fixed size regardless of the filesystem size. So to allow smaller images, the appliance creator is able to call out to qemu-img to convert the raw file into a qcow2 (QEMU/KVM) or vmdk (VMware) disk image, both of which are grow-on-demand formats. The qcow2 image can even be compressed. With the qcow2 format the disks for the oVirt WUI reduce to 600 KB and 1.9 GB.
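
For reference, the same conversion can be done by hand with qemu-img – the output file names here are just examples:

# convert a raw disk into a compressed, grow-on-demand qcow2 image
qemu-img convert -c -O qcow2 ovirt-wui-os.raw ovirt-wui-os.qcow2

# or into a vmdk image for use with VMware
qemu-img convert -O vmdk ovirt-wui-data.raw ovirt-wui-data.vmdk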

The LiveCD tools have already seen immense popularity in the Fedora community. Once I polish this new code up to production quality, it is my hope that we’ll see similar uptake by people interested in creating and distributing appliances. The great thing about basing the appliance creator on the LiveCD codebase and using kickstart files for both is that you can easily switch between doing regular anaconda installs, creating LiveCDs and creating appliances at will, with a single kickstart file.
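
For illustration, a kickstart shared across all three workflows might begin with something like this (the mount points and sizes are invented for the example) – the appliance creator only treats the ‘part’ lines specially:

# disk layout: the appliance creator turns these 'part' entries into
# real partitions spread across the output disk images
part / --size=5000 --fstype=ext3
part /data --size=1000 --fstype=ext3

# everything else (repos, %packages, %post scripts, ...) is a normal
# kickstart, reusable unchanged with anaconda or the LiveCD creator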

Progress in Fedora 9 Xen pv_ops kernel development

Posted: February 1st, 2008 | Filed under: Uncategorized | 1 Comment »

You may recall my announcement of our plans for Fedora 9 Xen kernels. The 5 second summary is that we’re throwing away the current Xen kernels, and writing Xen support on top of paravirt_ops, for both DomU and Dom0, and both i386 & x86_64. This is hugely ambitious given the limited time scales involved and the fact that only i386 DomU was working when we started this project.

Just minutes ago, after many, many weeks of work, Stephen Tweedie reached a very important milestone – the first kernel build capable of fully booting on Dom0, including the IOAPIC & DMA support necessary to run real hardware drivers – ie the ability to access your real disks once the initrd is done :-)

(10:22:13 AM) sct: It boots
(10:22:14 AM) sct: It runs
(10:22:16 AM) sct: I can ssh into it
(10:22:50 AM) sct: [root@ghost ~]# dmesg|grep para
(10:22:50 AM) sct: Booting paravirtualized kernel on Xen
(10:22:50 AM) sct: [root@ghost ~]#

It is beginning to look like we might actually succeed in our goals in time for Fedora 9 – congrats are due to Stephen & the rest of the team working on this!

New release of Test-AutoBuild 1.2.1

Posted: December 10th, 2007 | Filed under: Uncategorized | No Comments »

I finally got around to doing some more work on Test-AutoBuild – a build and test automation framework for upstream developers. It checks sources out of SCM repos (CVS, Subversion, SVK, GNU Arch, Mercurial, Perforce) and runs any build and test processes. It detects any RPMs generated during the build and publishes them in a YUM repo. It also publishes HTML status pages showing build logs, the list of generated packages, any artifacts generated (eg, code test coverage reports, API documentation) and changelogs from the SCM repo. It is a similar system to CruiseControl, but is more powerful since it directly understands the idea of module dependencies, and so can intelligently manage chained builds of multiple dependent modules.

We use this in the ET group for testing our virtualization stack. Our nightly builder builds libvirt and gtk-vnc first, then builds virt-viewer and virt-install against these builds, and finally builds virt-manager against all of them. So any change in libvirt gets validated to make sure it doesn’t break apps using libvirt. Since autobuild understands the dependencies, it can do intelligent build caching – eg if there were new changes in the libvirt SCM repo, but none in the virt-manager repos, it will still do a rebuild of virt-manager as a regression test.

This new release, version 1.2.1, was all about making the SCM checkout process more reliable. Previously, if a module could not be checked out (eg due to a server being down, or a config file typo) the entire build cycle would be aborted. With the new release, the troublesome module is simply skipped and the SCM logs are published for the admin to diagnose – other modules in the build cycle continue to be built.