Automated install of Fedora 18 ARM on a Samsung Google Chromebook

Posted: March 31st, 2013 | Filed under: Fedora | 23 Comments »

Back in November last year, I wrote about running Fedora 17 ARM on a Samsung Google Chromebook, via an external SD card. With Fedora 18 now out, I thought it time to try again, this time replacing ChromeOS entirely by installing Fedora 18 ARM to the 16GB internal flash device. Igor Mammedov of the Red Hat KVM team has previously written a script for automating the install of Fedora 17 onto the internal flash device, including the setup of a chained bootloader with nv-uboot. I decided to take his script as a starting point, update it to Fedora 18 and then extend its capabilities.

If you don’t want to read about what the script does, skip to the end.

ChromeOS bootloader

The Samsung ARM Chromebook bootloader is a fork of u-boot. By default the bootloader is set up to do “SecureBoot” of Google ChromeOS images only. There is no provision for providing your own verification keys to the bootloader, so the only way to run non-ChromeOS images is to switch to “Developer Mode” and sign kernels using the developer keys. The result is that while you can run non-ChromeOS operating systems, they’ll always be second class citizens – since the developer keys are publicly available, developer mode will happily boot anyone’s (potentially backdoored) kernels. You’re also stuck with an annoying 30 second sleep in the bootloader splash screen, which you can only get around by pressing ‘Ctrl-D’ on every startup. The bootloader is also locked down, so you can’t get access to the normal u-boot console – if you want to change the kernel args you need to re-generate the kernel image, which is not much fun when troubleshooting boot problems with new kernels.
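
For reference, signing a kernel image with the public developer keys can be done with the vbutil_kernel tool from the vboot utilities. A rough sketch only – the devkeys location and the input/output file names here are assumptions based on a typical ChromeOS install, not taken from the install script:

# vbutil_kernel --pack /tmp/newkernel \
    --keyblock /usr/share/vboot/devkeys/kernel.keyblock \
    --signprivate /usr/share/vboot/devkeys/kernel_data_key.vbprivk \
    --version 1 \
    --vmlinuz /tmp/vmlinux.uimg \
    --config /tmp/cmdline.txt \
    --arch arm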

The Chromebook bootloader can’t be (easily) replaced, since the flash it is stored in is set read-only. I’ve seen hints on Google+ that you can get around this by opening up the case and working some magic with a soldering iron to set the flash writable again, but I don’t fancy going down that route.

It is, however, possible to set up a chained bootloader, so that the built-in u-boot will first boot nv-uboot, a variant of the bootloader that has the console enabled and boots any kernel without requiring it to be signed. We still have the annoying 30 second sleep at boot time, and we still can’t do secure boot of our Fedora install, but we at least get an interactive boot console for troubleshooting, which is important for me.
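
The chaining itself amounts to writing an nv-uboot image, packed and signed as a kernel partition, into the KERN-A/KERN-B slots and marking them bootable with a high priority via the GPT attributes. Roughly like this (a sketch; the nv_uboot.kpart filename is an assumption – the install script described below takes care of all of this):

# dd if=nv_uboot.kpart of=/dev/mmcblk0p2
# dd if=nv_uboot.kpart of=/dev/mmcblk0p4
# cgpt add -i 2 -S 1 -T 5 -P 10 /dev/mmcblk0
# cgpt add -i 4 -S 1 -T 5 -P 5 /dev/mmcblk0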

ChromeOS Partition Layout

Before continuing it is helpful to understand how ChromeOS partitions the internal flash. It uses GPT rather than MBR, and sets up 12 partitions, though 4 of these (ROOT-C, KERB-C, reserved, reserved) are completely unused and 2 are effectively empty (OEM, RWFW) on my system.

# Device            Label      Offset    Length     Size
# /dev/mmcblk0p1  - STATE        282624  11036672   10 GB
# /dev/mmcblk0p2  - KERN-A        20480     16384   16 MB
# /dev/mmcblk0p3  - ROOT-A     26550272   2097154    2 GB
# /dev/mmcblk0p4  - KERN-B        53248     16384   16 MB
# /dev/mmcblk0p5  - ROOT-B     22355968   2097154    2 GB
# /dev/mmcblk0p6  - KERN-C        16448         0    0 MB
# /dev/mmcblk0p7  - ROOT-C        16449         0    0 MB
# /dev/mmcblk0p8  - OEM           86016     16384   16 MB
# /dev/mmcblk0p9  - reserved      16450         0    0 MB
# /dev/mmcblk0p10 - reserved      16451         0    0 MB
# /dev/mmcblk0p11 - RWFW             64      8192    8 MB
# /dev/mmcblk0p12 - EFI-SYSTEM   249856     16384   16 MB

The important partitions are

  • KERN-A – holds the 1st (primary) kernel image
  • KERN-B – holds the 2nd (backup) kernel image
  • ROOT-A – ChromeOS root filesystem to go with primary kernel
  • ROOT-B – ChromeOS root filesystem to go with backup kernel
  • EFI-SYSTEM – EFI firmware files – empty by default
  • STATE – ChromeOS user data partition

Notice from the offsets that the order of the partitions on flash does not match the partition numbers. The important thing is that STATE, ROOT-A and ROOT-B are all at the end of the partition table.
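
The layout can be inspected from a ChromeOS root shell with the cgpt tool; the listing above is essentially a tidied-up version of what it prints:

# cgpt show /dev/mmcblk0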

Desired Fedora partition layout

The goal for the Fedora installation is to delete the ROOT-A, ROOT-B and STATE partitions from ChromeOS, and replace them with 3 new partitions:

  • ROOT – holds the Fedora root filesystem (ext4, unencrypted, 4 GB)
  • BOOT – holds the /boot filesystem (ext2, unencrypted, 200 MB)
  • HOME – holds the /home filesystem (ext4, LUKS encrypted, ~11 GB)

The /boot partition must sadly be ext2, since the nv-uboot images Google provide don’t have ext4 support enabled, and I don’t fancy building new images myself. It would be possible to have a single partition for both the root and home directories, but keeping them separate should make it easier to upgrade by re-flashing the entire ROOT partition, and it also avoids the need to build an initrd to handle unlocking of the LUKS partition.
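
Creating and formatting the new partitions is handled by the script, but for illustration it boils down to something like this (the partition numbers here are placeholders, not necessarily the ones the script ends up using; the LUKS-encrypted HOME partition is covered in stage 3 of the install process below):

# mkfs.ext4 -L ROOT /dev/mmcblk0p3
# mkfs.ext2 -L BOOT /dev/mmcblk0p5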

Chained bootloader process

The KERN-A and KERN-B partitions will be used to hold the chained nv-uboot bootloader image, so the built-in bootloader will first load the nv-uboot image. nv-uboot will then look for a /u-boot/boot.scr.img file in the EFI-SYSTEM partition. This file is a u-boot script telling nv-uboot which partitions the kernel and root filesystem are stored in, as well as setting the kernel boot parameters. The nv-uboot image has an annoying assumption that the kernel image is stored on the root filesystem, which isn’t the case since we want a separate /boot, so we must override some of the nv-uboot environment variables to force the name of the root partition for the kernel command line. The upshot is that the boot.scr.img file is generated from the following configuration:

setenv kernelpart 2
setenv rootpart 1
setenv cros_bootfile /vmlinux.uimg
setenv regen_all ${regen_all} root=/dev/mmcblk0p3
setenv common_bootargs
setenv dev_extras console=tty1 lsm.module_locking=0 quiet

The actual kernel to be booted is thus ‘/vmlinux.uimg’ in the /boot partition of the Fedora install. There is no Fedora kernel yet that boots on the ARM ChromeBook, so this is a copy of the kernel from the ChromeOS install. Hopefully there will be official Fedora kernels in Fedora 19, or at least a remix that provides them. The lsm.module_locking=0 argument here is needed to tell the ChromeOS kernel LSM to allow kernel module loading.
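
The boot.scr.img file is simply the configuration above wrapped up with u-boot’s mkimage tool; assuming the fragment is saved as boot.scr and u-boot-tools is installed, the invocation looks roughly like this, with the result then copied into the /u-boot directory of the EFI-SYSTEM partition:

# mkimage -A arm -O linux -T script -C none -a 0 -e 0 \
    -n "Fedora boot script" -d boot.scr boot.scr.img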

Installation process

With all this in mind, the script does its work in several stages, requiring a reboot after each stage:

  1. Running from a root shell in ChromeOS (which must be in developer mode), the filesystem in the ROOT-B partition is deleted and replaced with a temporary Fedora ARM filesystem. The KERN-A and KERN-B partitions have their contents replaced with the nv-uboot image. The kernel image from ChromeOS is copied into the Fedora root filesystem, and the keyboard/timezone/locale settings are also copied over. The installation script is copied to /etc/rc.d/rc.local, so that stage 2 will run after reboot. The system is now rebooted, so that nv-uboot will launch the Fedora root filesystem.
  2. Running from rc.local in the temporary Fedora root filesystem, the ROOT-A and STATE partitions are now deleted to remove the last traces of ChromeOS. The ROOT and BOOT partitions are then created and formatted. The contents of the temporary Fedora root filesystem are now copied into the new ROOT partition. The system is now rebooted, to get out of the temporary Fedora root filesystem and into the new root.
  3. Running from rc.local in the final Fedora root filesystem, the ROOT-B partition is now deleted to remove the temporary Fedora root filesystem. In the free space that is now available, a HOME partition is created. At this point the user is prompted to provide the LUKS encryption passphrase they wish to use for /home (a sketch of the equivalent commands appears just after this list). The ALSA UCM profiles for the ChromeBook are now loaded and the ALSA config saved. This will help avoid users accidentally melting their speakers later. An Xorg config file is created to configure the touchpad sensitivity, firstboot is enabled and the root account is locked. Installation is now complete and the system will reboot for the final time.
  4. The final system will now boot normally. There will be a prompt for the LUKS passphrase during boot up. Unfortunately the prompt text gets mixed up with systemd boot messages, which I’m not sure how to fix. Just keep an eye out for it. Once the passphrase is entered, boot up will complete and firstboot should launch, allowing the creation of a user account. Since the root account is locked, this user will be added to the wheel group, giving it sudo privileges.
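
The LUKS setup performed in stage 3 is conceptually equivalent to the following (a sketch only – the partition number for HOME is illustrative and the script does the real work):

# cryptsetup luksFormat /dev/mmcblk0p1
# cryptsetup luksOpen /dev/mmcblk0p1 home
# mkfs.ext4 -L HOME /dev/mapper/home
# echo "home /dev/mmcblk0p1 none" >> /etc/crypttab
# echo "/dev/mapper/home /home ext4 defaults 1 2" >> /etc/fstab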

If everything went to plan, the ChromeBook should now have a fully functional Fedora 18 install on its internal flash, with the XFCE desktop environment. Compared to running off an external SD card, the boot up speed is quite a lot faster. The time to get to the desktop login screen is not all that much longer than with ChromeOS (obviously I’m ignoring the pause to enter the LUKS passphrase here).

Some things I’m not happy with

  • Only /home is encrypted. I’d like to figure out how to build an initrd for the ChromeOS kernel capable of unlocking a LUKS encrypted root filesystem
  • The boot up is in text mode. I’d like to figure out how to do graphical boot with plymouth, mostly to get a better prompt for the LUKS passphrase
  • The image is not using GNOME 3. I much prefer the GNOME Shell experience over the “traditional” desktop model seen with XFCE / GNOME 2 / etc

Running the script

You run this script AT YOUR OWN RISK. It completely erases all personal data on your ChromeBook and erases ChromeOS itself. If something goes wrong with the script, you’ll likely end up with an unbootable machine. To fix this you’ll need an SD card / USB stick to follow the ChromeOS recovery procedures. I’ve been through the recovery process perhaps 20 times now and it doesn’t always go 100% smoothly. Sometimes it complains that it has hit an unrecoverable error. Despite the message, ChromeOS still appears to have been recovered & will boot, but there’s something fishy going on. Again you run this script AT YOUR OWN RISK.

  1. Download http://berrange.fedorapeople.org/install-f18-arm-chromebook-luks.sh to any random machine
  2. Optionally edit the script to change the FEDORA_ROOT_IMAGE_URL and UBOOT_URL env variables to point to a local mirror of the files.
  3. Optionally edit the script to set the ssid and psk parameters with the wifi connection details. If not set, the script will prompt for them
  4. Boot the ChromeBook in Developer Mode and login as a guest
  5. Use Ctrl+Alt+F2 to switch to the ChromeOS root shell (F2 is the key with the forward arrow on it, in the usual location you’d expect F2 to be)
  6. Copy the script downloaded earlier to /tmp in the ChromeOS root and give it executable permission
  7. Run bash /tmp/install-f18-arm-chromebook-luks.sh
  8. Watch as it reboots 3 times (keep an eye out for the LUKS key prompts on boots 3 and 4).
  9. Then either rejoice when firstboot appears and you subsequently get a graphical login prompt, or weep as you need to run the ChromeOS recovery procedure.

The script will save logs from stages 1 / 2 / 3 into /root of the final filesystem. It also copies over a couple of interesting log files from ChromeOS for reference.
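
For convenience, if the ChromeBook itself has network access, steps 6–7 can be condensed into something like the following from the ChromeOS root shell (assuming wget is present in the ChromeOS image – copying the file over with scp works just as well):

# cd /tmp
# wget http://berrange.fedorapeople.org/install-f18-arm-chromebook-luks.sh
# chmod +x install-f18-arm-chromebook-luks.sh
# bash /tmp/install-f18-arm-chromebook-luks.sh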

Announce: NoZone 1.0 – a Bind DNS zone generator

Posted: March 17th, 2013 | Filed under: Coding Tips, Fedora, Virt Tools | 6 Comments »

My web servers host a number of domains for both personal sites and open source projects, which of course means managing a number of DNS zone files. I use Gandi as my registrar, and they throw in free DNS hosting when you purchase a domain. When you have more than 2-3 domains to manage and want to keep the DNS records consistent across all of them, dealing with the pointy-clicky web form interfaces is incredibly tedious. Thus I have traditionally hosted my own DNS servers, creating the Bind DNS zone files in emacs. Anyone who has ever used Bind, though, will know that its DNS zone file syntax is one of the most horrific formats you can imagine. It is really easy to make silly typos which will screw up your zone in all sorts of fun ways. Keeping the DNS records in sync across domains is also still somewhat tedious.

What I wanted is a simpler, safer configuration file format for defining DNS zones, which can minimise the duplication of data across different domains. There may be tools which do this already, but I fancied writing something myself tailored to my precise use case, so didn’t search for any existing solutions. The result of a couple of evenings’ hacking is a tool I’m calling NoZone, which now has its first public release, version 1.0. The upstream source is available in a Git repository.

The /etc/nozone.cfg configuration file

The best way to illustrate what NoZone can do is to simply show a sample configuration file. For reasons of space, I’m cutting out all the comments – the copy that is distributed contains copious comments. In this example, 3 (hypothetical) domain names are being configured: nozone.com and nozone.org, which are the public facing domains, and an internal domain for testing purposes, qa.nozone.org. All three domains are intended to be configured with the same DNS records; the only difference is that the internal zone (qa.nozone.org) needs to have different IP addresses for its records. For each domain, there will be three physical machines involved: gold, platinum and silver.

The first step is to define a zone with all the common parameters specified. Note that this zone isn’t specifying any machine IP addresses or domain names. It just refers to the machine names, to define an abstract base for the child zones.

zones = {
  common = {
    hostmaster = dan-hostmaster

    lifetimes = {
      refresh = 1H
      retry = 15M
      expire = 1W
      negative = 1H
      ttl = 1H
    }

    default = platinum

    mail = {
      mx0 = {
        priority = 10
        machine = gold
      }
      mx1 = {
        priority = 20
        machine = silver
      }
    }

    dns = {
      ns0 = gold
      ns1 = silver
    }

    names = {
      www = platinum
    }

    aliases = {
      db = gold
      backup = silver
    }

    wildcard = platinum
  }

With the common parameters in place, a second zone called “production” is defined, listing the domain names nozone.org and nozone.com and the IP details of the physical machines hosting the domains.

  production = {
    inherits = common

    domains = (
        nozone.org
        nozone.com
    )

    machines = {
      platinum = {
        ipv4 = 12.32.56.1
        ipv6 = 2001:1234:6789::1
      }
      gold = {
        ipv4 = 12.32.56.2
        ipv6 = 2001:1234:6789::2
      }
      silver = {
        ipv4 = 12.32.56.3
        ipv6 = 2001:1234:6789::3
      }
    }
  }

The third zone is used to define the internal qa.nozone.org domain.

  testing = {
    inherits = common

    domains = (
      qa.nozone.org
    )

    machines = {
      platinum = {
        ipv4 = 192.168.1.1
        ipv6 = fc00::1:1
      }
      gold = {
        ipv4 = 192.168.1.2
        ipv6 = fc00::1:2
      }
      silver = {
        ipv4 = 192.168.1.3
        ipv6 = fc00::1:3
      }
    }
  }
}

Generating the Bind DNS zone files

With the /etc/nozone.cfg configuration file created, the Bind9 DNS zone files can now be generated by invoking the nozone command.

$ nozone

This generates a number of files

# ls /etc/named
nozone.com.conf  nozone.conf  nozone.org.conf  qa.nozone.org.conf
$ ls /var/named/data/
named.run           named.run-20130317  nozone.org.data
named.run-20130315  nozone.com.data     qa.nozone.org.data

The final step is to add one line to /etc/named.conf and then restart bind.

$ echo 'include "/etc/named/nozone.conf";' >> /etc/named.conf
$ systemctl restart named.service
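
It is also worth sanity checking the generated configuration and zone data with Bind’s own tools before relying on them (this is not something nozone does for you, just a belt-and-braces step):

$ named-checkconf /etc/named.conf
$ named-checkzone nozone.com /var/named/data/nozone.com.data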

The generated files

The /etc/named/nozone.conf file is always generated and contains references to the conf files for each configured domain:

include "/etc/named/nozone.com.conf";
include "/etc/named/nozone.org.conf";
include "/etc/named/qa.nozone.org.conf";

Each of these files defines a domain name and links to the zone file definition. For example, nozone.com.conf contains

zone "nozone.com" in {
    type master;
    file "/var/named/data/nozone.com.data";
};

Finally, the interesting data is in the actual zone files, in this case /var/named/data/nozone.com.data

$ORIGIN nozone.com.
$TTL     1H ; queries are cached for this long
@        IN    SOA    ns1    hostmaster (
                           1363531990 ; Date 2013/03/17 14:53:10
                           1H  ; slave queries for refresh this often
                           15M ; slave retries refresh this often after failure
                           1W ; slave expires after this long if not refreshed
                           1H ; errors are cached for this long
         )

; Primary name records for unqualified domain
@                    IN    A               12.32.56.1 ; Machine platinum
@                    IN    AAAA            2001:1234:6789::1 ; Machine platinum

; DNS server records
@                    IN    NS              ns0
@                    IN    NS              ns1
ns0                  IN    A               12.32.56.2 ; Machine gold
ns0                  IN    AAAA            2001:1234:6789::2 ; Machine gold
ns1                  IN    A               12.32.56.3 ; Machine silver
ns1                  IN    AAAA            2001:1234:6789::3 ; Machine silver

; E-Mail server records
@                    IN    MX       10     mx0
@                    IN    MX       20     mx1
mx0                  IN    A               12.32.56.2 ; Machine gold
mx0                  IN    AAAA            2001:1234:6789::2 ; Machine gold
mx1                  IN    A               12.32.56.3 ; Machine silver
mx1                  IN    AAAA            2001:1234:6789::3 ; Machine silver

; Primary names
gold                 IN    A               12.32.56.2
gold                 IN    AAAA            2001:1234:6789::2
platinum             IN    A               12.32.56.1
platinum             IN    AAAA            2001:1234:6789::1
silver               IN    A               12.32.56.3
silver               IN    AAAA            2001:1234:6789::3

; Extra names
www                  IN    A               12.32.56.1 ; Machine platinum
www                  IN    AAAA            2001:1234:6789::1 ; Machine platinum

; Aliased names
backup               IN    CNAME           silver
db                   IN    CNAME           gold

; Wildcard
*                    IN    A               12.32.56.1 ; Machine platinum
*                    IN    AAAA            2001:1234:6789::1 ; Machine platinum
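
Once named has reloaded, the records can be spot-checked with dig against the server itself, for example:

$ dig @localhost www.nozone.com A +short
$ dig @localhost nozone.com MX +short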

As of 2 days ago, I’m using nozone to manage the DNS zones for all the domains I own. If it is useful to anyone else, it can be downloaded from CPAN. I’ll likely be submitting it for a Fedora review at some point too.
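
Installation should be the usual drill for a Perl module; assuming the distribution is published on CPAN under the name NoZone, something like:

$ cpan NoZone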

Announce: Entangle “W Boson” release 0.5.1 – An app for tethered camera control & capture

I am pleased to announce a new release 0.5.1 of Entangle is available for download from the usual location:

http://entangle-photo.org/download/

This release is primarily focused on bug fixes, but a couple of small features are thrown in too

  • Update for compatibility with libgphoto 2.5 API callbacks
  • Avoid warnings about deprecated glib2 mutex and condition variable APIs
  • Directly disable viewfinder mode using config APIs
  • Add support for triggering autofocus during preview with ‘a’ key
  • Add support for manual focus drive in/out using ‘.’ and ‘,’ keys
  • Refresh translations from transifex
  • Import user contributed Italian translation
  • Add missing translation markers on some strings

As before we still need help getting the UI translated into as many languages as possible, so if you are able to help out, please join the Fedora translation team:

https://fedora.transifex.net/projects/p/entangle/

Thanks to everyone who helped contribute to this release & troubleshooting of the previous releases.

Installing a 4 node Fedora 18 OpenStack Folsom cluster with PackStack

Posted: March 1st, 2013 | Filed under: Fedora, libvirt, OpenStack, Virt Tools | 9 Comments »

For a few months now Derek has been working on a tool called PackStack, which aims to facilitate & automate the deployment of OpenStack services. Most of the time I’ve used DevStack for deploying OpenStack, but this is not at all suitable for doing production quality deployments. I’ve also done production deployments from scratch following the great Fedora instructions. The latter work, but require the admin to do far too much tedious legwork and know too much about OpenStack in general. This is where PackStack comes in. It starts from the assumption that the admin knows more or less nothing about how the OpenStack tools work. All they need do is decide which services they wish to deploy on each machine. With that answered, PackStack goes off and does the work to make it all happen. Under the hood PackStack does its work by connecting to each machine over SSH, and using Puppet to deploy/configure the services on each one. By leveraging Puppet, PackStack itself is mostly isolated from the differences between various Linux distros. Thus although PackStack has been developed on RHEL and Fedora, it should be well placed for porting to other distros like Debian & I hope we’ll see that happen in the near future. It will be better for the OpenStack community to have a standard tool that is portable across all target distros, than the current situation where pretty much every distributor of OpenStack has reinvented the wheel building their own private tooling for deployment. This is why PackStack is being developed as an upstream project, hosted on StackForge, rather than as a Red Hat only private project.

Preparing the virtual machines

Anyway back to the point of this blog post. Having followed PackStack progress for a while I decided it was time to actually try it out for real. While I have a great many development machines, I don’t have enough free to turn into an OpenStack cluster, so straight away I decided to do my test deployment inside a set of Fedora 18 virtual machines, running on a Fedora 18 KVM host.

The current PackStack network support requires that you have 2 network interfaces. For an all-in-one box deployment you only actually need one physical NIC for the public interface – you can use ‘lo’ for the private interface on which VMs communicate with each other. I’m doing a multi-node deployment though, so my first step was to decide how to provide networking to my VMs. A standard libvirt install will provide a default NAT based network, using the virbr0 bridge device. This will serve just fine as the public interface over which we can communicate with the OpenStack services & their REST / Web APIs. For VM traffic, I decided to create a second libvirt network on the host machine.

# cat > openstackvms.xml <<EOF
<network>
  <name>openstackvms</name>
  <bridge name='virbr1' stp='off' delay='0' />
</network>
EOF
# virsh net-define openstackvms.xml
Network openstackvms defined from openstackvms.xml

# virsh net-start openstackvms
Network openstackvms started

Next up, I installed a single Fedora 18 guest machine, giving it two network interfaces, the first attached to the ‘default’ libvirt network, and the second attached to the ‘openstackvms’ virtual network.

# virt-install  --name f18x86_64a --ram 1000 --file /var/lib/libvirt/images/f18x86_64a.img \
    --location http://www.mirrorservice.org/sites/dl.fedoraproject.org/pub/fedora/linux/releases/18/Fedora/x86_64/os/ \
    --noautoconsole --vnc --file-size 10 --os-variant fedora18 \
    --network network:default --network network:openstackvms

In the installer, I used the defaults for everything with two exceptions. I selected the “Minimal install” instead of “GNOME Desktop”, and I reduced the size of the swap partition from 2 GB to 200 MB – if the VM ever needed more than a few hundred MB of swap, then it is pretty much game over for the responsiveness of that VM. A minimal install is very quick, taking only 5 minutes or so to completely install the RPMs – assuming good download speeds from the install mirror chosen. Now I need to turn that one VM into 4 VMs. For this I looked to the virt-clone tool. This is a fairly crude tool which merely does a copy of each disk image, and then updates the libvirt XML for the guest to give it a new UUID and MAC address. It doesn’t attempt to change anything inside the guest, but for a F18 minimal install this is not a significant issue.

# virt-clone  -o f18x86_64a -n f18x86_64b -f /var/lib/libvirt/images/f18x86_64b.img 
Allocating 'f18x86_64b.img'                                                                                    |  10 GB  00:01:20     

Clone 'f18x86_64b' created successfully.
# virt-clone  -o f18x86_64a -n f18x86_64c -f /var/lib/libvirt/images/f18x86_64c.img 
Allocating 'f18x86_64c.img'                                                                                    |  10 GB  00:01:07     

Clone 'f18x86_64c' created successfully.
# virt-clone  -o f18x86_64a -n f18x86_64d -f /var/lib/libvirt/images/f18x86_64d.img 
Allocating 'f18x86_64d.img'                                                                                    |  10 GB  00:00:59     

Clone 'f18x86_64d' created successfully.

I don’t fancy having to remember the IP address of each of the virtual machines I installed, so I decided to set up some fixed IP address mappings in the libvirt default network, and add aliases to /etc/hosts.

# virsh net-destroy default
# virsh net-edit default
...changing the following...
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254' />
    </dhcp>
  </ip>

...to this...

  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.99' />
      <host mac='52:54:00:fd:e7:03' name='f18x86_64a' ip='192.168.122.100' />
      <host mac='52:54:00:c4:b7:f6' name='f18x86_64b' ip='192.168.122.101' />
      <host mac='52:54:00:81:84:d6' name='f18x86_64c' ip='192.168.122.102' />
      <host mac='52:54:00:6a:9b:1a' name='f18x86_64d' ip='192.168.122.103' />
    </dhcp>
  </ip>
# cat >> /etc/hosts <<EOF
192.168.122.100 f18x86_64a
192.168.122.101 f18x86_64b
192.168.122.102 f18x86_64c
192.168.122.103 f18x86_64d
EOF
# virsh net-start default

Now we’re ready to actually start the virtual machines

# virsh start f18x86_64a
Domain f18x86_64a started

# virsh start f18x86_64b
Domain f18x86_64b started

# virsh start f18x86_64c
Domain f18x86_64c started

# virsh start f18x86_64d
Domain f18x86_64d started

# virsh list
 Id    Name                           State
----------------------------------------------------
 25    f18x86_64a                     running
 26    f18x86_64b                     running
 27    f18x86_64c                     running
 28    f18x86_64d                     running

Deploying OpenStack with PackStack

All of the above is really nothing to do with OpenStack or PackStack – it is just about me getting some virtual machines ready to act as the pretend “physical servers”. The interesting stuff starts now. PackStack doesn’t need to be installed on the machines that will receive the OpenStack install, but rather on any client machine which has SSH access to the target machines. In my case I decided to run packstack from the physical host running the VMs I just provisioned.

# yum -y install openstack-packstack

While PackStack is happy to prompt you with questions, it is far simpler to just use an answer file straight away. It lets you see upfront everything that is required and makes it easy for you to repeat the exercise later.

$ packstack --gen-answer-file openstack.txt

The answer file tries to fill in sensible defaults, but there’s not much it can do for IP addresses. So it just fills in the IP address of the host on which it was generated. This is suitable if you’re doing an all-in-one install on the current machine, but not for doing a multi-node install. So the next step is to edit the answer file and customize at least the IP addresses. I have decided that f18x86_64a will be the Horizon frontend and host the user facing APIs from glance/keystone/nova/etc, f18x86_64b will provide QPid, MySQL and the Nova scheduler, and f18x86_64c and f18x86_64d will be compute nodes and swift storage nodes (though I haven’t actually enabled swift in the config).

$ emacs openstack.txt
...make IP address changes...

So you can see what I changed, here is the unified diff

--- openstack.txt	2013-03-01 12:41:31.226476407 +0000
+++ openstack-custom.txt	2013-03-01 12:51:53.877073871 +0000
@@ -4,7 +4,7 @@
 # been installed on the remote servers the user will be prompted for a
 # password and this key will be installed so the password will not be
 # required again
-CONFIG_SSH_KEY=
+CONFIG_SSH_KEY=/home/berrange/.ssh/id_rsa.pub

 # Set to 'y' if you would like Packstack to install Glance
 CONFIG_GLANCE_INSTALL=y
@@ -34,7 +34,7 @@
 CONFIG_NAGIOS_INSTALL=n

 # The IP address of the server on which to install MySQL
-CONFIG_MYSQL_HOST=10.33.8.113
+CONFIG_MYSQL_HOST=192.168.122.101

 # Username for the MySQL admin user
 CONFIG_MYSQL_USER=root
@@ -43,10 +43,10 @@
 CONFIG_MYSQL_PW=5612a75877464b70

 # The IP address of the server on which to install the QPID service
-CONFIG_QPID_HOST=10.33.8.113
+CONFIG_QPID_HOST=192.168.122.101

 # The IP address of the server on which to install Keystone
-CONFIG_KEYSTONE_HOST=10.33.8.113
+CONFIG_KEYSTONE_HOST=192.168.122.100

 # The password to use for the Keystone to access DB
 CONFIG_KEYSTONE_DB_PW=297088140caf407e
@@ -58,7 +58,7 @@
 CONFIG_KEYSTONE_ADMIN_PW=342cc8d9150b4662

 # The IP address of the server on which to install Glance
-CONFIG_GLANCE_HOST=10.33.8.113
+CONFIG_GLANCE_HOST=192.168.122.100

 # The password to use for the Glance to access DB
 CONFIG_GLANCE_DB_PW=a1d8435d61fd4ed2
@@ -83,25 +83,25 @@

 # The IP address of the server on which to install the Nova API
 # service
-CONFIG_NOVA_API_HOST=10.33.8.113
+CONFIG_NOVA_API_HOST=192.168.122.100

 # The IP address of the server on which to install the Nova Cert
 # service
-CONFIG_NOVA_CERT_HOST=10.33.8.113
+CONFIG_NOVA_CERT_HOST=192.168.122.100

 # The IP address of the server on which to install the Nova VNC proxy
-CONFIG_NOVA_VNCPROXY_HOST=10.33.8.113
+CONFIG_NOVA_VNCPROXY_HOST=192.168.122.100

 # A comma separated list of IP addresses on which to install the Nova
 # Compute services
-CONFIG_NOVA_COMPUTE_HOSTS=10.33.8.113
+CONFIG_NOVA_COMPUTE_HOSTS=192.168.122.102,192.168.122.103

 # Private interface for Flat DHCP on the Nova compute servers
 CONFIG_NOVA_COMPUTE_PRIVIF=eth1

 # The IP address of the server on which to install the Nova Network
 # service
-CONFIG_NOVA_NETWORK_HOST=10.33.8.113
+CONFIG_NOVA_NETWORK_HOST=192.168.122.101

 # The password to use for the Nova to access DB
 CONFIG_NOVA_DB_PW=f67f9f822a934509
@@ -116,14 +116,14 @@
 CONFIG_NOVA_NETWORK_PRIVIF=eth1

 # IP Range for Flat DHCP
-CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
+CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.123.0/24

 # IP Range for Floating IP's
-CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
+CONFIG_NOVA_NETWORK_FLOATRANGE=192.168.124.0/24

 # The IP address of the server on which to install the Nova Scheduler
 # service
-CONFIG_NOVA_SCHED_HOST=10.33.8.113
+CONFIG_NOVA_SCHED_HOST=192.168.122.101

 # The overcommitment ratio for virtual to physical CPUs. Set to 1.0
 # to disable CPU overcommitment
@@ -131,20 +131,20 @@

 # The overcommitment ratio for virtual to physical RAM. Set to 1.0 to
 # disable RAM overcommitment
-CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
+CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=10

 # The IP address of the server on which to install the OpenStack
 # client packages. An admin "rc" file will also be installed
-CONFIG_OSCLIENT_HOST=10.33.8.113
+CONFIG_OSCLIENT_HOST=192.168.122.100

 # The IP address of the server on which to install Horizon
-CONFIG_HORIZON_HOST=10.33.8.113
+CONFIG_HORIZON_HOST=192.168.122.100

 # To set up Horizon communication over https set this to "y"
 CONFIG_HORIZON_SSL=n

 # The IP address on which to install the Swift proxy service
-CONFIG_SWIFT_PROXY_HOSTS=10.33.8.113
+CONFIG_SWIFT_PROXY_HOSTS=192.168.122.100

 # The password to use for the Swift to authenticate with Keystone
 CONFIG_SWIFT_KS_PW=aec1c74ec67543e7
@@ -155,7 +155,7 @@
 # on 127.0.0.1 as a swift storage device(packstack does not create the
 # filesystem, you must do this first), if /dev is omitted Packstack
 # will create a loopback device for a test setup
-CONFIG_SWIFT_STORAGE_HOSTS=10.33.8.113
+CONFIG_SWIFT_STORAGE_HOSTS=192.168.122.102,192.168.122.103

 # Number of swift storage zones, this number MUST be no bigger than
 # the number of storage devices configured
@@ -223,7 +223,7 @@
 CONFIG_SATELLITE_PROXY_PW=

 # The IP address of the server on which to install the Nagios server
-CONFIG_NAGIOS_HOST=10.33.8.113
+CONFIG_NAGIOS_HOST=192.168.122.100

 # The password of the nagiosadmin user on the Nagios server
 CONFIG_NAGIOS_PW=7e787e71ff18462c

The current version of PackStack in Fedora mistakenly assumes that ‘net-tools’ is installed by default in Fedora. This used to be the case, but as of Fedora 18 it is no longer installed. Upstream PackStack git has switched from using ifconfig to ip to avoid this. So for F18 we temporarily need to make sure the ‘net-tools’ RPM is installed on each host. In addition, the SELinux policy has not been finished for all OpenStack components, so we need to set it to permissive mode.

$ ssh root@f18x86_64a setenforce 0
$ ssh root@f18x86_64b setenforce 0
$ ssh root@f18x86_64c setenforce 0
$ ssh root@f18x86_64d setenforce 0
$ ssh root@f18x86_64a yum -y install net-tools
$ ssh root@f18x86_64b yum -y install net-tools
$ ssh root@f18x86_64c yum -y install net-tools
$ ssh root@f18x86_64d yum -y install net-tools

Assuming that’s done, we can now just run packstack

# packstack --answer-file openstack-custom.txt
Welcome to Installer setup utility

Installing:
Clean Up...                                              [ DONE ]
Setting up ssh keys...                                   [ DONE ]
Adding pre install manifest entries...                   [ DONE ]
Adding MySQL manifest entries...                         [ DONE ]
Adding QPID manifest entries...                          [ DONE ]
Adding Keystone manifest entries...                      [ DONE ]
Adding Glance Keystone manifest entries...               [ DONE ]
Adding Glance manifest entries...                        [ DONE ]
Adding Cinder Keystone manifest entries...               [ DONE ]
Checking if the Cinder server has a cinder-volumes vg... [ DONE ]
Adding Cinder manifest entries...                        [ DONE ]
Adding Nova API manifest entries...                      [ DONE ]
Adding Nova Keystone manifest entries...                 [ DONE ]
Adding Nova Cert manifest entries...                     [ DONE ]
Adding Nova Compute manifest entries...                  [ DONE ]
Adding Nova Network manifest entries...                  [ DONE ]
Adding Nova Scheduler manifest entries...                [ DONE ]
Adding Nova VNC Proxy manifest entries...                [ DONE ]
Adding Nova Common manifest entries...                   [ DONE ]
Adding OpenStack Client manifest entries...              [ DONE ]
Adding Horizon manifest entries...                       [ DONE ]
Preparing servers...                                     [ DONE ]
Adding post install manifest entries...                  [ DONE ]
Installing Dependencies...                               [ DONE ]
Copying Puppet modules and manifests...                  [ DONE ]
Applying Puppet manifests...
Applying 192.168.122.100_prescript.pp
Applying 192.168.122.101_prescript.pp
Applying 192.168.122.102_prescript.pp
Applying 192.168.122.103_prescript.pp
192.168.122.101_prescript.pp :                                       [ DONE ]
192.168.122.103_prescript.pp :                                       [ DONE ]
192.168.122.100_prescript.pp :                                       [ DONE ]
192.168.122.102_prescript.pp :                                       [ DONE ]
Applying 192.168.122.101_mysql.pp
Applying 192.168.122.101_qpid.pp
192.168.122.101_mysql.pp :                                           [ DONE ]
192.168.122.101_qpid.pp :                                            [ DONE ]
Applying 192.168.122.100_keystone.pp
Applying 192.168.122.100_glance.pp
Applying 192.168.122.101_cinder.pp
192.168.122.100_keystone.pp :                                        [ DONE ]
192.168.122.100_glance.pp :                                          [ DONE ]
192.168.122.101_cinder.pp :                                          [ DONE ]
Applying 192.168.122.100_api_nova.pp
192.168.122.100_api_nova.pp :                                        [ DONE ]
Applying 192.168.122.100_nova.pp
Applying 192.168.122.102_nova.pp
Applying 192.168.122.103_nova.pp
Applying 192.168.122.101_nova.pp
Applying 192.168.122.100_osclient.pp
Applying 192.168.122.100_horizon.pp
192.168.122.101_nova.pp :                                            [ DONE ]
192.168.122.100_nova.pp :                                            [ DONE ]
192.168.122.100_osclient.pp :                                        [ DONE ]
192.168.122.100_horizon.pp :                                         [ DONE ]
192.168.122.103_nova.pp :                                            [ DONE ]
192.168.122.102_nova.pp :                                            [ DONE ]
Applying 192.168.122.100_postscript.pp
Applying 192.168.122.101_postscript.pp
Applying 192.168.122.103_postscript.pp
Applying 192.168.122.102_postscript.pp
192.168.122.100_postscript.pp :                                      [ DONE ]
192.168.122.103_postscript.pp :                                      [ DONE ]
192.168.122.101_postscript.pp :                                      [ DONE ]
192.168.122.102_postscript.pp :                                      [ DONE ]
[ DONE ]

**** Installation completed successfully ******

(Please allow Installer a few moments to start up.....)

Additional information:
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* Did not create a cinder volume group, one already existed
* To use the command line tools you need to source the file /root/keystonerc_admin created on 192.168.122.100
* To use the console, browse to http://192.168.122.100/dashboard
* The installation log file is available at: /var/tmp/packstack/20130301-135443-qbNvvH/openstack-setup.log

That really is it – you didn’t need to touch any config files for OpenStack, QPid, MySQL or any other service involved. PackStack just worked its magic and there is now a 4 node OpenStack cluster up and running. One of the nice things about PackStack using Puppet for all its work is that if something goes wrong halfway through, you don’t need to throw it all away – just fix the issue and re-run packstack, and it’ll do whatever work was left over from before.
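
A quick way to convince yourself the cluster really is wired together is to use the admin credentials file PackStack creates on the API host (a sketch – these are the Folsom-era command line tools):

$ ssh root@f18x86_64a
# . /root/keystonerc_admin
# keystone service-list
# nova-manage service list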

The results

Let’s see what’s running on each node. First, the frontend user facing node:

$ ssh root@f18x86_64a ps -ax
PID TTY      STAT   TIME COMMAND
1 ?        Ss     0:03 /usr/lib/systemd/systemd --switched-root --system --deserialize 14
283 ?        Ss     0:00 /usr/lib/systemd/systemd-udevd
284 ?        Ss     0:07 /usr/lib/systemd/systemd-journald
348 ?        S      0:00 /usr/lib/systemd/systemd-udevd
391 ?        Ss     0:06 /usr/bin/python -Es /usr/sbin/firewalld --nofork
392 ?        S<sl   0:00 /sbin/auditd -n
394 ?        Ss     0:00 /usr/lib/systemd/systemd-logind
395 ?        Ssl    0:00 /sbin/rsyslogd -n -c 5
397 ?        Ssl    0:01 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
403 ?        Ss     0:00 login -- root
411 ?        Ss     0:00 /usr/sbin/crond -n
417 ?        S      0:00 /usr/lib/systemd/systemd-udevd
418 ?        Ssl    0:01 /usr/sbin/NetworkManager --no-daemon
452 ?        Ssl    0:00 /usr/lib/polkit-1/polkitd --no-debug
701 ?        S      0:00 /sbin/dhclient -d -4 -sf /usr/libexec/nm-dhcp-client.action -pf /var/run/dhclient-eth0.pid -lf /var/lib/dhclient/dhclient-4d0e96db-64cd-41d3-a9c3-c584da37dd84-eth0.lease -cf /var/run/nm-dhclient-eth0.conf eth0
769 ?        Ss     0:00 /usr/sbin/sshd -D
772 ?        Ss     0:00 sendmail: accepting connections
792 ?        Ss     0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
800 tty1     Ss+    0:00 -bash
8702 ?        Ss     0:00 /usr/bin/python /usr/bin/glance-registry --config-file /etc/glance/glance-registry.conf
8745 ?        S      0:00 /usr/bin/python /usr/bin/glance-registry --config-file /etc/glance/glance-registry.conf
8764 ?        Ss     0:00 /usr/bin/python /usr/bin/glance-api --config-file /etc/glance/glance-api.conf
10030 ?        Ss     0:01 /usr/bin/python /usr/bin/keystone-all --config-file /etc/keystone/keystone.conf
10201 ?        S      0:00 /usr/bin/python /usr/bin/glance-api --config-file /etc/glance/glance-api.conf
13096 ?        Ss     0:01 /usr/bin/python /usr/bin/nova-api --config-file /etc/nova/nova.conf --logfile /var/log/nova/api.log
13103 ?        S      0:00 /usr/bin/python /usr/bin/nova-api --config-file /etc/nova/nova.conf --logfile /var/log/nova/api.log
13111 ?        S      0:00 /usr/bin/python /usr/bin/nova-api --config-file /etc/nova/nova.conf --logfile /var/log/nova/api.log
13120 ?        S      0:00 /usr/bin/python /usr/bin/nova-api --config-file /etc/nova/nova.conf --logfile /var/log/nova/api.log
13484 ?        Ss     0:05 /usr/bin/python /usr/bin/nova-consoleauth --config-file /etc/nova/nova.conf --logfile /var/log/nova/consoleauth.log
20354 ?        Ss     0:00 python /usr/bin/nova-novncproxy --web /usr/share/novnc/
20429 ?        Ss     0:03 /usr/bin/python /usr/bin/nova-cert --config-file /etc/nova/nova.conf --logfile /var/log/nova/cert.log
21035 ?        Ssl    0:00 /usr/bin/memcached -u memcached -p 11211 -m 922 -c 8192 -l 0.0.0.0 -U 11211 -t 1
21311 ?        Ss     0:00 /usr/sbin/httpd -DFOREGROUND
21312 ?        Sl     0:00 /usr/sbin/httpd -DFOREGROUND
21313 ?        S      0:00 /usr/sbin/httpd -DFOREGROUND
21314 ?        S      0:00 /usr/sbin/httpd -DFOREGROUND
21315 ?        S      0:00 /usr/sbin/httpd -DFOREGROUND
21316 ?        S      0:00 /usr/sbin/httpd -DFOREGROUND
21317 ?        S      0:00 /usr/sbin/httpd -DFOREGROUND
21632 ?        S      0:00 /usr/sbin/httpd -DFOREGROUND

Now the infrastructure node:

$ ssh root@f18x86_64b ps -ax
PID TTY      STAT   TIME COMMAND
1 ?        Ss     0:02 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
289 ?        Ss     0:00 /usr/lib/systemd/systemd-udevd
290 ?        Ss     0:05 /usr/lib/systemd/systemd-journald
367 ?        S      0:00 /usr/lib/systemd/systemd-udevd
368 ?        S      0:00 /usr/lib/systemd/systemd-udevd
408 ?        Ss     0:04 /usr/bin/python -Es /usr/sbin/firewalld --nofork
409 ?        S<sl   0:00 /sbin/auditd -n
411 ?        Ss     0:00 /usr/lib/systemd/systemd-logind
412 ?        Ssl    0:00 /sbin/rsyslogd -n -c 5
414 ?        Ssl    0:00 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
419 tty1     Ss+    0:00 /sbin/agetty --noclear tty1 38400 linux
429 ?        Ss     0:00 /usr/sbin/crond -n
434 ?        Ssl    0:01 /usr/sbin/NetworkManager --no-daemon
484 ?        Ssl    0:00 /usr/lib/polkit-1/polkitd --no-debug
717 ?        S      0:00 /sbin/dhclient -d -4 -sf /usr/libexec/nm-dhcp-client.action -pf /var/run/dhclient-eth0.pid -lf /var/lib/dhclient/dhclient-2c0f596e-002a-49b0-b3f6-5e228601e7ba-eth0.lease -cf /var/run/nm-dhclient-eth0.conf eth0
766 ?        Ss     0:00 sendmail: accepting connections
792 ?        Ss     0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
805 ?        Ss     0:00 /usr/sbin/sshd -D
8531 ?        Ss     0:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
8884 ?        Sl     0:15 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock --port=3306
9778 ?        Ssl    0:01 /usr/sbin/qpidd --config /etc/qpidd.conf
10004 ?        S<     0:00 [loop2]
13381 ?        Ss     0:02 /usr/sbin/tgtd -f
14831 ?        Ss     0:00 /usr/bin/python /usr/bin/cinder-api --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/api.log
14907 ?        Ss     0:04 /usr/bin/python /usr/bin/cinder-scheduler --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/scheduler.log
14956 ?        Ss     0:02 /usr/bin/python /usr/bin/cinder-volume --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
15516 ?        Ss     0:06 /usr/bin/python /usr/bin/nova-scheduler --config-file /etc/nova/nova.conf --logfile /var/log/nova/scheduler.log
15609 ?        Ss     0:08 /usr/bin/python /usr/bin/nova-network --config-file /etc/nova/nova.conf --logfile /var/log/nova/network.log

And finally one of the 2 compute nodes:

$ ssh root@f18x86_64c ps -ax
PID TTY      STAT   TIME COMMAND
  1 ?        Ss     0:02 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
315 ?        Ss     0:00 /usr/lib/systemd/systemd-udevd
317 ?        Ss     0:04 /usr/lib/systemd/systemd-journald
436 ?        Ss     0:05 /usr/bin/python -Es /usr/sbin/firewalld --nofork
437 ?        S<sl   0:00 /sbin/auditd -n
439 ?        Ss     0:00 /usr/lib/systemd/systemd-logind
440 ?        Ssl    0:00 /sbin/rsyslogd -n -c 5
442 ?        Ssl    0:00 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
454 ?        Ss     0:00 /usr/sbin/crond -n
455 tty1     Ss+    0:00 /sbin/agetty --noclear tty1 38400 linux
465 ?        S      0:00 /usr/lib/systemd/systemd-udevd
466 ?        S      0:00 /usr/lib/systemd/systemd-udevd
470 ?        Ssl    0:01 /usr/sbin/NetworkManager --no-daemon
499 ?        Ssl    0:00 /usr/lib/polkit-1/polkitd --no-debug
753 ?        S      0:00 /sbin/dhclient -d -4 -sf /usr/libexec/nm-dhcp-client.action -pf /var/run/dhclient-eth0.pid -lf /var/lib/dhclient/dhclient-ada59d24-375c-481e-bd57-ce0803ac5574-eth0.lease -cf /var/run/nm-dhclient-eth0.conf eth0
820 ?        Ss     0:00 sendmail: accepting connections
834 ?        Ss     0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
846 ?        Ss     0:00 /usr/sbin/sshd -D
9749 ?        Ssl    0:13 /usr/sbin/libvirtd
16060 ?        Sl     0:01 /usr/bin/python -Es /usr/sbin/tuned -d
16163 ?        Ssl    0:03 /usr/bin/python /usr/bin/nova-compute --config-file /etc/nova/nova.conf --logfile /var/log/nova/compute.log

All-in-all PackStack exceeded my expectations for such a young tool – it did a great job with a minimum of fuss and was nice & reliable at what it did too. The only problem I hit was forgetting to set SELinux permissive first, which was not its fault – this is a bug in Fedora policy we will be addressing – and it recovered from that just fine when I re-ran it after setting permissive mode.