Setting up a local caching proxy for Fedora YUM repositories

Posted: December 9th, 2015 | Filed under: Coding Tips, Fedora, OpenStack, Virt Tools

For my day-to-day development work I currently have four separate physical servers: one old x86_64 server for file storage, two new x86_64 servers and one new aarch64 server. Even with a fast fibre internet connection, downloading the never-ending stream of Fedora RPM updates takes non-negligible time. I also have cause to install distro chroots on a reasonably frequent basis for testing various things related to containers & virtualization, which involves yet more RPM downloads. So I decided it was time to investigate the setup of a local caching proxy for Fedora YUM repositories. I could have figured this out myself, but fortunately I knew that Matthew Booth had already set up exactly the kind of system I wanted, and he shared the necessary config steps that are outlined below.

The general idea is that we will reconfigure the YUM repository locations on each machine needing updates to point to a local apache server, instead of the Fedora mirror manager metalink locations. This apache server will be set up using mod_proxy to rewrite requests to point to the offsite upstream download location, but will also be told to use a local squid server to access the remote site, thereby caching the downloads.
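
In other words, each package download ends up flowing through the chain below, with squid answering repeat requests straight from its cache:

dnf/yum client -> http://<our-ip>/fedora/...                        (apache, mod_proxy)
apache         -> http://localhost:3128/                            (local squid proxy)
squid          -> http://dl.fedoraproject.org/pub/fedora/linux/...  (upstream, only on a cache miss)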

Apache setup

Apache needs to be installed, if not already present:

# dnf install httpd

A new drop-in config file for apache is created containing two types of mod_proxy directive. The ProxyPass directive tells apache that any requests for http://<our-ip>/fedora/* should be translated into requests to the remote site http://dl.fedoraproject.org/pub/fedora/linux/*. The ProxyRemote directive tells apache that it should not make direct connections to the remote site, but instead use the local proxy server running on port 3128. IOW, requests that would go to dl.fedoraproject.org will instead get sent to the local squid server.

# cat > /etc/httpd/conf.d/yumcache.conf <<EOF
ProxyPass /fedora/ http://dl.fedoraproject.org/pub/fedora/linux/
ProxyPass /fedora-secondary/ http://dl.fedoraproject.org/pub/fedora-secondary/
ProxyRemote * http://localhost:3128/
EOF

The ‘fedora-secondary’ ProxyPass is just there for my aarch64 machine – it is not required if you are x86_64 only.

The out-of-the-box SELinux configuration prevents apache from making network requests, so it is necessary to toggle an SELinux boolean flag before starting apache:

# setsebool httpd_can_network_relay=1
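
Note that without the -P flag the boolean change does not survive a reboot; to make it persistent, pass -P so it is also written to the on-disk policy:

# setsebool -P httpd_can_network_relay=1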

With that done, we can start apache and set it to run on future boots too:

# systemctl start httpd.service
# systemctl enable httpd.service

Squid setup

Squid needs to be installed, if not already present:

# dnf install squid

The out-of-the-box configuration for squid needs a few small tweaks to optimize it for YUM repo mirroring. The default cache replacement policy purges the least recently used objects from the cache. This is not ideal for YUM repositories – if a YUM update needs 100 RPMs downloading and only 95 of them fit in the cache, by the time the last package is downloaded we’ll be pushing the first package out of the cache again, which means the next machine will get a cache miss. The LFUDA policy keeps popular objects in the cache regardless of size and optimizes the byte hit rate at the expense of the object hit rate. Some RPMs can be really rather large, so the default maximum object size of 4 MB is totally inadequate; increasing it to 8 GB is probably overkill, but will ensure we always attempt to cache any RPM regardless of its size. The cache_dir directive tells squid to use the ‘aufs’ storage backend, which uses threads for accessing objects to give greater concurrency. The last two directives are critical: they tell squid not to cache the repomd.xml files, whose contents change frequently – without this you’ll often see YUM trying to fetch outdated repo data files which no longer exist.

# cat >> /etc/squid/squid.conf <<EOF
cache_replacement_policy heap LFUDA
maximum_object_size 8192 MB
cache_dir aufs /var/spool/squid 16000 16 256 max-size=8589934592
acl repomd url_regex /repomd\.xml$
cache deny repomd
EOF

With that configured, squid can be started and set to run on future boots:

# systemctl start squid.service
# systemctl enable squid.service
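
To sanity check that packages are really being cached, it is worth watching squid's access log (under /var/log/squid by default on Fedora) while a client machine downloads updates; a repeat download of the same RPM should show up as a TCP_HIT entry in the log rather than a TCP_MISS:

# tail -f /var/log/squid/access.log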

Firewall setup

If a firewall is present on the cache machine, it is necessary to allow remote access to apache. This can be enabled with a simple firewall-cmd instruction:

# firewall-cmd --add-service=http --permanent
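
Note that with --permanent the rule is only written to the persistent configuration, so if firewalld is already running it also needs a reload (or the same command repeated without --permanent) before the change takes effect:

# firewall-cmd --reload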

Client setup

With the cache server setup out of the way, all that remains is to update the Fedora YUM config files on each client machine to point to the local server. There is a convenient tool called ‘fedrepos’ which can do this, avoiding the need to open an editor and change the files manually.

# dnf install fedrepos
# fedrepos baseurl http://yumcache.mydomain/fedora --no-metalink

NB, on the aarch64 machine we need to point to fedora-secondary instead:

# fedrepos baseurl http://yumcache.mydomain/fedora-secondary --no-metalink

Replace ‘yumcache.mydomain’ with the hostname or IP address of the server running the apache+squid cache, of course. If the cache is working as expected you should see YUM achieve 100 MB/s download speeds when it gets a cache hit.
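
For reference, if you would rather edit the repo files by hand instead of using fedrepos, the end result in /etc/yum.repos.d/fedora.repo would look roughly like the sketch below. The exact path components vary between Fedora releases and repos, so treat it as illustrative only:

[fedora]
name=Fedora $releasever - $basearch
baseurl=http://yumcache.mydomain/fedora/releases/$releasever/Everything/$basearch/os/
#metalink=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearch
enabled=1
gpgcheck=1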

Faster rebuilds for python virtualenv trees

Posted: November 14th, 2014 | Filed under: Coding Tips, Fedora, OpenStack, Virt Tools

When developing on OpenStack, it is standard practice for the unit test suites to be run in a python virtualenv. With the Nova project at least, and I suspect most others too, setting up the virtualenv takes a significant amount of time as there are many packages to pull down from PyPI, quite a few of which need compilation too. Any time the local requirements.txt or test-requirements.txt files change, it is necessary to rebuild the virtualenv. This rebuild is an all-or-nothing kind of task, so it can be a significant time sink, particularly if you frequently jump between different code branches.

At the OpenStack design summit in Paris, Joe Gordon showed Matt Booth how to set up devpi and wheel to provide a cache of the packages that make up the virtualenv. Not only does this avoid the need to download the same packages from PyPI each time, it also avoids the compilation step, since the cache stores pre-built wheels for each python module. The end result is that it takes 20-30 seconds or less to rebuild a virtualenv instead of many minutes.

After a few painful waits for virtualenvs today, I decided to set it up too. I don’t like installing non-packaged software as root on my machines, so what follows is all done as a regular user account. The first step is to pull down the devpi and wheel packages from PyPI, telling pip to install them under $HOME/.local:

# pip install --user devpi
# pip install --user wheel

Since we’re using a custom install location, it is necessary to update your $HOME/.bashrc file with new $PATH and $PYTHONPATH environment variables and then source the .bashrc file:

# cat >> $HOME/.bashrc <<EOF
export PATH=\$PATH:$HOME/.local/bin
export PYTHONPATH=$HOME/.local/lib/python2.7/site-packages
EOF
# . $HOME/.bashrc

The devpi package provides a local server that will be used for downloads instead of directly accessing pypi.python.org, so this must be started:

# devpi-server --start
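
As a quick sanity check that the server is up and proxying PyPI, you can fetch the index URL we are about to point pip at; any HTTP client will do, for example:

# curl http://localhost:3141/root/pypi/+simple/pip/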

Both devpi and wheel integrate with pip, so the next setup task is to modify the pip configuration file:

# mkdir -p $HOME/.pip
# cat >> $HOME/.pip/pip.conf <<EOF
[global]
index-url = http://localhost:3141/root/pypi/+simple/
wheel-dir = /home/berrange/.pip/wheelhouse
find-links = /home/berrange/.pip/wheelhouse
EOF

We’re pretty much done at this point – all that is left is to prime the cache with all the packages that Nova wants to use:

# cd $HOME/src/cloud/nova
# pip wheel -r requirements.txt
# pip wheel -r test-requirements.txt
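
Once those commands finish, the wheelhouse directory configured in pip.conf above should be full of pre-built .whl files ready to be reused by future virtualenv builds:

# ls $HOME/.pip/wheelhouse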

Now if you run any command that would build a Nova virtualenv, you should notice it is massively faster:

# tox -e py27
# ./run_tests.sh -V

That is basically all there is to it. Blow away the virtualenv directories at any time and they’ll be repopulated from the cache. If the requirements.txt is updated with new deps, re-running the ‘pip wheel’ commands above will update the cache to hold the new bits.

That was all incredibly easy, so I’d highly recommend devs on any non-trivially sized python project make use of it. Thanks to Joe Gordon for pointing this out at the summit!