The Arch Linux developers created some fancy scripts to set up a build environment. It consists of a btrfs subvolume with a minimal Arch Linux installation (only packages from base-devel). A snapshot is created for each build, and the builds are executed in separate systemd-nspawn containers.
Basic Requirements:
We need a btrfs filesystem mounted somewhere in our system. If you do not have a free partition or logical volume, you can easily create a loopback image and mount that as a btrfs volume:
```
# create 75GB image, throw btrfs onto it and mount it
dd if=/dev/zero of=arch-test.btrfs bs=1G count=75
pacman -Syyu --noconfirm btrfs-progs
mkfs.btrfs arch-test.btrfs
mkdir /mnt/aur
mount arch-test.btrfs /mnt/aur/
```
First Installation:
We start with the installation of the Arch Linux devtools, the packaged collection of scripts for dealing with packages:
```
pacman -Syyu --noconfirm devtools
```
Now we define an environment variable with the path to our snapshots. This can be any writable subdirectory in the btrfs mount (our example is mounted at /mnt/aur). We will also set up the master image (the following block is intended to be run as a normal user, not root):
```
mkdir /mnt/aur/build_test
CHROOT=/mnt/aur/build_test
echo "export CHROOT=$CHROOT" >> ~/.bashrc
sudo mkarchroot $CHROOT/root base-devel
```
Now we can fetch a package source and build it (again as a normal user):
```
git clone https://aur.archlinux.org/puppetdb.git
cd puppetdb
makechrootpkg -c -r $CHROOT
```
This will spin up an nspawn container in a snapshot of our btrfs image, clean the build environment, build the package and place it in our current directory. If you run `ls` now you should see `puppetdb-2.3.8-2-any.pkg.tar.xz`. Awesome, the package is built.
Next step: automate this for a list of packages
I guess I don’t need to write much about that script as it is very simple:
```bash
#!/bin/bash
BASEDIR=/tmp/
CHROOT=/mnt/aur/build_test2

function clone_repo() {
  [ -n "$1" ] || exit
  cd "$BASEDIR" || exit
  rm -rf "$1"
  git clone "https://aur.archlinux.org/$1.git"
}

function build_package() {
  [ -n "$1" ] || exit
  cd "$BASEDIR/$1" || exit
  echo "$BASEDIR/$1"
  ls
  makechrootpkg -c -r $CHROOT
}

cd "$BASEDIR" || exit
while read -r package; do
  clone_repo "$package"
  if build_package "$package"; then
    echo -e "\033[01;32mwe just built $package for you\033[00m"
  else
    echo -e "\033[01;31mwe failed on $package /o\ \033[00m"
  fi
done < "$BASEDIR/packages"
```
This works really well, unless a package depends on another AUR package or needs an unknown GPG key to validate a signature. What comes next?
- Moving to Jenkins-CI as build System
- Move the package list to GitHub to allow collaboration
- Automatic fetching of GPG keys
- Resolving dependencies
- Support for concurrent builds
- Separate jobs + history for each package
- Pushing new packages to a mirror
Setup Jenkins-CI:
I plan to build packages, many many packages. Probably so many that I need to spread the load across multiple servers. Also, to make administration easier, a working web UI + API + proper build history would be cool. We use a GitHub repository that holds an aur-packages file with all packages we would like to build; we've also got a Groovy script for Jenkins:
```groovy
def PackagesFile = new File('/var/lib/jenkins/jobs/Arch Package Builder/workspace/aur-packages')
PackagesFile.eachLine { line ->
  packageName = line.trim()
  job("Arch_Package_${packageName}") {
    description("This Job builds the ${packageName} package for archlinux")
    concurrentBuild()
    label('master')
    scm {
      git {
        remote {
          name('origin')
          url("https://aur.archlinux.org/${packageName}.git")
        }
        branch('master')
        extensions {
          cleanBeforeCheckout()
        }
      }
    }
    triggers {
      scm('H/20 * * * *')
    }
    steps {
      shell("sudo /usr/bin/makechrootpkg -u -c -r /mnt/aur/build_test -l ${packageName}")
    }
    publishers {
      artifactDeployer {
        artifactsToDeploy {
          includes('*.pkg.tar.xz')
          remoteFileLocation("/var/www/archlinux/aur/os/x86_64/")
          failIfNoFiles()
          deleteRemoteArtifacts()
        }
      }
      postBuildScripts {
        steps {
          // add a sync() call, which may prevent a broken repo DB
          shell('sync;')
          // remove the old release from the repo DB, add the new one
          shell("/usr/bin/repo-add --remove --quiet /var/www/archlinux/aur/os/x86_64/aur.db.tar.gz /var/www/archlinux/aur/os/x86_64/${packageName}*.pkg.tar.xz")
          // delete the unneeded btrfs subvolume to free up disk space
          shell("sudo /usr/bin/btrfs subvolume delete /mnt/aur/build_test/${packageName}")
        }
        onlyIfBuildSucceeds(true)
      }
      // display fancy jokes and a picture of Chuck
      chucknorris()
    }
  }
  //queue("Arch_Package_${packageName}")
}
```
This is a standard Jenkins installation (via `pacman -Syu jenkins`) on a plain Arch Linux server. I only added a few plugins: GitHub Plugin, Artifact Deployer Plug-in, Job DSL.
Jenkins is configured with one job, “Arch Package Builder”, which gets notified about changes via GitHub's Jenkins hook on every commit. It then runs the Groovy script, which iterates over the text file from the repo. Jenkins configures one new job for every package, consisting of:
- A description
- A git repo at https://aur.archlinux.org
- A trigger which checks the repo for changes every 20 minutes
- A build step to actually build the package (with our well-known makechrootpkg command from earlier)
There are a few adjustments needed to the base nspawn image and the Jenkins host for everything to work properly (you can run `arch-nspawn /mnt/aur/build_test/root bash` to get a shell in the master image):
1) Enable 32bit Repos
It is possible that we will build something with 32-bit dependencies, so we need to enable the multilib repository in pacman (both on the host and in the image):
```
# on the host (the awk only prints the result, so write it back to apply it):
awk 'BEGIN { r=0 }
     /^#\[multilib\]/ { r=1; print "[multilib]"; next }
     r == 0 { print }
     r == 1 { sub(/^#/, ""); print; r=0 }' /etc/pacman.conf > /etc/pacman.conf.new \
  && mv /etc/pacman.conf.new /etc/pacman.conf

# then get a shell in the master image and run the same awk command there:
arch-nspawn /mnt/aur/build_test/root bash
```
2) Update sudoers to allow the jenkins user to run our tooling:
```
echo "jenkins ALL = NOPASSWD: /usr/bin/makechrootpkg" > /etc/sudoers.d/jenkins
echo "jenkins ALL = NOPASSWD: /usr/bin/arch-nspawn" >> /etc/sudoers.d/jenkins
echo "jenkins ALL = NOPASSWD: /usr/bin/btrfs" >> /etc/sudoers.d/jenkins
```
3) Tell gpg to automatically search for missing keys:
```
echo "keyserver-options auto-key-retrieve" >> /var/lib/jenkins/.gnupg/gpg.conf
```
4) Use more than one core for compilation (set it to the number of cores you have, or cores+1):
```
sed -i 's/#MAKEFLAGS="-j2"/MAKEFLAGS="-j9"/' /etc/makepkg.conf
```
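Hard-coding -j9 only fits one machine; here is a sketch that derives the value from the CPU count instead. It is demonstrated on a sample file (the file name is made up for the demo), since editing /etc/makepkg.conf needs root:

```shell
# derive the job count from the CPU count (cores + 1)
jobs=$(( $(nproc) + 1 ))

# demonstrate on a sample copy; on the real host and in the image,
# point the sed at /etc/makepkg.conf instead
printf '#MAKEFLAGS="-j2"\n' > /tmp/makepkg.conf.sample
sed -i "s/^#\?MAKEFLAGS=.*/MAKEFLAGS=\"-j${jobs}\"/" /tmp/makepkg.conf.sample
grep '^MAKEFLAGS' /tmp/makepkg.conf.sample
```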
Analyze the makechrootpkg command:
What Jenkins does is:
```
sudo /usr/bin/makechrootpkg -u -c -r /mnt/aur/build_test -l ${packageName}
```
This tells the command to use our master image (-r /path) as a base and create a snapshot from it; the snapshot is named after the package (-l ${packageName}). This means that every package gets its own snapshot, so we can run as many concurrent builds as we want. The -c option deletes any existing snapshot with the package name before the actual build, which guarantees that the build environment is fresh and clean. The -u flag tells makechrootpkg to update the system in the container before the actual packaging begins. This is a requirement of the Arch Linux packaging guidelines: every package has to be built against the latest dependencies.
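The per-package snapshots are what make concurrency safe. Here is a minimal sketch of the pattern, with a stub function standing in for the real `sudo makechrootpkg ... -l $pkg` call (the package names are just examples):

```shell
# stub for the real makechrootpkg invocation, so the pattern is visible
build_one() { echo "building $1 in its own snapshot"; }

# each job gets its own btrfs snapshot named after the package,
# so parallel builds cannot interfere with each other
for pkg in puppetdb puppet-agent; do
  build_one "$pkg" &
done
wait
echo "all builds finished"
```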
Final missing step: Create a Repository to serve the files (one mirror to rule them all!)
So, everything is awesome right now: we just push an updated package list to GitHub, which automatically adds a Jenkins job, builds the package and saves it in a safe directory. Now I want to make this accessible to other Arch machines, so that the central build system is useful. The default Arch repositories only hold one version of each package; keeping a history is not supported. The Arch Linux Archive is a project that saves all versions of packages and offers snapshots for each day (more about that in an upcoming post). This allows users to safely down- and upgrade. (Keep in mind that PKGBUILDs declare dependencies on other packages, but usually without specifying a version number, so a package always depends on the newest version of each dependency that was available at the release date of the package itself.)
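As a sketch of what that history buys us: the Arch Linux Archive keeps every released package file, so a downgrade is just installing an older version by URL. The package name and version below are made-up examples; check archive.archlinux.org for real ones:

```shell
# build the archive URL for a specific (example) package version;
# the archive lays packages out as packages/<first letter>/<name>/
pkg=puppetdb
ver=2.3.8-1
url="https://archive.archlinux.org/packages/${pkg:0:1}/${pkg}/${pkg}-${ver}-any.pkg.tar.xz"
echo "$url"
# on a real system you would then run: sudo pacman -U "$url"
```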
I also want to serve the packages from my Jenkins as an ALA repository to have a history. The goal is to have one ALA server serving all repositories: the default ones and our custom AUR repository. The Arch Linux Archive tools can only sync from one central place, so we need to set up a normal Tier 2 Arch Linux mirror, add our AUR repository to it and then use that as the source for the ALA.
We start with the installation of a sync script (save it at /usr/local/bin/syncrepo):
```bash
#!/bin/bash
##
# created by bluewind
# modified by bastelfreak
##
home="/var/www"
target="${home}/archlinux"
tmp="${home}/tmp/"
lock='/tmp/mirrorsync.lck'
#bwlimit=4096

# NOTE: you very likely need to change this since rsync.archlinux.org requires you to be a tier 1 mirror
source='rsync://mirror.selfnet.de/archlinux'
lastupdate_url="http://mirror.selfnet.de/archlinux/lastupdate"

[ ! -d "${target}" ] && mkdir -p "${target}"
[ ! -d "${tmp}" ] && mkdir -p "${tmp}"

exec 9>"${lock}"
flock -n 9 || exit

# if we are called without a tty (cronjob) only run when there are changes
if ! tty -s && diff -b <(curl -s "$lastupdate_url") "$target/lastupdate" >/dev/null; then
  exit 0
fi

if ! stty &>/dev/null; then
  QUIET="-q"
fi

rsync -rtlvH --safe-links --delete-after --progress -h ${QUIET} --timeout=600 --contimeout=60 -p \
  --delay-updates --no-motd \
  --temp-dir="${tmp}" \
  --exclude='*.links.tar.gz*' \
  --exclude='/other' \
  --exclude='/sources' \
  --exclude='/aur' \
  ${source} \
  "${target}"

#echo "Last sync was $(date -d @$(cat ${target}/lastsync))"
```
Then we add a systemd service and timer:
```
root@ci ~ # systemctl cat syncrepo.{service,timer}
# /etc/systemd/system/syncrepo.service
[Unit]
Description=Sync Archlinux Repo

[Service]
Type=oneshot
ExecStart=/usr/local/bin/syncrepo

# /etc/systemd/system/syncrepo.timer
[Unit]
Description=Hourly Archlinux Mirror Update

[Timer]
OnCalendar=hourly
AccuracySec=1m
Persistent=true

[Install]
WantedBy=timers.target
```
This will sync every official repository to /var/www/archlinux every hour (after a `systemctl daemon-reload`, enable it with `systemctl enable --now syncrepo.timer`). You can now set up your most loved webserver (nginx) to point to /var/www and serve the content. Next we install the archivetools (as a non-root user):
```
git clone https://github.com/seblu/archivetools.git
cd archivetools/
makepkg -i
sudo sed -i "s|#ARCHIVE_RSYNC=.*|ARCHIVE_RSYNC='/var/www/archlinux/'|" /etc/archive.conf
sudo systemctl enable archive.timer
sudo systemctl start archive.service
```
The last command starts the initial sync; this will take some time depending on the speed of your internet connection. The timer runs once a day and syncs your mirror to the ALA. You can now set up another vhost to serve /srv/archive. There it is: the Arch Linux Archive, including our AUR repository \o/
Dealing with AUR build dependencies:
It is possible that an AUR package needs another AUR package during the build. This presents two issues:
- Detect the dependency and add it to the aur-packages file
- Get the package into the nspawn container so it can be installed
Job one is handled by a fancy Ruby script; you can use it as a pre-commit hook for the repo that stores the aur-packages file:
```ruby
#!/usr/bin/ruby
##
# Written by bastelfreak
##
# enable your local multilib package
##
require 'json'
require 'net/http'

@path = 'aur-packages'
@aur_packages = []
@aur_url = 'https://aur.archlinux.org/rpc/?v=5&type=info&arg[]='
@http = ''
@matches = []

# read files
def get_all_packages path
  @aur_packages = []
  f = IO.readlines path
  f.each do |line|
    line.delete!("\n")
    @aur_packages << line
  end
end

def aur_api_connect
  uri = URI.parse(@aur_url)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  @http = http
end

def get_deps_for_package package
  uri = URI.parse("#{@aur_url}#{package}")
  res = @http.request(Net::HTTP::Get.new(uri.request_uri))
  ary = JSON.load(res.body)['results']
  #ary[0].key?("Depends") ? ary[0]["Depends"] : ''
  ary.length > 0 ? ary[0]["Depends"] : ''
end

def is_no_official_package? package
  !system("pacman -Ssq #{package}", :out => File::NULL)
end

def add_deps deps
  # unless deps.nil?
  deps.each do |dep|
    add_dep dep
  end
  # end
end

def add_dep dep
  dep = dep.slice(/^[a-zA-Z0-9@.+_-]+/)
  puts "processing dep #{dep}"
  if (is_no_official_package?(dep)) && (!@aur_packages.include? dep)
    puts "found dep #{dep}"
    #@aur_packages << dep
    @matches << dep
  end
end

def get_all_deps_for_every_package
  @aur_packages.each do |package|
    puts "processing package #{package}"
    deps = get_deps_for_package package
    add_deps deps if deps.is_a? Array
  end
end

def cycle_until_all_deps_are_found
  get_all_deps_for_every_package
  if @matches.length > 0
    puts "we found one or more deps, adding them to the file and rescan"
    @matches = @matches.uniq
    @aur_packages = @matches
    File.open(@path, "a") do |f|
      f.puts(@matches)
    end
    @matches = []
    cycle_until_all_deps_are_found
  end
end

# let the magic happen
aur_api_connect
get_all_packages @path
cycle_until_all_deps_are_found
```
Fixing the second issue: we now have an AUR repo \o/ so we just add it to /etc/pacman.conf in the btrfs base image. The content is:
```
[aur]
SigLevel = Optional TrustAll
Server = http://mirror.virtapi.org/archlinux/$repo/os/$arch/
```
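pacman expands the $repo and $arch variables in the Server line by itself, and the result maps straight onto the directory the Jenkins job deploys into (/var/www/archlinux/aur/os/x86_64/). A quick emulation of that expansion, using sed for illustration:

```shell
# emulate pacman's variable substitution on the Server line
repo=aur
arch=x86_64
url=$(echo 'http://mirror.virtapi.org/archlinux/$repo/os/$arch/' \
  | sed -e "s|\$repo|$repo|" -e "s|\$arch|$arch|")
echo "$url"   # http://mirror.virtapi.org/archlinux/aur/os/x86_64/
```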
Conclusion:
And we are finally done \o/
We now have a sweet Jenkins-CI that builds all the packages we want. They are automatically pushed to a mirror, and adding new packages is done via GitHub PRs. The Arch Linux Archive project keeps a nice history of all packages, which allows us to easily downgrade and upgrade to other package versions.
Sources:
- Check out the amazing Jenkins-CI API documentation
- Big thanks to Bluewind who helped me out with the Arch build details