Vox Pupuli Tasks – the solution for managing your Git modules!

Who isn’t familiar with the following problem? It’s Sunday afternoon and you’re bored. You want to do something useful, you want to contribute to the community – you want to review a pull request!

This scenario might be more or less familiar to you. But how do you start? How do you identify a suitable pull request to review? I spend most of my time working in the Puppet community Vox Pupuli, which currently has 518 open pull requests. Not all of them are in a state where you can or should review code. A few examples:

  • The PR is already reviewed and the author needs to add tests, but the code is fine
  • The PR is already reviewed and the author needs to document their new feature, the code is fine
  • The PR is already reviewed and the author needs to update the code
  • The PR is already reviewed and the author needs to update the code because it breaks existing tests
  • The author needs to update the code because it breaks existing tests
  • The PR has merge conflicts that must be resolved

It’s possible to spend an hour just checking existing PRs, only to notice that all of them are already reviewed and/or waiting for the author to work on them. This is frustrating and wastes the rare time open source collaborators have. None of the above is Puppet- or Vox Pupuli-specific; this happens in every bigger community or company. Besides that, there are also some Puppet-specific tasks that maintainers need to do on a regular basis (yak shaving):

  • Drop EOL Puppet versions from metadata.json
  • Add new Puppet versions to metadata.json
  • Drop EOL Linux versions + their tests + their code
  • Implement support for new Linux versions
  • Ensure dependencies are up to date and resolve cleanly

This list is far from complete, but it provides a good overview of the recurring tasks. They all consume time that could be spent reviewing new code.

My friend Robert Müller and I identified these pain points because we suffer from them every day. Together we want to automate as many of these jobs as possible so that Vox Pupuli collaborators can be more productive and efficient – so we started Vox Pupuli Tasks.

All Vox Pupuli modules are hosted on GitHub, and GitHub has this nice concept of GitHub Apps: web services that get notified about every action that happens in a specific GitHub namespace and that can respond to those actions. We (mostly Robert, because he’s awesome) created a Ruby on Rails application that receives these GitHub events. Based on that, we currently solve two and a half use cases:

Broken test suite

GitHub does not notify anybody if a check fails, and Travis CI does not notify the pull request author either. Many people are capable of and interested in fixing their PR; they just don’t know it’s broken. So normally a collaborator checks a PR, sees it has a failed CI job and writes a comment so the PR author gets notified. The community has a guideline to also attach the tests-fail label. That’s partly helpful: based on the label we can filter PRs in the GitHub search UI. But! The labels don’t disappear after the CI is working again, so on its own this approach is pretty much useless. Our Rails app gets notified about all pull requests and all changes related to them:

  • If a PR has a successful CI run, we check if the tests-fail label is attached. If so, we remove it.
  • If the PR has a running CI job, we schedule a job that checks the CI status again in a minute.
  • If CI failed, we attach the tests-fail label. This makes it easy to filter the PRs, but it does not notify the PR author, so we also write a comment into the PR once (a rough sketch of the whole logic follows below).
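Under the hood this is the Rails application talking to the GitHub API. Reduced to plain shell, the label handling boils down to something like the following sketch – repository, PR number, token and comment text are made-up placeholders, and the real app does all of this from background jobs rather than curl:

repo='voxpupuli/puppet-example' # placeholder
pr=123                          # placeholder
auth="Authorization: token $GITHUB_TOKEN"
# combined CI status of the PR's head commit
sha=$(curl -s -H "$auth" "https://api.github.com/repos/$repo/pulls/$pr" | jq -r .head.sha)
state=$(curl -s -H "$auth" "https://api.github.com/repos/$repo/commits/$sha/status" | jq -r .state)
if [ "$state" = 'success' ]; then
  # CI is green (again) -> make sure the label is gone
  curl -s -X DELETE -H "$auth" "https://api.github.com/repos/$repo/issues/$pr/labels/tests-fail"
elif [ "$state" = 'failure' ]; then
  # CI failed -> attach the label and notify the author once
  curl -s -X POST -H "$auth" -d '{"labels":["tests-fail"]}' "https://api.github.com/repos/$repo/issues/$pr/labels"
  curl -s -X POST -H "$auth" -d '{"body":"The CI failed, please have a look."}' "https://api.github.com/repos/$repo/issues/$pr/comments"
fi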

Merge conflicts

Pretty much the same as above. PR authors do not get notified if their pull request has a merge conflict. We normally attach the has-conflicts label, but that’s pointless if the labels don’t get removed automatically. Our Rails application again adds or removes the label and writes a comment. If one pull request gets merged, we also trigger a job for all open pull requests in the same module, because a merge can introduce conflicts in them.
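Detecting the conflict itself is straightforward, because GitHub exposes a mergeable flag for every pull request. Again just a curl sketch with a placeholder repository and PR number (the field can also be null while GitHub is still computing it):

curl -s -H "Authorization: token $GITHUB_TOKEN" \
  "https://api.github.com/repos/voxpupuli/puppet-example/pulls/123" | jq .mergeable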

Yak Shaving

Every Puppet module provides parsable information. We’ve got the metadata.json that lists supported OS and Puppet versions. Our .msync.yml tracks the version of modulesync_config that was applied last. .travis.yml lists the complete test suite, with two acceptance test jobs for each supported mainstream OS. We can automatically parse this data, at least create statistics about it, and inform people when they need to act. In the past we had a very ugly Ruby script for this, which I wrote some time ago. Now, with the Rails application, we can act upon changes in the code. We already render some nice statistics so people know when and where to act, but it’s not yet as automated as it could be. All checks are currently organized in a plugin system and can be enabled or disabled. This way the application isn’t hardcoded into the Puppet ecosystem and stays flexible.
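To give an idea of what “parsable” means: for example, the supported Puppet version range and the supported operating systems can be pulled straight out of a module’s metadata.json. A quick sketch with jq of what such a check has to do (the real checks live in the plugin system mentioned above):

# supported Puppet version range of the module
jq -r '.requirements[] | select(.name == "puppet") | .version_requirement' metadata.json
# list of supported operating systems
jq -r '.operatingsystem_support[].operatingsystem' metadata.json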

VPT Status Quo

So what’s the current status of VPT (Vox Pupuli Tasks)? The features above are currently live for all Vox Pupuli Puppet modules (not for all Vox Pupuli git repositories). This already helps a lot while reviewing pull requests. We have a customized GitHub search query that filters out all PRs where a collaborator is waiting for the PR author. This reduces the number of open PRs from 518 down to 175!
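I won’t reproduce the exact query here, but the idea is simply to exclude every open PR that carries one of the “waiting for the author” labels, roughly along these lines:

is:pr is:open org:voxpupuli -label:tests-fail -label:has-conflicts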

What’s next?

We want people to use this application. This requires two things:

  • Working documentation about the feature set and the installation
  • The application needs to be portable and not tied to Vox Pupuli

Robert and I are currently working on both parts. New users also mean new collaborators, and we’re always happy when people jump in and develop together with us. Our goal is to provide a generic Rails application that can act on events from version control systems. We’re currently working with two companies that might use it. We want to make the application modular so that everybody can use it, no matter whether you maintain Puppet modules on github.com, Python projects on your internal GitLab, or Salt modules on your GitHub Enterprise instance.

Roadmap??

We currently don’t have a fixed roadmap, but we want one! There are already a bunch of open issues with features and enhancements that we want to implement. We’re currently in contact with potential users and are identifying their hard and soft requirements. Based on that, we want to create a loose roadmap on GitHub. Are you missing a specific feature? Create an issue!

Do you have feedback? Write to @voxpupuliorg, Robert or me directly on Twitter, or respond here with a comment. And most importantly: don’t forget to check out https://voxpupu.li/! (The Rails app I’ve just been writing about for an hour.)

Posted in General, Linux, Puppet

Dovecot: Apply sieve filter to existing emails

I recently restructured my email setup and updated my sieve filters (server-side email filtering). I now have a sieve configuration file that’s way stricter. Many of the emails in my INBOX would now be sorted into subfolders, but Dovecot applies the rules only to incoming messages.

Luckily, Dovecot provides a tool to apply existing rules to existing emails!

sieve-filter -v -C -u tim@bastelfreak.de /home/vmail/bastelfreak.de/tim/.dovecot.sieve INBOX

This simulates applying the filters from the .dovecot.sieve file to the INBOX folder of the tim@bastelfreak.de account. It won’t actually modify any emails. If the output looks good, you can add the -W and -e options. With -e, the emails are processed and, if a rule demands it, copied to another folder. With -W, the source folder is modified as well. This means: if you add -e but not -W, you will end up with the email in two folders.
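So once the dry run looks good, the full command for my setup becomes the following (same script and mailbox as above, now actually moving the mails):

sieve-filter -v -C -W -e -u tim@bastelfreak.de /home/vmail/bastelfreak.de/tim/.dovecot.sieve INBOX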

Posted in General, Linux, Short Tips

Arch Linux installation guide

A long time ago I wrote a blog post about installing Arch Linux.

I’m aware that there isn’t one definitive guide for installing it; it highly depends on your hardware, your use case for the system and the desired software. Nevertheless, I thought I’d update my previous article, because:

  • it’s an easy topic to get back into blogging with!
  • I’m lazy and like to have a copy and paste script :P

Assumptions

  • We install a headless server that’s accessible via DHCP/SSH
  • It’s booted via BIOS mode, no UEFI
  • the OS will be installed on /dev/sda

Bootstrap the base system

# clean old partition tables / filesystem information
wipefs --all /dev/sda
# create a fresh GPT partition table
parted /dev/sda --script mklabel gpt
# parted won't format the partitions; you have to specify a filesystem,
# but the attribute will be ignored
# bios boot partition
parted /dev/sda --script mkpart primary ext3 2048s 4095s
# /boot
parted /dev/sda --script mkpart primary ext3 4096s 1953791s
# rest for LVM
parted /dev/sda --script mkpart primary ext3 1953792s 100%
parted /dev/sda --script set 1 bios_grub on
# setup LVM
pvcreate /dev/sda3
vgcreate vg0 /dev/sda3
lvcreate --size 50G --name root vg0
mkfs.ext4 -v /dev/sda2
mkfs.ext4 -v /dev/mapper/vg0-root
mount /dev/mapper/vg0-root /mnt
mkdir /mnt/boot
mount /dev/sda2 /mnt/boot
# use own mirror with aur repository
echo 'Server = http://mirror.virtapi.org/archlinux/$repo/os/$arch/' > /etc/pacman.d/mirrorlist
# update local repository cache
pacman -Syy
# install base OS
pacstrap /mnt base base-devel vim htop grub openssh linux-hardened linux-hardened-docs linux-hardened-headers linux linux-docs linux-firmware linux-headers linux-lts linux-lts-headers linux-lts-docs lvm2 inetutils man
# generate fstab
genfstab -U /mnt >> /mnt/etc/fstab

chroot into the new system

arch-chroot /mnt
# set your favourite hostname
echo myawesomebox.mylocaltld > /etc/hostname
echo "en_US.UTF-8 UTF-8" > /etc/locale.gen
locale-gen
echo LANG=en_US.UTF-8 > /etc/locale.conf
echo KEYMAP=de > /etc/vconsole.conf
# setup a sane list of hooks
sed -i 's/^HOOKS=.*/HOOKS=(base systemd keyboard sd-vconsole autodetect modconf block sd-lvm2 filesystems fsck)/' /etc/mkinitcpio.conf
# you can omit the virtio* modules if this isn't a VM
sed -i 's/^MODULES=.*/MODULES=(ext4 vfat usbhid hid igb virtio_balloon virtio_console virtio_scsi virtio_net virtio_pci virtio_ring virtio ne2k_pci)/' /etc/mkinitcpio.conf
# regenerate initrds with all hooks + modules
mkinitcpio --allpresets
# start basic systemd services after first boot
systemctl enable sshd systemd-networkd systemd-resolved systemd-networkd-wait-online
# install grub + create grub config
grub-install /dev/sda
sed -i 's/quiet/quiet nomodeset/' /etc/default/grub
sed -i 's/.*GRUB_DISABLE_LINUX_UUID.*/GRUB_DISABLE_LINUX_UUID=false/' /etc/default/grub
echo 'GRUB_DISABLE_SUBMENU=y' >> /etc/default/grub
sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=10/' /etc/default/grub 
grub-mkconfig -o /boot/grub/grub.cfg
# setup ssh login
sed -i 's/#PermitRoot.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
mkdir /root/.ssh
curl https://github.com/bastelfreak.keys --silent > /root/.ssh/authorized_keys
sed -i 's/^/no-port-forwarding,no-X11-forwarding,no-agent-forwarding /g' /root/.ssh/authorized_keys
# change root password
echo 'root:YOURNEWPASSWORD' | chpasswd
# create a networkd config with DHCP
dev=$(ip route show default | awk '/default/ {print $5}')
mac=$(cat "/sys/class/net/${dev}/address")
{
echo '[Match]'
echo "MACAddress=${mac}"
echo ''
echo '[Network]'
echo 'DHCP=yes'
} > /etc/systemd/network/wired.network
# Add AUR repository
{
echo ''
echo '[aur]'
echo 'SigLevel = Optional TrustAll'
echo 'Include = /etc/pacman.d/mirrorlist'
} >>  /etc/pacman.conf
# Done \o/
exit
umount /mnt/boot
umount /mnt
sync
reboot

I hope that this basic setup might be useful for at least a few people. There isn’t anything special about it, but I tried to automate all the interactive parts away. You might like this for your own scripts.

Posted in General, Linux

Vox Pupuli – Code of the Week 2

This is a new blog series that we would like to introduce. At Vox Pupuli, we receive many pull requests. We receive so much awesome code that doesn’t get the appreciation it deserves. In the past, some of our members sent out tweets when they saw cool code. We now want to start a blog series with regular updates to appreciate the work of our collaborators and contributors.

We continue in week two with an unusual, but awesome, contribution. A long time ago emlun007 created a very fancy logo for our puppetboard project! Sadly, this contribution slipped through:

[Image: the 8-bit Vox Pupuli logo created by emlun007]

Another collaborator, Sebastian Rakel, was so kind as to separate the image from the background (he kind of patched it – this pun is required to qualify it for the Code of the Week series):

[Image: the same logo with the background removed]

We receive a lot of actual Puppet code as contributions, but there are so many other things that are important as well, such as documentation updates or logos. Did you know that we have a dedicated repository for all our logos and images?

Thank you emlun007 and Sebastian for this cool contribution!

Did you spot some cool code as well? Write a short blog post, send an email to our Project Management Committee, or just drop a note in our IRC channel #voxpupuli on Freenode.

I also published this post at https://voxpupuli.org/blog/2019/01/07/code-of-the-week/

Posted in General, Linux, Puppet

Thunderbird: Hide local hostname in mail headers

By default, Thunderbird uses the local hostname in the SMTP submission dialogue with the mailserver. There might be situations where your hostname exposes private data, like a company name. Sometimes this is very helpful for debugging, but sometimes you want to hide it. You can configure this in Thunderbird:

Thunderbird Menu -> Preferences -> Preferences -> Advanced -> General -> Config Editor

In this window, you can create a new config entry named mail.smtpserver.default.hello_argument with the String datatype. As the value, you can set any string. Something like `localhost` might even look valid to other mailserver administrators.
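If you prefer editing files over clicking, the same setting ends up as a single line in the prefs.js of your Thunderbird profile (a sketch – the Config Editor writes exactly this entry for you; only edit prefs.js while Thunderbird is closed):

user_pref("mail.smtpserver.default.hello_argument", "localhost");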

The old mail header:

Received: from normalhostname (unknown [IPv6:*removed*])
	by example.com (Postfix) with ESMTPSA id 9E4302FE07
	for <user@example.com>; Sat,  5 Jan 2019 01:50:25 +0100 (CET)

The new header entry:

Received: from localhost (unknown [IPv6:*removed*])
	by example.com (Postfix) with ESMTPSA id C5151300C5
	for <user@example.com>; Sat,  5 Jan 2019 01:41:39 +0100 (CET)
Posted in General, Linux, Short Tips

Vox Pupuli – Code of the Week 1

This is a new blog series that we would like to introduce. At Vox Pupuli, we receive many pull requests. We receive so much awesome code that doesn’t get the appreciation it deserves. In the past, some of our members sent out tweets when they saw cool code. We now want to start a blog series with regular updates to appreciate the work of our collaborators and contributors.

We start with calendar week 1 and a contribution from Paul Hoisington: he created a pull request for our yum module.

A breakdown of his code:

$_pc_cmd = [
  '/usr/bin/package-cleanup',
  '--oldkernels',
  "--count=${_real_installonly_limit}",
  '-y',
  $keep_kernel_devel ? {
    true    => '--keepdevel',
    default => undef,
  },
].filter |$val| { $val =~ NotUndef }

We have an array of values and a selector statement (which deserves a dedicated post). We don’t know which value the selector will return; Undef is possible, but not desired. We need to filter out anything in the array that is Undef. For newcomers: Undef is very similar to nil in Ruby. So Paul applies the filter method to the array. It applies a lambda to each value; every value for which the lambda returns true ends up in the new, filtered array.

Now the cool part: NotUndef! Paul uses =~ to match the values against the NotUndef datatype. In most languages, =~ is used to match against a regular expression. In Puppet, you can also evaluate against a datatype like String, Array or Undef (or many more). The NotUndef type is a meta type that matches every type except Undef. This type is rarely used and not well known, but still super helpful!
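A stripped-down example (nothing module-specific, just an array, the filter lambda and NotUndef) makes the effect visible:

$values = ['--oldkernels', undef, '-y']
$clean  = $values.filter |$val| { $val =~ NotUndef }
notice($clean) # => ['--oldkernels', '-y']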

Thank you Paul for this cool contribution!

Did you spot some cool code as well? Write a short blog post, send an email to our Project Management Committee, or just drop a note in our IRC channel #voxpupuli on Freenode.

I also published this post at https://voxpupuli.org/blog/2019/01/01/code-of-the-week/

Posted in General, Linux, Puppet

I’m alive!

Hi everybody. Yes, this blog is still alive, and so am I!

In the past two years I was very busy working in an open source community – https://voxpupuli.org. Besides that, I had to invest a huge amount of time in my evening classes. When I wrote the last post, I was very busy with my bachelor thesis. The degree took eight terms; the thesis had to be written in the fifth and sixth semesters.

Datawarehousing for Cloud Computing Metrics

That is the title of my thesis. You can find the paper here and the slides for the presentation here (also check out the other talks I did in the past years). Sadly, we had to write the paper in German. Yes, we. My degree is Staatlich geprüfter Informatiker für technische Informatik, or in English, equivalent to a Bachelor of Engineering in Computer Science. The degree qualifies you to be a team leader for technical teams with experts from different areas, such as database management, software engineering and networking. To prove that each student not only has knowledge in those expert fields, but can also lead a team, we had to write the thesis as a group – in my case, a group of three people. The thesis was one of the best in our class and was graded at more than 92%.

The project itself, the documentation and the presentation were finished last summer (2017). Afterwards I had to prepare for the three final exams. I completed the whole further education this summer (2018) with an average grade of 1.7 (where 1 is the best and 6 the worst grade). This was also the best result in the class.

This small FOSS thing…

I think in 2015 Spencer Krum asked me if I would like to get merge access to the Vox Pupuli organisation, and to help out maintaining the CI infrastructure. Half a year later, this happened: https://blog.bastelfreak.de/2016/05/one-month-in-open-source-projects

Now some time has passed. Vox Pupuli is joining the Software Freedom Conservancy, and our small group of hackers and enthusiasts has evolved into a real community with over 120 people. We now have a proper governance document (based on the great model from the OpenStack Foundation). The community now has a Project Management Committee – the PMC. I’m one of the founding members and got elected for the third time in a row a few days ago \o/

In the past two years, my university allowed me to give a lot of talks (check out the ones prefixed with computer science). Besides that, I had the honour to speak at multiple conferences. All talks are available at the link above. Four are even available on YouTube!

Thank you GoDaddy EMEA!

Last but not least, I want to say thank you to GoDaddy. My team has been very supportive during the past four years. Evening classes took a lot of time and I got special treatment – for example, I was the only person in an Ops/SRO team that didn’t have to be on call. Also, my current boss allows me to visit all the awesome conferences, not only to represent the company, but also to exchange knowledge and to improve my soft skills.

I’m very happy to work for GoDaddy and I’m looking forward to new opportunities as a conference speaker in 2019. Also, I want to start blogging on a regular basis again!

Posted in General

Short Tip: Dealing with FreeBSD for noobs

I had the pleasure of working with FreeBSD again in the past week. I have not touched it in over a year and would consider myself a Linux-only person. I designed a new backup infrastructure which requires a FreeBSD server. Here are some of the issues I had to deal with during the installation.

Starting the Installer
So I am working in a datacenter, and walking to the server and inserting a USB drive really sucks, so I need PXE. Booting the official FreeBSD images via PXE did not work out. mfsBSD is a collection of scripts to create a FreeBSD live image that can be booted via PXE; I have used it successfully in the past. Right now FreeBSD 11 support isn’t completely implemented, but my friend foxxx0 was so kind as to test and debug mfsBSD until he could build a working image for me.

Enterprise Filesystem vs Enterprise Hardwareraid
The server is equipped with an LSI RAID controller. FreeBSD ships the awesome ZFS filesystem; to use its full potential you need to pass all drives through to the operating system and not create a hardware RAID. Sadly, the damn LSI 9271 controller is unable to do that (while every cheap controller at 30% of the price can handle it…). As a workaround I had to create a RAID 0 for each drive. This can be achieved with the following command:
storcli /c0 add vd each type=raid0 pdcache=off

Getting more writable space into mfsBSD
The image I got from foxxx0 was around 87MB. It gets loaded into RAM and is writable, however the partitions are filled to nearly 100% after boot. I was too lazy to increase them (meaning: I had no clue how to do that, and I am really, really lazy). Instead, we can create a memory-backed filesystem and mount it somewhere:

mkdir /usr/ports
mdmfs -s 500m md1 /usr/ports

This creates a 500MB memory-backed filesystem mounted at /usr/ports.

Trying to Build Software in a fresh System
I needed to build storcli, because it isn’t available as a package. This requires a few steps. We did the following in a booted mfsBSD (after creating the memory filesystem in /usr/ports to get some free space):

portsnap fetch extract
cd /usr/ports/sysutils/storcli
make config fetch checksum depends extract patch configure build
make install

We were unable to run the commands without failures. After fiddling around with it for quite a while, we tried the same on an already installed system and it worked. mfsBSD isn’t recognized as a proper FreeBSD during compilation, which leads to several errors.

Fixing bsdinstall
FreeBSD ships a tool called bsdinstall, an ncurses-based installer. It is broken in FreeBSD 11 on mfsBSD: it needs a MANIFEST file which isn’t present in the mfsBSD image. This can easily be fixed by running:

wget ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/11.0-RELEASE/MANIFEST -O /usr/freebsd-dist/MANIFEST

After all this I was able to run the bsdinstall script and had a booting FreeBSD 11. More fun will come!

Posted in 30in30, General, Linux

Short Tip: Install shellcheck on an outdated CentOS

I have to install shellcheck on a CentOS 7 box, which is the latest CentOS version. The tool is a great linter for shell scripts that I want to integrate into our CI pipeline. shellcheck isn’t packaged, so I will build it from source. From their install guide:

yum -y install cabal-install
su jenkins
cd ~
cabal update
cabal install shellcheck

This will fail because the cabal library is outdated. Remember, this is the latest CentOS….
Luckily, I can build a newer cabal version with cabal itself:

cabal install cabal

After that step we can successfully build shellcheck:

cabal install shellcheck

The shellcheck binary is now available at ~/.cabal/bin/shellcheck.
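To actually use it from a CI job, it’s enough to put that directory on the PATH; the script name below is just a placeholder:

export PATH="$HOME/.cabal/bin:$PATH"
shellcheck deploy.sh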

Posted in General, Linux, Short Tips

Short Tip: Installing msgpack on outdated boxes (ruby1.9.1)

I’m using msgpack to serialize the data between my Puppet agents and the masters. Recently I had to puppetize an old Debian Wheezy box, where I have to install msgpack first:

# gem install msgpack
Building native extensions. This could take a while...
ERROR: Error installing msgpack:
ERROR: Failed to build gem native extension.

/usr/bin/ruby1.9.1 extconf.rb
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- mkmf (LoadError)
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from extconf.rb:1:in `<main>'

Gem files will remain installed in /var/lib/gems/1.9.1/gems/msgpack-1.0.0 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/msgpack-1.0.0/ext/msgpack/gem_make.out

This error looks strange, but it just means the Ruby development headers are missing; installing ruby1.9.1-dev fixes it.
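On Debian Wheezy that is a single package install away:

apt-get install ruby1.9.1-dev

Then we get to this: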

# gem install msgpack
Building native extensions. This could take a while...
ERROR: Error installing msgpack:
ERROR: Failed to build gem native extension.

/usr/bin/ruby1.9.1 extconf.rb
checking for ruby/st.h... yes
checking for st.h... yes
checking for rb_str_replace() in ruby.h... yes
checking for rb_intern_str() in ruby.h... yes
checking for rb_sym2str() in ruby.h... no
checking for rb_str_intern() in ruby.h... yes
checking for rb_block_lambda() in ruby.h... no
checking for rb_hash_dup() in ruby.h... yes
checking for rb_hash_clear() in ruby.h... no
creating Makefile

make
sh: 1: make: not found

Gem files will remain installed in /var/lib/gems/1.9.1/gems/msgpack-1.0.0 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/msgpack-1.0.0/ext/msgpack/gem_make.out

Now make is missing, so we install that package as well:
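apt-get install make

Then we can try msgpack again: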

# gem install msgpack
Building native extensions. This could take a while...
Successfully installed msgpack-1.0.0
1 gem installed
Installing ri documentation for msgpack-1.0.0...
Installing RDoc documentation for msgpack-1.0.0...

Hooray, it works. But we probably don’t need all the docs, so we can also use:

# gem install --no-user-install --no-rdoc --no-ri msgpack
Fetching: msgpack-1.0.0.gem (100%)
Building native extensions. This could take a while...
Successfully installed msgpack-1.0.0
1 gem installed

(My first thought was that the latest msgpack release doesn’t work on Ruby 1.9.1 anymore and that I would have to downgrade it, but while writing this article I noticed that I was wrong and it still works.)

Posted in General, Linux, Puppet, Short Tips