I’m alive!

Hi everybody. Yes, this blog is still alive, and so am I!

In the past two years I was very busy working in an Open Source community – https://voxpupuli.org. Besides that, I had to invest a huge amount of time in my evening classes. When I wrote the last post, I was very busy with my bachelor thesis. The degree took eight terms; the thesis had to be completed in the fifth and sixth semesters.

Datawarehousing for Cloud Computing Metrics

That is the title of my thesis. You can find the paper here and the slides for the presentation here (also check out the other talks I did in the past years). Sadly, we had to write the paper in German. Yes, we. My degree is now Staatlich geprüfter Informatiker für technische Informatik, or in English: equivalent to a Bachelor of Engineering in Computer Science. The degree qualifies you to lead technical teams with experts from different areas, such as database management, software engineering and networking. To prove that each student not only has knowledge in those expert fields, but can also lead a team, we had to do the thesis as a group. In my case this was a group of three people. The thesis was one of the best in our class and was graded at more than 92%.

The project itself, the documentation and the presentation were finished last summer (2017). Afterwards I had to prepare for the three final exams. I completed the whole further education this summer (2018) with an average grade of 1.7 (where 1 is the best and 6 the worst grade). This was also the best result in the class.

This small FOSS thing…

I think it was in 2015 when Spencer Krum asked me if I would like to get merge access to the Vox Pupuli organisation and help out maintaining the CI infrastructure. Half a year later, this happened: https://blog.bastelfreak.de/2016/05/one-month-in-open-source-projects

Some time has passed since then. Vox Pupuli is joining the Software Freedom Conservancy, and our small group of hackers and enthusiasts has evolved into a real community with over 120 people. We now have a proper governance document (based on the great model from the OpenStack Foundation). The community now has a Project Management Committee – the PMC. I’m one of the founding members and was elected for the third time in a row a few days ago \o/

In the past two years, my university allowed me to give a lot of talks (check out the ones prefixed with computer science). Besides that, I had the honour to speak at multiple conferences. All talks are available at the above link. Four are even available on YouTube!

Thank you GoDaddy EMEA!

Last but not least, I want to say thank you to GoDaddy. My team was very supportive during the past four years. Evening classes took a lot of time and I got special treatment; for example, I was the only person in an Ops/SRE team that didn’t have to be on call. Also, my current boss allows me to visit all the awesome conferences, not only to represent the company, but also to exchange knowledge and to improve my soft skills.

I’m very happy to work for GoDaddy and I’m looking forward to new opportunities as a conference speaker in 2019. I also want to start blogging on a regular basis again!


Short Tip: Dealing with FreeBSD for noobs

I had the pleasure of working with FreeBSD again in the past week. I have not touched it in over a year and would consider myself a Linux-only person. I designed a new backup infrastructure which requires a FreeBSD server. Here are some of the issues I had to deal with during the installation.

Starting the Installer
I am working in a datacenter, and walking to the server to insert a USB drive really sucks, so I need PXE. Booting the official FreeBSD images via PXE did not work out. mfsBSD is a collection of scripts to create a FreeBSD live image that can be booted via PXE; I used it successfully in the past. Right now its FreeBSD 11 support isn’t completely implemented, but my friend foxxx0 was kind enough to test and debug mfsBSD until he could build a working image for me.

Enterprise Filesystem vs. Enterprise Hardware RAID
The server is equipped with an LSI RAID controller. FreeBSD ships with the awesome ZFS filesystem; to use its full potential you need to pass all drives through to the operating system instead of creating a hardware RAID. Sadly, the damn LSI 9271 controller is unable to do that (while every cheap controller at 30% of the price can handle it…). As a workaround I have to create a RAID 0 for each drive. This can be achieved with the following command:
storcli /c0 add vd each type=raid0 pdcache=off
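
To verify that each drive ended up in its own single-disk RAID 0 virtual drive, something like this should work (the exact storcli syntax may vary between controller generations and tool versions):

storcli /c0/vall show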

Getting more writable space into mfsBSD
The image I got from foxxx0 was around 87MB. It gets loaded into RAM and is writable, however the partitions are filled to nearly 100% after boot. I was too lazy to increase them (meaning: I had no clue how to do that and I am really, really lazy). Instead we can create a memory-backed filesystem and mount it somewhere:

mkdir /usr/ports
mdmfs -s 500m md1 /usr/ports

This will create a 500MB memory-backed UFS filesystem (similar to a tmpfs) mounted at /usr/ports.

Trying to Build Software in a fresh System
I need to build storcli because it isn’t available as a package. This requires a few steps. We did the following in a booted mfsBSD (we had to create the memory-backed filesystem in /usr/ports to get some free space before that):

portsnap fetch extract
cd /usr/ports/sysutils/storcli
make config fetch checksum depends extract patch configure build
make install

We were unable to run the commands without failures. After fiddling around with it for quite a while, we tried the same on an already installed system and it worked. The mfsBSD environment isn’t recognized as a proper FreeBSD during compilation, which leads to several errors.

Fixing bsdinstall
FreeBSD ships a tool called bsdinstall, an ncurses-based installer. It is broken in FreeBSD 11 on mfsBSD: it needs a MANIFEST file which isn’t present in mfsBSD. This can easily be fixed by running:

wget ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/11.0-RELEASE/MANIFEST -O /usr/freebsd-dist/MANIFEST

After all this I was able to run the bsdinstall script and had a bootable FreeBSD 11. More fun will come!


Short Tip: Install shellcheck on an outdated CentOS

I have to install shellcheck on a CentOS 7 box, the latest CentOS version at the time of writing. The tool is a great linter for bash scripts, which I want to integrate into our CI pipeline. shellcheck isn’t packaged for CentOS, so I will build it from source. From the install guide:

yum -y install cabal-install
su jenkins
cd ~
cabal update
cabal install shellcheck

This will fail because the cabal library is outdated. Remember, this is the latest CentOS…
Lucky me, I can build a newer cabal version with cabal itself:

cabal install cabal

After that step we can successfully build shellcheck:

cabal install shellcheck

The shellcheck binary is now available at ~/.cabal/bin/shellcheck.
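
A quick smoke test (the script path is just an example):

~/.cabal/bin/shellcheck /path/to/some/script.sh

For the CI pipeline it is handy to put the binary on the PATH first, e.g. with export PATH="$HOME/.cabal/bin:$PATH".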


Short Tip: Installing msgpack on outdated boxes (ruby1.9.1)

I’m using msgpack to serialize the data between my puppet agents and the masters. Recently I had to puppetize an old Debian Wheezy box, and I had to install msgpack in advance:

# gem install msgpack
Building native extensions. This could take a while...
ERROR: Error installing msgpack:
ERROR: Failed to build gem native extension.

/usr/bin/ruby1.9.1 extconf.rb
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- mkmf (LoadError)
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from extconf.rb:1:in `<main>'

Gem files will remain installed in /var/lib/gems/1.9.1/gems/msgpack-1.0.0 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/msgpack-1.0.0/ext/msgpack/gem_make.out

This error looks strange, but it simply means the Ruby development headers (which provide mkmf) are missing. It can be fixed by installing ruby1.9.1-dev:
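
apt-get install -y ruby1.9.1-dev

Then we get to this: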

# gem install msgpack
Building native extensions. This could take a while...
ERROR: Error installing msgpack:
ERROR: Failed to build gem native extension.

/usr/bin/ruby1.9.1 extconf.rb
checking for ruby/st.h... yes
checking for st.h... yes
checking for rb_str_replace() in ruby.h... yes
checking for rb_intern_str() in ruby.h... yes
checking for rb_sym2str() in ruby.h... no
checking for rb_str_intern() in ruby.h... yes
checking for rb_block_lambda() in ruby.h... no
checking for rb_hash_dup() in ruby.h... yes
checking for rb_hash_clear() in ruby.h... no
creating Makefile

make
sh: 1: make: not found

Gem files will remain installed in /var/lib/gems/1.9.1/gems/msgpack-1.0.0 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/msgpack-1.0.0/ext/msgpack/gem_make.out

Now make is missing, so let’s install it:
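
apt-get install -y make

Then we try msgpack again: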

# gem install msgpack
Building native extensions. This could take a while...
Successfully installed msgpack-1.0.0
1 gem installed
Installing ri documentation for msgpack-1.0.0...
Installing RDoc documentation for msgpack-1.0.0...

Hooray, it works. But we probably don’t need all the docs, so we can also use:

# gem install --no-user-install --no-rdoc --no-ri msgpack
Fetching: msgpack-1.0.0.gem (100%)
Building native extensions. This could take a while...
Successfully installed msgpack-1.0.0
1 gem installed

(My first thought was that the latest msgpack release doesn’t work on ruby1.9.1 anymore and that I would have to downgrade it. But while writing this article I noticed that I was wrong and it still works.)


rubocop-rspec magic: Fixing RSpec/InstanceVariable

We recently introduced the RSpec/InstanceVariable cop into our RuboCop configuration at Vox Pupuli. Using instance variables in specs is not considered best practice, so we are currently migrating away from them.
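
For reference, enabling the cop in a .rubocop.yml looks like this:

RSpec/InstanceVariable:
  Enabled: true

Here is an example of the old code style: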

  before :each do
    @resource = Puppet::Type.type(:package).new(
      name: 'express',
      ensure: :present
    )
    @provider = described_class.new(@resource)
    @provider.class.stubs(:optional_commands).with(:npm).returns '/usr/local/bin/npm'
    @provider.class.stubs(:command).with(:npm).returns '/usr/local/bin/npm'
  end

The variables are later used, for example:

it 'should use package name by default' do
  @provider.expects(:npm).with('install', '--global', 'express')
  @provider.install
end

or:

it 'should use the source instead of the gem name' do
  @resource[:source] = '/tmp/express.tar.gz'
  @provider.expects(:npm).with('install', '--global', '/tmp/express.tar.gz')
  @provider.install
end

The whole spec file can be seen here; the failing tests are documented here. I was digging around a bit, but didn’t really have a clue how to fix it. Dominic Cleal was kind enough to provide a fix (the following replaces the before :each block):

let :resource do
  Puppet::Type.type(:package).new(
    name: 'express',
    ensure: :present
  )
end

let :provider do
  described_class.new(resource).tap do |provider|
    provider.class.stubs(:optional_commands).with(:npm).returns '/usr/local/bin/npm'
    provider.class.stubs(:command).with(:npm).returns '/usr/local/bin/npm'
  end
end

This allows us to access provider and resource as normal variables in the rest of the spec file.
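
With the let blocks in place, the earlier example simply drops the @ sigils:

it 'should use package name by default' do
  provider.expects(:npm).with('install', '--global', 'express')
  provider.install
end

The fixed version is available here.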


Diving into Management – How to do Meetings

In the past months I had the opportunity to lead small teams in various little projects (lasting a few days up to weeks). This all happened in some open source projects and in my evening classes. I had a course about project management and management levels a few semesters back. I learned a lot of theory back then, so I was happy to finally try it ‘in the real world’. I would like to share my experiences with meetings with you today.

Meetings
Have you ever met a sysadmin who likes to attend meetings (except for puppet core/module triage)? The excuses I hear the most are:

  • The meetings are too long
  • The discussion went off-topic, or there wasn’t even a real topic

Here are my tips if you have to plan and head a meeting:

  • First of all, you should really think about the people you invite. What is the topic you want to discuss, and do you really need to invite 20 people? Do they have to contribute something, or would it be enough to send them the minutes afterwards?
  • Somebody has to take the minutes!
  • Set a time limit; never meet for longer than 90 minutes.
  • Create an agenda and honor it. This helps everybody keep their focus on the topic. Also, everybody feels a bit happier when an item on the agenda is achieved.
  • If the meeting is expected to be short (like a daily meeting): do it standing (without chairs, on your feet).
  • Longer discussions are always easier if the environment is nice. Organize beanbag chairs, or try to meet outside in the sunlight or in a room with huge windows.
  • Everybody should have equal rights in a discussion. Introverted and shy people often feel more welcome at a desk without corners – or better, without a desk if one isn’t needed.
  • If many people have to give a status report (e.g. in a weekly meeting): assign short timeslots to everyone. If a speaker hits the limit but wants to talk further, hold a short vote and maybe double their slot. Do not allow more extensions; schedule a separate meeting instead.
  • Prepare yourself for the meeting. Write a few notes down, even if you think you have everything in mind. These can also serve as groundwork for the minutes.

Dealing with distracted attendees
Everybody has seen this: you are trying to explain a complicated topic, but somebody is playing with their smartphone. There are a few possible reasons for this:

  • The meeting is too long! Try to not hit the 90 minutes barrier
  • You are not prepared enough and have a many pauses for thoughts.
  • The person just has other things in mind, for example his new girlfriend and wants to chat with her

Besides better preparation and shorter meetings, I can also recommend a smartphone bowl: everybody puts their smartphone into a bowl (or pot, or whatever) at the beginning of the meeting. This will motivate everybody not to meet longer than needed. Each person whose smartphone rings has to pay for the cake for the following Tuesday.

The cakeday
This started as a little project a few years back. Everybody needs a few constant events in their weekly routine (I learned that mostly from Sheldon Cooper, so it may not be *that accurate*). For example, watching Game of Thrones on Tuesday or buying petrol on Monday. Positive events in general cheer everyone up, and positive events at work are good for the working atmosphere and productivity. So I introduced a Cakeday at work. This is a horrible translation from German and has nothing to do with a birthday; it simply describes every Tuesday where somebody in a team organizes a cake. Sadly, I only noticed my naming fail after introducing #cakeday on Twitter, so I won’t change it but instead continue to confuse people. I introduced the cakeday at my regular job and also in a different project. What is the connection to meetings? As this is a (hopefully positive) constant event at work, people seem to be a bit happier. I combine cakedays with days with lots of meetings; the cakeday is a good compensation for them. Exchanging the cake with a BBQ also worked pretty well.

Notice and disclaimer
Do not take any advice in this post too seriously. I’m new to the whole topic and am doing a lot of stuff suboptimally. Please let me know if you have any suggestions, improvements or cookies. To get the bachelor degree I have to complete a project, starting in September 2015 and ending in May 2016. The difference from most universities is that all students here are already qualified sysadmins or developers and work full-time. The goal of the project is not only to prove a certain knowledge of information technology, but also of project management. Thus the project has to be realised by a group of people, three in my case. The topic of my team is “Datawarehousing for Cloud Computing Metrics“. I have the project manager role and will try to document our progress through the project in a new blog series.

Conclusion
I learned that successful meetings aren’t that hard, but they still require work. Also, cake makes everything better. Please let me know if you think I’m wrong on any of my points or if you have suggestions!


Create a simple streaming replication for postgres with puppet

I need to build a postgres setup for an important database. The idea is to have one master and one to many slaves that serve read-only access. The slaves will work in hot-standby mode where they continuously receive data from the master. The replication will be synchronous; this means a client that inserts data will get the acknowledgement only after the slave(s) and the master have successfully written the data to disk. My puppet profile is tested on CentOS 7 machines and uses the puppetlabs/postgresql module in version 4.7.1 and my defined resource for ssh key exchange (see the last post). The profile makes a few assumptions:

  • about the hostnames of the machines: they have to be myawesomemachine01, myawesomemachine02 and so on; the trailing numbers are important (one digit is sufficient)
  • the first machine will always be the master, the rest are configured as slaves
  • my master is called psql01.something.com, the slaves are psql02.something.com and psql03.something.com
  • we do ssh key exchange for the postgres and root users (root is needed for corosync later, postgres for the initial setup)
  • the IP of the master is 10.254.4.18 (I need to automate that somehow)

Here is the profile:

class profiles::postgrescluster {
  $password = 'mytotalllysecurepassword'
  $replicationuser = 'repl'
  # psql01 is our master, 02 the slave, if we ever promote the slave we should autokill puppet
  case $::hostname {
    'psql01': {
      $manage_recovery_conf = false
      postgresql::server::config_entry{'synchronous_standby_names':
        value => '*',
      }
    }
    default: {
      $manage_recovery_conf = true
      postgresql::server::config_entry{'hot_standby':
        value => 'on',
      }
      postgresql::server::recovery{'Create a recovery.conf':
        standby_mode      => 'on',
        primary_conninfo  => "host=10.254.4.18 port=5432 user=$replicationuser password=$password",
      }
    }
  }
  class { '::postgresql::globals':
    encoding             => 'UTF-8',
    locale               => 'en_US.UTF-8',
    manage_package_repo  => true,
    version              => '9.5',
    manage_recovery_conf => $manage_recovery_conf,
  }->
  class { '::postgresql::server':
    listen_addresses      => "localhost,${::ipaddress_eth0}",
  }
  package{['pg_activity', 'pgtune', 'zabbix-sender']:
    ensure  => 'present',
  } ->
  # change shell so su is allowed
  user{'postgres':
    shell           => '/bin/bash',
    home            => '/var/lib/pgsql',
    purge_ssh_keys  => true,
  } ->
  file{'/var/lib/pgsql/.ssh':
    ensure  => 'directory',
    owner   => 'postgres',
    group   => 'postgres',
  }
  # installs the contrib package
  include ::postgresql::server::contrib
  # we need ssh key exchange for two users
  $type = 'ed25519'

  $myhash = {root => '/root', postgres => '/var/lib/pgsql'}
  $myhash.each |$sshuser, $homepath| {
    ## create ssh key for $sshuser
    base::ssh_keygen{$sshuser:
      type  => $type,
      home  => $homepath,
    }
    ## export it
    $pubkey = getvar("::${sshuser}_${type}_pubkey")
    $comment = getvar("::${sshuser}_${type}_comment")
    if $pubkey and $comment {
      @@ssh_authorized_key{$comment:
        ensure  => 'present',
        type    => $type,
        options => ['no-port-forwarding', 'no-X11-forwarding', 'no-agent-forwarding' ],
        user    => $sshuser,
        key     => $pubkey,
        tag     => 'postgrescluster',
      }
    }
    # collect it
    Ssh_Authorized_Key <<| tag == 'postgrescluster' and title != $comment |>>
  }

  ## export host key
  if $::sshecdsakey {
    @@sshkey{$::fqdn:
      host_aliases  => $::ipaddress,
      type          => 'ecdsa-sha2-nistp256',
      key           => $::sshecdsakey,
      # defaults to /etc/ssh/ so all users can use it
      #target       => '/root/.ssh/known_hosts',
      tag           => 'postgrescluster',
    }
  }
  ## import host key
  Sshkey <<| tag == 'postgrescluster' and title != $::fqdn |>>

  ## setup replication user
  postgresql::server::role {$replicationuser:
    login => true,
    replication => true,
    password_hash => postgresql_password($replicationuser, $password),
  }
  postgresql::server::pg_hba_rule{'allow replication user to access server':
    type        => 'host',
    database    => 'replication',
    user        => $replicationuser,
    address     => '10.254.4.0/24', # TODO restrict to /32
    auth_method => 'md5',
  }

  # configure replication, this is only needed on master
  postgresql::server::config_entry{'wal_level':
    value => 'hot_standby',
  }
  postgresql::server::config_entry{'max_wal_senders':
    value => 5,
  }
  postgresql::server::config_entry{'wal_keep_segments':
    value => 32,
  }
  postgresql::server::config_entry{'archive_mode':
    value => 'on',
  }
  postgresql::server::config_entry{'archive_command':
    value => 'cp %p /mnt/backup/%f',
  }
}

What does this all do? This profile can be applied to all postgres nodes in the setup. It will configure the postgresql.org repository and install version 9.5 (9.6 is out soon \o/). The initial setup of the replication is a PITA because we need to copy the DB content from the master to the slaves. This is very dirty, and the fastest solution to implement that I could figure out was doing it by hand :sadface:
After a puppet run has happened on the master and all slaves, we need to do the following.
On the slave:

systemctl stop puppet postgresql-9.5

On the master:

su postgres
cd ~
psql -c "SELECT pg_start_backup('label', true)"
rsync -cva --inplace --exclude=*postmaster.pid* /var/lib/pgsql/9.5/data SLAVENODE:/var/lib/pgsql/9.5/data
psql -c "SELECT pg_stop_backup()"

then again on the slave:

puppet agent -t
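
At this point the replication can be checked on the master; a quick sanity check (the exact output depends on your setup) is:

su postgres -c 'psql -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"'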

Since postgres 9.1 the command pg_basebackup has been available, which is an alternative to the rsync (and works by pulling from the slave, instead of pushing from the master). However, I had a bit of time pressure and it didn’t work on the first try, so I fell back to rsync. Let me know if you’ve got ideas on how to make this more reliable and automated!
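
For reference, a pg_basebackup invocation on the slave could look roughly like this (untested in this setup; the data directory has to be empty first):

su postgres -c 'pg_basebackup -h 10.254.4.18 -U repl -D /var/lib/pgsql/9.5/data -X stream -P'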

In an upcoming post I will discuss the setup to automatically promote a slave to master if the master dies, including the management of a service IP (handled by corosync/pacemaker).


Create ssh keys with puppet on a server + pubkey exchange

There are a few solutions to generate ssh keys on a puppet master/server or to copy them from hiera to a box. I have got several boxes, and every box needs to have ssh access to every other box. I don’t care which key it is in particular; it just has to work. I don’t want to copy the keys from somewhere, since transferring private data is an unnecessary security risk, so I want to create them on the node. My idea was to have a solution with four parts:

  • defined resource which can create ssh pub/priv keys for me
  • a generic fact that exports public keys
  • exported resource to export the public key as ssh_authorized_key resource
  • collect every exported pubkey except for the own one

Here is my defined resource (which is a 99% copy, I just added a top scope to it):

# this is based on https://github.com/maestrodev/puppet-ssh_keygen/blob/master/manifests/init.pp
define base::ssh_keygen (
  $user     = undef,
  $type     = undef,
  $bits     = undef,
  $home     = undef,
  $filename = undef,
  $comment  = undef,
  $options  = undef,
) {
  Exec { path => '/bin:/usr/bin' }
  $user_real = $user ? {
    undef   => $name,
    default => $user,
  }
  $type_real = $type ? {
    undef   => 'rsa',
    default => $type,
  }
  $home_real = $home ? {
    undef   => $user_real ? {
      'root'  => "/${user_real}",
      default => "/home/${user_real}",
    },
    default => $home,
  }
  $filename_real = $filename ? {
    undef   => "${home_real}/.ssh/id_${type_real}",
    default => $filename,
  }
  $type_opt = " -t ${type_real}"
  if $bits { $bits_opt = " -b ${bits}" }
  $filename_opt = " -f '${filename_real}'"
  $n_passphrase_opt = " -N ''"
  if $comment { $comment_opt = " -C '${comment}'" }
  $options_opt = $options ? {
    undef   => undef,
    default => " ${options}",
  }
  exec { "ssh_keygen-${name}":
    command => "ssh-keygen${type_opt}${bits_opt}${filename_opt}${n_passphrase_opt}${comment_opt}${options_opt}",
    user    => $user_real,
    creates => $filename_real,
  }
}

And here is my custom fact that scans the root user, the postgres user and all normal users for pubkeys and creates facts containing the comment and the key itself:

Dir.glob(["/root/.ssh/id_*.pub", "/home/*/.ssh/id_*.pub"]).each do |glob|
  # maybe our regex fails, so jump ahead if so
  user = /\w+(?=\/\.ssh)/.match(glob).to_s
  next if user.empty?
  file = File.open(glob)
  line = file.gets.chomp
  type = line.split[0].split('-')[1]
  pubkey = line.split[1]
  comment = line.split[2]

  Facter.add("#{user}_#{type}_pubkey") do
    setcode do
      pubkey
    end
  end
  Facter.add("#{user}_#{type}_comment") do
    setcode do
      comment
    end
  end
end
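# the postgres user's home (/var/lib/pgsql) is not covered by the globs above, so scan it separately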
Dir.glob("/var/lib/pgsql/.ssh/id_*.pub").each do |glob|
  file = File.open(glob)
  line = file.gets.chomp
  type = line.split[0].split('-')[1]
  pubkey = line.split[1]
  comment = line.split[2]

  Facter.add("postgres_#{type}_pubkey") do
    setcode do
      pubkey
    end
  end
  Facter.add("postgres_#{type}_comment") do
    setcode do
      comment
    end
  end
end

Now this allows us to use the following puppet profile:

class profiles::myawesomesshkeyexchange {
  ## create ssh key for root
  base::ssh_keygen{root:
    type  => 'ed25519',
  }
  ## export it
  if $::root_ed25519_comment and $::root_ed25519_pubkey {
    @@ssh_authorized_key{$::root_ed25519_comment:
      ensure  => 'present',
      type    => 'ed25519',
      options => ['no-port-forwarding', 'no-X11-forwarding', 'no-agent-forwarding' ],
      user    => 'root',
      key     => $::root_ed25519_pubkey,
      tag     => 'bla',
    }
  }
  # collect it
  Ssh_Authorized_Key <<| tag == 'bla' and title != $::root_ed25519_comment |>>
}

This works for the root user, but we have to accept the fingerprint on the first connect because the host key isn’t present in the known_hosts file. Also, we may want to do this for multiple users on the system:

class profiles::myevenmoreawesomesshkeyexchange {
  # we need ssh key exchange for two users
  $type = 'ed25519'
  $myhash = {root => '/root', postgres => '/var/lib/pgsql'}
  $myhash.each |$sshuser, $homepath| {
    ## create ssh key for $sshuser
    base::ssh_keygen{$sshuser:
      type  => $type,
      home  => $homepath,
    }
    ## export it
    $pubkey = getvar("::${sshuser}_${type}_pubkey")
    $comment = getvar("::${sshuser}_${type}_comment")
    if $pubkey and $comment {
      @@ssh_authorized_key{$comment:
        ensure  => 'present',
        type    => $type,
        options => ['no-port-forwarding', 'no-X11-forwarding', 'no-agent-forwarding' ],
        user    => $sshuser,
        key     => $pubkey,
        tag     => 'bla',
      }
    }
    # collect it
    Ssh_Authorized_Key <<| tag == 'bla' and title != $comment |>>
  }

  ## export host key
  if $::sshecdsakey {
    @@sshkey{$::fqdn:
      host_aliases  => $::ipaddress,
      type          => 'ecdsa-sha2-nistp256',
      key           => $::sshecdsakey,
      tag           => 'bla',
    }
  }
  ## import host key
  Sshkey <<| tag == 'bla' and title != $::fqdn |>>
}

Now we’ve got a setup that can automatically create, export and exchange ssh keys for multiple users on multiple servers without any manual interaction. Thanks to exported resources this works even when new nodes join the setup, when somebody deletes a key by accident, or when boxes get removed. If you manage all entries in the authorized_keys file, you should ensure that puppet removes all unknown keys:

user {'root':
  purge_ssh_keys  => true,
}

Tuning glusterfs for dummies

I have been playing with gluster for a few weeks. Here is a short tutorial on the optimizations I made for a setup with many small files (1–5MB):

First, let’s tackle the startup time. Gluster by default offers NFS shares, but I don’t need those, so we can disable them (note: you can run all of the following commands on any of the gluster nodes; they will propagate the settings to all other nodes):

gluster volume set mirror nfs.disable on

We don’t want gluster to kill our storage by reaching 100% disk consumption, so we can set a limit:

gluster volume set mirror cluster.min-free-disk 10%

gluster can use RAM as a read cache. My machines have a huge amount of free RAM, so I can configure a huge cache:

gluster volume set mirror performance.cache-size 25GB
gluster volume set mirror performance.cache-max-file-size 128MB

gluster is able to answer with “wuhu, I did a flush() successfully and all your data is safe” before the data has actually hit the disk. You can configure whether it should really do the flush() right away or defer it to the background. To increase performance with small files you should choose the latter. But keep in mind that this may not be consistent!

gluster volume set mirror performance.flush-behind on
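
To double-check which options are set on a volume, gluster can print them (assuming the volume is called mirror, as above):

gluster volume info mirror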

Linux Short Tip: systemd-networkd and DNS servers

You may have noticed that you can configure DNS servers in your systemd-networkd settings, but these addresses don’t appear in /etc/resolv.conf. You need to enable/start systemd-resolved. This daemon collects the global DNS settings from /etc/systemd/resolved.conf, the per-link DNS settings from systemd-networkd, plus the DNS servers handed out via DHCP, and writes everything into /run/systemd/resolve/resolv.conf. You then need to replace /etc/resolv.conf with a symlink to that file, and everything works as expected:
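
As a reminder, a per-link DNS entry in a systemd-networkd unit looks like this (file name and interface are hypothetical):

# /etc/systemd/network/20-wired.network
[Match]
Name=eth0

[Network]
DHCP=yes
DNS=8.8.8.8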

root@ci ~ # systemctl enable systemd-resolved
Created symlink from /etc/systemd/system/multi-user.target.wants/systemd-resolved.service to /usr/lib/systemd/system/systemd-resolved.service.
root@ci ~ # systemctl start systemd-resolved
root@ci ~ # systemctl status systemd-resolved
● systemd-resolved.service - Network Name Resolution
Loaded: loaded (/usr/lib/systemd/system/systemd-resolved.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2016-05-13 13:48:31 CEST; 2s ago
Docs: man:systemd-resolved.service(8)
Main PID: 13897 (systemd-resolve)
Status: "Processing requests..."
Tasks: 1 (limit: 512)
CGroup: /system.slice/systemd-resolved.service
└─13897 /usr/lib/systemd/systemd-resolved

May 13 13:48:31 ci.virtapi.org systemd[1]: Starting Network Name Resolution...
May 13 13:48:31 ci.virtapi.org systemd-resolved[13897]: Positive Trust Anchors:
May 13 13:48:31 ci.virtapi.org systemd-resolved[13897]: . IN DS 19036 8 2 49aac11d7b6f6446702e54a1607371607a1a418552
May 13 13:48:31 ci.virtapi.org systemd-resolved[13897]: Negative trust anchors: 10.in-addr.arpa 16.172.in-addr.arpa 17.
May 13 13:48:31 ci.virtapi.org systemd-resolved[13897]: Using system hostname 'ci'.
May 13 13:48:31 ci.virtapi.org systemd-resolved[13897]: Switching to system DNS server 8.8.8.8.
May 13 13:48:31 ci.virtapi.org systemd[1]: Started Network Name Resolution.
root@ci ~ # cat /run/systemd/resolve/resolv.conf
# This file is managed by systemd-resolved(8). Do not edit.
#
# Third party programs must not access this file directly, but
# only through the symlink at /etc/resolv.conf. To manage
# resolv.conf(5) in a different way, replace the symlink by a
# static file or a different symlink.

nameserver 8.8.8.8
root@ci ~ # rm /etc/resolv.conf
root@ci ~ # ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
root@ci ~ #
