Create ssh keys with puppet on a server + pubkey exchange

There are a few solutions for generating ssh keys on a puppet master/server or copying them from hiera to a box. I have several boxes and every box needs ssh access to every other box. I don’t care which key it is in particular, it just has to work. I don’t want to copy the keys from somewhere; transferring private data is an unnecessary security risk, so I want to create them on the node. My idea was a solution with four parts:

  • defined resource which can create ssh pub/priv keys for me
  • a generic fact that exports public keys
  • exported resource to export the public key as ssh_authorized_key resource
  • collect every exported pubkey except for the own one

Here is my defined resource (which is a 99% copy, I just added a top scope to it):

# this is based on https://github.com/maestrodev/puppet-ssh_keygen/blob/master/manifests/init.pp
define base::ssh_keygen (
  $user     = undef,
  $type     = undef,
  $bits     = undef,
  $home     = undef,
  $filename = undef,
  $comment  = undef,
  $options  = undef,
) {
  Exec { path => '/bin:/usr/bin' }
  $user_real = $user ? {
    undef   => $name,
    default => $user,
  }
  $type_real = $type ? {
    undef   => 'rsa',
    default => $type,
  }
  $home_real = $home ? {
    undef   => $user_real ? {
      'root'  => "/${user_real}",
      default => "/home/${user_real}",
    },
    default => $home,
  }
  $filename_real = $filename ? {
    undef   => "${home_real}/.ssh/id_${type_real}",
    default => $filename,
  }
  $type_opt = " -t ${type_real}"
  if $bits { $bits_opt = " -b ${bits}" }
  $filename_opt = " -f '${filename_real}'"
  $n_passphrase_opt = " -N ''"
  if $comment { $comment_opt = " -C '${comment}'" }
  $options_opt = $options ? {
    undef   => undef,
    default => " ${options}",
  }
  exec { "ssh_keygen-${name}":
    command => "ssh-keygen${type_opt}${bits_opt}${filename_opt}${n_passphrase_opt}${comment_opt}${options_opt}",
    user    => $user_real,
    creates => $filename_real,
  }
}
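With the defaults for the root user, the option assembly above boils down to a single ssh-keygen call. Here is a shell sketch of that assembly (the values are hypothetical examples, mirroring `$user_real`, `$home_real` and `$filename_real`):

```shell
# mirror the option assembly from the defined resource (hypothetical values)
user="root"; type="ed25519"
home="/${user}"                        # what $home_real resolves to for root
filename="${home}/.ssh/id_${type}"     # what $filename_real resolves to
cmd="ssh-keygen -t ${type} -f '${filename}' -N ''"
echo "$cmd"   # the exec runs this, guarded by creates => $filename_real
```

The `creates` attribute is what makes the exec idempotent: once the private key file exists, puppet never runs the command again.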

And here is my custom fact that scans root, the postgres user and all normal users for pubkeys and creates facts with the comment and the key itself:

Dir.glob(["/root/.ssh/id_*.pub", "/home/*/.ssh/id_*.pub"]).each do |glob|
  # maybe our regex fails, so jump ahead if so
  user = /\w+(?=\/\.ssh)/.match(glob).to_s
  next if user.empty?
  # read only the first line; File.read avoids leaking an open file handle
  line = File.read(glob).lines.first.chomp
  type = line.split[0].split('-')[1]
  pubkey = line.split[1]
  comment = line.split[2]

  Facter.add("#{user}_#{type}_pubkey") do
    setcode do
      pubkey
    end
  end
  Facter.add("#{user}_#{type}_comment") do
    setcode do
      comment
    end
  end
end
Dir.glob("/var/lib/pgsql/.ssh/id_*.pub").each do |glob|
  # read only the first line; File.read avoids leaking an open file handle
  line = File.read(glob).lines.first.chomp
  type = line.split[0].split('-')[1]
  pubkey = line.split[1]
  comment = line.split[2]

  Facter.add("postgres_#{type}_pubkey") do
    setcode do
      pubkey
    end
  end
  Facter.add("postgres_#{type}_comment") do
    setcode do
      comment
    end
  end
end
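The parsing the fact does on each pubkey line can be sketched in shell: field one is the type (with the `ssh-` prefix stripped), field two the key material, field three the comment. The key material below is made up:

```shell
# split an OpenSSH public key line the way the fact does
line="ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA root@box1"
type=$(echo "$line" | awk '{print $1}' | cut -d- -f2)     # strip 'ssh-' prefix
pubkey=$(echo "$line" | awk '{print $2}')                 # the key itself
comment=$(echo "$line" | awk '{print $3}')                # usually user@host
echo "${type} ${comment}"
```

For a root ed25519 key this yields the fact names `root_ed25519_pubkey` and `root_ed25519_comment` used below.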

Now this allows us to use the following puppet profile:

class profiles::myawesomesshkeyexchange {
  ## create ssh key for root
  base::ssh_keygen{'root':
    type  => 'ed25519',
  }
  ## export it
  if $::root_ed25519_comment and $::root_ed25519_pubkey {
    @@ssh_authorized_key{$::root_ed25519_comment:
      ensure  => 'present',
      type    => 'ed25519',
      options => ['no-port-forwarding', 'no-X11-forwarding', 'no-agent-forwarding' ],
      user    => 'root',
      key     => $::root_ed25519_pubkey,
      tag     => 'bla',
    }
  }
  # collect it
  Ssh_authorized_key <<| tag == 'bla' and title != $::root_ed25519_comment |>>
}

This works for the root user, but we have to accept the host key fingerprint on the first connection because it isn’t yet present in the known_hosts file. We may also want to do this for multiple users on the system:

class profiles::myevenmoreawesomesshkeyexchange {
  # we need ssh key exchange for two users
  $type = 'ed25519'
  $myhash = {root => '/root', postgres => '/var/lib/pgsql'}
  $myhash.each |$sshuser, $homepath| {
    ## create ssh key for $sshuser
    base::ssh_keygen{$sshuser:
      type  => $type,
      home  => $homepath,
    }
    ## export it
    $pubkey = getvar("::${sshuser}_${type}_pubkey")
    $comment = getvar("::${sshuser}_${type}_comment")
    if $pubkey and $comment {
      @@ssh_authorized_key{$comment:
        ensure  => 'present',
        type    => $type,
        options => ['no-port-forwarding', 'no-X11-forwarding', 'no-agent-forwarding' ],
        user    => $sshuser,
        key     => $pubkey,
        tag     => 'bla',
      }
    }
    # collect it
    Ssh_authorized_key <<| tag == 'bla' and title != $comment |>>
  }

  ## export host key
  if $::sshecdsakey {
    @@sshkey{$::fqdn:
      host_aliases  => $::ipaddress,
      type          => 'ecdsa-sha2-nistp256',
      key           => $::sshecdsakey,
      tag           => 'bla',
    }
  }
  ## import host key
  Sshkey <<| tag == 'bla' and title != $::fqdn |>>
}

Now we have a setup that can automatically create, export and exchange ssh keys for multiple users on multiple servers without any manual interaction. Thanks to exported resources this even works when new nodes join the setup, when somebody deletes a key by accident or when boxes get removed. If you manage all entries in the authorized_keys file, you should ensure that puppet removes all unknown keys:

user {'root':
  purge_ssh_keys  => true,
}
Posted in General, IT-Security, Linux, Puppet | 1 Comment

Tuning glusterfs for dummies

I’ve been playing with gluster for a few weeks; here is a short tutorial with the optimizations I did for a setup with many small files (1-5 MB):

First we improve the startup time. Gluster offers NFS shares by default, but I don’t need those, so we can disable them (note: you can run all of the following commands on any of the gluster nodes; they will communicate the settings to all other nodes):

gluster volume set mirror nfs.disable on

We don’t want gluster to kill our storage by reaching 100% disk consumption, so we set a limit:

gluster volume set mirror cluster.min-free-disk 10%

gluster can use RAM as a read cache. My machines have a huge amount of free RAM, so I can configure a large cache:

gluster volume set mirror performance.cache-size 25GB
gluster volume set mirror performance.cache-max-file-size 128MB
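If you’d rather derive the cache size from the installed RAM than hardcode it, a rough rule of thumb (my own, not an official gluster recommendation) is half of the total memory. With 50 GB of RAM that gives the 25GB above:

```shell
# hypothetical sizing rule: cache-size = total RAM / 2
# 52428800 kB (50 GB) stands in for the MemTotal value from /proc/meminfo
mem_kb=52428800
cache_gb=$(( mem_kb / 1024 / 1024 / 2 ))
echo "gluster volume set mirror performance.cache-size ${cache_gb}GB"
```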

gluster is able to answer with “wuhu, I did a successful flush() and all your data is safe” before the data actually hits the disk. You can configure whether it should really do the flush() immediately or do it later in the background. To increase performance with small files you should choose the latter. But keep in mind that this may not be consistent!

gluster volume set mirror performance.flush-behind on
Posted in 30in30, General, Internet found pieces | Leave a comment

Linux Short Tip: systemd-networkd and DNS servers

You may have noticed that you can configure DNS servers in your systemd-networkd settings, but these addresses don’t appear in /etc/resolv.conf. You need to enable/start systemd-resolved; this daemon collects global DNS settings from /etc/systemd/resolved.conf, per-link DNS settings from systemd-networkd and DNS servers that come from a DHCP server, and writes everything into /run/systemd/resolve/resolv.conf. You then need to make /etc/resolv.conf a symlink to that file, and everything works as expected:

root@ci ~ # systemctl enable systemd-resolved
Created symlink from /etc/systemd/system/multi-user.target.wants/systemd-resolved.service to /usr/lib/systemd/system/systemd-resolved.service.
root@ci ~ # systemctl start systemd-resolved
root@ci ~ # systemctl status systemd-resolved
● systemd-resolved.service - Network Name Resolution
Loaded: loaded (/usr/lib/systemd/system/systemd-resolved.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2016-05-13 13:48:31 CEST; 2s ago
Docs: man:systemd-resolved.service(8)
Main PID: 13897 (systemd-resolve)
Status: "Processing requests..."
Tasks: 1 (limit: 512)
CGroup: /system.slice/systemd-resolved.service
└─13897 /usr/lib/systemd/systemd-resolved

May 13 13:48:31 ci.virtapi.org systemd[1]: Starting Network Name Resolution...
May 13 13:48:31 ci.virtapi.org systemd-resolved[13897]: Positive Trust Anchors:
May 13 13:48:31 ci.virtapi.org systemd-resolved[13897]: . IN DS 19036 8 2 49aac11d7b6f6446702e54a1607371607a1a418552
May 13 13:48:31 ci.virtapi.org systemd-resolved[13897]: Negative trust anchors: 10.in-addr.arpa 16.172.in-addr.arpa 17.
May 13 13:48:31 ci.virtapi.org systemd-resolved[13897]: Using system hostname 'ci'.
May 13 13:48:31 ci.virtapi.org systemd-resolved[13897]: Switching to system DNS server 8.8.8.8.
May 13 13:48:31 ci.virtapi.org systemd[1]: Started Network Name Resolution.
root@ci ~ # cat /run/systemd/resolve/resolv.conf
# This file is managed by systemd-resolved(8). Do not edit.
#
# Third party programs must not access this file directly, but
# only through the symlink at /etc/resolv.conf. To manage
# resolv.conf(5) in a different way, replace the symlink by a
# static file or a different symlink.

nameserver 8.8.8.8
root@ci ~ # rm /etc/resolv.conf
root@ci ~ # ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
root@ci ~ #
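To check the result later, readlink should report the resolved-managed file as the target. Here is the same link created in a scratch directory so the sketch is self-contained:

```shell
# simulate the symlink setup in a temporary directory and verify the target
tmp=$(mktemp -d)
ln -s /run/systemd/resolve/resolv.conf "$tmp/resolv.conf"
readlink "$tmp/resolv.conf"   # prints /run/systemd/resolve/resolv.conf
rm -r "$tmp"
```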

Posted in 30in30, General, Linux, Short Tips | 1 Comment

Short Tip: Replacing a failed drive in mdadm softwareraid

Sometimes you check your fileserver and your raid looks like this:
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md125 : active (auto-read-only) raid6 sdl[1] sdh[7] sdf[5] sdk[2] sdg[6] sdm[8] sdi[0] sdn[10] sde[4]
23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/9] [UUU_UUUUUU]
bitmap: 17/22 pages [68KB], 65536KB chunk

Noticed the _ in [UUU_UUUUUU]? It means your raid is degraded and one drive is missing. You had one job, harddrive! You can use the output of find /dev/sd? and lsblk to check which device is the missing one. Hopefully the drive is still visible to the system; in that case it should show up there and it should also be possible to query it for its serial number:
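A quick scripted check for that underscore, run here against a sample status line from /proc/mdstat:

```shell
# flag a degraded array: any '_' inside the [UUU...] member status
# means a missing drive
status="23441080320 blocks super 1.2 level 6 [10/9] [UUU_UUUUUU]"
if echo "$status" | grep -Eq '\[[U_]*_[U_]*\]'; then
  echo "degraded"
else
  echo "ok"
fi
```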

smartctl -i /dev/sdj | awk '/Serial/ {print $3}'

You should now be able to determine the failed drive in the system and replace it. After that you can add the new one to the raid:

mdadm /dev/md125 --add /dev/sdj

Tip1:
Hard drive failure rates follow the bathtub curve. Drives die in the first hours of their usage or at the end of their life, mostly around 25,000-30,000 hours (this depends on many, many factors; it is just a rough estimate). Keep that in mind for the next hours while your raid rebuilds: the new drive may fail and you would have to replace it again.

Tip2:
You can speed up the rebuild with:

echo 2000000000 > /proc/sys/dev/raid/speed_limit_max
echo 200000000 > /proc/sys/dev/raid/speed_limit_min

This forces the kernel to use all available IO bandwidth for the rebuild. You can watch the progress with watch cat /proc/mdstat.

Posted in 30in30, General, Linux, Short Tips | Leave a comment

Doing IPv6 with systemd-networkd – the correct way

Understanding the docs:
I blogged about IPv6 with systemd-networkd in Arch Linux step by step installation guide and provided a hacky workaround to get a working IPv6 address configured. I read through the docs again and had a longer discussion with Silvio Knizek (systemd guru). We noticed that the documentation isn’t very clear about the correct setup for dualstack with static addresses. The manpage says about the [Address] section that it can contain an Address= option and refers you to the docs about the [Network] section for more information; you can also have multiple [Address] sections. The docs about [Network] say that you can have multiple Address= attributes. However, having multiple Address= lines in one [Address] block doesn’t work.

TL;DR: docs are complicated, here is a working setup:

[Match]
Name=enp0s5

[Address]
Address=37.61.202.154/32
Peer=169.255.30.1/32

[Address]
Address=2a01:488:67:1000:253d:ca9a:0:1/128

[Network]
Gateway=169.255.30.1
Gateway=fe80::1
DHCP=no
DNS=80.237.128.144
DNS=80.237.128.145
Posted in 30in30, General, Internet found pieces, Linux | 1 Comment

Linux Short Tip: Correct IPv6 with ferm firewalling

I mentioned ferm in my last post about gluster (an iptables/ip6tables abstraction layer in perl with a nice firewall config). The default rule-set looks like this:

table filter {
    chain INPUT {
        policy DROP;

        # connection tracking
        mod state state INVALID DROP;
        mod state state (ESTABLISHED RELATED) ACCEPT;

        # allow local connections
        interface lo ACCEPT;

        # respond to ping
        proto icmp icmp-type echo-request ACCEPT;

        # allow SSH connections
        proto tcp dport ssh ACCEPT;

        # ident connections are also allowed
        proto tcp dport auth ACCEPT;

        # the rest is dropped by the above policy
    }

    # outgoing connections are not limited
    chain OUTPUT policy ACCEPT;

    # this is not a router
    chain FORWARD policy DROP;
}

Most people tend to copy & paste that block and wrap it in domain ip6 {} if they also want to filter IPv6. This works fine except for the ICMP part. It only allows echo requests (a normal ping/mtr). But IPv6 does the Neighbor Discovery Protocol (the successor of the Address Resolution Protocol) via ICMP. Your device won’t be able to reach its gateway without allowing ICMP completely, so simple copy & paste will break existing IPv6 setups. My fixed version is:

## respond to ping
#proto icmp icmp-type echo-request ACCEPT;
# allow all icmp (needed for ipv6 NDP)
proto icmp ACCEPT;
Posted in 30in30, General, Linux, Short Tips | Leave a comment

Short Tip: Setup glusterfs share on Arch Linux

I made a detailed tutorial for an Arch Linux installation a few days back. This is a quick follow-up post to create a Distributed-Replicated gluster share.

The goal is to create a mirror for several linux distributions. A mirror needs a lot of disk space, but big machines are expensive, and I would also like a bit of redundancy. The idea is to use 4 machines, each with around 4.2 TB of storage. Gluster allows a good combination of replication and distribution: we can use 8.4 TB of storage distributed across the machines, the rest of the raw capacity is used for replication.
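The usable-capacity arithmetic for a distributed-replicated volume is simply raw capacity divided by the replica count. For the numbers above:

```shell
# usable = nodes * per-node capacity / replica count
awk 'BEGIN { nodes=4; per_node=4.2; replica=2
             printf "%.1f TB usable\n", nodes * per_node / replica }'
```

4 × 4.2 TB raw with replica 2 leaves 8.4 TB usable.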

The installation is quite easy. We need to install the rpcbind and gluster packages. I’ve got a separate partition on all machines, mounted at /glusterfs. After installing the packages we can directly create the share (run the following on any of the nodes). Important note: do not run the probe command for the server you are currently working on (we’re on server4 in this example):

gluster peer probe server1
gluster peer probe server2
gluster peer probe server3
gluster volume create mirror replica 2 transport tcp \
server1:/glusterfs/mirror \
server2:/glusterfs/mirror \
server3:/glusterfs/mirror \
server4:/glusterfs/mirror
gluster volume start mirror

Now we should have a working share, hooray. We can verify the peer and share state:

gluster peer status
gluster volume info mirror

We’re now able to mount the share via the glusterfs fuse module on any node:

mount -t glusterfs server1:/mirror /srv/mirror

There isn’t any encryption on the gluster traffic, so we should do some firewalling. In my setup nobody except the gluster nodes themselves will mount the share. Here is an example ferm config:

@def $node1_ipv4 = ( ipv4 );
@def $node1_ipv6 = ( ipv6 );
@def $node2_ipv4 = ( ipv4 );
@def $node2_ipv6 = ( ipv6 );
@def $node3_ipv4 = ( ipv4 );
@def $node3_ipv6 = ( ipv6 );
@def $node4_ipv4 = ( ipv4 );
@def $node4_ipv6 = ( ipv6 );

table filter {
  chain INPUT {
    policy DROP;

    # connection tracking
    mod state state INVALID DROP;
    mod state state (ESTABLISHED RELATED) ACCEPT;

    # allow local connections
    interface lo ACCEPT;

    # respond to ping
    proto icmp icmp-type echo-request ACCEPT;

    # allow SSH connections
    proto tcp dport ssh ACCEPT;

    # allow gluster
    proto tcp dport (111 24007 24008 49152 49153 49154 49155 49156 49157) saddr ( $node1_ipv4 $node2_ipv4 $node3_ipv4 $node4_ipv4 ) ACCEPT;
    # the rest is dropped by the above policy
  }

  # outgoing connections are not limited
  chain OUTPUT policy ACCEPT;

  # this is not a router
  chain FORWARD policy DROP;
}

domain ip6 {
 table filter {
    chain INPUT {
      policy DROP;

      # connection tracking
      mod state state INVALID DROP;
      mod state state (ESTABLISHED RELATED) ACCEPT;

      # allow local connections
      interface lo ACCEPT;

      ## respond to ping
      #proto icmp icmp-type echo-request ACCEPT;
      # allow all icmp (needed for ipv6 ND and so on)
      proto icmp ACCEPT;

      # allow SSH connections
      proto tcp dport ssh ACCEPT;

      # allow gluster
      proto tcp dport (111 24007 24008 49152 49153 49154 49155 49156 49157) saddr ( $node1_ipv6 $node2_ipv6 $node3_ipv6 $node4_ipv6 ) ACCEPT;
      # the rest is dropped by the above policy
    }

    # outgoing connections are not limited
    chain OUTPUT policy ACCEPT;

    # this is not a router
    chain FORWARD policy DROP;
  }
}
Posted in 30in30, General, Linux, Short Tips | 1 Comment

Fixing and improving rspec tests

Today I stumbled across our puppet module for gluster. My goal for the next days:

  • modulesync with voxpupuli default settings
  • Fix rspec tests to work with our new test matrix (newer puppet versions and STRICT_VARIABLES=yes)
  • Make a new release for the forge
  • Add Arch Linux support
  • Make another release

Fixing rubocop the nice way:
The module has been modulesynced before, so there weren’t many changes except for the conversion to the new Hash notation introduced in Ruby 1.9. These changes happened here and here. I recommend the following command to get a list of all failed cops:

bundle exec rubocop -c .rubocop.yml -D

Now we know the name of the cops and the files that need improvements. This allows us to try the awesome autofix function, but only for a certain cop or even a certain file (remember how to commit things, always small and logical changes):

bundle exec rubocop --only Style/HashSyntax --auto-correct my/awesome/file.rb

Bringing rspec to the next level of awesomeness:
Now we get to the failed rspec tests, here is one example:

  describe 'installing on Red Hat Enterprise Linux' do
    let :facts do
      {
        :osfamily => 'RedHat',
        :operatingsystem => 'RedHat',
        :operatingsystemmajrelease => '6',
        :architecture => 'x86_64',
      }
    end
    context 'when repo is true' do
      let :params do
        { :repo => true }
      end
      it 'should create gluster::repo' do
        should create_class('gluster::repo').with(
          :version => 'LATEST',
        )
      end
    end

The issue here is that the puppet module references a fact that isn’t used (so the catalog compilation works) but also isn’t present in our mocking (the facts block in the example). The old school solution is to add all accessed facts to all tests. This is a lot of copy and paste, but it works. The module claims to support RHEL 6 and 7, but the test above only checks RHEL 6. I would need to copy and paste the whole block, with slightly modified data, to also test on RHEL 7. This gets even more annoying when you also support Debian based systems and/or have a huge amount of tests.

The solution is rspec-puppet-facts. It can automatically mock every operating system specific fact; the list of supported operating systems comes from the metadata.json. The implementation:

  on_supported_os.each do |os, facts|
    context "on #{os}" do
      let(:facts) do
        facts
      end
      context 'with defaults' do
        it { should compile.with_all_deps }
        it 'should create gluster::repo' do
          should create_class('gluster::repo').with(
            version: 'LATEST',
          )
        end
....
      end
    end
  end

Here is the full implementation. rspec-puppet-facts allows us to simply iterate over the list of operating systems and add all their facts to rspec. Take a look at the nesting of the contexts: the actual tests have to be nested in the context that sets the facts, because of the scoping.

Conclusion:
Playing a bit with rubocop and rspec-puppet-facts is a nice way to heavily increase the code quality of existing tests; besides that, you can also increase the number of tests by looping over every OS.

Posted in 30in30, General, Linux, Puppet | Leave a comment

One month in open source projects

I undertook a little experiment last April – to contribute to open source projects for 30 days (similar to #blogmonth). The idea was to contribute in a way that helps the project itself, without any personal gain. For example: implementing a feature that I need doesn’t count, but implementing something that somebody else requested counts.

Why waste your private time for *that* stuff?
I was asked that a few times. Some people don’t understand how open source projects work. You don’t have to pay money for the software; you can use it and do whatever you want with it. But somebody has to have the initial awesome idea for a project, somebody has to implement it, write docs, handle issues and feature requests. Open source projects can only survive when contributors exist.

Personal goals:
My idea was to contribute to a few projects on github. The web UI offers a nice way to see your contributions. I do a lot of stuff for the puppet community in my spare time and at work, so I knew which projects needed a bit of help, mostly puppet modules. My aims were to improve my git skills (use it often, break stuff by accident, learn how to fix it via stackoverflow) and to do a bit of testing and deal with code quality.

Recap after 30 days:
(screenshot: my github contribution stats)
My streak started when I had to do some FOSS stuff for work in the last week of March, but I didn’t count that towards my desired 30-day period. I experienced a few issues in the beginning:

  • How to correctly interpret a feature request written by another contributor (we need an RFC for a machine-parseable format for feature requests)
  • How to write sane and logical commits
  • Dealing with different styles of programming
  • Empathy for the users of the software

Everything I worked with is in English, as other contributors and users write issues, feature requests and comments in English. For most of us English is not our first language, which sometimes leads to communication issues; with a limited grasp of English, discussing technical problems in detail can be very difficult. Providing a detailed description of an issue you found is hard: keep in mind that you have to describe your environment (so others can exclude side effects) and your configuration (so others can try to reproduce it) as well. If you have a feature request, describe what you want to achieve (or what the feature has to achieve), not how you want it implemented. Very often there are several ways to implement a specific feature, and hopefully the person who actually implements it knows best how to do it.

git and git commits are a complicated topic. I wrote a CONTRIBUTING.md for projects I founded which describes my idea. I discussed that in a few projects and was able to convince a few contributors to write smaller commits that only encapsulate a single logical change (one bugfix, or one new feature) and not multiple. This makes a revert in the future way easier if something introduces a regression.

Every developer has his own style; some are similar to your own, some are far, far away (which doesn’t automatically mean they are better or worse). Fathoming the thoughts of a dev by reading his code isn’t always easy, and documentation is not always present or up to date, at times even misleading. Working with many different projects and developers and reading different styles helps you get better at it, but it takes time. You know you’re good when you can determine the author based on the code style.

Empathy, the last and most important point. Maybe you want to implement a feature that would break backwards compatibility. Or you want to change the code style to be more readable, which would be incompatible with old versions of $languageinterpreter (like perl or *ruby*). Also very common: you want to drop support for an older release of the language, but many people still use it and don’t want to or can’t upgrade (RedHat, I hate you, I really do). Finding the best way here is not always easy; there are always people who won’t like your decisions. The goal is to talk to the many people involved and find a working solution together (hey: this is the devops spirit).

Conclusion:
This was a great experience and I still contribute on a daily basis. I finally wrote my first rspec tests, I enabled over 90 rubocop cops in a puppet module, bumped many dependencies and released a few puppet modules. The biggest gain: I was able to work with so many different people. Nice and friendly people, all interested in software. I met so many new faces (via the internet) and I’m looking forward to meeting a few of them in person at the next conference or community event!

If you’re interested in contributing to open source as well, or if you want to blame me because I broke a puppet module you use -> join #voxpupuli on freenode.

Posted in 30in30, General, Nerd Stuff | Leave a comment

Arch Linux step by step installation guide

I recently created a simple step by step guide to get a basic Arch Linux running on a VPS. Most providers don’t offer standard Arch Linux images, only a VNC console + ISO upload; this guide is made for such an environment. I used a VPS from Hosteurope for this.

Getting started:
You can always download the latest ISO from my own mirror. Upload it to your hosting provider, restart the VPS with it and enable VNC access.

Setup network:
VNC is always tricky. The original RFC specifies an 8 character password, no more and no less; everything is plaintext, and keyboard layouts often feel random and inconsistent. We will use VNC as briefly as possible. The ISO will present a fancy syslinux menu where you can choose to boot a 32bit system, a 64bit system, or from the first hard disk; we choose 64bit here. The system will automatically log in on TTY1 with the zsh shell. Here we set up our network:

ip a a 37.61.204.220/24 dev enp0s5
ip r a 169.255.30.1 dev enp0s5
ip r a default via 169.255.30.1

The underlying hypervisor is a bit strange and requires us to set this point-to-point route + gateway. Setting up a DNS resolver isn’t required, but it will speed up later SSH logins (because sshd does a reverse DNS lookup for every incoming connection):

echo "nameserver 80.237.128.144" > /etc/resolv.conf
systemctl start sshd
passwd


We can now connect via ssh to the VM \o/
You can use whatever partition schema you like or stay with the default one provided by the hoster. I won’t go into details about creating partitions here because “the correct partitioning program” and the schema used are always very opinionated. I used the following layout, created with parted (1 is the bios_grub partition that is needed for grub on GPT partition tables, 2 is for / and 3 is for separate data):

sh-4.3# parted /dev/sda unit s print
Model: ATA ubuntu1404-x86_6 (scsi)
Disk /dev/sda: 9017753600s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start      End          Size         File system  Name  Flags
 1      2048s      4095s        2048s                           bios_grub
 2      4096s      97656831s    97652736s    ext4
 3      97656832s  9017753566s  8920096735s  xfs

sh-4.3#

You should check the correct alignment of your partitions:

for partition in {1..3}; do
  parted /dev/sda align-check optimal "$partition";
done
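align-check is doing roughly this arithmetic: with 512-byte sectors, a start sector is 1 MiB aligned (the common optimal alignment) when it is divisible by 2048. The start sectors below are the ones from the parted output above:

```shell
# 1 MiB / 512 B = 2048 sectors; each partition start should be a multiple
for start in 2048 4096 97656832; do
  if [ $((start % 2048)) -eq 0 ]; then
    echo "$start: aligned"
  else
    echo "$start: misaligned"
  fi
done
```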

If everything is correct we can start formatting, mount it and install the minimal list of needed packages for a good system:

mkfs.ext4 /dev/sda2
mkfs.xfs /dev/sda3
mount /dev/sda2 /mnt
mkdir /mnt/glusterfs
mount /dev/sda3 /mnt/glusterfs
pacstrap /mnt base base-devel vim htop grub openssh

Only base (a group of packages) is needed for a working system, but I can’t work without vim and htop. openssh is needed for remote access when we boot the system for the first time. base-devel (also a group) is needed on many systems as well, so I’m installing it too.

Configuration:
A system needs an fstab file, and writing it by hand is always wacky. Specifying disks by their UUID is recommended, but doing that by hand is error-prone, so people tend to refer to block device paths like /dev/sda1. This is okay until someone adds another hard disk or changes the disk driver. Thankfully the Arch developers created a little wrapper script that creates the fstab with UUIDs for us:

genfstab -U /mnt >> /mnt/etc/fstab
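A generated entry looks roughly like this (the UUID below is a made-up placeholder, and the mount options are just typical defaults):

```shell
# shape of a genfstab -U entry: device by UUID, mountpoint, fs, options, dump, pass
uuid="2f9a33e1-0000-4b5c-9d6e-000000000000"   # hypothetical, yours will differ
printf '%s\t%s\t%s\t%s\t%s %s\n' "UUID=${uuid}" "/" "ext4" "rw,relatime" 0 1
```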

Now we switch into a chroot environment to configure the future hostname, language, keyboard layout and stuff like this:

arch-chroot /mnt
echo myawesomeserver.mydomain.io > /etc/hostname
echo LANG=en_US.UTF-8 > /etc/locale.conf
echo "en_US.UTF-8 UTF-8" > /etc/locale.gen
locale-gen
echo KEYMAP=de > /etc/vconsole.conf
passwd

Almost-final step: Make it bootable and accessible:
Arch Linux doesn’t ship a default initramfs, so we need to generate one. Then we can install grub and create the grub configuration file:

mkinitcpio -p linux
grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg

Last step: Configure the network and enable sshd:
Systemd brings the cool systemd-networkd to automatically set up your network with a unified configuration format across every Linux distribution. We put the following content into /etc/systemd/network/wired.network:

[Match]
Name=enp0s5

[Address]
Address=37.61.204.220
Peer=169.255.30.1/32
#Address=2A01:488:67:1000:253D:CCDC:0:1

[Network]
Gateway=169.255.30.1
Gateway=fe80::1
DNS=80.237.128.144
DNS=80.237.128.145

We can enable the needed daemon and sshd with:

systemctl enable sshd
systemctl enable systemd-networkd

(Please also check out Doing IPv6 with systemd-networkd – the correct way where I describe a better way to handle IPv6)
However, there is one current issue with systemd-networkd (maybe in combination with the used hypervisor): it won’t configure the specified IPv6 address (that’s why I commented it out) but throws a syntax error (wat?!). We can create a oneshot service that configures the address after the network is up, in /etc/systemd/system/ipv6.service:

[Unit]
Description=Adds IPv6 address
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/bin/ip -6 address add 2A01:488:67:1000:253D:CCDC:0:1/128 dev enp0s5

We can’t enable user-generated units in a chroot, so we need to do it by hand:

ln -s /etc/systemd/system/ipv6.service /etc/systemd/system/multi-user.target.wants/ipv6.service

The default sshd config doesn’t allow root login with a password, you now have three options:

  • create a separate user
  • Allow password based login
  • Throw your ssh key (only public part please) into /root/.ssh/authorized_keys

Then we can exit the chroot environment, unmount everything and reboot \o/

Conclusion:
It took me 13 minutes to set up everything, from booting the ISO to the reboot. This can even be automated with a little bash script. My fellow Bluewind provided me with a little script that automates the installation. The ISO can fetch and execute a script if you provide it on the kernel cmdline, which is useful if you want to completely automate the setup and boot the ISO via PXE.

Posted in 30in30, General, Linux | 4 Comments