Short Tip: Replacing a failed drive in an mdadm software RAID

Sometimes you check your file server and your RAID looks like this:
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md125 : active (auto-read-only) raid6 sdl[1] sdh[7] sdf[5] sdk[2] sdg[6] sdm[8] sdi[0] sdn[10] sde[4]
23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/9] [UUU_UUUUUU]
bitmap: 17/22 pages [68KB], 65536KB chunk

Notice the _ in [UUU_UUUUUU]? It means your RAID is degraded and one drive is missing. You had one job, hard drive! You can use the output of find /dev/sd? and lsblk to check which device is the missing one. If the drive is still visible to the system it should show up there, and in that case it should also be possible to query it for its serial number:

smartctl -i /dev/sdj | awk '/Serial/ {print $3}'
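If the device node is gone entirely, the remaining members can be read straight off the mdstat line. A small sketch (sample line shortened from the output above) whose result you can diff against `lsblk -dno NAME`:

```shell
# Extract member device names from a pasted mdstat line; any disk that
# lsblk shows but this list lacks is the candidate for replacement
mdstat_members() {
  grep -o 'sd[a-z]*\[' | tr -d '[' | sort
}

echo 'md125 : active raid6 sdl[1] sdh[7] sdf[5]' | mdstat_members
# → sdf, sdh, sdl (one per line)
```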

You should now be able to locate the failed drive in the system and replace it. After that you can add the new one to the RAID:

mdadm /dev/md125 --add /dev/sdj

Tip 1:
Hard drive failure rates follow the bathtub curve: drives tend to die in their first hours of usage or at the end of their life, mostly around 25,000-30,000 power-on hours (this depends on many, many factors and is just a rough estimate). Keep that in mind for the next hours while your RAID rebuilds: the new drive may fail as well, and then you have to replace it again.

Tip 2:
You can speed up the rebuild with:

echo 2000000000 > /proc/sys/dev/raid/speed_limit_max
echo 200000000 > /proc/sys/dev/raid/speed_limit_min

This lets the kernel use all available I/O bandwidth for the rebuild. You can watch the progress with watch cat /proc/mdstat.
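For context, the speed_limit_* values are in KiB/s (per device, per the md man page), so the numbers above are deliberately absurd; the rebuild is effectively never throttled:

```shell
# 2000000000 KiB/s expressed in GiB/s: far more than any disk can deliver
echo "$(( 2000000000 / 1024 / 1024 )) GiB/s"   # → 1907 GiB/s
```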

Posted in 30in30, General, Linux, Short Tips

Doing IPv6 with systemd-networkd – the correct way

Understanding the docs:
I blogged about IPv6 with systemd-networkd in the Arch Linux step by step installation guide and provided a hacky workaround to get a working IPv6 address configured. I read through the docs again and had a longer discussion with Silvio Knizek (systemd guru). We noticed that the documentation isn’t very clear about the correct setup for dual stack with static addresses. The manpage says about the [Address] section that it can contain an Address= option and advises you to check the docs about the [Network] section for more information. You can also have multiple [Address] sections. The docs about [Network] say that you can have multiple Address= attributes; having multiple addresses in a single [Address] block, however, doesn’t work.

TL;DR: docs are complicated, here is a working setup:

[Match]
Name=enp0s5

[Address]
Address=37.61.202.154/32
Peer=169.255.30.1/32

[Address]
Address=2a01:488:67:1000:253d:ca9a:0:1/128

[Network]
Gateway=169.255.30.1
Gateway=fe80::1
DHCP=no
DNS=80.237.128.144
DNS=80.237.128.145
Posted in 30in30, General, Internet found pieces, Linux

Linux Short Tip: Correct IPv6 with ferm firewalling

I mentioned ferm in my last post about gluster (ferm is an iptables/ip6tables abstraction layer written in Perl with a nice firewall config format). The default rule set looks like this:

table filter {
    chain INPUT {
        policy DROP;

        # connection tracking
        mod state state INVALID DROP;
        mod state state (ESTABLISHED RELATED) ACCEPT;

        # allow local connections
        interface lo ACCEPT;

        # respond to ping
        proto icmp icmp-type echo-request ACCEPT;

        # allow SSH connections
        proto tcp dport ssh ACCEPT;

        # ident connections are also allowed
        proto tcp dport auth ACCEPT;

        # the rest is dropped by the above policy
    }

    # outgoing connections are not limited
    chain OUTPUT policy ACCEPT;

    # this is not a router
    chain FORWARD policy DROP;
}

Most people tend to copy and paste that block and wrap it in domain ip6 {} if they also want to filter IPv6. This works fine except for the ICMP part: it only allows echo requests (a normal ping/mtr). But IPv6 does the Neighbor Discovery Protocol (the successor of the Address Resolution Protocol) via ICMPv6. Your device won’t be able to reach its gateway without allowing ICMP completely, so a simple copy and paste will break existing IPv6 setups. My fixed version is:

## respond to ping
#proto icmp icmp-type echo-request ACCEPT;
# allow all icmp (needed for ipv6 NDP)
proto icmp ACCEPT;
Posted in 30in30, General, Linux, Short Tips

Short Tip: Setup glusterfs share on Arch Linux

I wrote a detailed tutorial for an Arch Linux installation a few days back. This is a quick follow-up post on creating a distributed-replicated gluster share.

The goal is to create a mirror for several Linux distributions. A mirror needs a lot of disk space, but big machines are expensive, and I would also like to have a bit of redundancy. The idea is to use 4 machines with around 4.2 TB of storage each. Gluster allows a good combination of replication and distribution: with a replica count of 2 we can use 8.4 TB of the raw 16.8 TB, while the other half holds the replicas.
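The capacity math for a distributed-replicated volume is simply raw space divided by the replica count (numbers from the setup above, in GB for integer math):

```shell
# 4 nodes with ~4.2 TB each, replica 2 -> usable capacity
nodes=4; per_node_gb=4200; replica=2
echo "$(( nodes * per_node_gb / replica )) GB usable"   # → 8400 GB usable
```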

The installation is quite easy. We need to install the rpcbind and glusterfs packages. I’ve got a separate partition on all machines, mounted at /glusterfs. After installing all packages we can directly create the share (run the following on any of the nodes). Important note: do not run the probe command for the server you are currently working on (we’re on server4 in this example):

gluster peer probe server1
gluster peer probe server2
gluster peer probe server3
gluster volume create mirror replica 2 transport tcp \
server1:/glusterfs/mirror \
server2:/glusterfs/mirror \
server3:/glusterfs/mirror \
server4:/glusterfs/mirror
gluster volume start mirror

Now we should have a working share, hooray. We can verify the peer and share state:

gluster peer status
gluster volume info mirror

We’re now able to mount the share via the glusterfs fuse module on any node:

mount -t glusterfs server1:/mirror /srv/mirror
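To make the mount survive reboots, a matching fstab entry could look like this (server and paths taken from the example above; _netdev defers mounting until the network is up):

```
server1:/mirror  /srv/mirror  glusterfs  defaults,_netdev  0  0
```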

There isn’t any encryption on the gluster traffic, so we should do some firewalling. In my setup nobody except the gluster nodes themselves will mount the share. Here is an example ferm config:

@def $node1_ipv4 = ( ipv4 );
@def $node1_ipv6 = ( ipv6 );
@def $node2_ipv4 = ( ipv4 );
@def $node2_ipv6 = ( ipv6 );
@def $node3_ipv4 = ( ipv4 );
@def $node3_ipv6 = ( ipv6 );
@def $node4_ipv4 = ( ipv4 );
@def $node4_ipv6 = ( ipv6 );

table filter {
  chain INPUT {
    policy DROP;

    # connection tracking
    mod state state INVALID DROP;
    mod state state (ESTABLISHED RELATED) ACCEPT;

    # allow local connections
    interface lo ACCEPT;

    # respond to ping
    proto icmp icmp-type echo-request ACCEPT;

    # allow SSH connections
    proto tcp dport ssh ACCEPT;

    # allow gluster
    proto tcp dport (111 24007 24008 49152 49153 49154 49155 49156 49157) saddr ( $node1_ipv4 $node2_ipv4 $node3_ipv4 $node4_ipv4 ) ACCEPT;
    # the rest is dropped by the above policy
  }

  # outgoing connections are not limited
  chain OUTPUT policy ACCEPT;

  # this is not a router
  chain FORWARD policy DROP;
}

domain ip6 {
 table filter {
    chain INPUT {
      policy DROP;

      # connection tracking
      mod state state INVALID DROP;
      mod state state (ESTABLISHED RELATED) ACCEPT;

      # allow local connections
      interface lo ACCEPT;

      ## respond to ping
      #proto icmp icmp-type echo-request ACCEPT;
      # allow all icmp (needed for ipv6 ND and so on)
      proto icmp ACCEPT;

      # allow SSH connections
      proto tcp dport ssh ACCEPT;

      # allow gluster
      proto tcp dport (111 24007 24008 49152 49153 49154 49155 49156 49157) saddr ( $node1_ipv6 $node2_ipv6 $node3_ipv6 $node4_ipv6 ) ACCEPT;
      # the rest is dropped by the above policy
    }

    # outgoing connections are not limited
    chain OUTPUT policy ACCEPT;

    # this is not a router
    chain FORWARD policy DROP;
  }
}
Posted in 30in30, General, Linux, Short Tips

Fixing and improving rspec tests

Today I stumbled across our puppet module for gluster. My goals for the next days:

  • modulesync with voxpupuli default settings
  • Fix rspec tests to work with our new test matrix (newer puppet versions and STRICT_VARIABLES=yes)
  • Make a new release for the forge
  • Add Arch Linux support
  • Make another release

Fixing rubocop the nice way:
The module has been modulesynced before, so there weren’t many changes except for the conversion to the new hash notation that was introduced in Ruby 1.9. These changes happened here and here. I recommend the following command to get a list of all failed cops:

bundle exec rubocop -c .rubocop.yml -D

Now we know the names of the cops and the files that need improvement. This allows us to try the awesome autofix function, restricted to a certain cop or even a certain file (remember how to commit things: always small, logical changes):

bundle exec rubocop --only Style/HashSyntax --auto-correct my/awesome/file.rb

Bringing rspec to the next level of awesomeness:
Now we get to the failed rspec tests, here is one example:

  describe 'installing on Red Hat Enterprise Linux' do
    let :facts do
      {
        :osfamily => 'RedHat',
        :operatingsystem => 'RedHat',
        :operatingsystemmajrelease => '6',
        :architecture => 'x86_64',
      }
    end
    context 'when repo is true' do
      let :params do
        { :repo => true }
      end
      it 'should create gluster::repo' do
        should create_class('gluster::repo').with(
          :version => 'LATEST',
        )
      end
    end

The issue here is that the puppet module references a fact that isn’t present in our mocking (lines 4-7), so the catalog compilation fails with STRICT_VARIABLES=yes. The old school solution is to add all accessed facts to all tests. This is a lot of copy-and-paste, but it works. The module claims to support RHEL 6 and 7, but the test above only checks RHEL 6. I would need to copy and paste the whole block to also test on RHEL 7, just with slightly different data. This gets even more annoying when you also support Debian-based systems and/or have a huge number of tests.

The solution is rspec-puppet-facts. It can automatically mock every operating-system-specific fact; the list of operating systems to cover comes from the metadata.json. The implementation:

  on_supported_os.each do |os, facts|
    context "on #{os}" do
      let(:facts) do
        facts
      end
      context 'with defaults' do
        it { should compile.with_all_deps }
        it 'should create gluster::repo' do
          should create_class('gluster::repo').with(
            version: 'LATEST',
          )
        end
....
      end
    end
  end

Here is the full implementation. rspec-puppet-facts allows us to simply iterate over the list of operating systems and add all their facts to rspec. Take a look at the nesting of the contexts: because of scoping, the actual tests have to be nested inside the context that sets the facts.

Conclusion:
Playing a bit with rubocop and rspec-puppet-facts is a nice way to heavily increase the code quality of existing tests; besides that, you can also increase the number of tests by looping over every OS.

Posted in 30in30, General, Linux, Puppet

One month in open source projects

I undertook a little experiment last April: contributing to open source projects for 30 days (similar to #blogmonth). The idea was to contribute in a way that helps the project itself, without any personal gain. For example: implementing a feature that I need doesn’t count, but implementing something that somebody else requested does.

Why waste your private time on *that* stuff?
I was asked that a few times. Some people don’t understand the way open source projects work. You don’t have to pay money for the software; you can use it and do whatever you want with it. But somebody has to have the initial awesome idea for a project, somebody has to implement it, write docs, and handle issues and feature requests. Open source projects can only survive as long as contributors exist.

Personal goals:
My idea was to contribute to a few projects on GitHub. The web UI offers a nice way to see your contributions. I do a lot of stuff for the puppet community in my spare time and at work, so I knew which projects needed a bit of help, mostly puppet modules. My aims were to improve my git skills (use it often, break stuff by accident, learn how to fix it via Stack Overflow) and to do a bit of testing and care about code quality.

Recap after 30 days:
[GitHub contribution graph]
My streak started when I had to do some FOSS stuff for work in the last week of March, but I didn’t count that towards my desired 30-day period. I ran into a few issues in the beginning:

  • How to correctly interpret a feature request written by another contributor (we need an RFC for a machine-parseable format for feature requests)
  • How to write sane and logical commits
  • Dealing with different styles of programming
  • Empathy for the users of the software

Everything I worked with is in English, as other contributors and users write issues, feature requests and comments in English. For most of us English is not our first language, which sometimes leads to communication issues; with a limited grasp of English, discussing technical problems in detail can be very difficult. Providing a detailed description of an issue you found is hard: keep in mind that you also have to describe your environment (so others can exclude side effects) and your configuration (so others can try to reproduce it). If you have a feature request you should describe what you want to achieve (or what the feature has to achieve), not how you want it to be implemented. Very often there are several ways to implement a specific feature, and hopefully the person who actually implements it knows best how to do it.

git and git commits are a complicated topic. I wrote a CONTRIBUTING.md for projects I founded which describes my approach. I discussed it in a few projects and was able to convince a few contributors to write smaller commits that encapsulate only a single logical change (one bugfix or one new feature), not multiple. This makes a future revert much easier if something introduces a regression.

Every developer has their own style, some similar to your own, some far, far away (which doesn’t automatically mean they are better or worse). Fathoming the thoughts of a developer by reading their code isn’t always easy, and documentation is not always present or up to date, at times even misleading. Working with many different projects and developers and reading different styles helps you get better at it, but it takes time. You know you’re good when you can determine the author based on the code style alone.

Empathy, the last and most important point. Maybe you want to implement a feature that would break backwards compatibility, or change the code style to be more readable for developers at the cost of being incompatible with old versions of $languageinterpreter (like Perl or *Ruby*). Also very common: you want to drop support for an older release of the language, but many people still use it and don’t want to or can’t upgrade (Red Hat, I hate you, I really do). Finding the best way here is not always easy; there will always be people who don’t like your decisions. The goal is to speak to the many people involved and find a working solution together (hey: this is the devops spirit).

Conclusion:
This was a great experience and I still contribute on a daily basis. I finally wrote my first rspec tests, enabled over 90 rubocop cops in a puppet module, bumped many dependencies and released a few puppet modules. The biggest profit: I was able to work with so many different people. Nice and friendly people, all interested in software. I met so many new faces (via the internet) and I’m looking forward to meeting a few of them in person at the next conference or community event!

If you’re interested in contributing to open source as well, or if you want to blame me because I broke a puppet module you use: join #voxpupuli on freenode.

Posted in 30in30, General, Nerd Stuff

Arch Linux step by step installation guide

I recently created a simple step by step guide to get a basic Arch Linux running on a VPS. Most providers don’t offer standard Arch Linux images, but rather a VNC console + ISO upload solution. This guide is made for such an environment; I used a VPS from Hosteurope.

Getting started:
You can always download the latest ISO from my own mirror. Upload it to your hosting provider, restart the VPS with it and enable VNC access.

Setup network:
VNC is always tricky. The original RFC specifies an 8 character password, no more and no less; everything is plaintext, and keyboard layouts often feel random and inconsistent. We will use VNC as briefly as possible. The ISO presents a fancy syslinux menu where you can choose to boot a 32 bit system, a 64 bit one, or from the first hard disk; we choose 64 bit here. The system will automatically log in on TTY1 with the zsh shell. Here we set up our network:

ip a a 37.61.204.220/24 dev enp0s5   # short for: ip address add
ip r a 169.255.30.1 dev enp0s5       # short for: ip route add (host route to the gateway)
ip r a default via 169.255.30.1

The underlying hypervisor is a bit strange and requires us to set this point-to-point route + gateway. Setting up a DNS resolver isn’t strictly needed, but it will speed up further SSH logins (because sshd does a reverse DNS lookup for every incoming connection):

echo "nameserver 80.237.128.144" > /etc/resolv.conf
systemctl start sshd
passwd


We can now connect to the VM via ssh \o/
You can use whatever partition schema you like, or stay with the default one provided by the hoster. I won’t go into detail about creating partitions here because “the correct partitioning program” and the schema used are always very opinionated. I used the following layout, created with parted (1 is the bios_grub partition that is needed for grub on GPT partition tables, 2 is for / and 3 is for separate data):

sh-4.3# parted /dev/sda unit s print
Model: ATA ubuntu1404-x86_6 (scsi)
Disk /dev/sda: 9017753600s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start      End          Size         File system  Name  Flags
 1      2048s      4095s        2048s                           bios_grub
 2      4096s      97656831s    97652736s    ext4
 3      97656832s  9017753566s  8920096735s  xfs

sh-4.3#

You should check the correct alignment of your partitions:

for partition in {1..3}; do
  parted /dev/sda align-check optimal "$partition";
done
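What align-check verifies can also be cross-checked by hand: with 512 B sectors, a partition is 1 MiB aligned when its start sector is divisible by 2048 (start sectors taken from the parted output above):

```shell
# 2048 sectors x 512 B = 1 MiB; remainder 0 means the start is aligned
for start in 2048 4096 97656832; do
  echo "$start: $(( start % 2048 ))"
done
# → every remainder is 0
```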

If everything is correctly aligned we can format the partitions, mount them and install the minimal list of packages needed for a good system:

mkfs.ext4 /dev/sda2
mkfs.xfs /dev/sda3    # partition 3 is xfs, matching the partition table above
mount /dev/sda2 /mnt
mkdir /mnt/glusterfs
mount /dev/sda3 /mnt/glusterfs
pacstrap /mnt base base-devel vim htop grub openssh

Only base is needed to get a working system (it is a group of packages), but I can’t work without vim and htop. openssh is needed for remote access when we boot the system for the first time. base-devel (also a group) is needed on many, many systems as well, so I’m installing it too.

Configuration:
A system needs an fstab file, and writing it by hand is always wacky. Specifying disks by their UUID is recommended, but doing that manually is error-prone, so people tend to refer to block device paths like /dev/sda1 instead. This is okay until someone adds another hard disk or your disk driver changes. Thankfully the Arch developers created a little wrapper script that generates the fstab with UUIDs for us:

genfstab -U /mnt >> /mnt/etc/fstab
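The generated file then contains UUID-based entries, roughly of this shape (the UUID and mount options here are illustrative placeholders, not real values):

```
# /dev/sda2
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  rw,relatime  0 1
```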

Now we switch into a chroot environment to configure the future hostname, language, keyboard layout and stuff like this:

arch-chroot /mnt
echo myawesomeserver.mydomain.io > /etc/hostname
echo LANG=en_US.UTF-8 > /etc/locale.conf
echo "en_US.UTF-8 UTF-8" > /etc/locale.gen
locale-gen
echo KEYMAP=de > /etc/vconsole.conf
passwd

Almost-final step: Make it bootable and accessible:
Arch Linux doesn’t ship a default initramfs, so we need to generate one. Then we can install grub and create the grub configuration file:

mkinitcpio -p linux
grub-install /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg

Last step: Configure the network and enable sshd:
Systemd brings the cool systemd-networkd to automatically set up your network, with a unified configuration format across every Linux distribution. We paste the following content into /etc/systemd/network/wired.network:

[Match]
Name=enp0s5

[Address]
Address=37.61.204.220
Peer=169.255.30.1/32
#Address=2A01:488:67:1000:253D:CCDC:0:1

[Network]
Gateway=169.255.30.1
Gateway=fe80::1
DNS=80.237.128.144
DNS=80.237.128.145

We can enable systemd-networkd and sshd with:

systemctl enable sshd
systemctl enable systemd-networkd

(Please also check out Doing IPv6 with systemd-networkd – the correct way, where I describe a better way to handle IPv6.)
However, there is one current issue with systemd-networkd (maybe in combination with the hypervisor used): it won’t configure the specified IPv6 address (that’s why I commented it out) but throws a syntax error (wat?!). We can create a oneshot service that configures the address after the network is up, in /etc/systemd/system/ipv6.service:

[Unit]
Description=Adds IPv6 address
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/bin/ip -6 address add 2A01:488:67:1000:253D:CCDC:0:1/128 dev enp0s5

We can’t enable user-generated units in a chroot, so we need to do it by hand:

ln -s /etc/systemd/system/ipv6.service /etc/systemd/system/multi-user.target.wants/ipv6.service

The default sshd config doesn’t allow root login with a password, so you now have three options:

  • create a separate user
  • Allow password based login
  • Throw your ssh key (only public part please) into /root/.ssh/authorized_keys

Then we can exit the chroot environment, unmount everything and reboot \o/

Conclusion:
It took me 13 minutes to set everything up, from booting the ISO to the reboot. This can even be automated with a little bash script: my fellow Bluewind provided me with a script that automates the installation. The ISO can fetch and execute a script if you provide one on the kernel cmdline, which is useful if you want to completely automate the setup and boot the ISO via PXE.

Posted in 30in30, General, Linux

OSDC 2016 Talk Recommendations

OSDC – the Open Source Datacenter Conference – happened just a few days back. Here is a short list of videos that I can recommend:

A Job and an Adventure – Dawn Foster

I met Dawn once at Puppetcamp Düsseldorf, she organized the event. Dawn is a powerful woman, speaking about open source and getting a job in that area.

An Introduction to Software Defined Networking SDN – Martin Loschwitz

Martin is well known for his work at Hastexo and now at Sys11. He is responsible for the Sys11-Stack. I had the opportunity to see his work at the last Sys11 Conference.

Introduction to Testing Puppet Modules – David Schmitt

David is a Senior Software Engineer at Puppet(labs). He puts a huge effort into making rspec and Puppet modules better, and rspec basics are the central point of his talk. I’m lucky to have David as an advisor for my bachelor thesis this summer.

What´s wrong with my Puppet – Felix Frank

Felix has been a featured Puppet community member for a very long time. You can always recognize him by his hair. I was able to meet him at the last Config Management Camp and we had a great time. In this talk he goes into detailed puppet debugging for different use cases with different tools.

Posted in 30in30, General, Internet found pieces

Let’s Encrypt automation – The awesome way

Free SSL for the masses \o/
Cryptography is important. I like to encrypt as much traffic and data as possible, not only the important stuff. Let’s Encrypt is a new project, sponsored by multiple big companies and the Linux Foundation, that provides free and automated SSL certificates for everyone. There are a few – not so awesome – solutions to get a certificate. The project ships a little daemon which can communicate with their API, but I don’t like that: running a daemon is always a security challenge. It is possible to use the daemon as a client only: start it once, renew the cert or get a new one, exit.

My fellow aibo blogged about this in January and created a nice systemd service + timer for it. You had to run the command from the service once in a terminal because it asks you to accept the Terms of Service and to provide an email address.

I recently made a little adjustment together with aibo to also supply these two pieces of information, so now you can completely automate the SSL setup. Here is our modified service file:

Setup:

[Unit]
Description=renew certificates for %I

[Service]
Type=oneshot
ExecStartPre=/usr/bin/mkdir -p /tmp/letsencrypt-auto
ExecStart=/usr/bin/letsencrypt certonly \
  --webroot \
  --webroot-path=/tmp/letsencrypt-auto \
  --renew-by-default \
  --keep \
  --agree-tos \
  --email tim@bastelfreak.de \
  -d %I
ExecStartPost=/usr/bin/nginx -s reload

[Install]
WantedBy=multi-user.target

Save that as /etc/systemd/system/letsencrypt-renew@.service, also get the following timer for /etc/systemd/system/letsencrypt-renew@.timer:

[Unit]
Description=run cert renew for %I every two months

[Timer]
# 01:00 on the 4th day of every second month
OnCalendar=*-*/2-4 1:0:0
Persistent=true

[Install]
WantedBy=multi-user.target

You now want an SSL cert for myawesomestuff.example.com? Just do systemctl enable letsencrypt-renew@myawesomestuff.example.com.timer and wait until the timer starts. Or if you want a new cert right now, just run systemctl start letsencrypt-renew@myawesomestuff.example.com.service. You need more certificates? Just enable the timer again with a different domain name \o/

Webserver integration:
Here is a snippet from my nginx vhost:

upstream jenkins {
  server 127.0.0.1:8090 fail_timeout=0;
}

server {
  listen 80;
  listen [::];
  server_name ci.virtapi.org;

  location /.well-known {
    root /tmp/letsencrypt-auto;
  }

  location / {
    return 301 https://$host$request_uri;
  }
}

server {
  listen 443 ssl;
  listen [::]:443 ssl;

  server_name ci.virtapi.org;

  ssl_certificate /etc/letsencrypt/live/ci.virtapi.org/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/ci.virtapi.org/privkey.pem;

  location / {
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $scheme;
    proxy_redirect http:// https://;
    proxy_pass              http://jenkins;
  }
}

Conclusion:
[image: “SSL all the things” meme]
Let’s Encrypt is really cool, systemd is also cool, and the combination is even cooler. This gives us a lightweight solution to get as many certificates as we want.

Posted in 30in30, General, Internet found pieces, IT-Security, Linux

Short Tip: Fiddling around with login shells

I’m currently playing around with LARS. This script collection creates an Arch ISO with some post-install magic, for example setting a login shell for root. The code used is:

usermod -s /usr/bin/bash root

I could boot the image and ssh into it with my key. But three things weren’t working:

  • Login as normal user and do su
  • Login at a TTY
  • Login via ssh with a password

I got an “Access Denied” message in all three cases. I dug around for two days. I knew that Arch Linux moved binaries around a few years back: /bin is now a symlink to /usr/bin. Just for fun a colleague changed the shell in /etc/passwd to /bin/bash, just like in the good old days, and WTF, everything worked?!
We found the /etc/shells file:

#
# /etc/shells
#

/bin/sh
/bin/bash

# End of file

This file lists all shells that are allowed to be used as a login shell. By default it lists the symlinked path for bash, not the resolved absolute path… I still don’t know why the key-based login was working, but that is another mystery.
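The fix is to register the resolved path as a valid login shell as well. Sketched here on a scratch copy so it is safe to run anywhere; on the real system the target file is /etc/shells and editing it requires root:

```shell
# work on a scratch copy of /etc/shells
shells=$(mktemp)
printf '/bin/sh\n/bin/bash\n' > "$shells"   # contents as shipped
echo /usr/bin/bash >> "$shells"             # also allow the real bash path
grep -x /usr/bin/bash "$shells"             # → /usr/bin/bash
```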

Posted in 30in30, General, Linux, Short Tips