Fiddling with duply and ed25519 keys

I’m using duply for my backups; the config is very simple (/root/.duply/backup/conf):

GPG_KEY='IDOFMYGPGKEY'
GPG_PW='PASSWORD'
GPG_OPTS='--compress-algo=bzip2'
TARGET='sftp://user@storage01.server.de:22//user'
SOURCE='/'
DUPL_PRECMD="nice -n 20 ionice -c 3"
MAX_AGE=2M
MAX_FULL_BACKUPS=8
MAX_FULLBKP_AGE=1W
DUPL_PARAMS="$DUPL_PARAMS --full-if-older-than $MAX_FULLBKP_AGE "
VOLSIZE=512
DUPL_PARAMS="$DUPL_PARAMS --volsize $VOLSIZE "
VERBOSITY=4
DUPL_PARAMS="$DUPL_PARAMS --asynchronous-upload "

I installed a new host with Debian 8, which ships the following software versions:

# dpkg -l | grep -E "paramiko|duply|duplicity"
ii duplicity 0.6.24-1 amd64 encrypted bandwidth-efficient backup
ii duply 1.9.1-1 all easy to use frontend to the duplicity backup system
ii python-paramiko 1.15.1-1 all Make ssh v2 connections with Python (Python 2)

After installing it, I created an ed25519 key pair, copied it to my backup server, and tried to ssh:

# ssh user@storage01.server.de
Could not chdir to home directory /customers/user: No such file or directory
This service allows sftp connections only.
Connection to storage01.server.de closed.

The server only allows SFTP for this user and no normal SSH, so this is expected; the key seems to work. Let’s test duply:

# duply backup status
Start duply v1.9.1, time is 2015-12-04 10:22:49.
Using profile '/root/.duply/backup'.
Using installed duplicity version 0.6.24, python 2.7.9, gpg 1.4.18 (Home: ~/.gnupg), awk 'GNU Awk 4.1.1, API: 1.1 (GNU MPFR 3.1.2-p3, GNU MP 6.0.0)', bash '4.3.30(1)-release (x86_64-pc-linux-gnu)'.
Autoset found secret key of first GPG_KEY entry 'IDOFMYGPGKEY' for signing.
Checking TEMP_DIR '/tmp' is a folder (OK)
Checking TEMP_DIR '/tmp' is writable (OK)
TODO: reimplent tmp space check
Test - Encrypt to 'IDOFMYGPGKEY' & Sign with 'IDOFMYGPGKEY' (OK)
Test - Decrypt (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.23085.1449220969_*'(OK)

--- Start running command STATUS at 10:22:50.404 ---
The authenticity of host 'storage01.server.de' can't be established.
SSH-RSA key fingerprint is d1:d6:47:0e:d7:6c:98:ba:f5:3c:d2:ef:cd:9f:0a:d7.
Are you sure you want to continue connecting (yes/no)? yes
BackendException: ssh connection to bastelknecht@storage01.server.de:22 failed: No authentication methods available
10:22:52.766 Task 'STATUS' failed with exit code '23'.
--- Finished state FAILED 'code 23' at 10:22:52.766 - Runtime 00:00:02.362 ---

Huh? Broken? But let’s test sftp:

# sftp user@storage01.server.de
Connected to storage01.server.de.
sftp> ls
bastelknecht
sftp> ^D

Works perfectly fine. Long story short: after debugging this for an hour I switched to a simple RSA key pair, aaaaaand it works. python-paramiko doesn’t support newer key types like ed25519; the version shipped with Debian 8 is really old. I will test newer releases in the future.
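If you run into the same problem, here is a minimal sketch of the RSA workaround (the key path is just an example; since the account is sftp-only, the public key has to end up in .ssh/authorized_keys on the server via sftp):

# generate a plain RSA key pair instead of ed25519
ssh-keygen -t rsa -b 4096 -f /root/.ssh/id_rsa

# the account is sftp-only, so upload the public key via sftp
sftp user@storage01.server.de <<< 'put /root/.ssh/id_rsa.pub'

# verify that key authentication works before running duply again
sftp -oIdentityFile=/root/.ssh/id_rsa user@storage01.server.de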


Link collection of the week

Here is another collection of links that I’ve found/read in the past week:


Securing Postfix on a shared Webserver

I’m operating a webserver for shared webspace accounts. I’m not responsible for the stuff hosted on the webspace, but I am for the server itself. Many people like to host their WordPress or Joomla on this server; both applications are well known for a long list of vulnerabilities. I’m running a local Postfix for sending my cron mails, but I don’t want PHP scripts to abuse it for sending spam. You can tell Postfix to only accept mails that are submitted via sendmail/mail from certain users (root in my case). First, create a file that holds every system user that is allowed to send mails:
# cat /etc/postfix/AllowedSystemUsers
root OK

Then tell Postfix to honor this file:

 echo 'authorized_submit_users = hash:/etc/postfix/AllowedSystemUsers' >> /etc/postfix/main.cf

Last step: create the hash map and reload Postfix to activate the changes:

postmap /etc/postfix/AllowedSystemUsers
systemctl reload postfix.service

You can add more users to the file if you want, but keep in mind that you always have to re-run postmap afterwards.
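For example, to also allow a hypothetical backup user:

echo 'backup OK' >> /etc/postfix/AllowedSystemUsers
postmap /etc/postfix/AllowedSystemUsers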

(Take a look here for more)


Fighting PHP spam

A good friend owns a very old Joomla website that has been hacked. It is massively sending spam; how can we investigate this?

First of all, reject outgoing mails in your firewall (ferm.conf style):

  chain OUTPUT {
    policy ACCEPT;

    # connection tracking
    mod state state INVALID DROP;
    # reject every outgoing mail
    proto tcp dport 25 REJECT;
  }
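Apply the ruleset with ferm afterwards (the config path may differ on your system):

ferm /etc/ferm/ferm.conf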

Now we have time to investigate. A good start is a look at the mailq:

...DD460113C27 1388 Tue Dec 1 22:30:58 emily_smith@foobar.de
(delivery temporarily suspended: connect to mailin-02.mx.aol.com[152.163.0.100]:25: Connection refused)
anahas5353@aol.com

-- 441 Kbytes in 270 Requests.

So already 270 spam mails, oops. You can force PHP-FPM to add extra mail headers and log all mail() calls to a file, but also to disable the mail function entirely:

php_admin_flag[mail.add_x_header] = on
php_admin_value[mail.log] = /home/www.fuckedupwebsite.de/logs/mail.php.log
php_admin_value[disable_functions] = mail
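Reload PHP-FPM afterwards so the pool picks up the new settings (the service name depends on your distribution; on Debian 8 it is php5-fpm):

systemctl reload php5-fpm.service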

Now we wait a few minutes until more mails are sent, then take a look at the mail.php.log:
[30-Nov-2015 11:48:09 Europe/Berlin] mail() on [/home/www.fuckedupwebsite.de/htdocs/templates/protostar/images/sql59.php(1962) : eval()'d code:775]: To: ercole.giona@poste.it -- Headers: Date: Mon, 30 Nov 2015 11:48:09 +0100 From: Bernice Lowe Message-ID: X-Priority: 3 X-Mailer: PHPMailer 5.2.9 (https://github.com/PHPMailer/PHPMailer/) MIME-Version: 1.0 Content-Type: multipart/alternative; boundary="b1_ed6348634bcdc8a0ab1da27532c7f85d" Content-Transfer-Encoding: 8bit

So we found the first infected file, sql59.php. Delete it, clean the mail queue with postsuper -d ALL and watch /var/log/mail.log for new entries. It is possible that new mails show up but nothing gets logged to the mail.php.log; in that case we need to find the origin of the mail another way. We can dump a mail that is stuck in the Postfix queue with postcat plus the mail ID from mailq:
postcat -q DD460113C27 | grep X-PHP-Originating-Script
X-PHP-Originating-Script: 1044:template.php(1961) : eval()'d code

Now we know the name of the file, but not its path. Go into the docroot via SSH and find all PHP files with this name that also contain an eval():

find . -type f -name template.php -exec grep -i -l 'eval' '{}' \;
./tmp/install_5523c3dd4f727/installer/adapters/template.php
./tmp/install_553e2d7f39ee9/installer/adapters/template.php

Perfect, just a few matches. Take a look at them, identify the correct one, delete it, clear the queue, watch the mail.log, and repeat for every new mail.
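A small sketch of how to keep an eye on both the queue and the PHP mail log while cleaning up (log path as configured above):

watch -n 10 'mailq | tail -n 1; tail -n 2 /home/www.fuckedupwebsite.de/logs/mail.php.log'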

This is a good way to detect the spamming PHP scripts, but it doesn’t fix the vulnerability that allowed $bad_person to upload these files to the webspace in the first place.


#vDM30in30 review and thoughts

Wuhu I made it \o/

This is the 32nd post that I’ve created this November (if I finish it in the next 8 minutes…). I started the 30 posts in 30 days action on the 2nd of November because Rob encouraged me to do it. The first days were really easy; I started a few new projects which were a good basis for several posts. The middle of the month was hard, I ran out of ideas and motivation. Talking to other participants helped me to get going again. I started one day too late, but nevertheless I was able to write two posts more than I had to.

I learned a few things during this month. I improved my English skills, found a few new friends (some even review my stuff before publishing!), learned many new words and improved my vocabulary. The two most important points: I’m now better at structuring my posts, and more people are reading my stuff. Not only that, they rethink the stuff I write, they provide positive feedback and also scrutinize my posts, which created several awesome discussions about git, Puppet and IT security this month. Discussions like these are key for all of us to get better at our jobs. We argue with others, we learn new things, and sometimes we convince people or change our own opinion on certain points.

I will try to continue writing blog posts more frequently than I did before this action. I am excited to see how many people I can reach next year!

(Here is a collection of all posts from this November)


Baking awesome Cookies (like a software engineer)!

Christmas time is cookie time in Germany. Here is a recipe for, uhm, many:

Ingredients:

  • 500 g flour
  • 250 g normal butter (no light version or anything with extra salt)
  • 250 g sugar
  • 16 g vanilla sugar
  • 4 medium-sized eggs

[Because of stupid web browsers (Chrome) the preview pictures can lie on their side; click on them for high-res pictures with the correct orientation.]

Making this is really easy. Put everything into a big mixing bowl. Tip 1: Do not use a normal bowl; it makes the mixing way harder. Tip 2: Do not use a bowl that is too small; that also makes the mixing harder. Tip 3: DO NOT COMBINE TIP 1 AND TIP 2 (like I did):

And combined with the eggs:

The bowl’s shape is not really suited for mixing, which made it quite hard, but it worked. The dough is ready when it is totally sticky:

Now wrap it up in cling wrap and put it in the freezer for at least 4 hours. This will cool down the butter, which makes it easier to roll out (everything is better with a thumbs up):

After 4 hours you can start to roll it out. Try to make it at least 0.5 cm thick, cut out the cookies, put the pieces on baking paper, coat them with yolk and then add some sugar-based crumbles:

Now comes the really fun part. We are software engineers, so we work like software engineers, even while baking. Rob Nelson talked about deployment patterns during his PuppetConf talk (which I linked here and here):

okay => good => better => new tech => okay => good => better => repeat (expect the good and not the best, but repeat/iterate)

This is a perfect pattern for baking cookies! I made dough for four to five baking trays, so we have four to five iterations to go from “okay cookies” to “freaking awesome cookies”. Here are the other trays that I prepared:



Here is a picture of the first result. They are perfect for our pattern because there is room for improvement!

They are really thin; I had them in the oven for 10 minutes at 200 degrees Celsius (as the original recipe suggested). What can we do to improve them? I made the dough a bit thicker, but didn’t change the time/temperature:

This looks better, but still not perfect. For the next iteration I lowered the time to 9 minutes:

And now I have a huge box full of cookies \o/
Thanks Rob for teaching me how to bake!


Fiddling with MySQL

Let us have a look at a simple procedure in MySQL 5.6:

DELIMITER $
-- OPTION is a reserved word in MySQL, so the second parameter is named opt here
CREATE DEFINER=`root`@`%` PROCEDURE `validate_storage`(IN type INT, IN opt INT, OUT result VARCHAR(3))
BEGIN
  DECLARE id1 INT;
  SELECT id INTO id1 FROM storage_type WHERE id=type;
  SELECT id INTO @id2 FROM cache_option_id WHERE id=opt;
  SELECT IF((id1 = type AND @id2 = opt), 'yes', 'no') INTO result;
END$
DELIMITER ;

Easy one. A note about variables: id1 is a local variable, its scope is limited to the BEGIN/END block, and local variables need to be declared with DECLARE. @id2 is a user-defined session variable, accessible from everywhere in the same session and usable without a declaration.
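A quick way to test the procedure from the shell (the database name and the IDs are just examples):

mysql -u root -p exampledb -e "CALL validate_storage(1, 2, @res); SELECT @res;"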

Here we have got a procedure which wraps an UPDATE in a transaction:

DELIMITER //
CREATE PROCEDURE EventDate_Update_test(OUT isvalid INT)
BEGIN
  START TRANSACTION;
  DECLARE testzeit DATE;
  SET testzeit = CURDATE();
  UPDATE `blabli` SET `Status` = 4 WHERE `EventDate` <= testzeit;
  IF NOT EXISTS (SELECT `Status` FROM `blabli` WHERE `EventDate` <= testzeit AND NOT `Status` = 4)
  THEN
    SET isvalid = 1;
    COMMIT;
  ELSE
    SET isvalid = 0;
    ROLLBACK;
  END IF;
END //
DELIMITER ;

This will throw an error: you cannot declare the variable testzeit there. Why? Because DECLARE is only permitted at the very beginning of a BEGIN/END block, before any other statement, and here START TRANSACTION comes first. I missed this detail in the MySQL docs at first. You can either move the DECLARE before the START TRANSACTION or use a session variable instead. Here is the final working version:

DELIMITER //
CREATE PROCEDURE EventDate_Update_test(OUT isvalid INT)
BEGIN
  START TRANSACTION;
  SET @testzeit = CURDATE();
  UPDATE `blabli` SET `Status` = 4 WHERE `EventDate` <= @testzeit;
  IF NOT EXISTS (SELECT `Status` FROM `blabli` WHERE `EventDate` <= @testzeit AND NOT `Status` = 4)
  THEN
    SET isvalid = 1;
    COMMIT;
  ELSE
    SET isvalid = 0;
    ROLLBACK;
  END IF;
END //
DELIMITER ;

Thanks, MySQL! It took me 30 minutes to figure this out!


Why the internet sucks part 2

[Here is part 1]
The Internet is split up into three different provider types. The first type are the access providers. They provide you with DSL access via old copper-based phone lines or with access to their shared TV cable. Their counterpart are the service providers; they operate datacenters where you can rent webspace, cloud foo or dedicated servers. The third type are the providers in between, Tier 1 and Tier 2 carriers who provide connectivity between two points (for example level3, INIT7, NTT, GTT, OX). I have been working with and for hosting providers since 2009. Most of them really care about their backbone. They operate redundant fiber rings with connections to multiple access providers and carriers, and they always try to have ‘the best network, the lowest latency’ compared to their competitors. The quality and the maximum bandwidth of their backbone is an essential quality attribute.

On the other hand, we have the access providers. They often have a monopoly in certain areas: if you want internet in your living room, you need one specific provider. This leads to a really big problem; they don’t have to care about their network quality, because the customer cannot change providers. The really big players like Deutsche Telekom, France Telecom, Telecom Italia or Telefonica (all of them were state-owned and held a monopoly in their entire country for a long time) don’t operate any big peerings; they want to create a “double payment system”. Their customers have to pay them to be connected to the internet, and they also want money from big content providers like Netflix or YouTube for fast access to their network. This leads to serious drawbacks for the customers: during prime time they get packet loss to YouTube, or only a few kB/s through their 10-50 Mbit line.

Most customers are not aware of this issue. They think it originates with the hosting provider, and they start to blame them. We should all measure our available bandwidth and its quality: do we have direct routing or 30 hops to the destination? Do we have packet loss? We should constantly report these issues to our access providers, otherwise it will get worse every year.
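mtr answers both questions at once; a quick sketch (the target host is just an example):

# show the route and per-hop packet loss over 60 probes
mtr --report --report-cycles 60 youtube.com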


Cooking Chicken Mango Curry

For 3 dinners we need:

Ingredients:

  • 400 g chicken breast fillet
  • 300 g mango
  • a bunch of scallions
  • 6 handfuls of rice

Spices:

  • cane sugar
  • lemon juice
  • 300 ml coconut milk
  • curry paste
  • curry powder
  • garlic powder
  • sea salt
  • turmeric
  • coriander
  • olive oil

[Because of stupid web browsers (Chrome) the preview pictures can lie on their side; click on them for high-res pictures with the correct orientation.]

We start by cutting the fillets into small chunks:

Put the meat into a hot pan with hot olive oil, add a bit of curry paste, curry powder and sea salt. Start chopping the mango while the meat roasts. Put the mango into a pot with cane sugar, the coconut milk, turmeric, coriander and a bit of curry paste:

Now you should lower the heat on the pan, chop the scallions and add them to the meat:

Cook everything on low heat until the mango looks like a sauce, then add it to the meat and heat up a pot for the rice:

Throw everything together when the rice is done:


Zabbix Autodiscovery for KVM VM Images

A very short description about Zabbix:

Zabbix Agent: Executes checks on a system that you want to monitor (for example checking the available memory).

Zabbix Server: Central part that receives values from the agents, serves the frontend with data, throws everything into a database and sends active checks to agents.

Zabbix DB: A database (Postgres in my case) which stores monitoring values but also configuration data. If you run a huge setup you want to put it on a dedicated box.

Zabbix Web: This is the web interface that connects to the DB to present the values in a nice way. It also offers a JSON-based API.

The monitoring itself is split up into multiple parts:
Checks: A check collects a specific piece of data on a node, for example the CPU temperature or memory usage.

Trigger: A trigger uses the observer pattern to start an action if one or multiple checks return a specific value (CPU temp higher than X for Y minutes and network traffic higher than Z).

Notifications: Zabbix can inform you by mail, Jabber, SMS and several other backends about triggers that matched. It is also possible to create escalation paths or to repeat notifications at an interval if nobody fixes or acknowledges the issue.

In a normal case, you define the specific checks for a server. One common scenario is disk size monitoring for virtual machines. But you don’t want to hardcode the paths to the images, or their number, on the Zabbix server/DB; you want to detect them dynamically. Zabbix offers low-level discovery for this use case:

A local discovery script returns a JSON array in macro => value notation; you can create check/trigger templates that use these macros. Here is a short Ruby snippet which detects KVM images and creates the JSON data for you (every image is saved as /var/lib/libvirt/images/VMNAME/image.qcow2, and the name should always be an integer):

#!/usr/bin/env ruby

##
# Written by Tim Meusel
# improved by jokke
# Discovers qcow2 disks + customer IDs
# Returns JSON, tested on Ruby 1.8.7
##

require 'rubygems'
require 'json'

data = {:data => []}
# VM IDs that should never be monitored
ignored_vms = [1]
Dir.glob("/var/lib/libvirt/images/*/image.qcow2").each do |dir|
  # path component 5 is the VM name, which has to be an integer
  vmname = Integer(dir.split(File::SEPARATOR)[5]) rescue next
  next if ignored_vms.include? vmname
  data[:data] << { '{#VMNAME}' => vmname, '{#VMPATH}' => dir }
end

puts JSON.pretty_generate(data)

This will return something like this:
{
  "data": [
    {
      "{#VMNAME}": 2,
      "{#VMPATH}": "/var/lib/libvirt/images/2/image.qcow2"
    },
    {
      "{#VMNAME}": 3,
      "{#VMPATH}": "/var/lib/libvirt/images/3/image.qcow2"
    },
    {
      "{#VMNAME}": 4,
      "{#VMPATH}": "/var/lib/libvirt/images/4/image.qcow2"
    }
  ]
}

You can define this in your Zabbix agent config as a new UserParameter (the Zabbix agent should never run as root; depending on your permissions it can’t read the directories with the images itself, so you need to run the script with sudo or update the permissions):

# cat /etc/zabbix/zabbix_agentd.d/vmimages.conf 
UserParameter=discovery.vmimages,sudo /usr/bin/lld_vsdisks.rb

It is now possible to configure a discovery rule. Log in to your Zabbix web frontend -> Configuration -> Templates -> select the template where you want to add the rule and checks -> Discovery rules -> Create discovery rule. As the key you have to specify the name of the UserParameter (discovery.vmimages); the type is “Zabbix agent”. Now the server knows that every host with this template should have a UserParameter pointing to a script which returns JSON with macros. You can also add “Item prototypes” to this discovery rule. Here is a screenshot of the rule itself and then one of a check:

Zabbix has a built-in check for getting a file size, vfs.file.size IIRC, but this also runs with the Zabbix agent’s privileges, so you can create another UserParameter for checking the size as root:

# cat /etc/zabbix/zabbix_agentd.d/vmimages.conf 
UserParameter=vm.disksize[*],sudo /usr/bin/get_size.sh "$1"
UserParameter=discovery.vmimages,sudo /usr/bin/lld_vsdisks.rb

And the script:

# cat /usr/bin/get_size.sh
#!/bin/bash
wc -c "$1" | awk '{print $1}'
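Both UserParameters call sudo, so the agent user needs matching sudoers entries; a minimal sketch, assuming the agent runs as the user zabbix:

# /etc/sudoers.d/zabbix
zabbix ALL=(root) NOPASSWD: /usr/bin/lld_vsdisks.rb, /usr/bin/get_size.sh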

You *could* do something like this in the vmimages.conf:

UserParameter=vm.disksize[*],sudo wc -c  "$1" | awk '{print $1}'

But then you need to escape the second $1 (the one inside the awk call), because Zabbix treats $1 in a flexible UserParameter as the item key’s positional parameter, which is a real PITA….

You should now have a sweet solution to automatically discover new VM images and monitor their size. You can also add trigger prototypes to the template if you want or add additional checks.
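You can verify the whole chain from the Zabbix server with zabbix_get (the hostname and image path are just examples):

zabbix_get -s kvmhost01 -k discovery.vmimages
zabbix_get -s kvmhost01 -k 'vm.disksize[/var/lib/libvirt/images/2/image.qcow2]'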
