MTR and the IPv6 fuckup

I already wrote about the internet and administrators who are unable to deal with IPv6; here is another sad story:

I use MTR to detect network issues:
$ mtr 8.8.8.8 --report --report-cycles=10 -n
Start: Sat Nov 21 14:31:47 2015
HOST: basteles-bastelknecht.baste Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- 88.198.253.2               0.0%    10    0.4   2.0   0.3  10.0   3.2
  2.|-- 88.198.253.1               0.0%    10    0.6   0.5   0.4   0.7   0.0
  3.|-- 88.198.244.253             0.0%    10    1.2   1.6   1.1   2.5   0.0
  4.|-- 213.239.228.97             0.0%    10    1.1   0.8   0.4   1.4   0.0
  5.|-- 213.239.245.113            0.0%    10    1.0   0.9   0.6   1.4   0.0
  6.|-- 213.239.245.18             0.0%    10    5.3   5.5   5.3   6.3   0.0
  7.|-- 213.239.245.1              0.0%    10    5.3   7.1   5.3  20.7   4.8
  8.|-- 80.81.193.108              0.0%    10    5.8   6.3   5.6   9.9   1.2
  9.|-- 72.14.238.225              0.0%    10    5.9  18.8   5.8  88.6  26.9
 10.|-- 216.239.46.179             0.0%    10    6.3   7.3   6.0  13.1   2.2
 11.|-- 8.8.8.8                    0.0%    10    6.3   6.3   5.9   7.0   0.0

MTR works fine until you place an IPv6 resolver in your /etc/resolv.conf. If MTR tries to use it, it simply exits. I tested this on Debian 8; there is a bug report about this issue from 2012! I try to increase IPv6 traffic, so my first resolver is always an IPv6 one if the host has IPv6 connectivity. Is anyone around who is able to provide a valid patch? Or who knows an alternative to MTR with working IPv6 resolver support? Interestingly, MTR supports tracing of IPv6 addresses very well, just not IPv6 resolvers :(
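
A minimal way to reproduce it (the resolver addresses are just Google's public DNS, used here as an example):

$ cat /etc/resolv.conf
nameserver 2001:4860:4860::8888
nameserver 8.8.8.8
$ mtr 8.8.8.8 --report --report-cycles=10
(instead of printing a report, mtr exits as soon as it tries the IPv6 resolver)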

Posted in 30in30, General | 1 Comment

Why Archlinux is awesome!

Arch, Arch Linux, Archlinux: all of these are used to refer to this project (Arch Linux is the correct name). It is a rolling release distribution, which means that there aren't any major releases. An upstream maintainer creates a new release, for example Nmap 7; the Arch package maintainer tests it and ships it if it works. That's it! Nobody cares whether it is a feature update or "just" a security fix, every update gets shipped. This is important for several reasons:

Don’t fuckup my system!
Debian, CentOS, Ubuntu: they are all well known for shipping old software. They release a new major version, like CentOS 7, which from then on only receives security updates. They don't want big changes that "could break anything in the distro". For example, they ship Apache 2.4 and everything is fine, until the Apache Foundation releases 2.5 with security fixes. Now the CentOS package maintainers have to look at the new source code, figure out what changed, backport those changes to their Apache 2.4 package, and update it. The problem is that a normal package maintainer isn't really familiar with the source code, so the patching breaks things. I had hard fights with Canonical over their broken Qemu, LibVirt and Linux kernel releases; all the issues I had with Ubuntu were caused by their stupid backport patches. You don't have this problem on a rolling release distribution like Gentoo or Arch, because the package maintainers try not to modify the code.

I want a secure environment!
Take a look at the last year: we had so many issues in OpenSSL and Bash. The typical case looks like this: there is a release, for example Bash 4 (a fictional version). The developers release a newer version, Bash 4.5. Arch often gets updates one to seven days after the release. If CentOS initially shipped Bash 4, they will stay with it until support for the complete CentOS release ends, which will be June 2024 for CentOS 7. Now somebody reviews Bash 4, finds a huge security hole, and notices that it was fixed in 4.5 by accident. Nobody told anybody that 4.5 contained important fixes, so nobody backported them. Which means that all these release-based distributions are open for hacking and exploiting. This is a serious issue that has happened several times in the past. It won't happen with a rolling release, because you get every damn update!

But every major software update will break my legacy system, I don’t want rolling release!
I hear that so often, especially from old CentOS administrators; they love their old systems because they never have to touch them (on the other hand, those systems also don't support any software newer than ten years, it simply doesn't work on them). But this statement simply isn't true. Sometimes there are bigger changes that you might want to know about, so you should read the distribution's release notes; bigger changes are announced there (this applies to Gentoo and Arch). But if you use the software as intended (for example with Apache: store your websites where the Apache devs recommend it), everything will keep working, and the package manager, emerge/pacman, will do its job and provide you with running software.
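
For completeness, keeping an Arch box current is a single command:

$ sudo pacman -Syu    # refresh all package databases and upgrade every installed package

That is the whole upgrade story: no release upgrades, no backports, just the latest tested packages.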

So right now we've got the following reasons for Arch:

  • Always up-to-date software => no hateful major OS upgrades
  • As secure as possible (regarding patches)
  • No packages broken by package maintainers' backport patches

But there are other cool things about Arch: the community is great! I'm proud to say that I'm part of this community. We've got an active IRC channel where you can really always ask questions, we give surprise birthday presents that are sometimes recorded on cam, and I also try to organize in-person meetups from time to time. I'm very proud that I could organize the first Arch BBQ event this summer, with a lot of meat.

Our next reunion will probably be on the 9th of January, when we will do a Christmas/New Year cooking session.

Conclusion
The technical aspects of a Linux distribution are important when you decide on your next most-loved OS, but I think the community is just as important. It provides help and feedback, and it also allows you to participate and to direct the development. Let me know what your most-loved OS is and why!

Posted in 30in30, General, Linux | 1 Comment

Link collection of the (last) week(s)

Some interesting and important happenings and things I found in the past week (and prior):

Posted in 30in30, General, Internet found pieces, IT-Security, Linux, Puppet | Leave a comment

Building APIs with swagger

I’m currently designing a new API endpoint for marmoset. In general, there are two ways to build this: Just start with coding or start with planning. As we are smart people, of course we start with planning! There are several good ways to plan an API:

I chose swagger in this case, but why? Swagger is a popular framework for REST APIs. I want to create a new API endpoint that follows the REST design concept, so this is a good start. Swagger is also very popular in my working environment (which is another argument, but not a good one).
Swagger offers an editor where you can design a YAML document that describes your API (similar to the RAML project). Defining your API in a digital way is a good thing, because it is machine-parsable, so you can reuse the definition later for $things. You also end up with a solid document that you can rely on during technical discussions with team members. And while you write the API itself, based on the document, others can already start writing applications against this API, also based on the YAML notation.
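
To give you an idea of the notation, here is a minimal, completely made-up swagger 2.0 definition (the /vms path and all descriptions are hypothetical, this is not the real marmoset API):

$ cat > api.yaml <<'EOF'
swagger: "2.0"
info:
  title: Example VM API    # hypothetical API, for illustration only
  version: "1.0.0"
paths:
  /vms:
    get:
      summary: List all virtual machines
      responses:
        "200":
          description: An array of VM objects
EOF

You can paste something like this into the swagger editor and it renders the documentation next to it while you type.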

Swagger’s editor has two awesome features: You can export the yaml as a basic server in several languages/frameworks, but also as a simple client. This prevents a lot of manual code typing.

Another awesome thing about swagger: you can create a fancy HTML UI, called Swagger UI, where you can play with the API.


How do you create this UI? There is no creating it by hand! You only need to integrate a swagger lib into your project and write a good description for every method. Swagger will parse this and create the UI automatically. Why is this important? Because of another design paradigm:

There must be only one source of truth

Which means: you can write awesome documentation for your software by hand, but it is totally useless if the software does something different. Generating the docs from the real code helps to fulfill this paradigm.

Are you interested in swagger now? You can use their hosted editor or host it on your own, and import my API definition by URL to take a look! (Also feel free to try to fix the remaining error that pops up when you parse my YAML.)

Posted in 30in30, General, Nerd Stuff | Leave a comment

Optimized git workflow for rapid deployment

Legacy git workflows
There are many common ways of using git, for example the YOLO method, where everybody pushes directly into the master branch. This is a common workflow, but well…

There is another approach, used for example by the phpBB team:
[diagram: the phpBB git workflow]

You’ve got a master branch which is always even with the latest release. You create a development branch after the release, every topic branch (bugfix or feature implementation) is a branch based on the latest development branch. This is a well known approach but it isn’t very fast (in terms of releasing a new change). but it is mostly used in teams that push a new release (in terms of semantic versioning) every few weeks with a bunch of changes or release at scheduled times (every second tuesday in a month for example).

Rapid deployment?
I’m currently working on several projects that use github and another development model for rapid deployment (how can we name the model?). First of all we need to clarify what rapid deployment is: You basically want to release every change into master directly after it is tested. Many teams are working with agile methods like scrum and kanban, we try to develop many small changes and we directly want to use them.

How does this work with git and github?
The current master code base is always your development basis, so every new feature and bugfix gets its own so-called topic branch. You should try to break a huge new feature up into smaller changes; they should be as small as possible, but as big as they have to be to work without the other changes of the new feature (does somebody understand what I mean?). Every change should get its own commit; then you create a pull request for this single commit, which hopefully gets reviewed and merged into master soon.
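
In shell terms, the life of one change looks roughly like this (branch and commit names are made up for illustration):

$ git checkout master
$ git pull origin master          # always start from the current master
$ git checkout -b fix-vm-status   # one topic branch per change
$ git commit -am 'fix the VM status output'
$ git push origin fix-vm-status   # then open a pull request on GitHub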

Why so many pull requests?
We don’t use the YOLO method so we don’t push to master directly, somebody else has to review our changes and merge them into master. A review always takes time, so document your code, use a meaningful commit message and commit description. What also speeds up reviews? Small changes! This is really important for open source projects, many contributors have a 9-5 job and contribute in their spare free time, most people contribute more if they can do it in small pieces so they are happier with small pull requests.

But there is no testing branch!
Yeah, so? Nobody needs a testing branch. Testing is very important, but you can use CI systems like Travis CI and hook them up to your GitHub repos. Every commit will trigger a Travis job, which will then run all your defined tests. GitHub has a nice solution to display the test result in a PR, so the reviewer directly notices whether the changes are working or not. This all works without a dedicated branch for testing.
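
Hooking a repository up to Travis CI only takes a .travis.yml in the repository root; a minimal sketch (language and test command are placeholders for whatever your project actually uses):

$ cat > .travis.yml <<'EOF'
language: ruby                  # placeholder: your project's language
script: bundle exec rake test   # placeholder: your real test command
EOF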

Conclusion
This is a cool method if you want to deploy changes very fast. It is used in many projects in the Puppet world, and I'm also trying to use it for my own VirtAPI projects. Which development model do you use? Let me know!

Posted in 30in30, General, Linux | Leave a comment

Writing 30 posts in 30 days review

Rob encouraged me to participate in the '30 posts in 30 days' challenge this November. I got mixed feedback on the posts of the past days: the feedback on Twitter was really good, but I also got a bit of negative feedback. A few people told me that they skip the complete blog series; others told me that the series didn't meet the high level they expected.

I had to take a day off and think about the series and whether it fulfills my personal expectations. Today I first wanted to write a longer post about the quality and about what readers and writers can expect from this series, but Rob was an hour faster than me.

I can only agree with what Rob wrote. The main goal is to write more frequently (to share knowledge, meet new people, improve language skills) and to get comfortable with writing. Writing blog posts is harder than most people think, and it also consumes a lot of time. I'm happy with my existing posts, except for the "why the internet sucks" article where I fucked up the links. I always like to get constructive feedback, but it hits me really hard when people tell me that they stop following without any justification.

Robert tweeted today about my posts:

which brings me to the actual reason why I started this particular blog post: feedback is really, really important. If you want to do a good job, you need constant feedback, not just feedback once in a while (these are called feedback loops, for your bullshit bingo). Feedback always triggers a retrospective: you think about the work you did, the stuff that went great, the stuff that sucked but needed to be done and is now finally done. You will keep the good parts from the feedback loop in mind so you can repeat them; the points with constructive criticism will also stay in mind, because you automatically want to improve on them in the next cycle. Feedback always adds another point of view on a topic. Sometimes you didn't like your own work, but outsiders really love it; then you notice that the work isn't actually that bad, you just suck at self-assessment (underestimation is okay). On the other hand, sometimes you really like your creation but the feedback is bad. Why? Because you probably lost focus and created really cool stuff, but not what the requirements asked for ("Hey, I built a ship", "thx, but we asked for a plane").

Positive feedback keeps you motivated. Always try to slice your project into small pieces that you can handle in one day, and try to get feedback from somewhere before you end your day. This will make you happy, because you can easily measure how good your work is and how you need to (and can) improve it.

In that sense: let me know your opinion about the series in general. Do you like the topics? What do you think about the post quality?

Posted in 30in30, General | 1 Comment

What is Puppet-Community?

I’m a Puppet-Community member since the 31. of August 2015. Since than, a few colleagues asked me what we do:

In my opinion, our work is split up into three parts:

  • Maintain the many different tools and plugins in the Puppet ecosystem
  • Maintain and coordinate module development
  • Support other Puppet users on IRC and at meetups (FOSDEM, here we come!)

Puppet tools and plugins
There is a huge list of Puppet tools: catalog diff tools, dashboards, environment automation stuff, but also many, many plugins for puppet-lint. We maintain a list of all known tools, and we also maintain a few tools ourselves.

Puppet modules
We run some of the most well-known Puppet modules: we got a few from the Puppet Labs namespace, some were created by us, and we also got a few from other developers. You own a module and need help? Let us know! We've got good contribution guidelines, we do peer reviews, and we built our own CI service. We try to add unit and acceptance tests to every module; recently we added RuboCop support to also improve the Ruby code quality.
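
To give an impression of the tooling: assuming a module uses the common puppetlabs_spec_helper setup, a local test run looks roughly like this:

$ bundle install          # install the test dependencies from the Gemfile
$ bundle exec rake lint   # puppet-lint style checks
$ bundle exec rake spec   # rspec-puppet unit tests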

Giving support and sharing knowledge
Almost all of us are present in #puppet-community on freenode, supporting each other and guests with Puppet development and with the usage of any of the tools. We're also present in the Puppet Labs channels #puppet and #puppet-dev and actively work on the future of Puppet; you can find at least one of us in every weekly PR triage.

You’re interested in Puppet? You need help? Join us!

Posted in 30in30, General, Linux, Puppet | Leave a comment

Why the internet sucks

So, this is the internet, welcome to it. You can view this post because my webserver has an IP address. You have one too, congratz!

Sadly, the internet has a problem: we are all running out of these strange old IPv4 addresses. There are 2^32 of them; a huge part is reserved for internal use, the rest is used in "the real internet". Right now we've got more people with internet-capable devices than IPv4 addresses. Do we have a solution? Yes, we have! IPv6 is the successor of IPv4, and the address space increased to 2^128, which is really huge. The address of this blog is 2a01:4f8:11a:b1d::2. So why does the internet suck? Because so many sysadmins still haven't deployed IPv6!

Why? They have a legacy IPv4 deployment that works for their use case, but they simply ignore the fact that many, many people are on IPv6-only or CGN/DS-Lite lines. The result is that one part of the internet can't access the other part. Who are these bad players?
Git providers seem to fail:

Many projects are hosted on AWS, which supports IPv6, but in a very, very dirty and broken way. Cloudflare offers free IPv6, but some customers simply don't enable it.
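
You can easily check any service yourself; if a lookup like this returns nothing, the service publishes no IPv6 address (github.com is used here purely as an example):

$ dig +short AAAA github.com    # empty output = no IPv6 address published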

So, all of you:
If you run any service: RUN IT WITH IPv6.
If you use any service without IPv6: tell the operators to enable it, again and again and again.

Posted in 30in30, General | 3 Comments

Preparing Puppet Workshops

I’m currently planning a puppet workshop for my colleagues to introduce them into the world of puppet and code driven infrastructure.

Here is my current agenda:

Generic useful links:

I’m currently fighting with the ordering of the topics. Also, did I miss anything important?

Posted in 30in30, General, Internet found pieces, IT-Security, Linux, Puppet | 1 Comment

Making surfing a bit more secure

The internet is a dangerous place: people try to get your private data to sell it, they track you across many sites, they provide insecure connections so that third parties can also grab your data, or they embed strange ads that slow down your browser until it is unusable. Last week, during a talk, I was asked about the tools I use, so I thought I'd publish the list here:

NoScript
NoScript is a Firefox addon that blocks any kind of script, mostly JavaScript and Flash apps. It also has very good anti-XSS and anti-clickjacking protection. Allowing HTML is mostly okay, because it is only a markup language without any power over your machine (HTML5 has a few exceptions). But JavaScript, for example, can do evil things like scanning your network, trying to bruteforce your local router, or sending (private) data out into the internet.

AdBlock
AdBlock (and all the alternatives like uBlock) parses the HTML layout of a page before displaying it. It removes ads and replaces them with whitespace. Some ads are just annoying; others are Flash-based and not trustworthy. In the past, advertisement providers have failed to validate submitted ads, and those ads contained malware. AdBlock can be extended with multiple filter lists, for example a list that removes social widgets like Facebook/Twitter buttons (they track you too!).

Ghostery
Ghostery is a smart little addon that blocks advertisement and tracking servers, so the content isn't even downloaded to your computer (as it would be with AdBlock). Ghostery is developed by a company and not by a free organization, and you never know why they do it for free. A cool alternative is Privacy Badger, developed by the EFF. Sadly, I was unable to change its default policy from allow-all to drop-all (which Ghostery can do, and which is why I prefer it).

HTTPS Everywhere
Encrypting your stuff is important. Two points matter: the encryption needs to be strong, and you need to encrypt as much as possible. Think about somebody intercepting and capturing your traffic. If you only encrypt the important stuff (like online banking), the attacker notices that most of the traffic is unencrypted and boring, which brings him to the conclusion that all the encrypted traffic must be important, and he will try to bruteforce it (or manipulate SSL certs). If all or most of your traffic is encrypted, the attacker won't know which parts are important and has to decrypt everything to find useful information. This is where HTTPS Everywhere (another addon from the EFF) comes in: it detects whether a website supports HTTPS, and if so, it redirects you from the insecure HTTP version to HTTPS.

Certificate Patrol
Certificate Patrol detects when a website changes its SSL cert and notifies you. This is useful because in the past a few CAs were hacked or signed certificates for domains to parties that didn't actually operate those domains.
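
If you want to do by hand what Certificate Patrol automates, you can record a site's certificate fingerprint and compare it on your next visit (example.org is just a placeholder):

$ echo | openssl s_client -connect example.org:443 2>/dev/null \
    | openssl x509 -noout -fingerprint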

Posted in 30in30, General, IT-Security | Leave a comment