Working at a remote-hostile company

Categories: life

I want to start by saying that I’m quite happy with my current employer, and this post isn’t written to speak ill of them. It’s just my view on working as a remote employee at a company where the culture isn’t exactly remote-friendly.

This post is going to be about the problems you can encounter as a remote employee working for a company where most or all other employees work at the same office. I’ll try to suggest solutions to these problems, but not all of them are tested, so implement with caution.


Meetings

While chat and email are great tools, sometimes it’s better to be able to talk to each other and see each other. That’s when you need to hold a meeting. The best way to do this is to meet face to face, but this is quite impractical if people are scattered across the globe.

Here’s my list of things to do before, during and after having a meeting:

  1. Write an agenda. Decide how much time to allocate for the meeting. Meetings without an agenda and an end time are wasted meetings.
  2. Book a time when everyone needed for the meeting is able to attend. Put it into a shared calendar with an alert the day before and an hour before.
  3. Decide on the conference tool to use and preferably test it before starting the meeting, either between a few attendees beforehand or with all attendees a few minutes before the meeting starts. I recommend Zoom for remote conferences.
  4. Notify at least 10-15 minutes before the meeting if you’re unable to attend or will be late. Don’t postpone the meeting for a single person, because that ruins the schedule for all the other attendees.
  5. Appoint someone to record the meeting, transcribe it and store it somewhere everyone can read it at a later time. This makes it easy to remember what was decided and also lets others catch up on what was said and decided at the meeting.
  6. If you’re going to treat attendees to some fruit / cookies / cake / etc, make sure that the remote attendees also get treated to something. (Call the nearest bakery or tell them to go buy something at the expense of the company.)
  7. Follow up on what was decided at the meeting, after whatever implementation time was agreed upon.


Planning

Planning, planning and more planning! Everyone loves (or hates) planning. It’s even harder to do properly if you don’t use tools that suit your model of working. But it needs to be done.

So, tips on making planning easier for everyone, even your remote workers!

  1. Maintain a shared 1 year plan, 6 month plan, 3 month plan and a sprint plan. The plans should contain features, not tasks.
  2. Keep a shared backlog with all tasks, even tasks that are not connected to a feature. (Maintenance tasks, bug fixes, etc)
  3. Find the tools that allow you to follow the above structure.
  4. As an employee, maintain your own weekly Kanban board. My Kanban board consists of three columns: TODO, DOING, DONE. This works well for me. Whenever a task ends up in DONE, update it in the shared tool as well.
  5. Find a method, try it out for at least 6 months, then adjust if needed. Switching planning methods often is not fun.

Don’t apply my tips as law. Try them, adjust them and scrap them if needed. It’s what works for me.


Communication

Written communication is better than oral communication.

With that said, there are different kinds of written and oral communication.


Chat

Chat is great! Chat sucks!

Chat is a great tool when the message isn’t super important, and it’s a nice place to vent off some steam. But if important discussions are happening in the chat tool, please summarize what was decided and put it in a blog post or wiki article.


Email

I love email. I love it even more when the sender can write a proper email.

So what do I define as a proper email? Well, it needs a subject, preferably a well-written one, so I know at a glance whether I want to read the email now or whether it can wait until later. Then it needs content. If you think the whole thing fits into the subject, don’t send an email, send a message in the chat application.

So, what is email good for? I find it useful for meeting summaries, announcements and different kinds of reports.

Requests and tasks should not be sent via email; they should be put into the planning tool and assigned to me.


SMS

SMS is great! It’s fast, works most of the time and usually gets the attention of the recipient. But! Use SMS with extreme caution. It should only be used in life-or-death situations (servers are on fire, major parts of the product are down, etc). SMS should be reserved for alert systems like PagerDuty or similar.

Conference call (Skype, Zoom, etc)

Use this for meetings when there are remote attendees.

Phone call

As with SMS, use with extreme caution. It’s quite nice for person-to-person communication if the call has been scheduled.


Blog posts

I love blog posts. They’re straightforward, usually structured, and I can read them whenever it suits me. I wish more people used blogs for communicating how to do stuff, announcements, reports and such things.


Wiki

A wiki is great for persistent documentation, stuff that you want to search and find easily when there’s a problem. It shouldn’t be filled with articles on how to set up Docker or how to brew coffee; that sort of stuff is better suited for a blog. What I do expect to find in a wiki are panic lists, documentation on how stuff is set up, where to find what info and more. I want to visit the wiki when something is burning and be able to simply click my way around to find solutions.

Work hours

There’s so much to be said about work hours, but it pretty much boils down to this: Set a schedule, put it in a shared calendar and communicate whenever the reality differs from the schedule.

Off-work activities

Activities with your colleagues can be great fun, but they require planning, especially when there are remote workers. In general, try to plan things for when the remote employees are present. It’s not fun to be the only one who always misses out on Friday beers, go-karting and whatnot.

I think that’s all I want to write at the moment. Feel free to contact me on Twitter if you have any comments.

Ansible baby steps

Categories: ansible

A light introduction to Ansible, promoting best practices and installing cowsay on Ubuntu 14.04.

What the hell is Ansible you ask? Well, I’m here to help!

Ansible is a tool to configure Linux (and Windows) servers via SSH and Python scripts. It allows you to write scripts in YAML and Python, which are executed against remote servers.

Why should you use Ansible? You should use Ansible if you want to avoid tedious and error-prone manual work. Sure, it’s fine to run a few commands on your server to install a few applications, change some configuration files and so on, but fast forward a year: do you still remember what you did? Can you quickly run those commands again if you need a second server or need to replace the existing server?

If you answered yes to both questions, Ansible isn’t for you. However, if you didn’t, tag along on my journey to teach you what Ansible is and how to use it, both for a single server and in large deployments.

Install Ansible

First things first, we need to install Ansible onto your computer. This is the control computer, the one that executes the commands on the remote targets. The target computers don’t need to have Ansible installed (but they do need Python installed!).

The easiest and recommended way of installing Ansible is via Python’s pip tool.

Run the following command to install Ansible via pip:

pip install --user ansible

This installs Ansible into my local Python library, which on my Mac OS X computer is located at /Users/myname/Library/Python/2.7/bin/ansible. Be sure to add /Users/myname/Library/Python/2.7/bin to your $PATH variable to be able to run Ansible properly.
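
For example, on a Mac you could add something like this to your ~/.bash_profile (a sketch; adjust the path to wherever pip placed the scripts on your system):

# Make the user-installed Ansible binaries available in new shells
export PATH="$PATH:$HOME/Library/Python/2.7/bin"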

Run ansible --version to verify you have a working installation.

Your first command

Let’s start off by pinging your computer:

ansible -m ping localhost

This will print something similar to this:

localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

If you get a warning about a missing host file (/etc/ansible/hosts), just ignore it; we won’t use that file anyway.

The command we ran is Ansible’s equivalent of running ping localhost, but it verifies that Ansible can properly connect to the host. In the case of localhost, it should always work.

Ad-hoc commands

Remember I said earlier that Ansible is a bunch of scripts you can run against your targets? Well, Ansible can run stand-alone commands against your targets as well.

To print the current time according to your computer, run this:

ansible -a "date" localhost

The expected output looks something like this:

localhost | SUCCESS | rc=0 >>
Tue Jun 28 22:15:10 CEST 2016

You now know how to run ad-hoc commands against your computer.


Recap

Let’s recap what we’ve learnt so far.

  1. How to install Ansible
  2. How to make sure the connection to a target works
  3. How to display the current date and time according to a target

But let’s dig a bit deeper and try to understand what the parameters to the ansible command mean.

  • -m = The module to run. A module in Ansible is a Python script to be executed on the target. If no module is specified, the command module is used; it executes its arguments as a command on the target.
  • -a = The module arguments. A module can accept zero or more arguments to decide what to do. In the case of figuring out the target’s date, we used -a with a value of date but didn’t specify a module to run. This forwards date to the default command module, which runs the command.
  • localhost = The host pattern to match against. We used the full name of the host, but you can specify a pattern as well, like this: db[0-9], which will try to connect to all hosts matching the pattern. This, however, requires an inventory file. More on that later.


Inventories

So, let’s talk about inventories. An inventory is an ini-like file which contains all your targets. At its simplest, it’s just one address per line (I’m using as a placeholder; use the address of your own target). It can look like this:

Although that will work, it’s not very helpful. Let’s add a name to the host.

my-computer ansible_host=

This gives us a nicer output, but it requires the remote target to have a user named the same as your local user. Let’s add that:

my-computer ansible_host= ansible_user=myname

Save the file somewhere and call it whatever you like. I usually create a directory for my project with a folder called inventories inside it, then save my inventory file inside that folder. So I end up with something like this:

└── inventories
    └── development

My inventory file is called development.

Now we have a complete inventory. To add more targets, simply add a new line with the information needed.
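
For example, a development inventory with two targets could look like this (placeholder names and IPs, adjust to your own machines):

my-computer ansible_host= ansible_user=myname
db-server   ansible_host= ansible_user=myname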


Playbooks

The collection of scripts to be applied to a target is called a playbook in Ansible. Let’s make one!

Open your editor and type the following:

- hosts: my-computer
  tasks:
    - name: install cowsay
      apt: >
        name=cowsay
        update_cache=yes
      become: yes

And that’s your first playbook! Save it as playbook.yml in your project directory.

Let’s explain the parts of it.

- hosts: my-computer defines which hosts to apply the tasks to. This can contain the name of a host or a regex to match hosts. It can also be a group or the special group all which matches all hosts in an inventory.

tasks: defines a list of tasks to be executed from top to bottom on the target.

- name: install cowsay is your first task. The name isn’t required, but highly recommended to have. You can name it whatever you want.

apt: > lets Ansible know that we want to execute the apt module. (The > is YAML’s folded-block syntax, which lets us put the module arguments on the following, indented lines.)

name=cowsay is the first argument to the apt module. It’s the name of the package we’d like to install. Different modules have different arguments.

update_cache=yes lets the apt module know we want to run apt-get update before installing the package.

become: yes lets Ansible know that we want to run this module with sudo. So become: yes is equivalent to sudo my-module.

Now that we understand the playbook, let’s run it!

ansible-playbook -i inventories/development playbook.yml

We’re using a new command, ansible-playbook, which is what’s used to execute a playbook against targets.

The -i inventories/development tells Ansible to use our inventory file to create a collection of targets to execute the playbook against.

When you run this command, you should end up with something like this:

PLAY [my-computer] *************************************************************

TASK [setup] *******************************************************************
ok: [my-computer]

TASK [install cowsay] **********************************************************
changed: [my-computer]

PLAY RECAP *********************************************************************
my-computer                : ok=2    changed=1    unreachable=0    failed=0

This lets us know that the task install cowsay ran and that it changed something on the target. If you run the playbook again, it’ll say ok for that task instead of changed.

Run ansible -i inventories/development -a "cowsay" all to verify that cowsay was properly installed.

It should print something like this:

my-computer | SUCCESS | rc=0 >>
 __
<  >
 --
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

Congratulations! You’ve created your first Ansible playbook and inventory.

That’s it for today.

As always, if you have a comment, don’t hesitate to reach out to me on Twitter @hagbarddenstore, via email, or on IRC (Freenode), where I go by the name Kim^J.

Using nginx to load balance microservices

Categories: infrastructure

How to load balance services with nginx, confd and etcd.

Imagine this: you have a bunch of services running on a machine, and all is fine and dandy. You have the occasional downtime when you upgrade a service to a new version, but nothing you can’t handle.

Then, out of nowhere, you get loads of traffic, you need to scale horizontally, adding more servers to your cluster. Upgrading becomes harder, takes longer and there’s more downtime.

You figure it’s time to load balance your services, but how to do it in a way that scales and is easy to manage?

Well, that’s where nginx+confd+etcd comes into play.

The overall architecture is this: nginx handles the load balancing, confd updates the nginx configuration based on values in etcd, and the services update etcd with their information.

This allows confd to reconfigure nginx whenever there’s a change in etcd, thus reconfiguring nginx in near realtime.

So, how do we set this up? Well, I’m assuming you have the services part figured out already, so I’m gonna skip that. Onto how to get started with etcd.


Etcd

Etcd is a key-value database with a simple HTTP API, allowing the services to easily insert values.
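
For example, writing and reading a key is just two HTTP calls (assuming an etcd node listening on the default client port 2379 on localhost):

# Store a value, then read it back
curl http://localhost:2379/v2/keys/message -X PUT -d value="hello"
curl http://localhost:2379/v2/keys/message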

Etcd is very easy to install; it’s a matter of downloading an executable and running it. You can also run it in Docker using the official image.

But we’re gonna focus on the stand-alone binary, so head to etcd’s Github releases page and grab the latest stable release for your OS.

The package should be installed on at least 3 different server nodes, preferably 5, to provide failover. Installation is easy: just unzip and run the executable with the following command (I’m using $PRIVATE_IP as a placeholder for the machine’s private IP address, with etcd’s default ports):


etcd --name $(hostname -s) \
     --initial-advertise-peer-urls http://$PRIVATE_IP:2380 \
     --listen-peer-urls http://$PRIVATE_IP:2380 \
     --listen-client-urls, \
     --advertise-client-urls http://$PRIVATE_IP:2379 \
     --discovery $ETCD_DISCOVERY

NOTE: The ETCD_DISCOVERY must be the same on all machines.

So, let’s explain the parameters.

  • --name = Name of the instance running etcd. This must be an identifier that’s unique within the cluster; it’s used by etcd to tell nodes apart. The hostname or machine id are good candidates.
  • --initial-advertise-peer-urls = URL to advertise to other etcd nodes to allow internal etcd communication. The hostname or any IP address which other etcd nodes can reach are good values for this parameter.
  • --listen-peer-urls = URL on which etcd listens for internal etcd communication. This should be the same value as --initial-advertise-peer-urls in most cases.
  • --listen-client-urls = URLs on which etcd listens for client communication. This is the address you use to communicate with etcd. Preferably you want to add both a localhost address and a public IP, to allow both localhost communication and external clients.
  • --advertise-client-urls = URLs which etcd advertises to the cluster. This could be the same as --listen-client-urls minus the localhost address.
  • --discovery = URL to a discovery service, used by nodes to discover the cluster when no previous contact has been made. This value should be the same on all nodes you wish to include in the same cluster. You can get a new URL by running curl https://discovery.etcd.io/new?size=3, where 3 is the minimum number of nodes in the cluster. You need at least 3 nodes in your cluster to make it highly available. A size between 5 and 9 is recommended if you’re running a cluster with high uptime requirements.

Now that we’ve got the parameters covered, let’s run the command on at least 3 servers, and you should have a functional etcd cluster up and running. You can verify by running curl http://localhost:2379/version on one of the machines.

Next step is to setup nginx!


Nginx

We’re not gonna do any custom configuration on nginx, so a simple # apt-get install nginx is sufficient if you’re running a Debian-based OS.

With nginx running (verify by running curl http://localhost/), let’s move on to the next step, which is installing and configuring confd!


Confd

So, finally at the step which does all of the magic!

First things first, we need to install confd. Head over to Github and get the latest release (at the time of writing, the latest is v0.12.0-alpha3), put it on your nginx machines and unzip. Move the confd binary into /usr/bin/confd.

Next step is to create a configuration file for confd, a template and optionally an init startup script.


This is the confd nginx configuration file. It tells confd where to find the template file, where to place the result, which command to run on change and which keys to watch.

[template]
src = "nginx.conf.tmpl"
dest = "/etc/nginx/sites-enabled/services.conf"
owner = "nginx"
mode = "0644"
keys = [
  "/services",
]
reload_cmd = "/usr/sbin/service nginx reload"

  • src = Name of the template file to execute on each change.
  • dest = Name of the file where the output of the template should be placed.
  • owner = File owner of the dest file.
  • mode = File mode of the dest file.
  • keys = Etcd keys to watch for changes. You can watch /, but to ignore keys you’re not interested in, you should specify which keys you are interested in. You don’t need to specify the full keys (that would defeat the point of this post!), just the static part at the beginning of the key, in this case /services.
  • reload_cmd = Command to run after the template has run.

Put the above content in /etc/confd/conf.d/nginx.toml, then continue with the next file.


Ah, the template file!

I’m not gonna explain the content of this file, it’s a mix between Go’s text/template markup and nginx’s configuration file.

If you want to figure out the stuff between {{ and }}, head over to Go’s text/template and confd template documentation.

{{ $services := ls "/services" }}
{{ range $service := $services }}
{{ $servers := getvs ( printf "/services/%s/servers/*" $service ) }}
{{ if $servers }}
upstream {{ $service }} {
  {{ range $server := $servers }}
  {{ $data := json $server }}
  server {{ $data.host }}:{{ $data.port }};
  {{ end }}
}
{{ end }}
{{ end }}

server {
  server_name hostname;
  {{ range $service := $services }}
  {{ $servers := getvs ( printf "/services/%s/servers/*" $service ) }}
  {{ if $servers }}
  location /{{ $service }} {
    rewrite /{{ $service }}/(.*) /$1 break;

    proxy_pass http://{{ $service }};
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
  {{ end }}
  {{ end }}
}

Copy and paste the above content into /etc/confd/templates/nginx.conf.tmpl.
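
Before wiring up an init script, you can do a one-off test run of confd to check that the template renders without errors (assuming one of your etcd nodes is reachable at etcd-address):

/usr/bin/confd -onetime -backend etcd -node http://etcd-address:2379/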


This step is optional and requires Upstart (present on Ubuntu). Feel free to adapt the script to other init systems. The main part is the last line, which starts the confd daemon. Replace etcd-address with the address of one of the machines running etcd.

If you wish to run this as a daemon without Upstart, simply run /usr/bin/confd -backend etcd -watch -node http://etcd-address:2379/ &.

description "confd daemon"

start on runlevel [12345]
stop on runlevel [!12345]

env DAEMON=/usr/bin/confd
env PID=/var/run/

respawn limit 10 5

exec $DAEMON -backend etcd -watch -node http://etcd-address:2379/

If you went with the Upstart route, run sudo service confd start.

Onto the next step, we’re almost done!


Adding services to etcd

All of this did nothing! Calm your horses, we haven’t added any services to etcd yet!

This is quite simple though: just execute

curl http://any-etcd-node:2379/v2/keys/services/$service_name$/servers/$hostname$ \
     -d value='{"host":"$public_ip$","port":$port$}' -d ttl=60 -X PUT

Replace $service_name$ with the name of the service you wish to load balance, replace $hostname$ with hostname or machine id of the instance running the service, replace $public_ip$ with the IP address on which the nginx machine(s) can reach the service and lastly replace $port$ with the port on which the service is listening for incoming HTTP traffic.

Do note the -d ttl=60 parameter. This tells etcd to delete the value after 60 seconds, so you need to continuously execute the curl command to keep the value in etcd. By doing this, you allow etcd to clean up services that are no longer available. Tweak the number to suit your use case; my recommendation is a ttl of 60 with an update every 45 seconds, as sketched below. This allows for some downtime if a service crashes, but it shouldn’t affect things that much. As said, adjust to your use case.
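
A minimal sketch of such a refresh loop, using the same placeholders as the command above:

# Re-register the service every 45 seconds so the 60 second ttl never expires
while true; do
    curl http://any-etcd-node:2379/v2/keys/services/$service_name$/servers/$hostname$ \
         -d value='{"host":"$public_ip$","port":$port$}' -d ttl=60 -X PUT
    sleep 45
done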

When you stop your service, you need to delete the key (and stop the updating script/routine!). This is done by running curl http://any-etcd-node:2379/v2/keys/services/$service_name$/servers/$hostname$ -X DELETE with the same values as the create script above. This tells etcd and confd that the service is no longer available and should be removed ASAP. My recommendation is to run this before you stop the service, to keep nginx from sending requests to a service which no longer exists.

Last words

This is it! I hope you enjoyed the post and that you’ll find it useful for setting up your own highly available web applications.

Confd is by no means restricted to just configuring nginx, it can configure pretty much anything, as long as the configuration is file based and there’s a reload/restart command available.

I like using nginx since I know it quite well and it’s fast, mature and highly reliable.

Got any questions? Ping me on Twitter or send me a message on IRC where I go by the name Kim^J.

Uploading Ubuntu to Openstack Glance

Categories: openstack

How to upload Ubuntu 14.04 to an Openstack provider.

Run the following commands to download the latest Ubuntu 14.04 image and upload it to the Openstack provider.

$ curl \
    -o trusty-server-cloudimg-amd64-disk1.img \
    https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
$ glance image-create \
    --architecture x86_64 \
    --name "ubuntu-14.04.4" \
    --min-disk 10 \
    --os-version 14.04.4 \
    --disk-format qcow2 \
    --os-distro ubuntu \
    --min-ram 1024 \
    --container-format bare \
    --file trusty-server-cloudimg-amd64-disk1.img
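
Afterwards you can check that the image was registered (a quick sanity check using the same glance CLI):

$ glance image-list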

The instructions assume you have a properly configured Openstack environment running on the computer where you run the above commands.

Uploading CoreOS to Openstack Glance

Categories: openstack

How to upload CoreOS to an Openstack provider.

Run the following commands to download the latest CoreOS image and upload it to the Openstack provider.

$ curl \
    -o coreos_production_openstack_image.img.bz2 \
    https://stable.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2
$ bunzip2 coreos_production_openstack_image.img.bz2

$ glance image-create \
    --architecture x86_64 \
    --name "coreos" \
    --min-disk 10 \
    --disk-format qcow2 \
    --min-ram 1024 \
    --container-format bare \
    --file coreos_production_openstack_image.img

The instructions assume you have a properly configured Openstack environment running on the computer where you run the above commands.

Continuously deploy hugo sites

Categories: hugo


How to setup continuous deployment of a Hugo website hosted on Github to AWS S3 by using Travis CI as the build/deployment service.

I finally did it. I set up something that builds my website and pushes it to AWS S3.

To be able to follow along, you need a Github account, an AWS account and a Travis CI account.

You should also have the Travis CI CLI installed to be able to encrypt values in the .travis.yml file.

I’m assuming that you have prior knowledge of AWS, Hugo and Git.

Creating S3 bucket

So, first you need an S3 bucket where you can host your content. Go ahead and create one and give it a unique name. My preference is to use the same name for the bucket as you would use for the domain which will point to the bucket.

An example would be naming the bucket example.com if your website URL is http://example.com/ (with example.com standing in for your own domain).

So, with that done, onto the next task, making the bucket available to the world.

Set bucket policy

So, in the S3 console, navigate to your bucket, click on Properties, expand Permissions and click on Edit bucket policy.

Paste the following and change your-bucket-name to your bucket’s name:

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "Allow Public Access to All Objects",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "*"

Click Save and that’s it! Your bucket is now available to the world. Onto the next task, which is allowing Travis CI to push data to your bucket.

Create IAM user

Head over to the IAM Console. In the left menu, click on Users, then click on Create New Users. Enter a username of your choice, I recommend travis-ci. Then click Create and then Download Credentials. Remember where you save the downloaded file, since you’ll need the values in that file later on.

Next up, allowing the newly created user access to the previously created S3 bucket.

Create bucket policy

Head over to Policies, click Create Policy, then click Select next to Create Your Own Policy. Give the policy a name, like travis-ci-deploy or something nicer, and write a description if you like. Then copy the JSON below, paste it into the Policy Document textbox and replace your-bucket-name with the name of your bucket. The policy lets the user list the bucket and write and delete objects, which is roughly what the deploy needs; trim or extend the action list to taste. Click Create Policy and you’re done!

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "Stmt1456258376000",
            "Effect": "Allow",
            "Action": [
            "Resource": [

Next, attaching the policy to the previously created user.

Attach policy to IAM user

So, head over to Users again and click on your Travis CI user. Click on the Permissions tab and click on Attach Policy and find your previously created policy, then click Attach Policy.

That’s it for the AWS part! Next is creating a Travis CI build script.

Creating Travis CI build script

So, to allow Travis CI to do the magic, we need a build script.

Open your favorite text editor and paste the text below into your editor.

language: go
go:
  - 1.6

sudo: false

install:
  - go get github.com/spf13/hugo

script:
  - hugo

deploy:
  - provider: s3
    access_key_id: <access_key_id>
    secret_access_key: <secret_access_key>
    bucket: <bucket>
    region: eu-west-1
    local_dir: public
    skip_cleanup: true

Before we save, we need to change the <access_key_id>, <secret_access_key> and <bucket>.

To safely store the access key id, secret access key and bucket name, Travis CI provides you with a CLI tool to encrypt values. So, open up a terminal and write this:

cd path/to/your/hugo/git/repository

travis encrypt "<access_key_id>"

This will ask you to confirm the repository and then return a value looking like this:

secure: "aaeeee..."

Simply copy that value and paste it into your .travis.yml file, nested under the key you’re setting the value for.

Repeat this for the <secret_access_key> and the <bucket> values.

The final file should look something like this:

language: go
go:
  - 1.6

sudo: false

install:
  - go get github.com/spf13/hugo

script:
  - hugo

deploy:
  - provider: s3
    access_key_id:
      secure: "gibberish"
    secret_access_key:
      secure: "gibberish"
    bucket:
      secure: "gibberish"
    region: eu-west-1
    local_dir: public
    skip_cleanup: true

Save the file as .travis.yml, commit your changes and push to Github.
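
If you’re unsure about the Git part, it’s something like this (assuming your default branch is master):

git add .travis.yml
git commit -m "Add Travis CI deployment"
git push origin master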

That’s it! Now all you need to do is to connect your Github repository on Travis CI, but that’s pretty straightforward, just head over to the Travis CI website and you’ll be guided.

Got any questions? Did I miss something, spelling errors or other suggestions? Fork this site’s repository and send me a pull request, or contact me on Twitter.

Ansible gotcha: Playbook sudo vs task sudo

Categories: ansible


When running a playbook with sudo: yes, Ansible also runs the facts module with sudo, so ansible_user_dir will contain the root user’s home directory rather than the expected home directory of the ansible_ssh_user.

So, today I was pushing some generated SSH keys to servers, to be able to pull changes from git without having to add each server’s own SSH key to the git server. Since I’m using Ansible to automate server configuration, I set up the task like this:

- name: push git deploy key
  copy: >
    dest="{{ ansible_user_dir }}/.ssh/{{ item.name }}"
    content="{{ item.content }}"
  with_items:
    - name: id_rsa
      content: "{{ git_deploy_key_private }}"
    - name: id_rsa.pub
      content: "{{ git_deploy_key_public }}"
  no_log: True
  tags: configuration

Nothing wrong with the above code; I verified with ansible -m setup host that ansible_user_dir had the correct value. But when you add sudo: yes to the playbook, like this:

- hosts: host
  sudo: yes
  roles:
    - { role: deploy_git_keys }

Then Ansible runs the facts module with sudo, which makes ansible_user_dir contain the home directory of the user sudo was run as, which by default is root. So it had the value of /root rather than my expected /home/ubuntu, and the keys were pushed to /root/.ssh/ rather than my expected /home/ubuntu/.ssh/.

So, the solution is to not have sudo: yes in your playbook, but to have it on the tasks that really need it, as sketched below, or to not rely on ansible_user_dir. I prefer the former.
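
A minimal sketch of what that looks like (with a hypothetical package task standing in for one that actually needs root):

# Playbook: no sudo at play level, so facts are gathered as the SSH user
- hosts: host
  roles:
    - { role: deploy_git_keys }

# Inside the role, only the task that needs root gets sudo
- name: install git
  apt: name=git update_cache=yes
  sudo: yes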

You can verify this behaviour by running the following playbook:

- hosts: all
  sudo: yes
  tasks:
    - name: print ansible_user_dir
      debug: var=ansible_user_dir
    - name: print $HOME
      command: echo $HOME

- hosts: all
  tasks:
    - name: print ansible_user_dir
      debug: var=ansible_user_dir
      sudo: yes
    - name: print $HOME
      command: echo $HOME
      sudo: yes

- hosts: all
  sudo: no
  tasks:
    - name: print ansible_user_dir
      debug: var=ansible_user_dir
    - name: print $HOME
      command: echo $HOME
The first play is gonna display /root twice, the second play will display /home/ubuntu first and then /root, and the third play will show /home/ubuntu twice.

Static DNS with DHCP on CoreOS

Categories: coreos, infrastructure


How to set static DNS servers while still getting IP and routes from the DHCP server in CoreOS.


The problem

I had set up two servers running CoreOS and SkyDNS to provide DNS resolution in my AWS VPC, and created a DHCP option set which used the two CoreOS instances as the DNS servers. This meant that the CoreOS instances got themselves as their DNS server, so whenever the instances were unavailable they couldn’t resolve any names, which meant I couldn’t pull Docker images.

The solution

To fix this problem, I figured I wanted to set static DNS servers (I’ll use Google’s public resolvers, and, as the example addresses here), which is easy with the following commands:

# echo "nameserver" > /etc/resolv.conf
# echo "nameserver" >> /etc/resolv.conf

But this solution doesn’t persist between boots or whenever the network service is restarted, since the dhcp client will overwrite the /etc/resolv.conf file. So, to have a permanent solution I needed to add the configuration to the cloud-config file. My initial cloud-config looked something like this (with eth0 as the interface and a placeholder discovery token):


#cloud-config

coreos:
  etcd:
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    - name:
      runtime: true
      content: |
        [Match]
        Name=eth0

        [Network]
        DHCP=yes
        DNS=
        DNS=

Which looked like it should work, and it did work, but with the funny exception that it used both the DNS servers I had specified and the ones from DHCP. So my /etc/resolv.conf looked something like this:


So /etc/resolv.conf now contains two correct lines and two wrong lines. The 10.0.0.{10,20} addresses are the addresses of the CoreOS instances.

So, after looking through the documentation some more, I found that you should add the following lines to the network unit in your cloud-config, telling the DHCP client not to use the DNS servers it hands out:

[DHCP]
UseDNS=false
This gave me the following final cloud-config:


#cloud-config

coreos:
  etcd:
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    - name:
      runtime: true
      content: |
        [Match]
        Name=eth0

        [Network]
        DHCP=yes
        DNS=
        DNS=

        [DHCP]
        UseDNS=false

Which gave me the following /etc/resolv.conf file:

Success! Now I have a CoreOS cluster with static DNS servers which persist between reboots and network restarts.


NOTE: Do not copy and paste the cloud-config examples into your own cloud-config before you edit the discovery token.

AWS goodies

Categories: aws


A collection of snippets and scripts to be used against the Amazon Web Services to ease management of servers.


All snippets and scripts are provided as is and I provide no guarantee that they won’t destroy your environment. They work on my machine™.

All snippets and scripts assume you have a working installation of the aws-cli and have setup your credentials properly.

Most scripts also use the fantastic jq tool.

How to generate a new key-pair and save it into ~/.ssh

The export line is optional; if you don’t use it, simply replace $name with the name you want.

The chmod line is also optional, but strongly recommended, since SSH will complain if the key is readable to the world.

export name=the-name-you-want

aws ec2 create-key-pair --key-name $name | \
jq ".KeyMaterial" --raw-output > ~/.ssh/$name.pem

chmod 0400 ~/.ssh/$name.pem
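
You can then use the key to connect to an instance launched with it (a quick usage sketch; the login user depends on the AMI, ubuntu here is an assumption for Ubuntu images):

ssh -i ~/.ssh/$name.pem ubuntu@your-instance-address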

How to get the stack status from CloudFormation

This snippet is useful in scripts to determine when a stack has reached a certain status. It can be used to wait for an UPDATE_COMPLETE status before running other commands, like Ansible playbooks.

The export line is optional; if you don’t use it, simply replace $name with your stack’s name.

export name=your-stack-name

aws cloudformation describe-stacks --stack-name $name | \
jq ".Stacks[0].StackStatus" --raw-output
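
For example, a small polling loop that blocks until the stack reaches UPDATE_COMPLETE (a sketch; adjust the target status and the sleep interval to your needs):

export name=your-stack-name

# Poll every 10 seconds until the stack reports the wanted status
until [ "$(aws cloudformation describe-stacks --stack-name $name | jq ".Stacks[0].StackStatus" --raw-output)" = "UPDATE_COMPLETE" ]; do
    sleep 10
done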

How to sync a local folder with a remote S3 folder

aws s3 sync my-folder s3://my-bucket/my-folder

IP addresses

Categories: infrastructure


A cheat sheet to IPv4 subnetting and other quirks. Common subnet configurations with CIDR and netmask notation.


I’m no expert in IPv4 networking; I’ve simply learnt stuff as I needed it. This post is mostly a cheat sheet for me, for when I need to set up some infrastructure.

Private networks

There are three private address spaces currently in use.

  • - Not exactly a private network, but a reserved range for loopback interfaces.
  • - 16,777,216 addresses
  • - 1,048,576 addresses
  • - 65,536 addresses

In networks ranging from /1 to /30, there are always two addresses that are unusable as host addresses: the first address and the last address. For example, in the subnet, the addresses and are reserved as the network address and the broadcast address, leaving 254 of the 256 addresses usable.

Since addresses are divided into four groups of 8 bits each (32 bits in total), we can form this rule of thumb:

  • If CIDR is lower than 8, it changes all groups.
  • If CIDR is lower than 16 but higher or equal to 8, it changes the last three groups.
  • If CIDR is lower than 24 but higher or equal to 16, it changes the last two groups.
  • If CIDR is higher or equal to 24, it changes the last group only.

Examples: A /8 network ranges from *.0.0.0 to *.255.255.255, a /16 network ranges from *.*.0.0 to *.*.255.255 and thus a /24 network ranges from *.*.*.0 to *.*.*.255.

Table of netmasks

CIDR  Netmask          Hosts          Usable hosts   Comments
1   2,147,483,648  2,147,483,646
2     1,073,741,824  1,073,741,822
3    536,870,912    536,870,910
4    268,435,456    268,435,454
5    134,217,728    134,217,726
6    67,108,864     67,108,862
7    33,554,432     33,554,430
8      16,777,216     16,777,214     Class A
9    8,388,608      8,388,606
10   4,194,304      4,194,302
11   2,097,152      2,097,150
12   1,048,576      1,048,574
13   524,288        524,286
14   262,144        262,142
15   131,072        131,070
16     65,536         65,534         Class B
17   32,768         32,766
18   16,384         16,382
19   8,192          8,190
20   4,096          4,094
21   2,048          2,046
22   1,024          1,022
23   512            510
24     256            254            Class C
25   128            126
26    64             62
27    32             30
28    16             14
29    8              6
30    4              2
31   2              2
32   1              1

Table of ranges