Category Archives: Sysadmin

ZFS compression and encryption

Up until a recent overhaul, I was using btrfs in raid1 to manage the 4 drives I had in my NAS. However, it’s been clear for a while that the momentum is behind zfs. It has more features, better stability, and generally inspires much more confidence when things go wrong. btrfs still has its place in managing single-device boot volumes, but for multiple physical devices, I would definitely recommend zfs over btrfs.

When I added a couple of new 16TB disks, I opted to create a new pool with a single mirror vdev. If I need to expand it in future, I’ll add another mirrored vdev to the pool.
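To make that concrete, here is a minimal sketch of the commands involved (the pool name, device paths and dataset name are hypothetical; in OpenZFS, compression and encryption are per-dataset properties):

[shell]
# Create a pool with a single mirror vdev (device paths are placeholders)
zpool create tank mirror /dev/disk/by-id/ata-disk-a /dev/disk/by-id/ata-disk-b

# Compression and encryption are enabled per dataset
zfs create -o compression=lz4 \
           -o encryption=aes-256-gcm -o keyformat=passphrase \
           tank/data

# Expanding later: add another mirrored vdev to the same pool
zpool add tank mirror /dev/disk/by-id/ata-disk-c /dev/disk/by-id/ata-disk-d
[/shell]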

Continue reading

Home Server – new HBA edition

Some long-time readers of this blog may remember my home server articles, the most recent being “Ubuntu Home Server 14.04 – A DIY NAS”. There haven’t been any more recently because there’s not been much to report. The server described in that article, built in 2014, has been the backbone of my home network ever since.

Since then, I have swapped out hard drives a couple of times (it now contains 2x16TB Seagate Exos and 4x4TB Seagate IronWolf), doubled the RAM to 8GB, and added an NVMe riser card (along with a cheap 128GB NVMe SSD), so I could have a separate boot drive while using all 6 SATA ports for hard drives.

Along the way it also lost HTPC and media player duties to an Apple TV, so now it’s little more than a file and backup server with Plex Media Server, Syncthing, and Duplicati installed. And the operating system has been upgraded from Ubuntu 14.04 to 16.04, 18.04, 20.04 and now 22.04.

A couple of weeks ago, though, it failed. And by failed I mean all I got was a blank screen when powering on. No POST, and no signs of life other than spinning fans.

My immediate thought was a loose connector, or possibly memory or motherboard failure, so I disconnected everything, blew the dust out and plugged everything back in. With the hard drives unplugged, everything worked. With 4 hard drives plugged in it still worked. Then it failed again when I connected the last two.

By now I figure I’m looking at a dodgy SATA cable, SATA port, or hard drive, but the core components are obviously fine. So why not give it a minor overhaul at the same time?

Continue reading

Domain Expert vs Generalist

When should you use a blunt generalist tool, and when should you use a sharper domain-specific tool?

I posted a question on Serverfault recently, and received a relevant answer that wasn’t quite what I was looking for:

Systemd – How do I automatically reload a unit, when another oneshot service is fired by timer?

I thanked him for the answer, but mentioned that I think systemd is the right place to do this “sort of thing”. In reply to my reply, he told me that systemd is “absolutely the wrong place” to do this sort of thing, which is pretty strong language!
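For the record, the systemd-native approach I had in mind looks roughly like this – a minimal sketch with hypothetical unit names (cert-renew.service as the timer-fired oneshot, myapp.service as the unit to be reloaded):

[shell]
# cert-renew.service – the timer-fired oneshot (unit names are hypothetical)
[Unit]
Description=Renew certificates

[Service]
Type=oneshot
ExecStart=/usr/local/bin/renew-certs
# Reload the dependent service once the renewal has succeeded
ExecStartPost=/bin/systemctl try-reload-or-restart myapp.service
[/shell]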

I think we’re approaching this from different perspectives here, so let’s break the problem down in general terms.

Continue reading

Provisioning Vault with Code

A couple of years ago, HashiCorp published a blog post “Codifying Vault Policies and Configuration”. We used a heavily modified version of their scripts to get us going with Vault.

However, there are a few problems with the approach, some of which are noted in the original post.

The main one is that if we remove a policy from the configuration, applying it again will not remove the corresponding objects from Vault. Essentially the approach is additive only: it will modify existing objects and create new ones, but it never removes objects that are no longer declared, which is arguably just as important.

Another problem is that shell scripts inevitably have dependencies, which you may not want to install on your shell servers. Curl, in particular, is extremely useful for hackers, and we don’t want it available in production (in our environment, access to the Vault API from outside the network is not allowed).

Finally, shell scripts aren’t easy to test, and don’t scale particularly well as complexity grows. You can do some amazing things in bash, but once it gets beyond a few hundred lines it’s time to break out into a proper language.

So that’s what I did.

The result is a tool called vaultsmith, and it’s designed to do one thing – take a directory of JSON files and apply them to your Vault server.
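I’ll save the details for the full post, but the shape of it is roughly this (the directory layout and invocation below are illustrative, not the tool’s exact interface):

[shell]
# Illustrative only – a directory of JSON documents mirroring Vault API paths
$ find vault-config -name '*.json'
vault-config/sys/policy/app-readonly.json
vault-config/auth/approle/role/deployer.json

# and a single command to apply the lot to a Vault server
$ vaultsmith vault-config
[/shell]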

Continue reading

Ubuntu Home Server 14.04

I had grand intentions.

This home server article was to be a detailed masterpiece, a complete documentation of my home server setup.

It hasn’t turned out that way, and many pieces are missing. It turns out that writing a detailed article on setting up a server is much harder than just doing it! So what you see here is what I finally managed to publish, 5 months after actually building it. I hope you find it useful, and I don’t rule out the possibility that I may update parts of it in future.

Continue reading

Ubuntu Home Server 14.04 – A DIY NAS

It’s been more than 4 years since I wrote about home servers, but my Ubuntu Home Server article was, for a while, the most popular post on this blog. Since moving to the UK, though, I’ve taken a more appliance-based approach to my home network. For the last few years I’ve been using a Boxee Box for media playback, and a 4-bay Netgear ReadyNAS NV2+ for storage, mainly to keep the bulk of my possessions to a minimum.

The appliance approach does have advantages. It is power efficient, easy to set up, and very low maintenance. But after getting an internet connection with decent upload speed, I wanted to run CrashPlan on the NAS without having another PC running. I managed to get it running by following directions I found here.

There’s just one problem:

3.3 months to upload 350GB is a little too long

Performance is abysmal, and I’ve only selected the most important data – my photos. That estimate works out to roughly 3.5GB a day, or about 0.3Mbit/s sustained. I’m limited not by my internet connection, but by the NAS’s anaemic CPU and lack of RAM (just 256MB). Furthermore, it’s always had very slow read and write speeds – generally around 2MB/sec, and loading a large directory via its Samba shares can take a while.

So I started to look for a replacement. My requirements:

  • Minimum 2GB ram
  • Strong CPU, preferably x86
  • 4+ drive bays
  • Linux based OS
  • Root access to said OS

The best pre-built option I could find which meets those requirements is the Thecus N5550, but at £383 it is a long way from cheap. And it barely meets the specs; an Atom CPU is strong for a NAS but not by modern x86 standards.

While the customised software shipped with a NAS does offer some conveniences, it also gets in the way of using newer Linux features such as btrfs RAID 5/6 (which is currently not considered stable, but should be within the next 12 months). You’re also reliant on the vendor for distribution upgrades, and their priority is going to be shiny features which consumers will appreciate, not keeping the foundation OS up to date. The ReadyNAS NV2+ is currently running Debian Squeeze, and will be until the day support ends.

At this point I realised that a pre-made NAS with the level of power and flexibility I wanted doesn’t exist at a realistic price point. And with the end of Boxee support, its days as a useful device are numbered, so an HTPC could be on the cards as well. It’s time to build my own server again.

Continue reading

They used to call this /.’d

Maybe these days it’s “hackernews’d”.

Some kind person posted a link to this article, which resulted in a midnight email alert from Linode about outgoing traffic.

This blog runs on a single wee Linode instance, but fortunately it’s over-engineered for its usual traffic volume, and served by nginx, php-fpm and the W3 Total Cache WordPress plugin.

It seemed to weather the storm really comfortably with load hovering around 0.2.

[Linode network traffic graph]

[WordPress traffic graph]

Safely running bulk operations on Redis with lua scripts

This article was also posted on the Gumtree devteam blog

If there was one golden rule when working with Redis in production, it would be:

“Don’t use KEYS”

The reason for this is that it blocks the Redis event loop until it completes; in other words, while it’s busy scanning the entire keyspace, it can’t serve any other clients.
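The lua-script approach this post builds up to is in the full article; the non-blocking primitive underneath it is SCAN, which walks the keyspace in small increments. As a quick illustration with redis-cli (the key pattern here is hypothetical):

[shell]
# Iterate matching keys incrementally instead of blocking with KEYS
redis-cli --scan --pattern 'session:*'

# e.g. set a TTL on each matching key, one command per key
redis-cli --scan --pattern 'session:*' \
    | xargs -I{} redis-cli expire {} 86400
[/shell]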

Recently, we had a situation where code was storing keys in redis without setting an expiry time, with the result that our keyspace started to grow:
Continue reading

Fixing Puppet 3.2 symlinks on OSX Mavericks

I received the following error when running Puppet after upgrading to Mavericks:

[ruby]
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require': cannot load such file -- puppet/util/command_line (LoadError)
from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'
from /usr/bin/puppet:3:in `<main>'
[/ruby]

The solution is to symlink the packages into the new Ruby 2.0.0 directory:
[shell]
#!/bin/bash
# Link the Puppet, Facter and Hiera libraries from the old Ruby 1.8
# site_ruby directory into the Ruby 2.0.0 one that Mavericks ships.
SRC=/usr/lib/ruby/site_ruby/1.8
DST=/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/site_ruby/2.0.0

for item in puppet puppet.rb semver.rb facter facter.rb hiera hiera.rb; do
    sudo ln -s "$SRC/$item" "$DST/$item"
done
[/shell]

Should be fixed in the next major version.

Reference: https://projects.puppetlabs.com/issues/18205

Calling mysqldump in Python

Python is a fantastic tool to know, and despite being a beginner I find myself using it more and more for everyday tasks. Bash is great for knocking together quick scripts, but when you want to do something a little more complex, such as interfacing with APIs or other systems over a network, you really need a more fully-featured programming language.

The topic of this post, however, is the kind of task that bash is perfect for. Thanks to mysqldump, a database backup script can be written in a few lines and dump/restores are easily automated. So why on earth would we do this in Python?
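Before answering that, here’s the bash version for comparison – a minimal sketch, with a placeholder database name and credentials assumed to live in ~/.my.cnf:

[shell]
#!/bin/bash
# Nightly compressed dump of one database (names and paths are placeholders)
set -e
mysqldump --single-transaction mydatabase \
    | gzip > "/var/backups/mydatabase-$(date +%F).sql.gz"
[/shell]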
Continue reading