Life after Crashplan

Crashplan’s email to home customers

If you’re reading this and don’t know me personally, you’re probably aware that Crashplan decided to “sunset” their Crashplan Home offering on August 22nd last year. No new subscriptions are being taken, and it will cease to exist from August 2018. Unfortunately, my subscription expired in December.

I was hugely satisfied with Crashplan, and thought it was by far the best online cloud backup solution on the market for the average home user.

  • It offered free peer-to-peer backups, which meant I could back up my devices to my own server, or even trade encrypted backups with friends.
  • The client for backing up to your own devices was free, and online cloud backups cost a very reasonable US$150 for 12 months of unlimited storage.
  • By virtue of being written in Java, the client was available for Windows, Mac and Linux (I have all three).
  • It supported headless operation, albeit with a bit of jiggery-pokery, i.e. editing the client config file to point to another agent over an SSH tunnel (see the sketch after this list). This meant I could run it on my home NAS device, which naturally stores my important data (photos, mainly).
  • There were no limits on the number of devices backed up, and no per-device charges.
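For the curious, the headless set-up looked roughly like this. This is a sketch from memory, so the port numbers, file paths and auth-token step may differ between CrashPlan versions:

# On the desktop machine, tunnel the client's UI port to the engine on the NAS:
ssh -L 4243:localhost:4243 user@nas

# Then edit conf/ui.properties in the desktop client so it connects
# through the tunnel instead of to a local engine:
serviceHost=127.0.0.1
servicePort=4243

# Later 4.x clients also required copying the .ui_info auth token
# from the NAS to the desktop machine.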

Naturally, I was disappointed when they announced they were discontinuing it. “No worries!” I thought; there must be something else out there. As it turns out, Crashplan Home was almost too good to be true. Continue reading

From Ivy Bridge to Threadripper Part 1 – A Water Cooling Retrospective

Some of the links in this article are Amazon affiliate links, which pay me a commission if you make a purchase.

I could have bought a plain old Ryzen, a Core i7 or even another Core i5. But with Intel sitting on its hands for the past 5 years in the face of no competition, I decided it was time to splash out and reward AMD for not only investing in CPUs again, but also making an interesting high-end desktop product without nickel-and-diming its customers over PCI-E lanes.

And so, I bought a 1920X.

I don’t really need 12 cores. Other than general browsing, my PC is used for work (coding) plus a bit of gaming, and a gaming CPU this is not. Running multiple VMs and M.2 devices without slowing down will be nice, but this build is mostly overkill for my needs. And that’s really the point! Continue reading

Vimperator, Vimium and the switch to Qutebrowser

Much to the chagrin of anyone who uses a web browser on my PC, with particular apologies to my girlfriend, I’ve been an enthusiastic user of Vimperator for several years. Vimperator is a Firefox extension that turns it into a keyboard-driven browser, one that behaves much like the text editor Vim. Unfortunately, Mozilla has deprecated the XUL extension API that Vimperator and many legacy extensions depend on, in favour of the WebExtensions standard pioneered by Google with Chrome.

I can understand why they made this change. Legacy extensions are a large source of pain, can pose a security risk to host systems, and make invasive, sometimes unstable changes to the behaviour of the web browser. Mozilla often took flak from users for poor performance caused by poorly-written extensions, which were sometimes installed without the users’ knowledge or consent (Skype click to call extension, I’m looking at you).

What’s more, XUL extensions are partly responsible for the slow roll-out of Firefox’s multiprocess implementation, called Electrolysis.

All this probably explains why Mozilla decided to adopt Google’s extension standard and drop XUL. Unfortunately, Vimperator, a volunteer project, looks as though it will be a casualty of this decision, and it is highly likely that it will never work again after the legacy extension APIs are removed in Firefox 57.

So with Vimperator effectively dead, what’s a Vim user to do? Continue reading

Building and Packaging a Python command-line tool for Debian

Python packaging has a chequered past.

Distutils was, and still is, the original tool included with the standard library. But then setuptools was created to overcome the limitations of distutils, gained wide adoption, subsequently stagnated, and a fork called distribute was created to address some of the issues. Distutils2 was an attempt to take the best of the previous tools and support Python 3, but it failed. Then distribute grew to support Python 3, was merged back into setuptools, and everything else became moot!

Unfortunately, it’s hard to find reliable information on Python packaging, because many articles you might find in a DuckDuckGo search were created before setuptools was reinvigorated. Many reflect practices that are sub-optimal today, and I would disregard anything written before the distribute merge, which happened in March 2013.
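For reference, the modern baseline this implies is a plain setuptools setup.py with a console_scripts entry point. A minimal sketch, with placeholder names throughout:

from setuptools import setup, find_packages

setup(
    name='mytool',                    # placeholder project name
    version='0.1.0',
    packages=find_packages(),
    install_requires=['requests'],    # example dependency
    entry_points={
        'console_scripts': [
            # installs a `mytool` command that calls main() in mytool/cli.py
            'mytool = mytool.cli:main',
        ],
    },
)

Continue reading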

Mechanical Keyboards – Stop reading if you like your money

As a sysadmin who works on a keyboard all day, I was enamoured with Jeff Atwood’s post about his CODE Keyboard, which he developed in partnership with WASD Keyboards. Essentially, the CODE Keyboard is a pre-spec’d standard WASD keyboard with Cherry MX Clear switches.

If you have no idea what I’m talking about, you should read WASD’s Mechanical Keyboard Guide, which covers the different types of switches and their characteristics.

I’ll admit I didn’t do a lot of research. After a quick read on the site about the switches, I figured Browns would probably suit me best, as I didn’t want to annoy my neighbours too much, and I chose the smaller tenkeyless layout to keep the mouse closer (rarely am I entering large numbers in sequence, so the numeric keypad simply wastes space most of the time).

The problem: I can’t go back. Continue reading

Streaming JSON with Flask

I have a SQLAlchemy query (able to behave as an iterator) which could return a large result set. The first version of the code was very simple. Release objects have a to_dict() method that returns a dictionary, so I append each one to a list and jsonify the result:

from flask import jsonify

# releases = <SQLAlchemy query object>

# Materialise every row as a dict, then serialise the whole list in one go.
output = []
for r in releases:
    output.append(r.to_dict())

return jsonify(releases=output), 200

(context on GitHub)

This result set could potentially grow to a point where fitting it in memory would be impractical – with only a thousand releases there is already a significant lag before we start getting results.
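The fix the full post works towards is streaming the response with a generator instead of building the whole list up front. A minimal sketch of that approach, assuming the Flask app object, route name and Release model implied above:

import json

from flask import Response

def generate(releases):
    # Yield the JSON document piece by piece so it is never held in memory whole.
    yield '{"releases": ['
    for i, r in enumerate(releases):
        if i:
            yield ', '
        yield json.dumps(r.to_dict())
    yield ']}'

@app.route('/releases')
def list_releases():
    releases = Release.query  # iterable SQLAlchemy query, as above
    return Response(generate(releases), mimetype='application/json')

Continue reading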

Archival Storage Part 1: The Problems

All of us have data which has value beyond our own lives. My parents’ generation have little record of their childhoods, other than the occasional photo album, but what few records there are, are cherished. My own childhood was well preserved, thanks to the efforts of my mother. My brothers and I each have a stack of photo albums, with dates and milestones meticulously documented.

Today, we are generating a massive amount of data. While the majority of it will not be of interest to future generations, I believe preserving a small, selective record of it, akin to the photo albums my mother created, would be immensely valuable to my relatives and descendants – think of your great-grandparents’ jewellery, a photo album of your childhood that your parents created, or the immigration papers of your predecessors.

Modern technology allows us to document our lives in vivid detail; the problem, however, is that the data is transient by nature. For example, this blog is run on a Linode server – if I die, the bill doesn’t get paid and Linode deletes it. If Linode goes away, I have to be there to move it to a new server. If Flickr goes away, my online photos are lost. If Facebook goes away, all that history is lost. Laptops and computers are replaced regularly, and the backups created by previous computers may not be readable by future ones, unless we carry over all the data each time.

In part one of this series (this article) I document the problems of common backup solutions for archival storage, with reference to my own set-up. In part two, I’ll detail my “internet research” into optical BD-R media and how it solves these problems, and in part three I’ll deal with checksums and managing data for archival (links will be added when done).

Part one is fairly technical, so if you just want safe long-term storage, install and configure Crashplan and skip to part two.

Continue reading

Kiwis, London, and expectations

Recently there’s been a conversation in the expat community about Kiwis making the move to London. Alex Hazlehurst’s article, which set out to dispel the myth that finding a job in London is easy for Kiwis, attracted a fair bit of commentary (she also has a nicely designed blog here). Some of it was nice, some not so nice, and one reply was well written but somewhat condescending.

This conversation is not about people coming for an extended holiday. It is not about coming to London on the two-year visa, with nothing but travel plans and maybe a bit of bar or temp work here and there. It’s about young Kiwis moving to London to start or continue their careers, as I and many others have done. Continue reading