Niall’s virtual diary archives – Monday 29th February 2016


Monday 29th February 2016: 12.56am. Link shared: https://www.htbridge.com/ssl

So that ends a marathon weekend of cloud server migration, and I am officially exhausted! Even with two weeks of meticulous planning and two trial runs, a multitude of surprises and gotchas cropped up which impeded rapid progress when doing it for real, and my thanks to Megan for taking on the lion's share of the Clara-minding this weekend so I could get this stuff done. I am therefore extremely pleased to say that everything on dedi3.nedprod.com is now on dedi4.nedprod.com, and in theory at least is working (I'll find out tomorrow). Tomorrow I'll set dedi3.nedprod.com to be securely wiped, ready to hand back the server's lease on Tuesday. Thank you dedi3 for four good years of service!

The new server installation represents a very solid investment of time and I expect it to be good for five years before I'll need to rebuild again. A quick summary of improvements:

* Linux from 2.6.32 => 4.2.8, and moreover I can now keep pace with new kernel releases whereas I was blocked on 2.6.32 for years before.

* Virtualisation from OpenVZ only => LXC + KVM

* Storage from ext4 => ZFS (yes, ZFS-on-root on Linux, see below for thoughts on this)

* Offsite replication from DRBD + OpenVPN => ZFS + SSH (see the send/receive sketch after this list)

* Offsite encryption from ecryptfs + btrfs + DRBD => ZFS

* There is no longer any method of compromising my home network by compromising the public node, unlike before, when the public node was an extension-by-VPN of the home network (albeit as a DMZ).

* All crypto, whether on OpenVPN, SSH or SSL, has been bumped up to a minimum of forward secrecy using custom 4096-bit Diffie-Hellman primes, with >= SHA256 and >= AES128 and no legacy crypto support whatsoever (a dhparam sketch follows the list). Everything has brand new keys and certs in case dedi3 was compromised, and all HTTP now always redirects to HTTPS, i.e. plain HTTP is no longer available anywhere. This loses me accessibility from WinXP and Android 2.x devices, but I don't care about those anymore.

* Minimum connectivity firewalls now exist on outbound as well as inbound traffic for the VMs, so if a VM gets compromised an attacker can achieve diddly squat (a rough iptables sketch follows the list). VMs are totally isolated from one another.

* All VM storage lives inside a ZFS dataset, and the entire filing system for all VMs and the host is snapshotted once per night with six months' worth of state kept automatically (the snapshot job is sketched after this list). This should eliminate any problems with bad unattended automatic software upgrades, which are now turned on for the VMs as well as the host itself. Similarly, any compromise of a VM can simply be "rolled back".

* Finally, HTTPS now gets an "A" score from https://www.htbridge.com/ssl and https://www.ssllabs.com/ssltest/. dedi3 had been getting a "B-", which wasn't bad for a four-year-old configuration, but there was a lot of weak crypto support in there and, like everybody else, I had been using the common 1024-bit DH primes (i.e. the ones suspected of having been cracked by the NSA precisely so they could read any SSL traffic relying on them).
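For the curious, the offsite replication mentioned above is nothing more exotic than incremental ZFS send/receive piped over plain SSH. A minimal sketch of the idea, with pool, dataset and host names invented for illustration rather than being the real dedi4 layout:

    # Take tonight's recursive snapshot, then ship only the delta since last
    # night's snapshot to the offsite box over SSH.
    zfs snapshot -r tank/vms@offsite-2016-02-29
    zfs send -R -i tank/vms@offsite-2016-02-28 tank/vms@offsite-2016-02-29 \
        | ssh backup@offsite.example.com zfs receive -F backuppool/vms

Because the receiving end only ever sees a snapshot stream arriving over SSH, neither network is ever extended into the other the way the old DRBD + OpenVPN arrangement required.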
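Generating the custom Diffie-Hellman primes is a one-liner, albeit a slow one at 4096 bits. A hedged sketch, with the file path invented and the OpenVPN directives shown only as an illustration of where such a parameter file gets plugged in:

    # Generate a fresh, private 4096-bit DH parameter set instead of the
    # well-known shared primes everybody else uses.  Expect this to take a
    # long while on a small server.
    openssl dhparam -out /etc/ssl/private/dhparams-4096.pem 4096

    # Illustrative OpenVPN server directives that consume it:
    #   dh /etc/ssl/private/dhparams-4096.pem
    #   cipher AES-256-CBC
    #   auth SHA256
    #   tls-version-min 1.2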
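The "minimum connectivity" firewalling boils down to a default-deny forwarding policy in both directions, with only the handful of flows each VM actually needs punched through. A rough iptables sketch, assuming a libvirt-style bridge and made-up addresses (the real rules are per-VM and rather longer):

    # Default: forward nothing between the VM bridge and the outside world.
    iptables -P FORWARD DROP
    # Allow replies to conversations that were permitted in the first place.
    iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # VMs may not talk to one another at all.
    iptables -A FORWARD -s 192.168.122.0/24 -d 192.168.122.0/24 -j DROP
    # Per-VM whitelist: this VM may only do DNS and fetch updates over HTTPS.
    iptables -A FORWARD -s 192.168.122.10 -p udp --dport 53  -j ACCEPT
    iptables -A FORWARD -s 192.168.122.10 -p tcp --dport 443 -j ACCEPT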
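The nightly snapshot-and-prune job is similarly unglamorous: snapshot everything recursively, then destroy anything older than roughly six months (about 180 nightlies). A sketch of the shape of it, with the pool name and naming scheme invented for illustration:

    #!/bin/sh
    # Nightly cron job: snapshot the whole pool and keep ~180 nightlies.
    POOL=tank
    zfs snapshot -r "${POOL}@nightly-$(date +%Y-%m-%d)"
    # List this pool's nightly snapshots oldest-first and destroy all but the
    # newest 180 (recursively, so the per-VM datasets are pruned too).
    zfs list -H -t snapshot -o name -s creation -d 1 "${POOL}" \
        | grep '@nightly-' \
        | head -n -180 \
        | xargs -r -n 1 zfs destroy -r
    # Rolling a compromised VM back is then, with the VM stopped, e.g.:
    #   zfs rollback -r tank/vms/web@nightly-2016-02-20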


You might notice I am running ZFS-on-root on Linux, which most would say is a bad idea. It's definitely the case that it's slow, slow, slow on Linux: vastly slower than ext4 and noticeably slower than ZFS on FreeBSD. Still, no stability problems so far despite a lot of i/o performed, though i/o sees some dramatic latency spikes from time to time in a way ZFS on FreeBSD never does when you pile on i/o load. So ZFS on Linux still wouldn't be suitable for anyone running intensive workloads in my opinion (yes, I know people are, but I bet they've had to tune the settings by trial and error to make it work decently; what I am saying is that ZFS on Linux still isn't "fire and forget" yet, unlike ZFS on Solaris or FreeBSD).

No, I chose ZFS out of (a) an expectation that it will shortly improve, as Ubuntu is going to ship ZFS-on-root soon and that'll be a great debugger of the out-of-the-box experience, (b) the sheer convenience of nightly snapshotting, offsite replication and encryption, all without effort or maintenance by me, and (c) the fact that you can tell ZFS to store two copies of everything, so if one copy becomes damaged ZFS auto-heals itself, which is about the best you can do in a single-HD storage environment (pairs of drives dramatically increase the rental cost, as does adding a fast SSD for L2ARC caching, so you play the hand you are dealt).
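That last point costs nothing but disk space to set up. A hedged sketch, with an invented pool name:

    # Keep two copies of every block on the single drive; data written from
    # now on is duplicated (existing data is not rewritten retroactively).
    zfs set copies=2 tank
    # A periodic scrub walks every block, verifies checksums and silently
    # repairs any damaged copy from its healthy twin.
    zpool scrub tank

It is no substitute for a mirror if the whole drive dies, of course, but it does turn silent bit rot into a non-event.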

Anyway, job's a good one, time for night night as I do have a day job after all!

