Alternative Cloud Storage update

Earlier this year I posted a blog post titled “Looking at alternative Cloud Storage” in which I said I’d write a series of posts detailing my setup. Well, six months have passed and a lot has changed: OwnCloud was forked into Nextcloud, and I’ve tried many different combinations of hardware and software before finally settling on my current setup. So I thought I’d write a quick update before real life interferes any more.


Initially I chose Debian Stretch (Testing), OwnCloud (later Nextcloud) and SnapRAID as my software stack of choice, and while this had several advantages, mostly familiarity, there were several “holes” in my plan.

  • SnapRAID had to be run manually to keep my data integrity at an optimal level, which isn’t ideal when the main priority is keeping my data safe.
  • Nextcloud was installed directly within Debian alongside Tiny Tiny RSS, DokuWiki and a couple of other webapps I use, which is a pain to maintain.
  • Much as I love Debian, using a Testing version to get the latest nginx, openssl, etc. on a “production” server caused some breakages, which I fixed, but it was a royal pain.
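That first pain point could have been scripted away. A minimal sketch of a cron schedule for SnapRAID (paths and timings here are illustrative, not what I actually ran):

```shell
# /etc/cron.d/snapraid -- illustrative schedule, not my actual config.
# Update parity nightly, then scrub a slice of the array each Sunday
# so silent corruption is caught without anyone running it by hand.
0 3 * * *  root  /usr/bin/snapraid sync  >> /var/log/snapraid.log 2>&1
0 5 * * 0  root  /usr/bin/snapraid scrub -p 10 >> /var/log/snapraid.log 2>&1
```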

So I decided to experiment with alternative solutions (unRAID, BSD, other Linux distributions) before finally settling on Ubuntu 16.04 with a ZFS pool (3x 2TB SATA drives) for storage and LXD for managing the different services on the server (Tiny Tiny RSS, DokuWiki, Nextcloud, etc.). It’s been a while since I last tried Ubuntu, but I really like the power of LXD/LXC over Docker, as I tend to run entire stacks rather than the individual applications Docker is so much better suited to, and ZFS is really amazing as a storage filesystem.
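For the curious, the bones of that setup look something like this. This is a sketch only; the device names, pool name and container names are illustrative, not my exact configuration:

```shell
# Create a RAIDZ1 pool across the three 2TB SATA drives
# (one drive's worth of parity, so a single disk can fail):
sudo zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# Point LXD at ZFS for its container storage, then launch one
# container per service stack:
sudo lxd init --auto --storage-backend zfs --storage-pool tank
lxc launch ubuntu:16.04 nextcloud
lxc launch ubuntu:16.04 ttrss
```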

Crashed Hard Drive

I was going to start writing up my experiences, but on Friday one of my 2TB drives died (it’s now being replaced under warranty), so I’m learning how to replace a drive in a ZFS pool a little sooner than anticipated 🙂 The great thing is that even with the third drive removed the pool is still online (in DEGRADED mode) and all my data is safe, unless another drive dies! Fortunately this is not the only copy of my data, as I mirror it onto a separate system as part of my 3-2-1 backup strategy 🙂
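For anyone facing the same situation, the ZFS side of the swap is mercifully short. A sketch, with the pool and device names made up:

```shell
# Check the damage: the pool reports DEGRADED and the dead disk UNAVAIL
zpool status tank

# Swap in the warranty replacement (old device first, then new device)
sudo zpool replace tank /dev/sdd /dev/sde

# ZFS resilvers the new disk in the background; re-run status to
# watch the resilver progress until the pool returns to ONLINE
zpool status tank
```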

Soon I shall write up all the lessons I’ve learned over the last 6 months and hopefully someone will find it useful.

UPDATE (1st Nov 2016): Toshiba have replaced my P300 2TB drive with a P300 3TB drive, so I’ve decided to rebuild the ZFS RAIDZ1 zpool using partitions instead of whole disks. That way I can use the extra 1TB of space for logs, temporary files and other scratch data, saving wear on the main SSD boot disk 🙂
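The partition-based rebuild looks roughly like this. Device names and exact sizes are illustrative, and the other two drives would get matching 2TB partitions:

```shell
# Carve the 3TB drive: a 2TB slice for the pool, the rest for scratch
sudo parted --script /dev/sdd mklabel gpt
sudo parted --script /dev/sdd mkpart zfs 1MiB 2000GiB
sudo parted --script /dev/sdd mkpart scratch 2000GiB 100%

# Rebuild the RAIDZ1 pool from partitions rather than whole disks
sudo zpool create tank raidz1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Format the leftover ~1TB for logs, temp files and other scratch data
sudo mkfs.ext4 -L scratch /dev/sdd2
```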

I’ve also added an ASMedia ASM1061 SATA controller to the system specifically for the 3TB drive.

Looking at alternative Cloud Storage

Today Cloud Storage is ubiquitous. It’s everywhere, and most people use one provider or another whether they realise it or not. There are a great many providers to choose from, such as Google Drive, Dropbox, Box, SpiderOak and Microsoft’s OneDrive, and I’ve tried every one I’ve encountered. My requirements are very specific, so I chose Copy because it provided the best client OS coverage (Windows, Android, iOS and Linux) and its sync speed was excellent.

Copy logo

I’ve used Copy since they first launched and have been very happy with their service; in fact I was delighted that their Linux client supported both GUI and terminal installation, as well as the Raspberry Pi. I’ve never been entirely happy hosting my data with third parties ever since Google shut down Google Reader, which I used quite heavily (I now run a self-hosted Tiny Tiny RSS installation to collate all my news feeds in one handy location), but I’d never really considered self-hosting Cloud Storage as feasible. I’d always thought that if the provider I used shut down I would simply move, but reckoned that with Barracuda Networks as its parent Copy would survive. So with Copy’s clients embedded in my home network, syncing backups, photos, music and source files between servers and clients, everything was running smoothly. I was happy my 3-2-1 backup solution was in place and working, and I could easily test restoring backups on separate VMs to ensure all my data was safe. Or so I thought.

Unfortunately, on February 1st Barracuda Networks announced the demise of Copy. Honestly, I shouldn’t have been surprised; a lot of Cloud Storage providers come and go, so Copy’s demise was almost guaranteed in a weird, twisted kind of way. However, it did leave me in a bit of a bind. I evaluated all the options available, found that none provided everything I needed, and so began searching for a self-hosted replacement.

The main reason I used Cloud Storage was backups, backups, and more backups; I can never have enough. Whether it’s the photos of our girls growing up or the latest MariaDB dumps from my web server, I need to have three copies: two physically at home (on separate media) and one offsite. Copy used to provide the offsite copy, but this time around I decided to switch things around a bit. Being a geek, I looked at what spare hardware I had lying around to see what I could put together as a reliable self-hosted solution for all my file-serving needs. Relying on all the clients syncing to different Cloud accounts was handy but ultimately futile, and the amount of Cloud Storage actually required is rather small (tens of gigs rather than hundreds or thousands).


To that end I set up a server running Debian with OwnCloud as the Cloud provider. OwnCloud has met all my requirements so far, but I’ll keep evaluating it as I add more data over time. For now all the clients and servers sync to the OwnCloud installation, and nightly all the data is rsync’d to another server (a Raspberry Pi). All the important data (photos, source code, MariaDB dumps and server configs, though not music, which can easily be recovered from iTunes, etc.) is then compressed and encrypted into a single output file. This encrypted backup file is currently synced to a single Dropbox account, which can easily be replaced if Dropbox ever goes out of business, until a more appropriate and/or secure remote backup location can be found.
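A rough sketch of that nightly pipeline; the hostnames, paths and GPG recipient are all hypothetical, invented for illustration:

```shell
#!/bin/sh
# Illustrative nightly backup pipeline -- names and paths are made up.

# 1. Mirror the OwnCloud data down to the Raspberry Pi:
rsync -a --delete /var/www/owncloud/data/ pi@backup-pi:/mnt/backup/owncloud/

# 2. On the Pi: bundle the important data and encrypt it into one file,
#    ready to be pushed offsite (music is deliberately excluded):
cd /mnt/backup/owncloud
tar -czf - photos source mariadb-dumps server-configs \
  | gpg --encrypt --recipient backup@example.org \
  > /mnt/backup/offsite/backup-$(date +%F).tar.gz.gpg
```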


This may all seem a little extreme, but the outcome is actually quite positive. I’ve set up a nice home server which ensures all the files are synchronised properly. Data integrity has been added via SnapRAID, using 128-bit checksums and parity checking to protect against bitrot, something I’d never considered before. The remote backup is now entirely flexible and not reliant on a single vendor, and as long as my hard drives are alive my data is always accessible. So in many ways Copy’s closure is the best thing to happen to my data to date.
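For context, SnapRAID’s side of this is driven by one small config file. An illustrative fragment, with the mount points invented:

```shell
# /etc/snapraid.conf -- illustrative layout, not my real mount points.
parity  /mnt/parity/snapraid.parity      # dedicated parity file/disk
content /var/snapraid/snapraid.content   # checksum database
content /mnt/data1/snapraid.content      # second copy of the content file
data d1 /mnt/data1                       # the data disks being protected
data d2 /mnt/data2
```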

In future posts I will detail exactly how I set up my home server and backup solution, including making it available securely on the internet over HTTPS with dynamic DNS support.

For now I breathe a sigh of relief knowing my 3-2-1 backup solution is back in action and more importantly back under my direct control.

Network Upgrades … finally

So I’ve finally managed to get around to reconfiguring my home network from Wireless-G to Wireless-N, and all wired devices now plug into gigabit switches. Unusually, instead of DHCP I’ve decided to give each machine/tablet/phone a static IP address; only guests get a DHCP lease. This actually simplifies management of the network, as I now know for certain what each device on the LAN is.

Also during this upgrade (I added a new Western Digital MyNet N750 router) I changed all the SSIDs and their passphrases; they’re now much longer and more secure. I also switched off the old WEP wifi access point which used to serve the girls’ Nintendo DS Lites, but they’ve since upgraded to 3DS XLs and don’t need it any more. As a consequence I’ve noticed that the network is faster! Was the WEP AP causing interference, or did I have some freeloaders on my network? I don’t know, but if there were any they’re gone now 🙂

To ease the addition of new devices, each person in the house has a range of 10 IPs allocated for their devices (which is more than enough).
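On a router that exposes dnsmasq-style settings, the scheme would look something like this; the MyNet N750 is actually configured through its web UI, and these MAC addresses and ranges are invented for illustration:

```shell
# Small dynamic pool for guests only; everything else is pinned
dhcp-range=192.168.1.200,192.168.1.219,12h

# Ten addresses per person, e.g. .10-.19 for one person, .20-.29 the next
dhcp-host=aa:bb:cc:dd:ee:01,192.168.1.10,my-laptop
dhcp-host=aa:bb:cc:dd:ee:02,192.168.1.20,family-phone
```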

I also enabled OpenDNS Family Shield on the router for the safety of the kids and to reduce the chances of stumbling across any malware sites.

All in all the upgrade went quite well with no hiccups so far. Here’s hoping it stays that way 🙂

My Wedding

Today, after 12 years together, my darling Sonia has done me the greatest honour of becoming my wife and is henceforth known as Mrs. Doyle.

Our two wonderful daughters made beautiful bridesmaids on this special day.

Our Wedding

We must extend our thanks to everybody who helped make our special day one of the best in our lives.

HTC Desire ICS Update

MIUI Android

For well over a year now I’ve been running MIUI Gingerbread on my HTC Desire, as this was the only ROM which would “stick”. I tried several others, such as CyanogenMod, but while they would install cleanly they would never work after the phone was rebooted; I’d be left at the white HTC screen forever. This is probably because (for those who understand the HTC Desire hardware) I have a PVT4 HTC Desire with an erase size of 20000 instead of 40000; usually an erase size of 20000 is associated with a PVT1 model, or so I’m led to believe by the Internet. Unfortunately MIUI have stopped producing updates for the HTC Desire (Bravo) and the last available ROM was 2.4.13, so I’ve been looking for an upgrade that would “stick” ever since.

Evervolv ICS

Enter the Evervolv ICS ROM for the HTC Desire. I’ve been watching the progress of Evervolv ICS for a while now and kept meaning to try it out but never did. However, last week I received a 32GB Class 10 MicroSDHC card as an upgrade to the 8GB Class 4 in my Desire, so I decided to bite the bullet and give Evervolv a try.

Installation was incredibly simple. Through AmonRa recovery (ClockworkMod doesn’t work properly on my HTC Desire!) I created a 1GB EXT2 partition on my SD card so I could use Mounts2SD and save precious internal storage (a maximum of 144MB available on a clean install!).

Installation via the Aroma Installer is really simple and very quick. After installing Evervolv ICS I ran the following commands through the Terminal Emulator app provided …

m2sd apps enable
m2sd dalvik enable
m2sd data enable
m2sd dlcache enable

Each prompts for a reboot. Once I had run ALL of these commands I rebooted the phone and miraculously it “stuck”: Evervolv booted back up, copied all the data to the EXT2 partition on the SD card, and ICS runs beautifully on the HTC Desire.

I’ve rebooted several times since just to make sure it “sticks” and so far all is good. I can honestly say ICS is such an advance on Gingerbread it’s unbelievable. I feel like I have a new phone and no longer feel the urge to go out and update my phone just yet as my contract doesn’t expire until Jan 2013, although I still wouldn’t mind a Samsung Galaxy SIII 😉