Backing up the data in your home is important. What if your computer crashes? What happens to all those photos on it? Here's a harder one: what if your home catches fire? What happens to the photos now? Mix in a homelab and this becomes a big ball of nasty.
If you pay for home fire insurance, you should also pay for a backup solution. If you have a homelab, you need a storage and backup solution because, well, shit will hit the fan with a homelab. Backups don't need to be boring or annoying.
Doing a home-based backup makes a lot of sense: you get speed, reliability, and frequent updates at a low operational cost. Doing a cloud-based backup on top of that also makes a lot of sense, so you're still covered if things really go bad. Doing both is the best of both worlds.
For the home-based side I run a Synology setup. For the features and functions you get with Synology's DiskStation Manager (DSM) UI and its plugins, nothing else beats the platform.
I have two volumes set up. The first is one large volume formatted with btrfs. That file system gives you great reliability and backup features, including file snapshots. You can also create shared folders on btrfs with size quotas, which macOS will properly detect with Time Machine, so multiple systems can keep their Time Machine backups on the same volume.
The other, smaller volume is formatted with ext4, which has slightly better write throughput, so I use it for persistent storage in my homelab. On this volume I have multiple Kubernetes persistent volumes backed by iSCSI, and a share mounted to my vSphere homelab over NFS.
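For reference, a persistent volume wired up to one of those iSCSI LUNs looks roughly like this. This is a minimal sketch using the kubernetes Python client; the portal address, IQN, and capacity are placeholders, not my actual layout:

```python
# Sketch: a Kubernetes PersistentVolume backed by an iSCSI LUN on the NAS.
# The portal address, IQN, and capacity are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig for the homelab cluster

pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="nas-iscsi-pv"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "50Gi"},
        access_modes=["ReadWriteOnce"],
        persistent_volume_reclaim_policy="Retain",
        iscsi=client.V1ISCSIPersistentVolumeSource(
            target_portal="192.168.1.10:3260",            # NAS iSCSI portal (placeholder)
            iqn="iqn.2000-01.com.synology:nas.target-1",  # LUN target IQN (placeholder)
            lun=1,
            fs_type="ext4",
        ),
    ),
)

client.CoreV1Api().create_persistent_volume(pv)
```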
All systems in the home back up regularly to the NAS. As mentioned earlier, with btrfs on Synology you can create file shares with size quotas. That means a single volume can hold multiple shares, one per Time Machine client. Set the shares up on the "smallish" side at first and expand each one independently as needed. Previously this would have required a dedicated volume per machine, which doesn't offer the flexibility to grow or shrink on demand.
My homelab setup also gets backed up nightly to the main volume using a Synology plugin called Active Backup. This is the only not-so-great moment in my world with Synology. To set it up, I had to create a connection to my server (easy), but then I had to create a separate backup task for each VM. Ideally it would just back up everything on the server, or at least query the server for available VMs and prompt me for which ones to back up in one quick operation. Active Backup runs nightly at 2am on my NAS.
So all your data is on your NAS. You have a backup of it, so if anything crashes you can recover, and do so quickly. That's important. We are all human, and the majority of restore requests aren't because something failed, but because you, a human, fucked up. That's fine: the backed-up copy is right there. And if you're using advanced backup features (i.e. Time Machine) you can even go back to a specific point in time.
Then a freak accident happened, and the whole fucking thing burned to the ground.
This is why we do cloud-based backups as well. The entire contents of your local NAS should get backed up offsite on a regular (daily) basis. Ideally, and because it exists, that offsite destination is the cloud. Now I'm not talking about consumer cloud backup products; if you got this far into a blog like this, you don't need those. I mean using enterprise-class cloud storage like Google Cloud Storage or Amazon Web Services S3. Going this route will save you plenty of money and might even be easier to set up.
I'm going to make it easy for you: use Google Cloud Storage. I used to use AWS S3 Glacier. It worked. My cost was about $12/month. But if I needed a file… queue it up and wait. Want to clean something up, because, you know, homelabs and humans and shit? Yeah right. Cleanup means running a script to retrieve your "archive list", then another arcane script to delete each entry in that list… one at a time. Fuck that noise. Use Google Cloud Storage and be happy.
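For flavor, here's roughly what that cleanup dance looks like with boto3. A sketch only: the vault name is a placeholder, and the inventory job alone takes hours to complete.

```python
# Sketch of the old Glacier cleanup dance (vault name is a placeholder):
# request an inventory job, wait hours for it, then delete archives one by one.
import json
import time

import boto3

glacier = boto3.client("glacier")
vault = "my-backup-vault"

# Step 1: kick off an inventory-retrieval job.
job_id = glacier.initiate_job(
    vaultName=vault,
    jobParameters={"Type": "inventory-retrieval"},
)["jobId"]

# Step 2: wait for it to finish; this alone typically takes several hours.
while not glacier.describe_job(vaultName=vault, jobId=job_id)["Completed"]:
    time.sleep(900)  # check back every 15 minutes

# Step 3: pull the archive list out of the job output.
inventory = json.loads(
    glacier.get_job_output(vaultName=vault, jobId=job_id)["body"].read()
)

# Step 4: delete each archive... one request at a time.
for archive in inventory["ArchiveList"]:
    glacier.delete_archive(vaultName=vault, archiveId=archive["ArchiveId"])
```

In GCS, the equivalent is just deleting objects from a bucket.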
Today my bill is about $6/month for GCS, and that includes storage plus the nightly backups from my NAS. What's great about GCS is that you can move between storage classes at will. I use the Coldline storage class for all of the backups. Because backups aren't strictly additive (they also delete and update objects), Coldline had the best cost ratio of the storage classes for me.
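If you'd rather not click through the console, setting the bucket's default storage class is a couple of lines with the google-cloud-storage client. A sketch, with a placeholder bucket name:

```python
# Sketch: set the bucket's default storage class to Coldline so objects
# written by the nightly sync land there. Bucket name is a placeholder.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-nas-backups")

bucket.storage_class = "COLDLINE"
bucket.patch()

print(f"{bucket.name} now defaults to {bucket.storage_class}")
```

Note that objects already in the bucket keep their existing class until they're rewritten or a lifecycle rule moves them; only new writes pick up the default.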
Backing up to GCS on Synology is done with the Cloud Sync plugin. Nothing fancy here: just set up a task for each shared folder you want to back up. One caveat, though. Because Time Machine writes new backups once per hour, you're better served scheduling a narrow time slot in which Cloud Sync is allowed to run, rather than letting it chase those changes all day. I use a 4am-7am window. Typically the sync finishes within 30 minutes, but if I've just moved a lot of large files onto the NAS, the extra time comes in handy.
I also configured Cloud Sync to transfer up to 10 files at a time. This helps a lot with bandwidth utilization, especially if you have lots of small files. For the initial sync I configured 20 files at a time, and it would saturate my entire 500 Mbit/s fiber connection when moving larger files.
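To illustrate why the parallelism matters (this isn't what Cloud Sync does internally, just the same idea sketched with the google-cloud-storage client and a thread pool, with placeholder names):

```python
# Illustration only: with lots of small files, each transfer spends most of
# its time on per-file overhead, so running several uploads in parallel is
# what actually fills the pipe.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

from google.cloud import storage

bucket = storage.Client().bucket("my-nas-backups")  # placeholder bucket name

def upload(path: Path) -> None:
    bucket.blob(f"photos/{path.name}").upload_from_filename(str(path))

files = list(Path("photos").glob("*.jpg"))
with ThreadPoolExecutor(max_workers=10) as pool:  # mirrors the "10 at a time" setting
    list(pool.map(upload, files))
```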
All in all, I continue to recommend the Synology platform for a NAS. It may cost a little more than the alternatives, but the time saved setting up the NAS and adding features is well worth it, and has paid dividends many times over.