
Making backups less boring

Backing up the data in your home is important. What if your computer crashes? What happens to all those photos on it? Here’s a harder one: what if your home catches fire? What happens to the photos now? Mix in a homelab and this is a big ball of nasty.

If you pay for home fire insurance, you should also pay for a backup solution. If you have a homelab, you need a storage and backup solution because, well, shit will hit the fan with a homelab. Backups don’t need to be boring or annoying.

Doing a home-based backup makes a lot of sense: you get speed and reliability with frequent updates at a low operational cost. Doing a cloud-based backup on top of that also makes a lot of sense, so you have security if things really go bad. Doing both is the best of both worlds.

For home-based backup I run a Synology setup. For the features and functions you get with Synology’s DiskStation Manager UI and plugins, nothing else beats the platform.

I have two volumes set up. One is a large volume formatted with btrfs. This filesystem gives you lots of great reliability and backup features, including file snapshots. You can also create shared folders on btrfs with size quotas, which macOS will properly detect with Time Machine, so multiple systems can keep their Time Machine backups on the same volume.

The other, smaller volume is formatted with ext4. It has slightly better write throughput, so I use it for persistent storage in my homelab. On this volume I have multiple Kubernetes persistent volumes exposed over iSCSI, and a share mounted to my vSphere homelab over NFS.
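
To make the Kubernetes side concrete, here is a minimal sketch of a PersistentVolume pointing at an iSCSI LUN on the NAS. The portal address, IQN, LUN number, and size are placeholders, not my actual values; use whatever your own iSCSI target on the NAS reports.

```python
# Minimal sketch of a Kubernetes PersistentVolume backed by an iSCSI LUN on the NAS.
# The portal address, IQN, LUN number, and size are placeholders; swap in the
# values from your own Synology iSCSI target.
import yaml  # pip install pyyaml

pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "synology-iscsi-pv"},
    "spec": {
        "capacity": {"storage": "50Gi"},
        "accessModes": ["ReadWriteOnce"],
        "persistentVolumeReclaimPolicy": "Retain",
        "iscsi": {
            "targetPortal": "192.168.1.50:3260",  # NAS address plus the default iSCSI port
            "iqn": "iqn.2000-01.com.synology:nas.target-1",  # placeholder IQN
            "lun": 1,
            "fsType": "ext4",
            "readOnly": False,
        },
    },
}

# Pipe this into `kubectl apply -f -` or save it to a file.
print(yaml.safe_dump(pv, sort_keys=False))
```

Pair it with a matching PersistentVolumeClaim and your pods get NAS-backed storage.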

All systems in the home back up regularly to the NAS. As mentioned earlier, with Synology btrfs you can create file shares with size quotas, meaning we can use a single volume with multiple shares, one per Time Machine. Set up the shares on the “smallish” side at first and expand each one independently as needed. Before, this would have required a dedicated volume per machine, which doesn’t offer the flexibility to grow or shrink on demand.
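
On the Mac side, pointing Time Machine at its own quota’d share is a single tmutil call. Here is a hedged sketch wrapped in Python; the hostname, share name, and credentials are placeholders, it needs to run as root, and smb:// destinations assume a reasonably recent macOS.

```python
# Hedged sketch: point this Mac's Time Machine at its own quota'd share on the NAS.
# Hostname, share name, and credentials are placeholders. tmutil setdestination
# needs root, and smb:// destinations assume a reasonably recent macOS.
import subprocess

NAS_SHARE_URL = "smb://tmuser:secret@nas.local/TimeMachine-MacBook"  # placeholder

# -a appends the share as a destination instead of replacing existing ones.
subprocess.run(["tmutil", "setdestination", "-a", NAS_SHARE_URL], check=True)

# Confirm Time Machine picked it up.
subprocess.run(["tmutil", "destinationinfo"], check=True)
```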

My homelab setup also gets backed up nightly to the main volume using a Synology plugin called Active Backup. This is the only not-so-great part of my Synology experience. I had to create a connection to my server (easy), but then I had to create a separate backup task for each VM. Ideally it would just back up everything on the server, and/or query the server for available VMs and prompt me for which ones to back up in one quick operation. Active Backup runs nightly at 2am on my NAS.
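
Active Backup won’t enumerate the VMs for you, but if you want a quick list of what’s on the host before clicking through the per-VM tasks, a short pyVmomi script does the trick. This is a sketch with placeholder host and credentials, and it assumes `pip install pyvmomi`.

```python
# Hedged sketch: list the VMs on an ESXi/vSphere host so you know which
# Active Backup tasks still need to be created. Host and credentials are
# placeholders; requires `pip install pyvmomi`.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # homelab with a self-signed cert
si = SmartConnect(host="esxi.local", user="root", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True
    )
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)
finally:
    Disconnect(si)
```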

So all your data is on your NAS. You have a backup of it, so if anything crashes you can recover, and do so quickly. That’s important. We are all human, and the majority of restore requests aren’t because something failed, but because you, a human, fucked up. That’s fine, the backed-up copy is right there. And if you are using advanced backup features (i.e. Time Machine) you can even go back to a specific point in time.

Then a freak accident happened, and the whole fucking thing burned to the ground.

This is why we do cloud-based backups as well. The entire contents of your local NAS should get backed up offsite on a regular (daily) basis, and ideally, because it exists, that offsite location is the cloud. Now, I’m not talking about consumer cloud backup products. If you’ve made it this far into a blog like this, you don’t need those. I mean using an enterprise-class cloud storage service like Google Cloud Storage or Amazon Web Services S3. Going this route will save you plenty of money and might even be easier to set up.

I’m going to make it easy for you: use Google Cloud Storage. I used to use AWS S3 Glacier. It worked, and my cost was about $12/month. But if I needed a file… fuck it and wait. Want to clean something up, because, you know, homelabs and humans and shit? Yeah right. Cleanup is running some script to get your “archive list”, then another arcane script to delete each entry in the list… one at a time. Fuck that noise. Use Google Cloud Storage and be happy.

Today my bill is about $6/month for GCS, and that includes storage and nightly backups from my NAS. What’s great about GCS is that you can also move objects to different storage classes at will. I use the Coldline storage class for all the backups. Because backups aren’t strictly additive (they also delete and update), Coldline had the best cost ratio of the storage classes for my usage.
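
For a rough feel of the math, here is a back-of-the-envelope sketch. The per-GB prices and the data size are assumptions (roughly the published US-region list prices at the time of writing) and vary by region, so check the current GCS pricing page rather than trusting these numbers.

```python
# Back-of-the-envelope storage-class math. The per-GB prices and data size are
# assumptions (roughly US-region list prices at the time of writing) and vary
# by region, so check the GCS pricing page for real numbers.
STORAGE_GB = 1200  # rough size of what the NAS pushes offsite (assumed)

PRICE_PER_GB_MONTH = {
    "standard": 0.020,  # assumed
    "nearline": 0.010,  # assumed
    "coldline": 0.004,  # assumed
}

for storage_class, price in PRICE_PER_GB_MONTH.items():
    print(f"{storage_class:>8}: ${STORAGE_GB * price:6.2f} / month at rest")

# Caveat: Coldline also has a minimum storage duration and per-GB retrieval
# fees, so churny backups (lots of deletes and updates) eat into the savings.
```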

Backing up to GCS on Synology is done with the Cloud Sync plugin. Nothing fancy here, just set up a task for each shared folder you want to back up. One caveat, though: because Time Machine runs its backups once per hour, you’re better served scheduling a narrow window in which Cloud Sync is allowed to run. I use a 4am-7am window. Typically the sync finishes within 30 minutes, but in case I’ve just moved a lot of large files onto the NAS, the extra time comes in handy.

I also configured Cloud Sync to sync up to 10 files at a time. This helps a lot with bandwidth, especially if you have lots of small files. On the initial sync I configured 20 files at a time, and it would saturate my entire 500 Mbps fiber connection when moving larger files.
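
To see why concurrency matters so much for small files, here is a toy model. The per-file overhead, file count, and average size are made-up numbers, just to show the shape of the math.

```python
# Toy model of why parallel transfers help with lots of small files.
# The per-file overhead, file count, and average size are assumptions
# for illustration only.
FILES = 10_000
AVG_FILE_MB = 0.5
PER_FILE_OVERHEAD_S = 0.4  # connection/API round-trips per object (assumed)
LINK_MBPS = 500            # nominal uplink

payload_s = (FILES * AVG_FILE_MB * 8) / LINK_MBPS  # time to move the bytes

for concurrency in (1, 10, 20):
    overhead_s = FILES * PER_FILE_OVERHEAD_S / concurrency
    hours = (payload_s + overhead_s) / 3600
    print(f"{concurrency:>2} parallel transfers: ~{hours:.1f} h")
```

The payload time stays the same; it’s the per-file overhead that parallelism amortizes, which is why the gain from 10 to 20 is much smaller than from 1 to 10.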

All in all, I continue to recommend the Synology platform for a NAS. It may cost a little more than the others, but the time saved in setting up the NAS and adding features is well worth it, and has paid dividends many times over.

Making home(lab) storage easy

Building a homelab, you need to store data. Own a home with kids, you need to store data.

I have a homelab, so my storage needs are a little bigger, but no matter what, if you need to store data, you need a solution. You have three choices: build your own, buy a QNAP, or buy a Synology.

  1. Build your own. It’s cheap, can be done with FreeNAS, and will likely be a pain in the ass in some way. I don’t recommend this for anyone unless you know what the fuck you are doing.
  2. Buy a QNAP. This is a “good” solution. It will work. You can do whatever the fuck you want with it, but you need to click a bunch of buttons, install some shit, and configure that thing. If your phone is an Android, this is the NAS you want.
  3. Buy Synology. This will just work for 95% of the things you need with 2-3 mouse clicks. If your phone is an iPhone, this is the NAS you want.

So knowing the above, I went Synology. Did I pay a Synology tax? Yes. But time is also money, and Synology made my family and homelab needs easy.

Platform picked, now to figure out size and growth. This is where I failed. I bought a DS218+ instead of a DS718+. Maybe at first I didn’t understand or realize why, but I most certainly should have plunked down the few extra bucks and gotten the 718. I say this because I will be upgrading soon.

Silver lining: my homelab aspirations have grown since I bought the NAS, and I would now much prefer an RS819 instead… something I wouldn’t have considered 6 months ago. So it’s not a total loss, and soon I will move up to a rack-mount NAS chassis.

So why don’t I care about the upgrade? More of the Synology magic. Some people call this the Synology tax, but it’s a reason you drop a few extra dollars on your setup. In my case, the volumes are built with SHR (Synology Hybrid RAID) across the drives. This lets me add capacity without needing to match drive sizes: I can grow the array as my budget allows, slowly retiring the smaller drives while growing the entire footprint, without taking a budget shock to do a whole new RAID setup.
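
The easiest way to see what SHR buys you with mixed drive sizes is with numbers. For one-drive redundancy (SHR-1), usable space works out to roughly the total capacity minus the largest drive. A hedged sketch, ignoring filesystem overhead, with example drive sizes rather than my actual layout:

```python
# Hedged estimate of SHR-1 usable capacity with mixed drive sizes:
# roughly "sum of all drives minus the largest drive", before filesystem
# overhead. Drive sizes below are examples, not my actual layout.
def shr1_usable_tb(drives_tb):
    """Approximate SHR-1 (one-drive redundancy) usable space in TB."""
    if len(drives_tb) < 2:
        return 0.0  # no redundancy possible with a single drive
    return sum(drives_tb) - max(drives_tb)

print(shr1_usable_tb([4, 4]))         # 4  : classic mirrored pair
print(shr1_usable_tb([4, 4, 8]))      # 8  : add a bigger drive later
print(shr1_usable_tb([4, 8, 8, 12]))  # 20 : keep growing with mixed sizes
```

A traditional RAID 5 array would cap every drive at the size of the smallest one, which is exactly the budget shock SHR avoids.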

With Synology I get a bunch of easy-to-use, out-of-the-box features, but more importantly, I can go from small to whatever the fuck size I want without needing to redo my setup. Just keep adding drives of any size, and like the rest of the Synology experience… it will just work.