I have a home server that I’m using to host files. I’m worried about it breaking and losing access to the files. So what method do you use to back up everything?
On hope
The “small to medium business” route I see!
Me too! Actual servers are docker-compose which is on git but the data…yeah that’s on hope hahaha
And using the fact that RAID is a backup!
This guy is rawdogging his RPi, just like me
This is the way.
I am lucky enough to have a second physical location to store a second computer, with effectively free internet access (as long as the data volume is low, under about 1 TB/month).
I use the ZFS file system for my storage pool, so backups are as easy as a few commands in a script, triggered every few hours, that takes a ZFS snapshot and sends it to my second computer via SSH.
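For anyone curious what that looks like, here’s a minimal dry-run sketch of the snapshot-and-send approach. The dataset `tank/data` and the host `backup-host` are made-up names, and the script only echoes the `zfs`/`ssh` commands so you can review them before dropping the `echo`s:

```shell
#!/bin/sh
# Dry-run sketch of a ZFS snapshot-and-send backup. "tank/data" and
# "backup-host" are hypothetical names; remove the "echo"s to run for real.
set -eu

DATASET="tank/data"
REMOTE="backup-host"
SNAP="${DATASET}@auto-$(date +%Y%m%d-%H%M)"

# 1. Take a snapshot locally.
echo zfs snapshot "$SNAP"

# 2. Ship it to the second machine over SSH. An incremental send
#    (zfs send -i <previous> <current>) keeps the transfer small.
echo "zfs send $SNAP | ssh $REMOTE zfs receive tank/backup"
```

Trigger it from cron every few hours as described above, and prune old snapshots on both ends, and that’s basically the whole setup.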
Backblaze on a B2 account. $0.005 per GB. You pay for the storage you use, and you pay when you need to download your backup.
On my TrueNAS server, it’s easy as pie to set up and easy as 🥧 to restore a backup when needed.
I’ll add to this that restic works amazingly with Backblaze. Plus a dozen or so other backup options.
Maybe I’m stupid, but what is B2? A Backblaze product?
Yes it’s their cloud storage.
I didn’t realize they did anything other than that!
I also recommend B2, it’s an S3 compatible service so any backup software/scripts/plugins that work with S3 should work with Backblaze.
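As an illustration of that S3 compatibility, here’s roughly what pointing restic at B2’s S3 endpoint looks like. The key placeholders, bucket name, region, and path below are all made up; check Backblaze’s docs for your actual endpoint:

```shell
# Hypothetical key and bucket; B2's S3-compatible endpoint means any
# S3-aware tool (restic, rclone, ...) can talk to it directly.
export AWS_ACCESS_KEY_ID="<your-b2-keyID>"
export AWS_SECRET_ACCESS_KEY="<your-b2-applicationKey>"

restic -r s3:s3.us-west-004.backblazeb2.com/my-backup-bucket init
restic -r s3:s3.us-west-004.backblazeb2.com/my-backup-bucket backup /srv/data
```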
You guys back up your server?
If your data is replaceable, there’s not much point unless it’s a long wait or high cost to get it back. It’s why I don’t have many backups.
I run Linux for everything; the nice thing is that everything is a file, so I use rsync to back up all my configs for physical servers. I can do a clean install, run my setup script, then rsync over the config files, reboot, and everyone’s happy.
For the actual data I also rsync from my main server to others. Each server has a schedule for when it gets rsynced to, so I have a history of about 3 weeks.
For virtual servers I just use the Proxmox built-in backup system, which works great.
Very important files get encrypted and sent to the cloud as well, but out of dozens of TB this only accounts for a few gigs.
I’ve also never thrown out a disk or USB stick in my life and use them for archiving. Even if a drive is half dead, as long as it’ll accept data I shove a copy of something on it, then label and document it. There are so many copies of everything that it can all be rebuilt if needed, even if half these drives end up not working. I keep most of these off-site. At some point I’ll have to physically destroy the oldest ones, like the few 13 GB IDE disks that just make no sense to bother with.
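The rsync part of that routine can be sketched roughly like this. The host `backup1` and the paths are hypothetical, and everything is echoed as a dry run:

```shell
#!/bin/sh
# Dry-run sketch of a config + data rsync routine. "backup1" and the
# paths are hypothetical; remove the "echo"s to perform the real sync.
set -eu

BACKUP_HOST="backup1"
STAMP=$(date +%Y-%m-%d)

# Configs: everything is a file, so /etc covers most of it.
echo rsync -a /etc/ "${BACKUP_HOST}:/backups/configs/${STAMP}/"

# Data: dated directories on the destination give a simple history.
echo rsync -a /srv/data/ "${BACKUP_HOST}:/backups/data/${STAMP}/"

# Prune anything older than ~3 weeks on the destination.
echo ssh "$BACKUP_HOST" \
  'find /backups/data -mindepth 1 -maxdepth 1 -mtime +21 -exec rm -rf {} +'
```

Staggering the schedule across several destination servers, as described above, is what turns the dated directories into a rolling history.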
I use Duplicati and back up my server to both another PC and the cloud. Unlike a lot of data hoarders, I take a pretty minimalist approach, only backing up core (mostly Docker) configs and the OS installation.
I have media lists but to me all that content is ephemeral and easily re-acquired so I don’t include it.
Duplicati is great in many ways, but it’s still considered beta by its developers. I would not trust it if the data you back up is extremely important to you.
If you need to back up less than 10 GB, you can back up your data to Backblaze B2 Cloud Storage for free with their `b2 sync` command. I use this in a cron job, daily or hourly, depending on the data being backed up.
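For reference, the cron side of this is just one line per dataset; the path and bucket name below are made up:

```shell
# Hypothetical crontab entry: sync /srv/important to B2 nightly at 03:00.
0 3 * * * b2 sync /srv/important b2://my-backup-bucket/important
```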
Depends on what OS that server is running: Windows, Unraid, Linux, a NAS (like Synology or QNAP), etc.
There are a bazillion different ways to back up your data but it almost always starts with “how is your data being hosted/served?”
ITT: lots of the usual paranoid overkill. If you do `rsync` with the `--backup` switch to a remote box or a VPS, that will cover all bases in the real world. The probability of losing anything is close to 0.

The more serious risk is discovering that something broke 3 weeks ago and the backups were not happening. So you need to make sure you are getting some kind of notification when the script completes successfully.
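A dry-run sketch of that setup, with a made-up VPS hostname and a hypothetical monitoring ping URL (the commands are echoed, so it’s safe to run as-is):

```shell
#!/bin/sh
# Dry-run sketch: rsync with --backup, then ping a monitor on success.
# "vps", the paths, and the ping URL are all hypothetical; remove the
# "echo"s to run for real.
set -eu

SRC="/srv/data/"
DEST="vps:/backups/data/"
PING_URL="https://example.com/ping/my-backup-job"

# --backup keeps copies of files rsync would overwrite or delete;
# --backup-dir collects them under a dated directory on the destination.
echo rsync -a --delete --backup \
  --backup-dir="old-$(date +%Y%m%d)" "$SRC" "$DEST"

# With "set -e", this line is only reached if rsync exited 0, so the
# ping doubles as a "backup actually ran" heartbeat.
echo curl -fsS "$PING_URL"
```

If the ping stops arriving, your monitoring service alerts you, which is exactly the "backups silently stopped 3 weeks ago" failure mode this guards against.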
I run everything in Docker. I have an Ansible playbook that backs up all the Docker volumes to a MinIO server I’m running on a separate machine. I periodically upload backups to IDrive e2 with the same playbook.
Proxmox Backup Server. It’s life-changing. I back up every night and I can’t tell you the number of times I’ve completely messed something up only to revert it in a matter of minutes to the nightly backup. You need a separate machine running it (something that kept me from doing it for the longest time), but it is 100% worth it.
I back that up to Backblaze B2 (using Duplicati currently, but I’m going to switch to Kopia), but thankfully I haven’t had to use that, yet.
PBS backs up the host as well, right? Shame Veeam won’t add Proxmox support. I really only back up my VMs and some basic configs.
PBS only backs up the VMs and containers, not the host. That being said, the Proxmox host is super-easy to install and the VMs and containers all carry over, even if you, for example, botch an upgrade (ask me how I know…)
Then what’s the purpose over just setting up the built-in snapshot backup tool, which, unlike PBS, can natively back up onto an SMB network share?
I’m not super familiar with how snapshots work, but that seems like a good solution. As I remember, what pushed me to PBS was the ability to make incremental backups to keep them from eating up storage space, which I’m not sure is possible with just the snapshots in Proxmox. I could be wrong, though.
You are right about the snapshots, yeah. The built-in backup doesn’t seem to do incremental backups.
My server is a DiskStation, so I use HyperBackup to do an encrypted backup of the important data to their Synology C2 service every night.
I’m backing up my stuff over to Storj DCS (basically S3 but distributed over several regions) and it’s been working like a charm for the better part of a year. Quite cheap as well, similar to Backblaze.
For me the upside was I could prepay with crypto and not use any credit card.
Kopia to Backblaze B2 is what I generally use for off-site backups of my devices. Borg is another good option to look at, but not as frictionless in my experience. There are a couple of additional features available in Kopia that are nice to have and are not in Borg (e.g. error correction, file de-duplication) from what I recall. edit: Borg does do de-duplication
rsync + borg, but looking at bupstash