What do you use for RPi backups?

Just appears to be a Google Drive frontend. Even if I did use Google Drive, it isn’t clear to me it’s going to function any better/faster than something (even rsync) replicating to a local drive. How do you see it working better than dumb copies for backup?

Reply to
Doc O'Leary ,

On 13.04.2022 at 22:27, Doc O'Leary wrote:

My flat once burned down and took all my backups with it.

FW

Reply to
F. W.

Do you loopback mount the target images, so they keep their ext4 partition format and bootability? That's quite a neat idea...

(if it were me I'd be tempted to keep a second copy of the files on the host's native FS, outside of the ext4 image. So if the target ext4 got corrupted in some way I could always recover the files)

Theo

Reply to
Theo

That speaks to a need for remote backups, not necessarily using cloud storage, let alone limiting yourself to Google as a sole provider. Again, my aim is to find a way to efficiently and safely warehouse all my data using an RPi. Whether or not that data is then replicated to additional locations or media is a separate solution layer.

Reply to
Doc O'Leary ,

Yes. After setting up a Pi, I take the SD card and make an initial manual copy of it to an image file with dd. Then my nightly backup script loopback mounts the image file and rsyncs over ssh to keep the image updated. When there is a failure, I can dd the image file onto a new SD card, and the Pi is up and running with the image from 4am the previous night.
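In outline the cycle is something like this (device names, hostnames and paths are illustrative, not the literal script; run as root):

    # one-off: image the freshly set up card (here /dev/sdX is the card reader)
    dd if=/dev/sdX of=pi1.img bs=4M status=progress

    # nightly: attach the image, mount its partitions, sync from the live Pi
    LOOP=$(losetup --find --show -P pi1.img)
    mount "${LOOP}p2" /mnt/pi1
    mount "${LOOP}p1" /mnt/pi1/boot
    rsync -aHAX --delete --numeric-ids \
        --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
        --exclude='/run/*' --exclude='/tmp/*' \
        root@pi1:/ /mnt/pi1/
    umount /mnt/pi1/boot /mnt/pi1
    losetup -d "$LOOP"

    # after a failure: write the image straight onto a new card and boot it
    dd if=pi1.img of=/dev/sdX bs=4M status=progress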

I protect against that by making weekly and monthly compressed copies of the image files. I run zerofree to blank the unused space so the images compress better, then compress them with pigz, which zips using all the cores. It takes a Pi 4B with an SSD about 1h30 to do this for the 15 images.
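The zerofree/pigz step is roughly this (loop device handling and filenames are illustrative):

    # blank the unused ext4 blocks so the image compresses well
    LOOP=$(losetup --find --show -P pi1.img)
    zerofree "${LOOP}p2"
    losetup -d "$LOOP"

    # parallel gzip across all cores, keeping the uncompressed image
    pigz --keep --best pi1.img    # produces pi1.img.gz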

Additionally, all the important programs and configuration on the Pis are in git repos which are pushed to a NAS drive. So if the worst happened I could burn a completely fresh Raspbian image and set up again from that.

---druck

Reply to
druck

Are you backing up a 1GB Pi using rsync *to* a remote system? I use a Pi 4B with 8GB (although it never uses more than 4GB) and a local SSD to back up *from* the smaller 512MB and 1GB Pis via ssh. That way the system with more performance and memory does all the hard work of comparing indexes.
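On the 4B that pull is something like this (hostnames and paths are illustrative):

    # run on the 4B with the SSD: pull from the small Pi over ssh,
    # so the heavy lifting happens at this end
    rsync -aHAX --delete --numeric-ids \
        --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
        --exclude='/run/*' --exclude='/tmp/*' \
        root@pizero.local:/ /mnt/ssd/backups/pizero/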

---druck

Reply to
druck

I have a bunch of computers that should all back up to at least one remote system. They include a variety of RPi from 0W up to a 400, but there are also Macs and Windows in the mix.

Ideally, I want maximal redundancy. Everything *should* be able to back up everything else. No, I don't expect a 0W to be a workhorse, but I don't see any reason a good solution couldn't scale down to at least be *functional* on it. If git works fine, and my git-inspired scripts for backup work fine, I don't see why some larger, better-supported backup tool wouldn't be able to function.

But that really doesn’t take all that much CPU or RAM. I mean, having more resources certainly *helps*, but that doesn’t mean a backup system has to be architected in such a way to *require* 4GB of memory (or more) to manage a data warehouse of 500K files totaling 2TB. I wouldn’t even call that big data.

I do also use rsync for replication, but that’s just not the same as having a backup system that archives data in perpetuity. It’s sounding like what I’m looking for doesn’t exist and I should just stick with my scripts.

Reply to
Doc O'Leary ,

On 15/04/2022 21:09, Doc O'Leary wrote: ....

Have you seen duplicity?

I've several desktops and laptops, all running linux, plus a central server running freebsd.

I use duplicity to backup the linux machines onto the server - just the user data, as a fresh install of the OS isn't too outlandish.

Duplicity is highly configurable. It will recover files from a specific date if you need. I'd suggest taking a look.
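A typical session looks something like this (URLs and paths are illustrative; duplicity encrypts with GPG by default, so either export PASSPHRASE or pass --no-encryption):

    # initial full backup of the user data to the server over sftp
    duplicity full /home/me sftp://backup@server//backups/desktop1

    # later runs only send the changes
    duplicity incremental /home/me sftp://backup@server//backups/desktop1

    # pull back a single file as it was on a given date
    duplicity --time 2022-04-01 --file-to-restore Documents/notes.txt \
        sftp://backup@server//backups/desktop1 /tmp/notes.txt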

I just use dump to dump (levels 0 and 3 only) almost the /entire/ fbsd server onto one of the desktops, which stood me in good stead this week when I completely trashed /var; oops :-{
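The dump side is roughly this (hostnames and paths are illustrative; /var stands in for whichever filesystems get dumped):

    # level 0 (full) dump of /var, streamed over ssh to a desktop
    dump -0 -L -a -u -f - /var | ssh desktop 'cat > /backups/server-var.0.dump'

    # level 3 dumps later only contain what changed since the last lower level
    dump -3 -L -a -u -f - /var | ssh desktop 'cat > /backups/server-var.3.dump'

    # interactive restore from the level 0 dump
    ssh desktop 'cat /backups/server-var.0.dump' | restore -i -f -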

Reply to
Mike Scott

ZFS snapshots YKIMS.

My NAS runs striped ZFS mirrors, keeps extensive snapshots and replicates (zrepl) to an archive server running RAIDZ. Data loss? What's that? I am however looking carefully at TrueNAS Scale - it's not quite as good as OneFS but it's pretty close and free (unlike OneFS).
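Under the hood that amounts to something like this (pool and dataset names are illustrative; zrepl automates the send/receive loop and the snapshot pruning):

    # recursive snapshot of everything in the pool
    zfs snapshot -r tank@auto-2022-04-16

    # incremental replication of that snapshot to the RAIDZ archive box
    zfs send -R -i tank@auto-2022-04-15 tank@auto-2022-04-16 | \
        ssh archive zfs receive -F backup/tank

    # old versions of files are just sitting in the snapshot directory
    ls /tank/home/.zfs/snapshot/auto-2022-04-15/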

To be fair ZFS is only the second best snapshot solution I know - the prize for that goes to DragonFlyBSD's HAMMER - everything that hits the disk is a snapshot until it's pruned to reduce the history granularity.

Reply to
Ahem A Rivet's Shot

I had looked at it, but never gave it a try. Part of the issue I’m looking to solve is the fact that the machines I’m backing up share a substantial amount of data (~500GB, which isn’t even all *that* big these days). If they are treated as independent tarballs (and especially if they get encrypted on top of that), it leads to a lot of *unmanaged* duplication.

Maybe I need to rethink my desire to use a single backup solution for all use cases. I could easily see using something like duplicity for one-off projects I’d use a 0W for. But, then, most of my needs in that regard are handled by using git and ansible to set them up, and a simple rsync is generally fine to pull down any generated data I want to archive.

Reply to
Doc O'Leary ,

On this topic, I did look into using a more advanced filesystem that had all the modern bells and whistles built in. In the end I concluded that I really wanted a backup format that I could read/recover from on most any computer I had access to at the time (likely a spare RPi *without* network access).

I do long for the day when all of this is baked into the OS (*every* OS) by default.

Reply to
Doc O'Leary ,
