Create NDIF disk image from RasPi SD card

The second time, but the process is not interrupted: it just prints an error message and continues. This happens sometimes with my backups, deep in far-away hidden program settings directories where, for some reason, files have changed into directories. Sometimes it bothers me and I manually delete the file from the backup disk and rerun the backup so that the directory can be copied. Most of the time I can't be bothered. The goal of my backups is not to have a drop-in replacement but to save the important data (and apps, settings, etc.) which I'd need to rebuild my system from scratch. If (when...) the system disk fails, I always feel happier starting with a fresh installation and adding from there.

rsync never resolves conflicts; it only does what you tell it to do. There is an overwhelming number of options; see 'man rsync'. In the end, though, I only use -au and some excludes:

formatting link
(Since I only use the disks for backups and never edit those files, the -u option is actually unnecessary. Ah well.) If you decide that you need some form of

--delete* option, MAKE SURE TO TEST IT FIRST with the -n option or you could lose important parts of your backup. Happened to me once :( but never again.
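
The testing advice above can be sketched as follows; -n (--dry-run) makes rsync report what it would delete without touching anything. The directories here are throwaway /tmp stand-ins for a real source and backup disk, purely for illustration.

```shell
# Throwaway stand-ins for the source disk and the backup disk.
SRC=$(mktemp -d); DEST=$(mktemp -d)
echo "current"        > "$SRC/file.txt"
echo "only in backup" > "$DEST/precious.txt"

# Dry run: reports the pending deletion of precious.txt,
# but nothing on the backup is actually copied or removed.
rsync -aun --delete -v "$SRC/" "$DEST/"
```

Only after checking that the dry-run output lists nothing you still need would you repeat the command without -n.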

Millions of people tested this. rsnapshot is based on rsync. OS X TimeMachine is sort of based on rsnapshot.

Reply to
A. Dumas

I had one that failed immediately, and so did its replacement, but ITS replacement was fine. Both replacements were made immediately and without a fuss. Before and since then, the WD drives I've used have all been good, with the only make to come near them being Fujitsu laptop drives. My last failures were a Fujitsu 2.5" at around 37,000 hours and a WD 3.5" at almost 50,000 hours.

The next rsync run removes the file from the backup disk and backs up the directory and its contents. As you'd expect. rsync looks at dates, file sizes and ownership as well as the filename when deciding what to do and, iirc, is capable of replicating just the altered parts of a very big file, e.g. a database container.

NO. As I said, I haven't used rsnapshot.

Lots of people use rsync. I haven't spotted any problems with it. It deals correctly with being crashed during a backup and restarted.

Indeed, but this is always detectable under Linux because the filesystem records both a file's creation date and its most recent change date, both to sub-second precision.
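
Those timestamps are easy to inspect with GNU stat: %y is the modification time, %z the inode change time (and %w the birth time, where the filesystem records one). A quick check on a fresh temporary file:

```shell
# Print the timestamps rsync's quick check relies on (GNU stat format codes).
f=$(mktemp)
mtime=$(stat --format='%y' "$f")   # modification time, sub-second precision
ctime=$(stat --format='%z' "$f")   # inode change time
echo "mtime: $mtime"
echo "ctime: $ctime"
```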

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

!DirSync can just use timestamps and lengths (quick), but it also has options to compare file contents when the other information matches (slower), or for every file (very slow).

The Linux rsync command does very much the same job as !DirSync, if you can work out the correct combination from its myriad options.

---druck

Reply to
druck

Which leaves a tiny window for data loss: if the source drive goes down between the old file being deleted and the directory contents getting across, you have lost both. If you're not making multiple backups, then a history-preserving filesystem at both ends can be a data saver.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:\>WIN                                     | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

On a sunny day (Sun, 12 Aug 2018 10:22:20 +0000 (UTC)) it happened Martin Gregorie wrote in :

Never had real hard disk failures apart from the one I once dropped... Seagate used to work OK; I got myself a 1 TB Seagate USB drive (one of the first), had some problem with the software, and contacted their online helpdesk: "We do not support Linux". I told them they had really great drives, and never bought a Seagate again.

And everything still works great; the latest 1 TB looks better, and is smaller and faster AFAICT than that seathing. Normally the drive is on 24/7.

There are some utilities for the PC that show how long a drive has been powered on, etc.:

---------------------

# smartctl --all /dev/sda
smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen,

formatting link

=== START OF INFORMATION SECTION ===
Device Model:     Hitachi HDS721050DLE630
Serial Number:    MSK4235H34LZGH
Firmware Version: MS1OA610
User Capacity:    500,107,862,016 bytes
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Sun Aug 12 12:41:47 2018 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x84) Offline data collection activity
                                        was suspended by an interrupting command from host.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever been run.
Total time to complete Offline
data collection:                 (4736) seconds.
Offline data collection
capabilities:                    (0x5b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        (  79) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   136   136   054    Pre-fail  Offline      -       93
  3 Spin_Up_Time            0x0007   125   125   024    Pre-fail  Always       -       193 (Average 173)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       418
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   113   113   020    Pre-fail  Offline      -       35
  9 Power_On_Hours          0x0012   093   093   000    Old_age   Always       -       52023

Reply to
Jan Panteltje

Er, not what I'd expect. If it fails the first time, why should it succeed the second time? What options do you use that make rsync replace files with directories? (Or do I misunderstand: do you mean that it replaces on the first run?)

If you delete from the destination in order to have a perfect mirror copy, you risk deleting stuff from your backup that you inadvertently deleted from your source disk, i.e. just the thing you'd need a backup for... I mean, it's not wrong per se to do it that way; just know that you lose that potentially important function of your backup.

Reply to
A. Dumas

There was a consumer-grade 3.5" drive that PC World sold here when a 20MB or 40MB drive was still a big deal. I forget the brand name, but I remember that it had a 3-year guarantee yet used to fail at 18 months quite regularly. I had at least three of them. Fortunately they failed by becoming really, really slow, so on the one or two occasions when I didn't have a fully recent backup, I could leave it for an hour or two and the last backup would complete. After a while I became suspicious and started tracking disk life: when I found they failed so regularly and frequently, I switched to WD because Seagate already had a somewhat dodgy reputation.

Full agreement about using smartd. Properly installed, it just quietly keeps working in the background: all my Linux systems except the RPi have it installed. On a Linux system smartd is typically run by a pair of cron jobs that do daily short checks and a longer weekly one. You only see a daily report if smartd finds problems; the weekly report is always produced. Both are included in the daily logwatch report. NOTE that by default the logwatch reports accumulate under root, but it's simple enough to redirect all root mail to your main email address.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

I think so. When rsync gets to the file, it realises what it last backed up as a file is now a directory, so it replaces the file with the directory and its contents.

There is no communication between backup runs apart from what it gleans by comparing the source disk and the backup one, so rsync does exactly what you'd expect if you keep two or more generations of backups: each backup sweeps up all the changes since the backup disk was last used as the backup store.

Similarly, you can back up several hosts to the same backup disk: for each host you just back up / to /mnt/backupdisk/hostname to get a set of backups, each in the form of a directory tree with 'hostname' as its root.

Apart from that tweak, I never do anything to a backup disk apart from retrieving any lost files from it.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

Agreed.

I use two generations of backup disk and leave as little running as possible on a host while it is being backed up. For a laptop this means only system processes are running and I'm not using the machine for anything except to control backups. The backup disk is mounted on my house server and I control backups and updates from a laptop downstairs while listening to Radio 4 or some music.

For my house server, its usual server processes are running but nothing else except a copy of rsync, and I KNOW the databases etc. aren't being updated because their regular update runs happen overnight. I initiate offline backups manually, immediately before Linux system updates, and I don't use any interactive clients (including mail, web, nntp or Google Earth) while backups are running.

I'm confident that this is safe because the only 'file changed while being backed up' reports I ever see from rsync are for Linux system logs.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

No, what I'm saying is that normally it doesn't, at least not how I use it, with -au options. For me, that results in an error message "Unable to copy [etc]" or similar and that subtree being skipped. So either you use other (--delete-etc?) options, or maybe with only -a (without -u) rsync will replace instead of skip. Idk, haven't tested the exact implications of rsync options recently.

Reply to
A. Dumas

I use a different set of options, because we have slightly different requirements: I don't want rsync to stop except for serious errors. I'm running with these options:

-avzE --delete --delete-excluded --ignore-errors --log-file=$log

plus several excludes so it doesn't back up pseudo-filesystem structures like /run and /proc

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

With my -a or -au, rsync doesn't stop on the "file is now a dir" error, it just gives an error message and continues.

-a implies -p which means -E is ignored.

-z is mostly useless nowadays, in my experience.

Like I said, I would never use --delete on a backup for fear of losing the last copy of a file I accidentally deleted on my source drive. But if you want a 1-to-1 mirror, then yeah. Many subtleties to consider:

formatting link

Reply to
A. Dumas

-x will stop it crossing filesystem boundaries, thus excluding /run /proc /dev /sys (and others) automatically.

--

Chris Elvidge, England
Reply to
Chris Elvidge

rsync is run from a common script that is used to back up several systems which have differing partition schemes (two laptops, one PC running Fedora and a RaspberryPi running Raspbian), so the source filesystem is referenced as "$hostname:/". I think this excludes the possibility of using the -x option.

--
Martin    | martin at 
Gregorie  | gregorie dot org
Reply to
Martin Gregorie

OK

--

Chris Elvidge, England
Reply to
Chris Elvidge
