Failure to clone SD card with compression

[...]

You miss the point entirely. Once gzipped, if the file THEN becomes corrupted, you CANNOT RECOVER IT. The decompression algorithm cannot handle errors. The ENTIRE file is lost, rather than just a small portion of it (perhaps a single byte).

We're not talking about compressing already corrupted data - we're talking about the ability to recover corrupted compressed data.
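For anyone who wants to see it for themselves, here's a minimal Python sketch of that failure mode (the data and the corruption point are made up): flip one byte in a gzip stream and decompression typically fails outright, handing back nothing at all.

import gzip

original = b"important data " * 10000            # stand-in for a disc image
archive = bytearray(gzip.compress(original))

archive[len(archive) // 2] ^= 0xFF               # flip a single byte mid-stream

try:
    gzip.decompress(bytes(archive))
except Exception as exc:                         # zlib.error or gzip.BadGzipFile
    print("whole archive unreadable:", exc)      # no partial data is returned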

Reply to
CPMDude

But as already mentioned, it is often better to have a lost file and not be able to recover it than to have a file with a bit error and not know where it is (or that it even exists).

Especially when talking about images, program files, etc. (rather than a dissertation typed in plain text).

Reply to
Rob

Indeed.

Reply to
Russell Gadd

But if you use a block-based compressor, it will identify the block that is corrupted but recover everything else correctly. That can be the difference between recovering a particular file which only exists in that backup and losing it forever.
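A rough Python sketch of the idea (not the actual tool; the file names and the 1 MiB block size are invented): each block is compressed on its own behind a length prefix, so a damaged block is skipped and every other block still comes back.

import struct
import zlib

BLOCK = 1024 * 1024                               # 1 MiB per independent block (assumed)

def compress_blocks(src_path, dst_path):
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(BLOCK)
            if not block:
                break
            packed = zlib.compress(block)
            dst.write(struct.pack("<I", len(packed)))   # length prefix for resync
            dst.write(packed)

def decompress_blocks(src_path, dst_path):
    lost = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            header = src.read(4)
            if len(header) < 4:
                break
            (size,) = struct.unpack("<I", header)
            packed = src.read(size)
            try:
                dst.write(zlib.decompress(packed))      # zlib's checksum catches damage
            except zlib.error:
                lost += 1                               # only this block is unrecoverable
    return lost                                         # (a damaged length prefix costs more)

Real block compressors such as bzip2 do this with proper framing, which is why bzip2recover can pull the undamaged blocks out of a broken archive.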

---druck

Reply to
druck

Yes it can, and it would be an object lesson in never allowing there to be only a single copy of data that matters.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

If you have one or fewer backup copies of any file you consider valuable, you probably haven't been around computers long enough to be properly paranoid.

I have two offline uncompressed copies of everything on my computers, made with rsync for speed. These backups are made weekly in sequence, so that even if a nearby lightning strike or house fire should destroy the computers and the backup that's in progress, last week's backup is still safe.
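Not the actual script, but a rough sketch of that rotation in Python (mount points invented; the source list is roughly the set of directories mentioned further down): pick the disc by week number, so the copy not being written is never at risk.

import datetime
import subprocess

SOURCES = ["/home", "/etc", "/usr/local", "/opt"]   # assumed backup set
TARGETS = ["/mnt/backup-a", "/mnt/backup-b"]        # the two offline discs (invented paths)

def weekly_backup():
    # alternate discs by ISO week number, so the disc not being written
    # survives anything that takes out the machine and the backup in progress
    target = TARGETS[datetime.date.today().isocalendar()[1] % 2]
    subprocess.run(["rsync", "-a", "--delete", *SOURCES, target], check=True)

if __name__ == "__main__":
    weekly_backup()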

In addition, anything that I'm working on (development projects, web pages) has additional copies. I use source control (cvs or git depending on the project) to keep an immediate, up-to-date copy on my house server. This server's content is also partially protected by an overnight compressed backup, but this is more for recovery from finger trouble than for disaster recovery, because it's always online and so subject to destruction by fires or mains spikes. My external website content also exists both as a local copy on the house server and on my web host's system, so it can be retrieved from the web host as well.

Although the offline backups include the content of /bin, /usr and /var, this is really overkill: you only need to back up /home[*], customised files in /etc, and anything in /usr/local and /opt, because everything else will be recovered by a clean reinstall of your Linux distro.

[*] I've deliberately excluded the contents of /var although a few packages, e.g. PostgreSQL, may put their data in /var. However, this is easily fixed by moving those directory structures to /home and replacing them with symlinks so they get backed up along with the rest of /home. The only downside to doing this is that SELinux may force you to retag those files and directories and change the SELinux configuration to suit.
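The move itself is only a couple of operations; a minimal sketch in Python, assuming the common PostgreSQL paths (check your distro's, stop the service first, and run as root):

import os
import shutil

OLD = "/var/lib/postgresql"      # assumed data directory - varies by distro
NEW = "/home/postgresql"         # now covered by the regular /home backups

shutil.move(OLD, NEW)            # relocate the whole directory tree
os.symlink(NEW, OLD)             # the old path keeps working via the symlink
# Under SELinux the moved files will likely need retagging (restorecon and/or
# a semanage fcontext rule) before the service will start again; if /var and
# /home are separate filesystems, double-check ownership after the move.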
--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie

+1
--
Outside of a dog, a book is a man's best friend. Inside of a dog it's  
too dark to read. 

Groucho Marx
Reply to
The Natural Philosopher

I wonder how many users of cloud storage fail to keep a local backup? I recall reading about an NZ-based cloud storage company that fell foul of the US government, resulting in the legitimate users losing access to their data for a long time - around a year?

--
Alan Adams, from Northamptonshire 
alan@adamshome.org.uk 
http://www.nckc.org.uk/
Reply to
Alan Adams

Yep, for me keeping my data *starts* with ZFS RAIDZ2 with a decent snapshot schedule. Finger trouble - pull from a snapshot (I have several months to get round to it). Dead drive - replace it ASAP while there's still one drive of redundancy. Silent corruption - ZFS block checksums catch and repair it. Fading data - detected by the weekly scrub; if needs be, treat as a dead drive. Disaster - offsite copy or recover from the original source. (Yes, double redundant RAID, history and block checksums are not enough protection - usually they are, but not always.)
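The snapshot and scrub parts are only a couple of commands each; a bare-bones sketch (pool and dataset names invented, the real schedule is rather more elaborate):

import datetime
import subprocess

POOL = "tank"                    # assumed RAIDZ2 pool name
DATASET = "tank/data"            # assumed dataset

def daily_snapshot():
    stamp = datetime.date.today().isoformat()
    subprocess.run(["zfs", "snapshot", f"{DATASET}@auto-{stamp}"], check=True)

def weekly_scrub():
    # the scrub reads every block, verifies its checksum and repairs from
    # redundancy where it can - this is what catches quietly fading data
    subprocess.run(["zpool", "scrub", POOL], check=True)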

I find git really handy for config files: just init a git repository in / and check in every changed config file as soon as you know you want to keep it. Clone each of the repositories to an archive spot (on redundant storage) and run a regular fetch on each one.
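A rough sketch of the check-in half of that (assuming the repository already exists in /, only /etc is being watched, and the archive clone's path is made up):

import subprocess

def commit_config_changes():
    # stage and commit changes to files already tracked under /etc
    # (new files still get added by hand, once you know you want to keep them)
    changed = subprocess.run(
        ["git", "-C", "/", "status", "--porcelain", "-uno", "--", "etc"],
        capture_output=True, text=True, check=True,
    ).stdout
    if changed.strip():
        subprocess.run(["git", "-C", "/", "add", "-u", "etc"], check=True)
        subprocess.run(["git", "-C", "/", "commit", "-m", "config update"], check=True)

def fetch_into_archive(archive="/srv/archive/rootconfig"):
    # run against the clone on redundant storage; its origin is the live repo
    subprocess.run(["git", "-C", archive, "fetch", "origin"], check=True)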

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

If you mean MegaUpload, it's not clear that even the users who weren't copyright violators have ever regained access to their data.

However, similar bad things have happened to users of other clouds. These were users who assumed that their data was replicated to other parts of the cloud to provide automatic failover and/or backup and found out the hard way that neither had been done and that their data had been lost.

--
martin@   | Martin Gregorie 
gregorie. | Essex, UK 
org       |
Reply to
Martin Gregorie
