[SOLVED] {dd if=/dev/sdb of=/dev/sda &} duplicates terabyte drives with ease.

My life in code and photos recently approached half a terabyte, the limit of my USB external drive. Now, with an HD dashcam, the brink was in sight, so I purchased a 3TB drive on sale at Fry's for $80. I wanted a bit-for-bit duplicate of the old drive on the new one.

NO FEAR. My first concern was that the new drive is USB 3.0 and the old drive was 2.0. Not to worry: that just means the 2.0 drive will be the bottleneck. They are perfectly compatible in the eyes of the Pi and most other controller devices. USB 3.0 is an investment in the future; this 3TB drive is my first USB 3.0 device. It powers solely from the Pi 3+, and the Pi Zeros will be able to connect via an OTG USB cable (and my Android phone and dashcam dump) according to plan, just not as fast as the drive is able. Another concern was transfer software: what would make an identical copy? Several answers were researched via the Google query "bit for bit disk copy", later refined to "bit for bit disk copy in Linux command line". The simplest and most powerful choice was the Linux console command:

dd if=/dev/sdb of=/dev/sda &

See $> dd --help for details:

  dd - disk duplicate
  if - input file, or other source; in this case the unmounted b drive
  of - output file, or other destination; in this case the unmounted a drive

The drives were determined by order of attachment, and the names were determined by the command line 'fdisk -l'.
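A quick sanity check before launching, since reversing if= and of= would destroy the source (the output below is illustrative; actual sizes and names will vary):

$> fdisk -l | grep Disk
Disk /dev/sdb: 500.1 GB ...    <- old (source) drive
Disk /dev/sda: 3000.6 GB ...   <- new (target) drive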

Which suited me fine, as my Win PC is heavily burdened with other tasks, and my two Puppy Linux laptops are loaded with hundreds of SeaMonkey browser pages, which tend to spike the processor load, locking up keyboard and mouse and occasionally requiring a manual restart - too flaky for the immediate task at hand. However, there are two unburdened Raspberry Pi Zeros waiting for an assignment, and a Raspberry Pi 3+ running background cron scripts and finishing up an online class, 'Teaching Physical Computing with Raspberry Pi and Python' from the Raspberry Pi Foundation on FutureLearn.com. SSH in from the Puppy to the Pi3 was the best choice: its quad core is speedier than the Pi0's, and the 'dd' task runs in the background undisturbed at 20% CPU and 0.1% memory. It's merrily zooming along, dumping the old drive into the new as we speak.

References:
  Clone a Hard Drive Using an Ubuntu Live CD
  11.2 dd: Convert and copy a file
  How To Clone An Internal Linux Partition To An External USB Disk Using DD
  The Best Disk Cloning App for Linux
  g4u - Harddisk Image Cloning for PCs

What I learned: dd can be a powerful and potentially dangerous command when used carelessly.

Mistakes I made: The good news is that the minor mistake I made was quite unrelated to the disk duplication via SSH in a Puppy laptop console window. In the same window in which I launched the dd command, I wanted to see what had so far been written to the unmounted, unformatted three-terabyte external USB spinning hunk of iron. I should have launched another command console window that I could kill without affecting the dd transfer. I typed 'cat /dev/sda' and my window filled with ASCII interpretations of machine code. That answered my question - yes, something had now been transferred onto the unmounted, unformatted USB external drive - but it also answered quite loudly, as each BEL (0x07) byte rang the terminal bell, and there were plenty of them. I succeeded in annoying myself. Humorous enough, but I couldn't stop the display with Ctrl-C, Z or X, and I couldn't figure out how to interrupt it, so I killed the command console process window on the Puppy, which also halted the dd transfer. Ah well, live and learn.
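A safer way to peek at what has landed on the target, without spraying raw bytes at the terminal (a sketch; assumes hexdump is installed):

$> dd if=/dev/sda bs=512 count=4 2>/dev/null | hexdump -C | head

And if a terminal does get garbled by binary output, 'reset' will usually restore it.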

--
   All ladders in the Temple of the Forbidden Eye have thirteen steps. 
There are thirteen steps to the gallows, firing squad or any execution. 
  The first step is denial.                           Don't be bamboozled: 
        Secrets of the Temple of the Forbidden Eye revealed! 
           Indiana Jones™ Discovers The Jewel of Power! 
          visit ?(o=8> http://disneywizard.com/
Reply to
DisneyWizard the Fantasmic!

It will go much faster with bs=64k added to the arguments.
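That is, something along these lines (same devices as the original post; a sketch, not verified on the poster's setup):

dd if=/dev/sdb of=/dev/sda bs=64k &

Without bs=, dd defaults to 512-byte blocks, so every read and write is a separate tiny transfer; 64k blocks cut the per-call overhead dramatically.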

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/

Reply to
Ahem A Rivet's Shot

On Sat, 08 Apr 2017 19:46:53 -0700, "DisneyWizard the Fantasmic!" declaimed the following:

Since the RPi3 is still only USB 2.0 port-wise, data transfer speeds will be about the same -- barring the old drive being very inefficient.
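Back of the envelope (illustrative figures, not measured on this setup): USB 2.0 tops out at 480 Mbit/s, roughly 35 MB/s in practice, so copying 500 GB takes on the order of 500,000 MB / 35 MB/s ≈ 14,000 s, about four hours either way.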

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Reply to
Dennis Lee Bieber

Probably even faster with (a little) bigger blocks... and using raw devices (/dev/rsd{a,b}) may also help.

Reply to
Raymond Wiker

Not much faster with bigger blocks - they tend to get split into 64k blocks between the driver and the hardware, IME.

--
Steve O'Hara-Smith                          |   Directable Mirror Arrays 
C:>WIN                                      | A better way to focus the sun 
The computer obeys and wins.                |    licences available see 
You lose and Bill collects.                 |    http://www.sohara.org/
Reply to
Ahem A Rivet's Shot

ls -l /dev/rs*
ls: cannot access /dev/rs*: No such file or directory

In fact the raw devices are IIRC /dev/sd[abcd] etc. The partitions are /dev/sd[abcd][12345...]

--
Future generations will wonder in bemused amazement that the early  
twenty-first century's developed world went into hysterical panic over a  
globally average temperature increase of a few tenths of a degree, and,  
on the basis of gross exaggerations of highly uncertain computer  
projections combined into implausible chains of inference, proceeded to  
contemplate a rollback of the industrial age. 

Richard Lindzen
Reply to
The Natural Philosopher

For different devices on different interfaces I have found the opposite to be true. Large blocks need to be buffered and written to and retrieved from memory. Small ones may fit into faster cache.

--
/ \  Mail | -- No unannounced, large, binary attachments, please! --
Reply to
Axel Berger

In Linux, raw devices means the cache is bypassed; see 'man raw'. For this application I wouldn't bother.

--
http://www.greenend.org.uk/rjk/
Reply to
Richard Kettlewell

Ah, wasn't aware of that semantic. I've always used 'raw' to refer to 'unpartitioned' on block devices.

--
If you tell a lie big enough and keep repeating it, people will  
eventually come to believe it. The lie can be maintained only for such  
time as the State can shield the people from the political, economic  
and/or military consequences of the lie. It thus becomes vitally  
important for the State to use all of its powers to repress dissent, for  
the truth is the mortal enemy of the lie, and thus by extension, the  
truth is the greatest enemy of the State. 

Joseph Goebbels
Reply to
The Natural Philosopher

On Sunday 9 April 2017 at 04:47:48 UTC+2, DisneyWizard the Fantasmic! wrote:

also the most stupid choice in this case. dd will copy EVERYTHING, even empty space, and also the partition table etc., so the result will be a 3TB disk with a 500GB partition that you then have to expand using parted or similar.

Much better (and faster) in this case to mount both HDs and do a cp -a.
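A minimal sketch of that approach (mount points and partition names here are assumptions for illustration):

mount /dev/sdb1 /mnt/old      # source partition
mount /dev/sda1 /mnt/new      # freshly formatted target partition
cp -a /mnt/old/. /mnt/new/    # -a preserves permissions, ownership, timestamps, symlinks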

Bye Jack

Reply to
jack4747

The drive is nearly full, so there won't be much empty space. If the drive is reading one file at a time it will likely have to perform seeks in a number of fragmented files (the likelihood of this is increased by the fullness of the drive) while dd will read each track consecutively.

But by copying the whole disk rather than just individual files, you don't need to format the target drive before copying.

I doubt there's much in it - it's possible dd will be faster because of fragmentation.

Reply to
Rob Morley

yes, but you end up with a target that has a partition the same size as the source, even if the target is bigger...

dd-ing will preserve fragmentation, copying will remove fragmentation.

Another con of dd is that if the copy is interrupted for some reason, it needs to be restarted from the beginning, because the target will not be usable until the end. With cp at least the copied files are usable. A better solution would be rsync: if the copy is interrupted, it will resume without problem.
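Something like this, assuming the same illustrative mount points as above:

rsync -a --partial /mnt/old/ /mnt/new/

-a is archive mode (permissions, ownership, timestamps, symlinks); --partial keeps partially transferred files, so re-running the same command after an interruption only transfers what is missing.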

Bye Jack

Reply to
jack4747

That's easily fixed. Format before, or re-size after - looks like swings and roundabouts to me.
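For the resize-after route, something along these lines (a sketch, assuming a single ext4 partition; device names are illustrative):

parted /dev/sda resizepart 1 100%   # grow partition 1 to fill the disk
e2fsck -f /dev/sda1                 # check the filesystem before resizing
resize2fs /dev/sda1                 # grow the filesystem to fill the partition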

If fragmentation is a problem in use (not normally with Linux, but sometimes with Windows IME) you can just copy the fragmented files once you have all that lovely extra disk space.

They're copies - you already have the originals on the source drive, why would you need them on the target before the transfer is complete?

You can stop and resume dd quite easily - send SIGUSR1 to show progress, then kill dd. You resume by giving appropriate values to skip and seek using the record count produced by SIGUSR1. Or just send the signal occasionally and redirect output to a file so you have a record of a fallback point if the dd encounters some insurmountable problem like a host crashing.
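Sketched out (block size and the record count shown are illustrative; the real values come from dd's own output):

dd if=/dev/sdb of=/dev/sda bs=64k &
kill -USR1 %1     # dd reports progress, e.g. '123456+0 records in / 123456+0 records out'
kill %1           # stop the copy
dd if=/dev/sdb of=/dev/sda bs=64k skip=123456 seek=123456   # resume at that record

skip= passes over blocks already read from the input and seek= over blocks already written to the output, so use the records-out count with the same bs= as the interrupted run; re-copying a few blocks is harmless, skipping too far is not.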

Reply to
Rob Morley

On Wed, 19 Apr 2017 14:32:42 +0100, Rob Morley declaimed the following:

OTOH: future use of the destination drive may be faster, because the file copy command may serve to defragment it.

--
	Wulfraed                 Dennis Lee Bieber         AF6VN 
    wlfraed@ix.netcom.com    HTTP://wlfraed.home.netcom.com/
Reply to
Dennis Lee Bieber

Linux doesn't suffer significant degradation from fragmentation if you avoid MS-DOS style partition types.

--
The New Left are the people they warned you about.
Reply to
The Natural Philosopher

Innocent question:

Then why is this in the man for mount?

Mount options for btrfs

autodefrag
    Disable/enable auto defragmentation. Auto defragmentation detects
    small random writes into files and queues them up for the defrag
    process. Works best for small files; not well-suited for large
    database workloads.
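In use it's just a mount option, along these lines (device and mount point assumed for illustration):

mount -o autodefrag /dev/sda1 /mnt/data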

Been looking to set up a RAID 1 to keep a pair of 4TB drives sync'd up.
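If you go the btrfs route, it can do the mirroring itself; a sketch, with device names assumed:

mkfs.btrfs -m raid1 -d raid1 /dev/sda /dev/sdb   # mirror both metadata and data
mount /dev/sda /mnt/raid                         # either device mounts the pair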

Reply to
Sidney_Kotic

I've seen this in action. I like to dd an uninstalled Raspbian... just in case. dd an 8GB microSD into an image on the desktop computer, which ends up being 8GB in size, although compression usually works well on them. dd that 8GB image to a 32GB card and end up with 24GB of unallocated space.

So basically, in order to simply use dd, the drives should be the same size; otherwise there's more involved.
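The back-and-forth looks something like this (the device name is an assumption; check with lsblk first):

dd if=/dev/mmcblk0 of=raspbian-backup.img bs=4M               # card -> image file
gzip raspbian-backup.img                                      # images compress well
gunzip -c raspbian-backup.img.gz | dd of=/dev/mmcblk0 bs=4M   # image -> card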

Reply to
Sidney_Kotic

Maybe BTRFS is a clone of Microsoft?

I bet it doesn't say that with ext2, 3 or 4.

--
You can get much farther with a kind word and a gun than you can with a  
kind word alone. 

Al Capone
Reply to
The Natural Philosopher

Fragmentation of the files isn't the issue. If the smaller drive is nearly full and/or contains lots of small files, using cp -a will be far slower. For each file copied to the disc, the directory, allocation structures and journal logs must be updated so it maintains a valid filing system at all times. This will cause far more writing to the disc than the simple one block written for each block read that dd performs.

---druck

Reply to
druck

I forgot to add... I pretty much use rsync. Not the fastest thing, but it does work and cares not one whit about drive/microSD size. I run openSUSE on a couple of computers and back up my home directory, plus a couple more, with a nightly cron job onto a 16GB USB thumb drive. I actually use two 16s and rotate them daily; it's been known to save my bacon.
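That sort of job boils down to a single crontab line; a sketch, with the path and time as assumptions:

# min hour dom mon dow  command -- run at 02:30 every night; --delete keeps the stick an exact mirror
30 2 * * * rsync -a --delete /home/ /mnt/backupstick/home/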

Reply to
Sidney_Kotic
