I’ve done full-on data migrations of some 30 TB in the “recent” past (pre-Covid)…
Did a number of benchmarks, and this came out fastest:
I think it worked out about twice as fast as rsync… or maybe 1/3 faster - it was a few years ago now.
Over NFS (one mounted NFS share to another mounted NFS share - on different storage platforms, but both mounted on the same host) there’s no point doing compression - it just adds overhead** :
Let’s say Source is /mnt/SHARE1 and Destination is /mnt/SHARE2
( cd $SOURCE ; tar cvpf - . ) | ( cd $DEST ; tar xvpf - )
(where " - " tells the first tar to write the archive to stdout and the second to read it from stdin - the pipe connects the two; using " . " rather than " * " also picks up dotfiles)
there’s only one other UNIX program I’d trust to preserve ALL file and directory attributes, “cpio” - but I haven’t used it in 20 years (note : it’s actually RPM packages that carry a cpio archive inside; Debian DEB files are “ar” archives containing tarballs).
To capture the delta - anything that changed during the migration - I used rsync, again without compression, e.g.
rsync -av $SOURCE/. $DEST/.
Your case may be different - e.g. if this is going over SSH and the network link is the bottleneck, compression may help. Previously, when copying gigabytes or terabytes of data over SSH (i.e. with the “scp” program), I’ve specified a “cheaper” cipher that costs less CPU to encrypt with, thus speeding up the transfer - I think I used the “blowfish” cipher (read the man pages for ssh and scp), as it had much lower overhead than the default.
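One caveat worth checking first: modern OpenSSH releases have dropped blowfish entirely, so list what your client actually supports before picking a cipher (the scp host and paths below are hypothetical examples):

```shell
# Show the ciphers this OpenSSH client supports - blowfish won't
# appear on a current release; the AES-CTR/GCM modes are fast picks.
ssh -Q cipher

# Then name one explicitly for the transfer (hypothetical host/path):
# scp -c aes128-ctr bigfile.tar user@otherhost:/mnt/SHARE2/
```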
I believe you can pipe the output of tar to SSH - but I haven’t done anything like that in over 10 years…
** per the last paragraph above, and the asterisks at the top - if you’re going “host to host” - then compression may in fact speed up the transfer, as the data is compressed locally and decompressed remotely - less data over your WiFi or ethernet.
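The tar-to-SSH pipe mentioned above can be sketched like this - the host name and remote path are hypothetical, and the local demo below just stands in for ssh with sh -c to show the mechanics:

```shell
# Over the network it would look something like
# ("otherhost" and both paths are hypothetical):
#   tar cpf - -C /mnt/SHARE1 . | ssh user@otherhost 'tar xpf - -C /mnt/SHARE2'
# On a slow link, add z (tar czpf / tar xzpf) to compress in transit.

# The same pipeline, demonstrated locally with sh -c in place of ssh:
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "payload" > "$SRC/data.txt"
tar cpf - -C "$SRC" . | sh -c "tar xpf - -C '$DST'"
```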
That is by far the best option.
You need to give the ethernet port a static IP address on each laptop.
If you want to use hostnames rather than IP addresses, add entries to /etc/hosts on each laptop.
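A sketch of what that could look like with the `ip` tool - the interface name and addresses are assumptions (check yours with `ip link`, and pick any unused private subnet):

```shell
# On laptop A (run as root; "eth0" and the subnet are examples):
#   ip addr add 192.168.50.1/24 dev eth0
#   ip link set eth0 up
# On laptop B:
#   ip addr add 192.168.50.2/24 dev eth0
#   ip link set eth0 up

# Then, optionally, in /etc/hosts on each laptop:
# 192.168.50.1   laptop-a
# 192.168.50.2   laptop-b
```

Note these `ip` settings don’t survive a reboot, which is fine for a one-off copy between two laptops.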
Then just use any of the copy software suggested by @daniel.m.tripp, or the filezilla option mentioned by @Tech_JA if you want gui.
I would drop the disk out of the first machine into an external disk box,
connect that through USB,
copy from the external drive,
then put the disk back.
But I have several external boxes, and dropping disks out is no big deal for me -
except if it’s an SSD, as I don’t have connectors for that.
Many thanks to you both. I had not heard about either NFS or SSH. I know very little about telecommunications. I now have some reading and testing to do. I like the idea @daniel.m.tripp had about using the tar command, and thanks @Tech_JA for the detailed instructions for installing SSH.
Connect the 2 PC with an Ethernet cable.
Thanks for the advice. I heard this can be done, but never tried it. Again, not knowing much about telecommunications, I will have to do some reading and testing with static IP address on the PC.
I apologize for my question. I know it’s not justified, but I read the post at 5:00am in a hurry before leaving for work.
My mistake, because when I talk about SSD, I always think of SATA SSD.
Thanks for the clarification. I know and work with the other formats, but, for example, I don’t use the terminology M.2 NVMe SSD, it’s my mistake.
My mistake too. When I think of laptops, I think of 2.5″ disks.
Thank you for the reminder that PC storage has come a long way over the last 35+ years.
My first home PC was an Atari 400 that only had cassette tape as storage. Then I thought I was really with the times when I got a 5 1/4″ external floppy drive to go with my system.
The copy of the files is finished. I will mark this thread closed by giving Paul (@callpaul.eu) credit for the solution. It was not that I did not want to learn something new, but telecommunications is hard for me to understand for some reason.
Sometimes when you want to get a task done, you go with what you know. Thanks for all the replies (ideas).
I’m glad to hear that you’ve already copied your files, but now you can calmly try copying, for example, a test file between two PCs with the rsync command mentioned by Daniel.
I’ll try it too, my friend.
Come on, let’s test it, shall we?