Rsync to Data Drive in Home Server

Apparently the path to the drive mounted under the OS drive is being treated as relative, since rsync created the backup folder on the root filesystem of the OS disk, not on the disk that is mounted under /run.

That is why I am stumped on this thing. If we were just syncing to the OS drive, we would already have this working. The issue, I think, is that we are trying to use a second ext4 drive that has no OS on it.

Sheila

What I think I have deduced thus far is that we get to the main drive with the ssh IP. We do NOT get to the secondary data drive that is mounted on the main drive.

Main drive: Fedora Linux btrfs
Secondary drive: Server Backups ext4

There has to be a way to tell rsync to use that secondary drive as the target.

Sheila

rsync is smart enough to know you intend to use ssh as the protocol… and you shouldn’t need to worry about the port if you’re using the default…

user@hostname:/path tells rsync you’re using ssh - not sure what other protocols it could use, I’ve only ever used SSH…

Yonks ago - I used to plonk an “-e ssh” in my rsync argument - but that’s redundant in this day and age…
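
For the one case where "-e" still matters, a nonstandard ssh port, a sketch (the port number is invented for illustration):

```shell
# Only needed when sshd is NOT listening on the default port 22:
rsync -aAXv -e 'ssh -p 2222' /home/user/Documents host:/backup/
```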

And another thing, on compression, if you’re doing this over gigabit ethernet - I don’t think on-the-fly compression would save you any time - it might even slow things down… So maybe you don’t need the “z”, maybe just “rsync -aAXv”? If it was going over the internet, or dial-up, or maybe shonky slow 3 Mbit wifi - compression might be useful… otherwise it probably just slows the whole thing down…

But the article said port 22 is the default. I am using 9090 since that was what Cockpit recommended for server control on my Linux Mint machine.

So I was successful in getting rsync to sync onto the root directory of the main drive. I have not been successful in accessing the mounted secondary drive.

Thoughts?

Sheila

Oh, you’re probably right. I just knew some files were large, but the last successful sync (to the wrong target) went in like 3 min for 1000 files.

Thanks,
Sheila

9090 is a web port that cockpit uses…

Surely you’re using port 22 to SSH to that Fedora machine right?

Use the same port for rsync - “rsync -aAXv $source host:$dest” and it will default to port 22…

I support Red Hat systems for my job - I hardly ever use cockpit (and the nagging message about it every time I connect can get annoying)…

And some of the better gigabit switches - actually compress the data as they’re crunching it anyway… So using “z” is redundant and probably unnecessarily slows the whole thing down :smiley:

Yep. I checked firewall rules and SSH is on 22.
So no need for the redundancy of -e.

As for Cockpit, you are right, I could do what I do in there from the CLI via ssh. I think it is just because I have never managed a machine strictly from the CLI without being on its desktop.

But we still have to solve the conundrum of how to write the target directory on the mounted drive. I have been banging my head over other options, but each idea ends with, "no, then that would create this issue," etc.

Thanks,
Sheila

OKAY. We are getting closer. I haven’t made the target shortcut yet, but ran:

myviolinsings@mint-desktop:~$ rsync -aAXv /home/myviolinsings/Documents 'sflanagan@192.168.1.157:/run/media/sflanagan/Server Backups/MSILinuxMint'
sflanagan@192.168.1.157's password: 
sending incremental file list
rsync: [generator] recv_generator: mkdir "/run/media/sflanagan/Server Backups/MSILinuxMint/Documents" failed: Permission denied (13)
*** Skipping any contents from this failed directory ***
Documents/

sent 159,956 bytes  received 713 bytes  5,100.60 bytes/sec
total size is 14,935,291,877  speedup is 92,956.90
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1338) [sender=3.2.7]

So now my error is PERMISSION DENIED? ROFL!

So now I do not have permission to create a directory called /Documents on that secondary drive? Am I correct in saying that it actually tried to create a new directory on that target?

Sheila

So I changed ownership and permissions for /dev/sda1:

root@fedora:/home/sflanagan# ls -l /dev/sda1
brwxrwx---. 1 sflanagan disk 8, 1 Feb 22 19:51 /dev/sda1

But I still get the same Permission Denied error.
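
Worth noting: the mode on the /dev/sda1 device node is separate from the permissions inside the mounted filesystem, and rsync's mkdir is checked against the mount point's own directory permissions. A sketch of the usual fix on the server (path taken from this thread), plus a small local demo that the mode lives on the directory itself:

```shell
# On the server, the fix would normally go on the mount point, e.g.:
#   sudo chown -R sflanagan: '/run/media/sflanagan/Server Backups'
#   sudo chmod -R u+rwX      '/run/media/sflanagan/Server Backups'
# Local demo: a scratch directory stands in for the mount point.
mnt=$(mktemp -d)
chmod 700 "$mnt"                 # owner-only access
before=$(stat -c '%a' "$mnt")
chmod 775 "$mnt"                 # let the owning group write too
after=$(stat -c '%a' "$mnt")
echo "$before -> $after"         # → 700 -> 775
rm -rf "$mnt"
```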

So I go to the secondary drive to MAKE the directory /Documents and I see all the files are there under
/run/media/sflanagan/Server Backups/MSILinuxMint.

That means it worked??? I will have to compare items, but the size (14.9 GB) is identical to the /Documents folder size on my LM machine (the source) and it is now sitting on the secondary drive under the correct folder!!

Sheesh. I’ll still have to figure out why it couldn’t “make the directory” /Documents under the folder it did target, but hey…

So to recap,
@nevj was right in that we needed quotes around the entire target.
@pdecker was right in that I needed permission to write files to that mounted drive.
@daniel.m.tripp was correct in removing needless parts of the CLI arguments and simplifying it for me.

Thanks to all of you.

Sheila

I think I would always do rsync as root… it gets rid of nagging permission issues. You don’t want backups hanging on trivia.

Wow seems like our collective thoughts broke the ice.

I am not sure why you are including the username in the rsync target.
I always just specify the hostname.
Perhaps it is something to do with the way you have set up ssh… rsync uses ssh.

Now that you have changed the directory name to get rid of the blanks, you do not need the quotes.


All the articles I read had me set up ssh that way, and all the articles I read for rsync used “user@remote” in the command.

This will take some research. Once I changed the label on the secondary disk “Server Backups” to remove the space, I saw the mount point name did not change, so I changed it back. So the quotes may have helped with that.
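
For reference, udisks derives the /run/media/<user>/<label> mount point from the filesystem label, so a new label only shows up after relabeling and remounting. A sketch, assuming the device name mentioned earlier in the thread:

```shell
sudo umount '/run/media/sflanagan/Server Backups'
sudo e2label /dev/sda1 ServerBackups    # new ext4 label, no space
# After remounting (or re-plugging), the automount point becomes:
#   /run/media/sflanagan/ServerBackups
```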

But I do not understand: we took all the spaces out of the “MSI Linux Mint” folder name, and that renamed folder is still there, empty; yet our successful sync dumped files/folders into an “MSI Linux Mint” folder with spaces. This despite me renaming my original folder to drop the spaces and not using spaces in the command. That needs looking at.

I checked my notes again and remembered that you omit the trailing slash when transferring the directory itself. I kept remembering that the slash was important.

I also wanted to note that while I did try to use Grsync for this, I could not see how to set up ssh. I found how to do smb, but substituting the ssh command and entering the target gave errors. So I found it easier to just use the CLI.

Maybe if @Rosika can help me out with that, I could use it in place of scheduling cron jobs, as it is my understanding that you save each sync as a named task with its set of arguments, and then you can schedule it to run.
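
If Grsync does not work out, plain cron can run the same rsync on a schedule. A sketch (the schedule and log path are invented; an unattended job also needs ssh key authentication in place, since cron cannot answer a password prompt):

```shell
# crontab -e on the client; hypothetical entry: nightly at 02:00
0 2 * * * rsync -aAX /home/myviolinsings/Documents 'sflanagan@192.168.1.157:/run/media/sflanagan/Server Backups/MSILinuxMint' >> /home/myviolinsings/rsync-backup.log 2>&1
```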

Now I need to do this for all the client machines and then on to the next task: getting the backup of these syncs as well as CZ images to the external HDD so that I am covered.

Thanks again,
Sheila

Indeed. That does not add up.
I think I would wipe the lot and start again. You need a clean slate.

In general, spaces in Linux filenames are a pain in the neck.
Use minus or underscore or dot or camelcase.

I had to do that about 10 to 15 years ago when using rsync to upload files to our CDN. We used Edgecast at the time. They’re still around under new ownership, Verizon as I recall.

If you ssh from one machine to another it assumes the same username you’re currently logged in as if you don’t specify. Maybe rsync would do the same thing.
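
Sketched with the usernames from this thread:

```shell
ssh 192.168.1.157              # attempts myviolinsings@192.168.1.157
ssh sflanagan@192.168.1.157    # explicit user: needed when the names differ
```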

Not sure if you’ve thought about just mounting the remote “share” locally on each machine you want to backup. That way you can rsync “locally” and it’ll end up on the server share. You could mount a folder called backup, for example, and rsync to it.

It does.

You mean use NFS?
The server would have to export the backup directories.
Yes, that would simplify rsync.
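
A minimal sketch of that setup (the export path, subnet, and mount point are assumptions for illustration):

```shell
# On the server: export the backup area (line in /etc/exports):
#   /srv/backups  192.168.1.0/24(rw,sync,no_subtree_check)
# then reload the export table:
#   sudo exportfs -ra

# On each client: mount it, then rsync "locally":
sudo mkdir -p /mnt/serverbackup
sudo mount -t nfs 192.168.1.157:/srv/backups /mnt/serverbackup
rsync -aAXv "$HOME/Documents" /mnt/serverbackup/MSILinuxMint/
```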

I thought that because my username is different on the server than on my other computers, I needed to add the user. Correct?

Are you talking about smb? I did set that up this week, although since I have never used it before, I am not sure I understood how it functions. After setting it up, my LM machine shows the shares as network locations by their name, “homes on fedora.local”. But on the other machines (Pop!_OS), you have to go to Other Locations, then Network, then the name of each computer, and then you see the folders on the share.

It might seem easier for some to set up a share that way, but if I need a document from a different machine, I just send it locally via an app (e.g., LocalSend, Warpinator), because it is rare that I do not have what I need. Plus it is my understanding that, much like with rsync, you can copy a file from the server to your current machine via the CLI or even ftp.

I may be thinking wrong, but my plan, in order to have 3-2-1 backups in place, was:
Use rsync to sync home folders on each computer to the server.
Use Timeshift on each machine, with snapshots on a separate drive from their individual OS disks.
Use Clonezilla for images (maybe weekly/monthly) of each machine, on that same separate drive as well as on the server.
All backups, images, etc. reside on the server as #2, so adding the ext HDD to back up the server is my #3.

I do not plan on cloud storage, as I found the Nextcloud setup way too complex to deal with. So my offsite storage is old-fashioned data discs made from the server and stored in my fireproof safe. I specifically bought one of those DVD writers for M-Discs and will be using that… when I get around to it.

The distros, installed apps, and such could always be redone. My only use for the M-Disc backup would be irreplaceable items like pictures, etc.

Still so much to do, but it will be worth it with everything under my control.

Thanks,
Sheila

Correct

NFS is a mile better than SMB.
Learn to use NFS… you can mount the server’s filesystem and it will look just like it is on your local machine.


Okay. I’ll look into that.

Thanks,
Sheila


Maybe. I hear that a lot, but I don’t know what it is that makes it better. More overhead with smb so the transfer speed is better with nfs? More flexible permissions? I’m not totally sold.

If you have a mixed environment, like the company where I work, it’s easier to configure the Linux machines to access an smb share than it is to configure the Windows machines to access an nfs share. At least that’s the route we went.

Here is what turned up when I did a search.

  • NFS is faster than SMB when using encryption.
  • SMB is generally faster than NFS, but it is less reliable.
  • NFS is better for transferring small and medium files over the network, while SMB is better for small files that need to be accessed quickly.
  • NFS is more appropriate for Linux users, while SMB is more appropriate for Windows users.

I could agree with the last one maybe. Have not tested the speed. Hard to test reliability.

In your all-linux world at home, it would be ideal.
