Rsync to Data Drive in Home Server

or you can have both?

I’ve done it before - share the same folder from a UNIX / Linux machine over BOTH NFS and SMB…

I do it on my NAS too - TrueNAS - same data visible via SMB or NFS…

I vastly prefer NFS to SMB, but sometimes SMB is easier for Macs (and I’ve never found a decent NFS client for Android or iOS).

Same folder in /etc/exports and /etc/samba/smb.conf
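As a sketch of what that looks like, assuming the shared directory is /srv/shared on a 192.168.1.0/24 home LAN (both are placeholders, not anyone’s actual setup), the two config entries point at the same path:

```
# /etc/exports (NFS) -- path and network are examples
/srv/shared  192.168.1.0/24(rw,sync,no_subtree_check)

# /etc/samba/smb.conf (SMB) -- same directory, advertised as a share
[shared]
   path = /srv/shared
   read only = no
```

After editing, NFS picks up changes with `exportfs -ra` and Samba with `systemctl restart smbd` (or `smb` on Fedora).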


So if I understand the brief reading I have done, instead of having to rsync the /home directories/subfolders from each of 4 machines, I would set up an NFS system on the Fedora server and export all the data from all /home directories there, where they would then reside?

Then I could get rid of all data in home directories (except dot folders/files on each), which would save disk space & time, not having to rsync each machine’s personal files?

This sounds like a real benefit. So instead of accessing the preset /home directories on each computer for, say, Documents, would there be a folder under Network locations for that? Do you keep the file structure the same, e.g.
/home/username/Documents (Music, etc.)? (I would not want to be looking for an image amongst documents.)

Speed doesn’t seem to be an issue on my gigabit home network. I certainly did not see much lag with SMB.

I would keep downloads on each separate machine.

The only issue I see is:
Fedora OS drive (server) is 500 GB: would I store the personal files on this and assume I would not run out of space? I probably have 300 GB of personal files that would reside on NFS.

Or would I have it on the secondary data drive (2 TB)? Remember that the secondary drive is set up to be the 1st backup of the primary drive. If I use it for NFS, then backup would only reside on ext HDD.

But I do have 500 GB of ProtonDrive cloud space which I am not using because they haven’t made an app for Linux yet. So you have to upload anything you want there manually.

By having everything in one location, I could technically do that from the server, as uploading a lot of data from there would not affect working on the other machines. But then I would have to be keeping track of which files changed to know what to upload.

I have been reading about using rclone for this (when I have time) as I do not know how long it will take them to get the Linux app–they just now released a Windows app after offering ProtonDrive to paid users for almost a year.
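For what it’s worth, recent rclone versions (1.64+) include an experimental Proton Drive backend. A heavily hedged sketch, assuming rclone is installed and a remote named “proton” has already been created with `rclone config` (the remote name and paths here are placeholders):

```shell
# Only attempt the sync if rclone exists AND a remote called "proton"
# has actually been configured -- both are assumptions, not a given setup.
if command -v rclone >/dev/null 2>&1 \
   && rclone listremotes 2>/dev/null | grep -q '^proton:'; then
  # Mirror the local Documents folder up to Proton Drive
  rclone sync "$HOME/Documents" proton:Documents --progress
fi
```

rclone’s `sync` makes the destination match the source, so it also solves the “which files changed?” bookkeeping problem.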

So after setting all of this up, I would now rsync from server primary drive to secondary drive and rsync secondary drive to ext HDD.

It’s a lot to take in, but I think it only makes sense.

Thanks,
Sheila

In setting up the NFS, I read:
“Before you start installing the NFS Kernel on your Linux, be mindful that you must have a static IP address so that your clients can find and get associated with your server.”

I have No-IP set up on my internet gateway just for this reason. I have had no issues with ssh to the home server. But is this a different setup? Would I still use the same IP address for NFS?

One guide mentioned setting a hostname. Is that what @daniel.m.tripp was referring to in simplifying connections? If I create a hostname in the rsync config, do I use that same hostname in NFS? Does using a hostname mean not having to type out that long target directory?

Also, does it matter which drive I put the NFS share on? How does that affect the mount point? I have seen some guides using /media and others using /mnt/nfsshare. Since the secondary drive is mounted under /run/media, if I decide to store the data on the secondary drive, do I just add a folder on that drive called “NFS” and from there use subdirectories like NFS/Documents, Music, Pictures, etc.?

Keep in mind that is going to be one long target directory:
/run/media/sflanagan/Server Backups/NFS
And for NFS I do not need the prior designation of sflanagan@192.168.1.157 added in front of the above?

Thanks,
Sheila

You can certainly do that… keep all data on the server only, and access it from the other machines by exporting it from the server to each machine. That is a typical use of NFS.

But I thought you were using rsync to the server as a backup? Did I get that wrong? Were you just trying to populate the server with data?

So, if I got that wrong, the only disadvantage to keeping the primary copy of all data on the server is that the server has to be running all the time. Is that OK?

And how do you back up the server? That is now critical. You discuss using the cloud… it is either that or an external HDD on the server.

Also, NFS will work locally, but I don’t think it will work over the internet when you are away from home… check on that? So do you need local copies of data on the laptop you use remotely? You might have to do that and rsync it to the server when you get home. Did you not have some setup for accessing the server remotely? Remmina or something?

@nevj maybe the part about “exporting it from server to each machine” eludes me.

I thought NFS was a central location (on the server, which is always on) where all files reside. To access them from individual machines, I simply navigate to that nfs-server and open a document from there on my local machine. Is that not right?

Why are we exporting the stuff from the NFS server to my local machine? I already have them there and thought this was a way to move them to the server machine where they would reside.

Sheila

And additionally, I can upload to the ProtonDrive for accessing files remotely.

Sheila

I have Anydesk where I can access the desktops of all home machines and transfer files, if needed. But as I said, might be easier to just upload pertinent files to the ProtonDrive and download from there when away from home.

Sheila

@Sheila_Flanagan I think there is some confusion with terms.

Exporting the share, in NFS terms, is like advertising the share. You export the share so other computers can mount it.

My thought was that you have all data locally on each machine in the home directory. Then you would use a local directory to mount a share on the server for backup purposes.

You work locally like “normal”. Then when you’d like to back up, you rsync from your home folder to the directory you mounted. It looks local from the workstation but really lives on the server.

Does that make sense?

Okay…whew, I thought we were uploading to this server only to export the stuff back to local machines. LOL

Hmmm. So I am only mounting the data on my local machine to that NFS server where it is centrally “located”? I understand a central location for ease of access from multiple computers. What I do not seem to understand is why one article mentioned getting rid of /home directories on the client computer since the data will be on the server for NFS.

Maybe you can explain the benefit of keeping everything on my local machines, yet accessing it from an NFS server located on the home server?

See that was my next step in the process of setting up the NFS on the server: how to get the items from my client machines onto the NFS server which is on the Fedora home server machine.

Does that make sense? LOL

Sheila

It does make sense.

Like @nevj said, it originally sounded like you wanted to use the server as a backup.

  1. You can keep your data locally, like you would without a server, and then back it up to the server to a “backup” share you have mounted on each computer.

  2. You can keep your data on the server and access it using an NFS or SMB share mounted on each computer. Then you’d need to have a backup for that data somewhere since it only lives on the server.

The earlier posts did say what your backup strategy was, but I forget. I was just thinking:

  1. local data
  2. backup to server
  3. then back that up to an external drive or offsite

There would be advantages to mounting your home directory on each computer and having it live on the server. It’d be available no matter which computer you were using. You could arrange to access it remotely too. Although I think I’d rather use VPN to connect to my home network and then access it.

I did. But that was before I knew about NFS.

This is what I understood and would like to do.

Without NFS mounted on server, I would need to backup (rsync) each machine to the server. That was my initial intent.

But if we use the server to house all the documents, images, etc., then it is my understanding: whether editing or creating a new file, it will be saved on the server.

So backup then would be:

server on primary OS is backed up to secondary drive (timeshift & rsync)

secondary drive is backed up to ext HDD (rsync)

local machines would only then need timeshift snapshots (including dot files in /home) & occasional Clonezilla images saved on their own secondary drives.

This would eliminate needing to rsync each individual computer to the server just for /home directories backup.

If I understand correctly, then only the server needs backing up (via 2nd internal drive & ext HDD)

Plus, I can upload all of /documents and /pictures to the cloud drive, which means I can download onto whatever machine I am using via web browser.

Additionally, I do have remote desktop to the server as well as each machine in Anydesk (well NoMachine for Fedora server as I could not get Anydesk to work with Wayland).

This I have yet to tackle. Along with the cloud drive, I also have a VPN from Proton; my current ExpressVPN subscription expires in May, and I will not be renewing since Proton VPN is included in my annual subscription. I will figure out how to use that later, as I am not worried about the remote experience yet.

Thanks,
Sheila

Yes

To access them from individual machines, I simply navigate to that nfs-server and open a document from there on my local machine. Is that not right?

No.
Exporting gives permission for a directory on the server to be mounted from another machine.
To access files in that exported directory from another machine, you mount that exported directory to a mount point on the other machine.
You can then see all its files in the mount point. You don’t go to the server at all; you can open the files locally.
The mount looks like this:

mount -t nfs servername:directoryname localmountpoint

or you can put an entry in /etc/fstab and have it mount every time you boot.
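A hedged example of such an /etc/fstab line, where the server name, export path, and mount point are all placeholders:

```
# example /etc/fstab entry -- substitute your own server, export and mount point
fedora.local:/srv/nfs/home  /mnt/nfs  nfs  defaults,_netdev  0  0
```

The `_netdev` option tells systemd to wait for the network before attempting the mount.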

Exporting does not move stuff. It just gives permission to mount.

You would move them to the server once. Then erase the local copy, and access them via the mount. It would be like having the server’s disk on your local machine… it would do away with having to rsync things regularly.

But I don’t think NFS would work over the internet, so when away from home I think you would have to use rsync remotely from the laptop.


@Sheila_Flanagan

Looks like NFS will work over the internet, but it is not recommended because of security issues.

But sshfs is mentioned, and that looks interesting.

Maybe NFS over a VPN would be OK?

Okay. Now I understand the local network setup. And I don’t need remote access since I have a cloud drive I intend to upload templates and docs for business. I can also remote into any of my machines, including the Server, from RDP apps.

So I will proceed with setting up the NFS. THEN I will have to rework the rsync we used in this thread to back up that NFS server and everything else on the OS drive to the secondary drive on the server, as well as the rsync from secondary drive to ext HDD.

Once I get the ext HDD cleaned up and ready for use, I will mount it on the server, and hopefully my backups will all be good.

I assume I can use rsync for the Clonezilla images stored on local machines to have their backups on the server?

Thanks,
Sheila

That is the article I found from your prior discussion thread, so I was using it. Thanks.

As I told @pdecker, I do have access to a VPN, but that will have to wait–lower priority–since I can use other methods to access files when I am away from home.

Thanks,
Sheila

You get it now

Install the NFS client and NFS server on the server machine. Install the NFS client on the other machines.
Make sure rpcbind is running on all machines.
Set up /etc/exports on the server.
Do exportfs -a.
Check with showmount -e on the server.
Do the mount in each client.
You are there.

You can actually check the availability of mounts from a client machine with showmount -e servername.
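Those steps translate roughly to the commands below, assuming a Fedora server and Fedora clients, with an example export path of /srv/nfs and an example subnet (every name here is a placeholder; run as root):

```shell
# --- on the server (paths and subnet are examples) ---
dnf install nfs-utils
echo '/srv/nfs  192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
systemctl enable --now rpcbind nfs-server
exportfs -a            # publish everything listed in /etc/exports
showmount -e           # verify the export list locally

# --- on each client ---
dnf install nfs-utils
showmount -e fedora.local            # check what the server is offering
mkdir -p /mnt/nfs
mount -t nfs fedora.local:/srv/nfs /mnt/nfs
```

On Debian/Ubuntu clients the package is nfs-common instead of nfs-utils.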

Have fun


I think so.
Why bother? They get out of date soon.
Make some new Clonezilla backups where you want them… Clonezilla should be able to write them just about anywhere.

Well that settles that then. I will send the image to the server. Guess you have to manually delete older ones after newer ones accumulate.

Sheila

Yes, Clonezilla images have a use-by date. Because your OS changes, you need to keep making new ones. I do it every couple of months.
Just use rm -r imagename, but be careful what you delete.
I keep no more than 3.
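One way to automate the “keep no more than 3” rule is a small helper like this sketch; the image directory layout (one subdirectory per Clonezilla image) and the path are assumptions:

```shell
# prune_images DIR [KEEP] -- keep the KEEP newest subdirectories of DIR
# (by modification time) and remove the rest. Hypothetical helper; check
# the directory argument carefully before using it on real backups.
prune_images() {
  dir=$1
  keep=${2:-3}
  # list subdirectories newest-first, skip the first $keep, delete the rest
  ( cd "$dir" && ls -1dt -- */ 2>/dev/null \
      | tail -n +$((keep + 1)) \
      | xargs -r rm -r -- )
}

# Example (path is illustrative):
# prune_images ~/clonezilla-images 3
```

Because it sorts by mtime, the oldest images are the ones removed, matching the “delete older ones after newer ones accumulate” workflow.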

This isn’t an internet IP address - it’s on your LAN / ethernet / WiFi. Most modern distros support avahi / bonjour / zeroconf, so you should be able to reach the NFS server machine via hostname.local. Try pinging it - e.g. if your Fedora machine’s hostname is “fedora”, then “ping fedora.local”. [Note: I’ve found avahi a bit dodgy on RPM-based distros like Red Hat EL and Oracle EL - I haven’t tried Fedora.]

Another option - create an entry for it in your client machine’s /etc/hosts file.
e.g. /etc/hosts :
192.168.1.157 fedora fedora.local

I’ve found that doesn’t always work - e.g. my Pop!_OS desktop has an entry in /etc/fstab to NFS mount my NAS share, but it doesn’t mount at boot. I have to do it manually after I boot up (which doesn’t bother me particularly - I hardly ever shutdown / reboot anyway).

I just tried it again on my Pop!_OS desktop with an NFS mount:

baphomet.local:/mnt/BARGEARSE /mnt/BARGEARSE nfs rw,hard,intr,rsize=8192,wsize=8192,timeo=14 0 0

And I can mount and unmount on the fly - let’s see if it works next time I boot up…
