TimeMachine backups using a Pi5!

OK - most of the threads I could have necroposted in weren’t necessarily related to this.

I just plugged a 6 TB USB 3 drive into my Pi5 (headless server running arm64 Debian 12 "bookworm").

Formatted it as ext4… mounted it…

Created a couple of Samba shares on the 6 TB drive… started Samba…

I can’t browse them (Samba shares) from my Linux machines or my MacBook Pros…

BUT : Apple TimeMachine finds the main samba share! And I’m backing up to it!
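For anyone wanting to replicate it - a minimal Samba share definition that Time Machine will accept looks roughly like this (the share name and path here are my guesses - adjust to your own mount) :

[timemachine]
   path = /mnt/BUNGER00/timemachine
   read only = no
   vfs objects = catia fruit streams_xattr
   fruit:time machine = yes
   # optional cap so one Mac can't eat the whole 6 TB :
   fruit:time machine max size = 1T

The vfs_fruit bits are what mark the share as a Time Machine destination over SMB.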

I don’t really need SMB to back up Linux machines… NFS just “works” and I don’t need either NFS or SMB to do rsync backups…

I have another identical 6 TB USB3 HDD - but I can only find one power supply - so when the current one fills up, I’ll have to stop Samba and NFS, unmount the drive, plug in the other one, then format and mount it (same location - it should show up as /dev/sdb1)… and good to go…
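The swap itself should just be something like this (service names are Debian's, and the mountpoint is an assumption) :

sudo systemctl stop smbd nmbd nfs-kernel-server
sudo umount /mnt/BUNGER00
# power off, swap drives, power on - new disk should appear as /dev/sdb1
sudo mkfs.ext4 -L BUNGER01 /dev/sdb1
sudo mount /dev/sdb1 /mnt/BUNGER00
sudo systemctl start smbd nmbd nfs-kernel-server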

Dunno why I waited so long to do this…

And both my daughters can use it to backup their Macs too (youngest daughter has a MacBook and a Mac Mini)… I didn’t need to do any “UNIX” trickery on my MBP to get to the TimeMachine SMB share - it just found it and started backing up to it - plug and play…


Can’t Macs use NFS? It is, after all, Unix… Did they remove NFS?


I regularly mount the main (only) NFS share off my TrueNAS to my MacBook…

╭─x@methone.local ~/tmp
╰─➤  uname -a
Darwin methone.local 23.4.0 Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:41 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8103 arm64
╭─x@methone.local ~/tmp
╰─➤  history |grep sudo\ mount|grep BARGEARSE |tail -1
 7809  sudo mount -t nfs baphomet.local:/mnt/BARGEARSE /private/BARGEARSE
╭─x@methone.local ~/tmp
╰─➤  df -h /private/BARGEARSE
Filesystem                       Size    Used   Avail Capacity iused ifree %iused  Mounted on
baphomet.local:/mnt/BARGEARSE   9.8Ti   8.7Ti   1.1Ti    89%    724k  2.4G    0%   /private/BARGEARSE

However - “TimeMachine” will only use AFP (deprecated - nobody runs this anymore - Apple have ditched it in favour of SMB) or SMB… Don’t ask me why… I don’t know…

Anyway - it took 22 hours for a full backup… it was going over ethernet (I keep WiFi turned off on this Mac - 'cause sometimes Synergy tries to use WiFi and Synergy [kvm] has too much lag on wifi).

Discovered the bottleneck - the two Ethernet over Power adapters (NOT “PoE”) I’m using - while claiming to be 500 mbit - are only linking at 100 mbit half duplex (i.e. effectively ~50 mbit)… I need to investigate the cause of this. Ran a bunch of iperf3 tests…
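The iperf3 tests are nothing fancy - server on one end of the HomePlug link, client on the other (hostname is an example) :

iperf3 -s                      # on the Pi in the kitchen
iperf3 -c frambo.local         # on the office machine
iperf3 -c frambo.local -R      # same again, reverse direction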

It could be as simple as unplugging them and replugging them - or maybe one of the UTP cables I’m using is only CAT4… Note: swapped out the cable in the kitchen (where the router and Pi5 are located) - no difference… Damn! I used to be able to get 500 mbit to the kitchen and back over these “homeplug” Ethernet over Power devices…


OK - I’ve swapped out all ethernet cables and “home plug” (ethernet over power) devices (I have about 8 - including the original 100 mbit pair my satellite TV provider supplied way back when, because their device didn’t do WiFi) - and it made no difference… I reckon something’s changed with my household wiring that’s strangling the ability to go over 100 mbit… What changed?

Damn - if I ever get 100+ mbit internet - I’ll have to use WiFi - which I HATE using… I reckon I’ll run a UTP cable from the front of the house (where my home office is - and my NAS and network printer etc.) to the kitchen through the roof…

Note: I get gigabit between the two RPis I have in my kitchen (Pi4 and Pi5) next to my router… Whether using the 3 gigabit ethernet switch ports on the router - or an 8 port D-Link gigabit switch I keep in the kitchen (note : they’re several metres away from the “wet stuff” :smiley: )


Another thing I’m going to try - but it will mean losing power to EVERYTHING on my desk and my NAS - is pulling out the double adaptor I currently have holding a powerboard and the HomePlug ethernet device - plugging the HomePlug straight into the wall, and running the powerboard piggybacked off the back of the HomePlug.


OK - took the drastic step of powering off EVERYTHING in my office and work desk… re-jigged the HomePlug plug in my office…

Attempt 1 : NOTHING, NADA - not even 1 mbit - I got 0 mbit and 100% ICMP loss when pinging devices in the kitchen (I have a 16 port gigabit switch on my desk). So - power everything off again - DOH!
Attempt 2 : finally got connectivity to the kitchen - but - still at about 50-75 mbit…

So - I’m going to have to get a long piece of Cat6 and run it through the ceiling (and drill a couple holes in the ceiling)… This is ridiculous in this day and age…


Next stage in this “plan” is a shell script to back up this machine (now a Pi4 with 8 GB of RAM - headless Debian bookworm for arm64) to itself.

Note: contrary to the subject title “using a Pi5” - I’m now using a Pi4 (headless debian / raspbian 12).

Testing that now…

rsync (locally) to the 6 TB USB 3 ext4 filesystem…
then use tar to write that content to a tar.gz file…

Then put that into cron - how often? I think daily’s overkill… Maybe daily for a few days…

Once that’s sorted and working - modify that script to do remote backups to that Pi4… But I won’t start on that until I’ve got gigabit from the other side of the house to the kitchen… Payday’s still 12 days away (monthly pay sucks) - will order some 30 m Cat 6E cables and run a pair (i.e. one as a backup so I don’t have to do it again) through the ceiling and crawlspace from my office to the kitchen…

A gzip’d tar backup of my Pi4 (of itself) comes to less than 1 GB…

WARNING - there’s some “potty mouth” words below :

So - crontab on the pi4 :

# m h  dom mon dow   command
# running this daily in the short term - eventually weekly...
0 4 * * *  /usr/local/bin/bk2-rpi.bash > /dev/null 2>&1

Content of bk2-rpi.bash :

#!/usr/bin/env bash
# backup this RPi 
# use rsync to dump to /backup, then use tar to archive content of /backup to tgz file in /archive
# RUNTIME=$(date +'%Y%m%d-%H%M')
PROG=$(basename "$0")
BOX=$(uname -n)
RUNTIME=$(date +'%Y%m%d')
# EXL="/usr/local/bin/exl.txt"
EXL="/usr/local/bin/elx2.txt"
DEVIANT=/mnt/BUNGER00
RSYDIR=/mnt/BUNGER00/backups/$BOX
TURDIR=/mnt/BUNGER00/archives
# Returdation = 15 days : 
RETURD=15
if [ ! -d $DEVIANT ] ; then 
    echo "$DEVIANT doesn't exist and not making it.... exiting..."
    echo "maybe needs mounting perhaps????"
    exit 1
fi
[[ ! -d $RSYDIR ]] && mkdir -p $RSYDIR
[[ ! -d $TURDIR ]] && mkdir -p $TURDIR
# examples of what's excluded in $EXL :
# /backup
# /home/x/ResilioSync
# /proc
# /sys
# /dev
set -vx
rsync -av --exclude-from=$EXL / $RSYDIR/.
tar czvpf $TURDIR/bkup-$BOX-$RUNTIME.tgz $RSYDIR/*
chown backups:backups $TURDIR/bkup-$BOX-$RUNTIME.tgz
# clear the rsync staging area - ":?" aborts if RSYDIR is ever unset/empty
rm -Rf "${RSYDIR:?}"/*
# cleanup shite older'n 15 days...  or should we make it 30?
# find $TURDIR -type f -iname \*.tgz -ctime +$RETURD -exec rm {} \;

“returdation” is essentially “retention”…
I’ve left some commented-out foldernames in the script as examples of what’s excluded in $EXL.

Here’s $EXL (“/usr/local/bin/elx2.txt” - note: this is a symlink to a file in my homedir) :

/mnt/BUNGER00/backups
/mnt/BUNGER00/archive
/home/x/ResilioSync/Music
/home/x/ResilioSync/scripto
/home/x/ResilioSync/bigshit
/home/x/ResilioSync/xr3t
/home/x/ResilioSync/PHOTOS
/home/x/ResilioSync/e-book
/home/x/.cache
/home/x/.config
/home/x/.oh-my-zsh
/home/x/tmp
/proc
/sys
/dev
/mnt
/media
/tmp
/var/spool
/var/tmp
/boot

I think I will just modify that script to be “universal” - i.e. if it’s running on the backup server itself (“frambo”), do a local rsync; if it’s running elsewhere on a NIX-like operating system (i.e. Linux), then rsync to a host (i.e. frambo), and if necessary NFS mount from frambo to write the *.tgz file…
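Something like this, maybe (hostname and NFS mountpoint are assumptions) :

#!/usr/bin/env bash
# universal-ish backup target selection
BKSERVER=frambo
BOX=$(uname -n)
EXL="/usr/local/bin/elx2.txt"
if [ "$BOX" = "$BKSERVER" ] ; then
    RSYDIR=/mnt/BUNGER00/backups/$BOX      # local disk on the backup server
else
    RSYDIR=/mnt/frambo/backups/$BOX        # NFS mount from frambo
fi
rsync -av --exclude-from="$EXL" / "$RSYDIR"/.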

And I’ll have to modify that exclusion list for my two x86_64 systems, which have multiple gigs of games installed that I don’t really need to back up…

Some time ago I had a shell script (forget its name) to remove the archive .tgz files except for the earliest of each month, kept as the EOM (end of month) archive for the previous month… Can’t find it anywhere…

I’d like to keep a couple of months’ worth of weeklies… but blow away everything except the first-week-of-the-month archive from 2 months earlier…
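A rough sketch of what that sweep might look like (GNU date and bash 4 assumed, filenames as per the script above) :

#!/usr/bin/env bash
# keep everything from the last ~2 months; older than that,
# keep only the first (earliest) archive of each month, per box
TURDIR=/mnt/BUNGER00/archives
CUTOFF=$(date -d '2 months ago' +%Y%m%d)
declare -A kept
for f in "$TURDIR"/bkup-*.tgz ; do
    [ -e "$f" ] || continue                     # no archives yet
    base=${f##*/}                               # bkup-BOX-YYYYMMDD.tgz
    stamp=${base##*-} ; stamp=${stamp%.tgz}     # YYYYMMDD (assumes no "-" in hostnames)
    box=${base#bkup-} ; box=${box%-*}
    [ "$stamp" -ge "$CUTOFF" ] && continue      # recent - always keep
    key="$box-${stamp:0:6}"                     # per-box, per-month
    if [ -z "${kept[$key]}" ] ; then
        kept[$key]=1                            # earliest of the month - keep
    else
        rm -f "$f"
    fi
done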


CRAP!

Just noticed - tar is prepending the path :
/mnt/BUNGER00/archive

into the tar.gz file…

Somehow I need to strip that out… prefer not to have an interactive “cd /mnt/BUNGER00/archive” cmd in my script…

Bugger!


Update - changed :
tar czvpf $TURDIR/bkup-$BOX-$RUNTIME.tgz $RSYDIR/*
to
tar czvpf $TURDIR/bkup-$BOX-$RUNTIME.tgz -C $RSYDIR $RSYDIR/*
and will see how I go after 4:00 am tomorrow…


Didn’t we deal with that in that other topic?
I think the conclusion we arrived at was to cd to the directory to be backed up and use ‘.’ or ‘./’ as the tar source. That picks up the dot files and avoids prepending anything. Your $RSYDIR/* misses dot files


There are no “.” files in / …

It is grabbing dot files from subfolders…

I’ve changed

tar czvpf $TURDIR/bkup-$BOX-$RUNTIME.tgz -C $RSYDIR $RSYDIR/*

to

cd $RSYDIR ; tar czvpf $TURDIR/bkup-$BOX-$RUNTIME.tgz .

as “-C” didn’t seem to do anything ($RSYDIR/* still expanded to absolute paths, so the directory change made no difference to the names tar stored)… And yeah - it won’t grab “.xxx” files in the root of the folder (not that there are any) - so I’ve changed “*” to “.” - some implementations of tar I’ve used don’t like “.” - probably Solaris…


Just did a further test - wanted to ensure symlinks aren’t followed - thankfully rsync (with the -a switch I’m using) is smart enough to copy the link as a link, and not the object it points to… e.g. in my $HOME/Videos folder, I have symlinks to Movies and TVShows mounted via NFS from my NAS…
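Easy to convince yourself with a throwaway test :

mkdir -p /tmp/src /tmp/dst
ln -s /etc/hostname /tmp/src/hostname-link
rsync -av /tmp/src/ /tmp/dst/
ls -l /tmp/dst/hostname-link    # still a symlink - "-a" implies "-l" (copy links as links)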


OK - I’ve just wiped my Pi4 (4 GB model) that was running Ubuntu 22 - and replaced that with Debian 12 (Raspbian) arm64…

Intention is to use it as a Plex Media server. This will save me running the plugin/jail on my TrueNAS - it’s getting a bit long in the tooth (2011 dual core AMD Turion with 16 GB of ECC RAM and 4 x 4 TB in RAID5) to run additional plugins…

And the Pi4 wasn’t really doing much… I could probably run Plex alongside Samba on the 8 GB Pi4… But going to dedicate this 4 GB Pi4 to Plex…

That way my daughters can watch anything that’s stored on my NAS that the Plex server can index - and - I will probably use it too - e.g. I can start watching something on one device - then - resume watching at the same point - on another device.

I’ll still continue to play music from local (or remote NFS) storage (using Sayonara)…


Also just undertook a “seat of your pants” exercise on the Pi5 running Ubu 24.04 - got fed up with my main user’s UID being 1002 and GID 1002 - I prefer a UID:GID of 1000:1000 - but Ubuntu on Pi4/5 takes 1000 for itself (user “ubuntu”)…

I ran a bunch of usermod and groupmod commands to switch the “ubuntu” user to 2000:2000… But even though I wasn’t logged in as my user “x”, it wouldn’t let me change that one (usermod refuses if the user still has any running processes)… I was logged in directly as root - so I just used vipw and vigr, then chown in /home… rebooted…

But it worked… The reason for the whole exercise: my main user on my NAS is UID 1000 - and mismatched UIDs play havoc with NFS file permissions… good so far…
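For the record, the usermod/groupmod half goes roughly like this (as root, with the target user completely logged out - this is the bit that refused for “x” until I fell back to vipw/vigr) :

# move the stock "ubuntu" user out of the way first :
groupmod -g 2000 ubuntu
usermod -u 2000 -g 2000 ubuntu
chown -R 2000:2000 /home/ubuntu
# then reclaim 1000:1000 for "x" :
groupmod -g 1000 x
usermod -u 1000 -g 1000 x
chown -R 1000:1000 /home/x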


Phew - I realised I’d installed Raspbian Desktop edition - thankfully realised before I went too far - prefer to start with a clean slate… Only realised when I was doing the first apt update && apt upgrade - it was trying to update a bunch of desktop stuff I don’t need or want!
Flashing it again (512 GB T5 USB 3 SSD) now…


Just a further update - this Pi5 running bunty 24 is very snappy.

Just discovered Pi-Apps…

It can install and configure “Better Chromium” - i.e. Chromium that supports sync using your google account! I’m not too fussed about all the telemetry stuff google does / collects etc, just happy to have my browser sync… It also does stuff like apply a global dark theme across Chromium and installs widevine for streaming AV content (e.g. Netflix)…

And once you install it (via Pi-Apps) it suggests installing “More RAM” - truly - you can download and install more RAM :smiley: !!! - only kidding - but - it takes care of ALL the heavy lifting involved in setting up and configuring zram…

(it only concerns itself with armhf / armel / arm64 - on debian / ubuntu - so don’t look here for x86 stuff)…
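For comparison, the manual equivalent of what it sets up is roughly this handful of commands (algorithm and size are just examples - run as root) :

modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm   # must be set before disksize
echo 4G > /sys/block/zram0/disksize
mkswap /dev/zram0
swapon -p 100 /dev/zram0                      # higher priority than any disk swap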

Damn - I really should have grabbed some sort of snapshot of RAM use before, then after, I installed “More RAM” :smiley:


Update : I was getting less than satisfactory performance from Plex via the web interface (buffering, stalling, audio cutouts, etc.) running on a 4 GB Pi4 on Raspbian Bookworm (12).

Realised it was Plex trying to transcode stuff “on the fly” to render in a browser window - the poor Pi4 only has 4 cores, and while they’re adequate for most tasks - they’re just not cutting the mustard here…

Also - there’s an ethernet bottleneck - the Pi4 can only get ~50-70 mbit between itself and the NAS (via NFS) that the content is stored on… So having to cope with that - and inadequate CPU grunt - it was struggling… Might be able to get over that hump by speeding up ethernet between the kitchen and my office to gigabit… still haven’t gotten to the bottom of why my HomePlug gear is throttling me…

Play the content in a dedicated Plex player (like the snap available in Pop!_OS 22.04 for x86_64) - and it’s more than adequate… Tried an iOS (iPadOS) player called “MrMC” and again got more than acceptable performance and quality… I just need to find something similar for Ubuntu 24.04 on arm64 (the Plex snap doesn’t have an arm64 build that I’m aware of).

When push comes to shove - the content is also available via NFS and SMB from my NAS… But I like the convenience of having something like Netflix on my home network - I can start watching something on one computer - and - take up where I left off, on yet another computer - and - I have a LOT of computers :smiley:

Upshot: if I want stellar performance in a browser window - invest in better hardware to run Plex server…


So much for that - the plex-desktop snap on my x86_64 desktop running Pop!_OS was working yesterday - but crashed / coredumped today… no idea why… removed the snap, re-installed, and it continues to crash…

So I removed the snap…

And installed the flatpak and that’s working :

flatpak install flathub tv.plex.PlexDesktop

        ID                                        Branch           Op           Remote            Download
 1. [✓] org.freedesktop.Platform.Locale           23.08            i            flathub            17.9 kB / 371.6 MB
 2. [✓] org.freedesktop.Platform                  23.08            i            flathub           112.6 MB / 230.9 MB
 3. [✓] tv.plex.PlexDesktop                       stable           i            flathub           150.5 MB / 149.4 MB

flatpak run tv.plex.PlexDesktop

and that works - but for how long? I’ve no idea what / why the snap version was crashing… googled the error (for the snap) and got nothing returned…

╭─x@titan ~  
╰─➤  plex-desktop 
[7993:7993:1008/082952.379331:FATAL:credentials.cc(123)] Check failed: . : Permission denied (13)
Trace/breakpoint trap (core dumped)

But for now I’ll use the flatpak version…


Linux backups to Pi4 (was a Pi5) :
OK - turns out my script (above) works equally well* without modification - when NOT run as root user… So long as the target is mounted via NFS…

So - a Pi4 8 GB RAM running Raspbian Bookworm, booting off a 512 GB USB 3 SSD, and mounting a powered USB 3 6 TB HDD - the latter shared via SMB and NFS.

Just had to modify the rsync exclude list a bit further…

Not quite ready to try it on something with a 1 TB drive just yet (like my x86_64 Pop!_OS desktop or laptop)…

And of course - the one that has gigabit to the backup target (over NFS) is flying along… But not so much the Pi5 running Ubu 24.04, as it has the ethernet bottleneck of my dodgy HomePlug network (which USED to work at full 500 mbit!).

So - I’ve got the bk2-rpi.bash script running as my main “x” user on the Pi5 running Ubu 24.04 (taking a long time) - and also as my “x” user on the Pi4 running plexmediaserver (I added plexmediaserver to the exclude list - but I may end up un-excluding it - it’s currently about 6 GB, which isn’t huge, and hopefully will compress well).

* not quite - the last line, changing owner and perms, doesn’t work from remote (to the NFS mount)… I may need to investigate the root_squash / no_root_squash export options, I think…


you need to turn off root-squashing to get root access
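i.e. something like this in /etc/exports on the backup server (path and subnet are examples), then reload :

/mnt/BUNGER00  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

sudo exportfs -ra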


Not sure if I want that… I did actually try it - and my NFS server hung - and my NFS clients crapped out - and I ended up rebooting the Pi4 (the “backup server”)…

I’ll think of another way…

Decided my bk2-rpi.bash script won’t cut the mustard… it’s ONLY suited to a machine backing itself up to external storage…

So I’m going to write another script to run from cron to rsync over SSH to the backup server.

Run that from cron (on the client machine)…

Then, ~12 hours later - run a script on the backup server to tar and gzip those rsync’d folders…

This will use up a little bit more space… but I’ve got 6 TB… So I’ll just rsync to an existing folder (remote rsync over SSH) - that will save time, as it will effectively be “incremental” with the --ignore-existing rsync switch… And if/when I end up having to swap in my other 6 TB USB drive - I’ll have to build that again (i.e. populate the client folder with fresh rsync data).
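Roughly this, in two halves (hostnames, paths and schedule are all assumptions) :

#!/usr/bin/env bash
# client side - cron at 04:00 - push this box to the backup server over SSH
# (assumes the target folder already exists on frambo)
BOX=$(uname -n)
EXL="/usr/local/bin/elx2.txt"
rsync -av --ignore-existing --exclude-from="$EXL" -e ssh / "x@frambo:/mnt/BUNGER00/backups/$BOX/."

#!/usr/bin/env bash
# server side (frambo) - cron ~12 hours later - tar/gzip each client's tree
RUNTIME=$(date +'%Y%m%d')
for d in /mnt/BUNGER00/backups/* ; do
    BOX=$(basename "$d")
    ( cd "$d" && tar czpf "/mnt/BUNGER00/archives/bkup-$BOX-$RUNTIME.tgz" . )
done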

And there’s my backup solution in a nutshell :

  1. even though RAID is not a backup - I have my huge collection of data (music, videos etc) on RAID5 (RAIDZ1) on my NAS
  2. most of my documents and stuff like that exist on at least half a dozen different computers and sync’d using ResilioSync (which keeps 30 days of changes too) - my self-hosted cloud storage solution…
  3. things that need / should be backed up (like RPi servers) drop their stuff to my backup server.
  4. macOS TimeMachine backups go to a Samba share on the same server…