100%
No, I didn’t - if anything, I further decentralized…
The “main” sync, which goes to everything that can run a client, is mostly shell scripts and mostly text. Some of the computers running sync clients have as little as 16 GB of storage (and I managed to fill one of them up once, when I forgot to disable the default setting of “keep deleted files”)… There are 14 devices… The default behaviour on portable devices (Android + iOS) is “selective sync” (a pro feature, available for free on portable devices) - i.e. it only writes a file if you actually ask for it (and bad luck if you’re not on the same WiFi / LAN - e.g. on a bus or a train). The rest of the time it’s just a pointer or placeholder (I haven’t browsed the folder tree to see what those look like).
And my main issue, one I’ve caused several times over the 5 years I’ve been using RSL as my self-hosted cloud solution, harks back to what @Akito often refers to: the sheer DODGYness and kludge factor of a considerable proportion of applications written for Linux desktops. And here, with RSL? THERE IS NO DESKTOP APPLICATION!
You have to point a browser at a web server hosted on LOCALHOST (and get this - the DEFAULT in RSL is to only let browsers running on LOCALHOST connect). I have to “hack” my config file EVERY time I install the piece of crap, so that it allows “0.0.0.0/0” instead of restricting access to “127.0.0.1/30”.
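For what it’s worth, that “hack” is just one key in Resilio Sync’s JSON config file (sync.conf). This is a minimal sketch, not my actual file - the device name, storage path and port are placeholders - but "webui" / "listen" is the setting that controls which interfaces the web UI binds to:

```json
{
  "device_name": "my-linux-box",
  "storage_path": "/home/me/.sync",
  "webui": {
    "listen": "0.0.0.0:8888"
  }
}
```

Restart the rslsync service after editing, and the web UI becomes reachable from other machines on the LAN instead of localhost only.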
As if that wasn’t diabolical enough, to set up a sync folder I go to the master (running on my NAS) and either scan a QR code (e.g. with a phone or tablet camera) or grab a hex string key, then paste that into the RSL HTTP/HTTPS instance running on the new machine by adding a folder… And here’s what I’ve managed to do a couple of times: I’ve COPIED the WRONG hex string to the remote machine, which then asks me what local folder I want at the target. This is where it gets accident prone - I’ve pasted the key for my smaller sync into the folder for my bigger data sync - and guess what? If I’m not paying attention, Resilio Sync happily, merrily OVERWRITES EVERYTHING - and metastasizes that clusterf–k to everything else, just like I told it to! So: VERY unforgiving, and VERY far from being prime time ready…
The macOS and Windows clients are actual applications, a tad more forgiving than the Linux web UI solution, and in both cases much better integrated with the desktop OS (there’s NO desktop integration on Linux!) - e.g. double-click on a sync folder in the Windows or macOS client and it will launch File Explorer or Finder…
Here’s the breakdown :
- scripts (14 peers - it’s all peer-to-peer, so there’s no middleman; some are read-only)
- encrypted stuff (only 3 peers host unencrypted copies; 2 peers are read-only; another 2 peers host encrypted copies)
- big stuff (7 targets / peers: e.g. documents, pictures/photos, cross-platform binaries for a bunch of stuff - all are read/write)
- Music (4 peers, all read/write) - I actually cleared it out recently (it was 150+ GB) and have it down to 60 GB - most of the files are FLAC. This is too big to peer to my phone, so I use an adb-sync shell script to keep my phone’s music concurrent with my p2p sync solution.
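The adb-sync part is nothing fancy. Here’s a minimal sketch of that kind of wrapper, assuming Google’s adb-sync tool is on the PATH - the paths in the usage example are placeholders, not my actual layout:

```shell
# sync_music: keep the phone's music concurrent with a local folder.
# Pass --dry as the third argument to print the command instead of
# running it (handy for checking paths before letting it delete).
sync_music() {
    src="${1:?usage: sync_music SRC DST [--dry]}"
    dst="${2:?usage: sync_music SRC DST [--dry]}"
    if [ "$3" = "--dry" ]; then
        echo adb-sync --delete "$src" "$dst"
    else
        # --delete removes files on the phone that no longer exist
        # locally, so the phone tracks the sync folder exactly.
        adb-sync --delete "$src" "$dst"
    fi
}

# e.g. sync_music "$HOME/Sync/Music/" /sdcard/Music/
```

The --dry check is worth having precisely because of the --delete flag: point it at the wrong destination and it will prune the phone to match.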
Note: my “main” music collection, hosted on my NAS, is actually now about 1 TB - up from ~600 GB 5 years ago - because I’ve since been replacing much of it with FLAC versions… And in many cases I have BOTH mp3 and FLAC versions of the same content. I intend to fix this - I have a shell script that can downsample FLAC to mp3 (it uses ffmpeg) - so I can get some savings by removing the mp3 copies.
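That downsampling script boils down to a loop like the sketch below - assuming ffmpeg built with libmp3lame, and with the directory layout as a placeholder; --dry just prints the commands it would run:

```shell
# downsample_flac: for every .flac under DIR, write an mp3 beside it.
# -qscale:a 0 is LAME's best VBR setting (roughly 220-260 kbps).
# Pass --dry to print the ffmpeg commands instead of running them.
downsample_flac() {
    dir="${1:?usage: downsample_flac DIR [--dry]}"
    mode="$2"
    find "$dir" -type f -name '*.flac' | while IFS= read -r f; do
        mp3="${f%.flac}.mp3"
        if [ "$mode" = "--dry" ]; then
            echo ffmpeg -i "$f" -codec:a libmp3lame -qscale:a 0 "$mp3"
        else
            ffmpeg -i "$f" -codec:a libmp3lame -qscale:a 0 "$mp3"
        fi
    done
}
```

With something like this in the toolbox, the stored mp3 duplicates can be deleted and regenerated from the FLAC masters on demand.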
In “theory” my NAS is capable of de-dup - ZFS de-dup - but that’s a ONE WAY street with no going back, and I have no idea if my dual AMD Turion will have enough grunt to do that work (it does have 16 GB of ECC RAM). And I know for a fact I have some instances of duplicate data - there’s just so much of it that doing the housekeeping would be a fulltime job for a fortnight or longer…
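One thing worth knowing before committing to that one-way street: ZFS can simulate de-dup without changing the pool. A sketch - the pool and dataset names here are placeholders:

```shell
# Walk the pool's data and print a simulated dedup-table histogram
# plus an estimated dedup ratio. Read-only: nothing on disk changes.
zdb -S tank

# Only if the estimated ratio justifies it (the dedup table wants
# roughly 5 GB of RAM per TB of unique data) would you then run:
#   zfs set dedup=on tank/media
```

If `zdb -S` reports a ratio close to 1.00x, de-dup isn’t worth the RAM or CPU on that box, and the fortnight of manual housekeeping is the only real option.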