Setting up a home server

It’s like comparing apples and oranges.
ARM is a completely different architecture.
Intel and AMD are competing manufacturers, but both produce CPUs for the same architecture.
So they share the same architecture and (mostly) the same instruction set.

ARM, on the other hand, is a different architecture: it’s a low-powered RISC CPU, aimed (I think) mainly at mobile devices. RISC stands for Reduced Instruction Set Computer.
Many companies produce ARM processors: Qualcomm, Broadcom, Nvidia, MediaTek, Samsung, just to name a few.

Because of the reduced instruction set, the internal logic circuits in the CPU can be much simpler - and thus quicker - and the whole CPU can consume less energy while working at high speed.
On the other hand, a CISC processor can have very complex instructions, like the REP prefixes on Intel (REP/REPE/REPZ/REPNE/REPNZ — Repeat String Operation Prefix); to do the same, a RISC CPU has to be programmed with a complete loop.
So in such cases CISC wins, especially in supporting higher-level languages like C.

Long story short:
ARM can consume much less energy
Intel (and AMD) can be more performant

I don’t have a VPN.

If you mean a static IP, no, it’s not needed. DDNS stands for dynamic DNS, and that is the thing to work around the changing dynamic IP of the home network.

When your home router connects to the internet, it gets an IP from your provider, but that changes: it may be different on every reconnect, and it may change from time to time. For example, without reconnecting I keep a constant IP for about a week, but then I get another.

The DDNS provider lets you work around this:
it stores your IP and assigns an FQDN to it, and when you try to look that name up, the DDNS provider’s name server responds with your home’s current IP.
So you can always reach your home using an FQDN, like
sheilahome.freedns.org (just an example…)
Whenever your IP changes, it will be noticed by ddclient, which does the work of updating the IP at the DDNS provider.
This ddclient instance can run on your home server, or it may be included in your router’s firmware. Having it outside the router gives more flexibility, because ddclient can be configured for any DDNS provider, while routers tend to ship with only DynDNS or NoIP as preconfigured providers.
I chose dynu.com because it allows you to add aliases, TXT records, and such goodies, not only an A record.
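For reference, a minimal ddclient configuration for this kind of setup might look like the sketch below. The hostname, login and password are placeholders, and the exact protocol/server lines depend on your DDNS provider, so check their documentation:

```
# /etc/ddclient.conf - minimal sketch, all values are placeholders
daemon=300               # re-check the public IP every 5 minutes
use=web                  # ask an external web service what our current IP is
protocol=dyndns2         # update protocol spoken by many DDNS providers
login=your-username
password=your-password
yourhome.example.com     # the FQDN to keep pointed at your home IP
```

With this in place, ddclient runs in the background and pushes an update to the provider only when the detected public IP actually changes.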

Great, we use exactly that to keep contacts/calendars synced on Android.

On the computers (desktop/laptop) Thunderbird has native support since a recent upgrade, and Evolution (my choice :smiley: ) has it too.

From a quick look at Etesync, it appears to be self-hostable:

So if you start to run your own server, it would be a waste not to self-host such things :slight_smile:

Definitely yes:

But this is for the Intel architecture.

If you move to an ARM-based computer, first look for an Ubuntu-like image meant for that board.
Like Ubuntu-Minimal for Odroid
https://wiki.odroid.com/getting_started/os_installation_guide#tab__odroid-c4hc4

Thanks, @kovacslt this helps a lot. Is there a reason you do not use a VPN? Can you elaborate on why you feel it is or is not necessary for having a home server connected to the internet?

@daniel.m.tripp you mentioned the Raspberry Pi with ARM Linux, since I was looking at the RaspPi for hardware (not researched yet, just know about it) do you only use ARM hardware for your server?

I was looking at the NUC but it is Intel and not sure I want that. I am still looking at the Odroid @kovacslt recommended, but had never heard of it (maybe cause I’m in the USA).

What about RAM? From what I have seen in both of your setups, you don’t have outlandish amounts, but I have read forums where people have 32 GB or even 64 GB.

Thanks, all.
Sheila

I don’t need it.

If that server is a simple file server and it only has to be reachable on the LAN, then it is not necessary to connect it to the internet.
BUT! If that server has to serve requests not just from the LAN but from anywhere in the world, it has to be connected to the internet, and has to be reachable.

According to my searches, a NUC idles around 6 W, which is just 0.5 W above my HC4. It’s unclear to me whether this power is measured without drives, with some HDD spinning in it, or working on an SSD.
My Odroid idles at 5.5 W with 2 hard disks installed.

My Odroid has 4 GB RAM + swap, and that’s fair enough for the load it takes.

root@ubuserver:~# free -m
               total        used        free      shared  buff/cache   available
Mem:            3666        1314         192           1        2159        2260
Swap:           3904         313        3591
root@ubuserver:~#

My RPi3 in my living room has only 768 MB (256 MB of the 1 GB is dedicated to the GPU) and it happily runs Kodi, a SANE networked scanner, and a PulseAudio network sink.

root@butyok:/home/pi# free -m
              total        used        free      shared  buff/cache   available
Mem:            744         116         371          14         257         565
Swap:            99           0          99
root@butyok:/home/pi#

My desktop and laptop both have 16 GB, which is plenty for everyday tasks on Linux, and it’s enough even when running DaVinci Resolve (video editing).
32 GB is so much that I can’t even imagine what I would need it for. Maybe for running a heap of virtual machines, but that’s not really my business :slight_smile:
I can boot up at least 4 VMs on 16 GB too.

Don’t forget Apple - they actually hold a considerable amount of shares in ARM Inc. too…

Unless we’re talking about Apple Silicon - the M1, M2 and M2 Max - these can outperform x86_64 and consume less power while doing it.

@Sheila_Flanagan - you don’t “need” FreeNAS or TrueNAS to use Resilio Sync - I just happen to do mine that way…

Resilio Sync (Pro - i.e. licensed - it’s a pretty reasonable fee, for “life” pretty much) can do Selective Sync - and if syncing to tablets or phones, it will do that anyway (whether the licensed Pro version or free) - but you have to keep in mind that every machine hosting a Resilio Sync share (i.e. computers, but not so with tablets / phones) will need to have that disk space available.

e.g. I have three main “shares”
Scripts - about 5 GB max - easy… maybe even 4 GB - it’s mostly just shell scripts and text files
Documents and Pictures - this is large - about 80 GB
Music - this is VERY large - about 200 GB

So - every machine that hosts replicas of those three shares would need ~285 GB space available…

(note also - my music collection, stored on my NAS is actually over 1 TB in size - that Resilio Sync share is a subset of the stuff I play most often)

Thanks for reminding me. I knew my list was not exhaustive anyway :slight_smile:

Well, I now need to update this thread with what hardware I just purchased and get a few more questions answered, as my internal 2 TB SSD arrives tomorrow and I am still finding things that I did not know/consider in setting up this home server.

Someone was selling a NIB (new in box) unit of this item for about $100 USD less than Amazon, eBay, etc., so I bought it. Hope I did okay, but I think for my needs I did.

Beelink SER AMD Ryzen 7 5700U

I was planning to use Fedora Server as the Linux OS, as I recently installed Workstation 39 on my old Surface Pro 7 and it is working great with only 4 GB RAM (hate the machine, but bought it for field work in my business long ago), and I wanted to expand my knowledge/usage beyond Ubuntu-based systems.

This server will be used as a file server, for backups of 4 Linux machines, for replacing my current cloud solution (MEGAsync), for self-hosting a few things, and as a media host (I think) using Kodi on the individual machines & TV - but definitely for storing all music & DVDs as well as old home tapes converted to digital format. (I bought an OWC Mercury Pro DVD-Writer External 1 x Pack OWCMR3USD24, which can use M-DISCs for laying the home movies’ digital format onto Blu-ray discs.)

Now, since I will be self-hosting a few things, I thought I had this all figured out - until I kept running across articles all talking about VMs, Proxmox, et al. Why do I need a VM on a home server, I thought? If I want a VM, I will put it on the machine I intend to use it on. Right? Now I begin to understand that this is best for isolating these self-hosted apps/services from the server OS. Correct?

So now I need to decide on the best method to do this and understand how, if at all, it differs from how I installed W10 in a VM on my Pop!_OS machine. I used two methods, Oracle VirtualBox & QEMU/KVM, to test which ran better for my needs. Since they were running W10, I found neither to be perfect, but on the server this will be used for the following:

Bitwarden (password manager that I just found out can be self-hosted)
Accounting software (either Akaunting or Odoo)
Etesync/Etebase (the sync service I use for keeping my calendars/contacts in sync across all devices, including my Android phone)
Nextcloud? If I understand correctly, this is like using Mega but on my own server, away from any prying eyes?

In addition, I still need to decide on the backup system (rsync). My research so far points to the 3-2-1 method, and that is where I do not understand how I can self-host a cloud and yet have a server backup “in the cloud.”

My Mega subscription ends next week, so I synced everything (1 TB of data with some redundancy I need to fix) to a partition on my external HDD so that I have everything to start with.

I have a 500 GB internal M.2 SSD to install the server OS, the self-hosted apps (in VMs?) and other apps I may need to use - plenty of space there. The internal 2 TB SSD is for my files to be shared/accessed (including from outside the home network), and I made sure to get USB 3.2 for connecting one or both of my external HDDs (5 TB each). I thought to use one for server backups, but that leaves the “offsite copy”, where I need to know how to proceed.

I do have their cloud drive (500 GB) included in my Proton Mail subscription, but that is not enough to ensure all my personal files are backed up offsite.

So if I understand, backups include everything on the server (the OS and VMs, etc.) plus all the personal files on the internal 2 TB SSD, and those would be backed up on one of my external HDDs; in addition, each of my 4 computers has its own backups residing on the machine itself plus backed up to this drive. That covers the 3 and the 2 of the 3-2-1. So I still need a cloud service for that last backup protection?

While I still have time to get this all set up, my main concern was no longer having everything backed up to the cloud, but only residing on one of the external HDDs as well as on each machine that I originally synced from.

Lastly (I think… lol), this RAID thing (software, not hardware). Not sure I really need this setup, but since @daniel.m.tripp mentioned his, I just need to know how necessary it is considering the 3-2-1. Brand-new internal SSDs should not be an issue; the backup of server/files on an external HDD might be, but I have 2 of them, so I could utilize both if needed.

From my research, external HDDs (via USB) are not easy to include in a software RAID setup (though some of this info is old)? I do have an older eSATA external HDD that is only 1 TB and could definitely use it if needed.

Then there’s the software RAID setup itself, for which I believe I should use the mirrored (RAID 1) kind?
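(For reference, Linux software RAID 1 is typically created with mdadm. A sketch only - the device names are placeholders, and mdadm --create wipes the member disks, so don’t run this as-is:)

```shell
# Sketch: create a two-disk mirror (RAID 1) with mdadm.
# /dev/sda and /dev/sdb are PLACEHOLDERS - mdadm --create destroys
# existing data on the member disks, so double-check device names first.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0        # put a filesystem on the new array
mdadm --detail /dev/md0   # verify both members show as active/in sync
```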

Secondly, I asked myself, “Does this mean I could host my own website?” As I am switching hosting companies this year after 10 years, the thought crossed my mind. But further research on that subject says this could turn into a full-time IT position running/maintaining my own home server… sheesh, I don’t think I have that much time to invest.

Thanks for any input you can provide for getting this right.

Sheila Flanagan

One more item I forgot to mention and need more understanding on:

@nevj says he does not use a VPN. I have Proton VPN I have yet to utilize, as my current ExpressVPN (installed on my router) does not expire for a few months, but I intend to end that subscription and either use Proton or none. Thoughts?

Sheila Flanagan

It depends on your needs.
I do not need to access home computers from outside, or work computers from home. Retirement has its benefits.
I can hide behind NAT and almost ignore security.

If you need any more than that, you need to consider security.
A VPN is the modern way of having secure access. Before VPNs,
it was all about firewalls, monitoring, and static IP numbers. No one does that today.

But I will need a static IP for outside access, right?

Thanks,
Sheila

For that you either need a static IP or a DDNS service.
To be precise, you can have both, but it’s not necessary.

You don’t need a VM; just run your own server on bare metal.
This will give you native performance. Besides my home server, I also run a VPS, which is a VM on a hosting computer in a datacenter. That one is virtualized to isolate my server from other people’s rented servers. I can install my own OS onto it and run it the way I like; I can completely ruin my server if I make a mistake, but this will not have any effect on another VM running on the same host.
I doubt you would want to run many servers with different OSes on your home server. Yes, you will run multiple server processes, but these are basically just programs running on your single server OS.

@kovacslt, to be clear: running these self-hosted apps, which require updating, would never interfere with my own server OS or cause an issue if running outside of a VM? That is what I gleaned from the VM-or-container approach on a server. So, just like on desktops, you only need a VM or container if you are using different operating systems on the server?

Thanks,
Sheila

This is one of several comments that make me think I should use VMs for self-hosted apps:


* Security - if some app I am hosting has a vulnerability (spoilers, everything does) then at least they're isolated to that VM instead of the whole of the server.
* Isolation - apart from security this is about managing software. If something needs a specific version of a php library and something else needs a different version - this is a non-issue in a VM. It's technically possible on a bare-metal server, but it's HARD! In fairness, containers solve this too! Taking this to an extreme, I've seen software only supported by certain flavors of Linux, and maybe even Windows only - but I admit that's the minority.
* Maintenance - Updates can occur independently. There's also no risk of accidentally stopping a service you didn't intend to. Sure, the base OS occasionally needs to be updated/rebooted, but that's rare. And if you want to go down that rabbit hole, you can even pick a host OS designed for fewer reboots/less downtime. And actually, you don't even REALLY need to restart everything when that happens: VMs let you save the state of a service & restore it after the host is rebooted! This may actually save time over restarting services - and certainly saves hassle!
* Snapshots - Spoilers, things break. When they do I can roll back to a working snapshot without interfering with anything else.
* Hardware independence - When my server dies, it will be a sad enough day as it is. All I have to do now is install the OS & move the VMs over - then they should be good to go! Imagine if I also had to reinstall everything! No thank you!! Again, containers could also help with this.

PS: This is not a stupid question, you're much better off asking & virtualizing early rather than waiting to rebuild everything after a hack/crash/whatever.

Thanks,
Sheila

My own experience, which is limited, is that VMs are quite easy to set up and maintain, but containers are a challenge. Maybe I am using Docker at too low a level?

If you want to isolate an app, for whatever reason, running a whole VM to do it is a bit of overkill. A container is designed for that. Would a simple sandbox like Firejail be enough? Would a snap or flatpak be enough?

I don’t fully see these isolation arguments. If you are going to use an isolated app from outside, you have to break the isolation. If you can break it, so can any intruder.
I don’t see the firewall argument either. You block all the ports. Great. Then you have to open some ports to use anything. Not so great.
The VPN approach seems more logical: you don’t have to leave doors open in order to let yourself in. I may be wrong;
I don’t know much about VPNs.

So how many levels of security do you need? I would opt for one good level.

Be aware that I am never going to follow the crowd

Yes, I read about containers (Docker, etc.) vs VMs, but as you said, that seems complex to me with my limited knowledge of them. The VM was simple to install, and I understand just installing that self-hosted app within that system. I understood that what the commenter was talking about was more my server OS being messed up due to some update or other issue related to the software I am hosting. I know… what? I use apps all the time on my various Linux desktops and I don’t believe any of them has ever messed up my system.

So if we set that issue aside, the other argument seems to be snapshots of the VM(s) hosting said app, which can be rolled back to before it got messed up. Again, really?

As for the server dying and having only to reinstall the server OS: I would still have had to back those VMs up somewhere and reinstall those images, I believe, which to me sounds like as much trouble as reinstalling the server OS and associated apps. As long as the data is backed up, these apps are not cumbersome to reinstall.

Again, I am much more concerned with security from hackers, since I will occasionally access the server from outside the home network, AND with the 3-2-1 backups, so that I can trust my own cloud over the one I have been paying for.

So, now that I feel confident I don’t have to add Docker/containers (and learn yet another aspect I am unfamiliar with), the only other question I thought about later has to do with how large those backups are in relation to my entire server disks overall. Are these compressed archives, so that backing up, say, 1 TB of files/data only takes up about 250 GB of room on the backup disks & cloud?

This will tell me if I need additional drives over the 4 that I intend to use in the new server.

And I am like that as long as I am confident in my decision. You are much more knowledgeable than I on these matters, but once the issue is explained, I use my own logic on which solution will work for me–regardless of the norm. :smiley:

Thanks,
Sheila

Great. That is the ‘F’ bit in ‘FOSS’

 I am much more concerned with security from hackers since I will occasionally access the server from outside the home network

The basic strategy is to be as invisible as possible.
I am not the one to ask about the details

the 3-2-1 backups so that I can trust my own cloud over the one I have been paying for.

You have that totally under your own control. Do it properly, no shortcuts.

 You are much more knowledgeable than I on these matters

Not really. You have all the outside experience.

I use VMs at home - but only for testing stuff - I manage some 2000+ Linux servers for customers running as VMs (mostly on VMware ESX - I used to support one environment, believe it or not, a government “entity” that relied on Hyper-V - unbelievable! And yes, it’s as shonky on servers as it is on your Windows 10 or 11 PC - it’s a piece of crap)…

Anyway - for “stuff” I want semi-virtualized but running 24x7 (like my NAS) I have these:

Running on a low-power ARM SBC:

  • transmission-daemon (for headless management of torrent downloads)
  • TVHeadEnd (have a TV-Tuner “hat” in a Pi3 picking up Free-2-air TV broadcasts)
  • SSH jumpbox to my home network (Pi4 running Ubuntu arm64 headless server)

As a plugin / jail on my NAS (FreeBSD has had containers longer than even SOLARIS!)

  • ResilioSync
  • Plex (transcodes videos so things like AppleTV can use it)

FreeBSD jails are decades more mature than the likes of Docker or Kubernetes (but they’re not “easy” - FreeNAS and TrueNAS do 90% of the “heavy lifting”).

If you’re looking at some kind of home storage server - SSDs will start to get pretty expensive as your space requirements expand (and they will, and they do!) - I just recently cleared out about 500 GB of ISO images I will never need again (and if I do need them again, they’re only as far away as my favourite mirror [aarnet]) - but I should start thinking about incrementally replacing the 4 x 4 TB members of my RAID (RAID is NOT backup!) with maybe 8 TB members…

Do you just use this to sync media files between devices? The free home version or do you pay for the server license?

I cannot seem to find the answer to my question on backups: how much space do, say, a full backup and then incremental backups use? If I have 1 TB of data for the full backup image but only a few files change daily, how compressed are these backups, so that I can work out how long that 2 TB internal SSD would last if used only for backups?

My intent was to do a full backup of the one location where all my Mega (cloud) files reside. But that is just for files.

The backups on each machine would be on the individual pc as well as the server (the 2 TB internal SSD). Then the server OS itself would be backed up on this drive as well as on one of the external USB HDDs of 5 TB.

Depending on which backup method I use, are these compressed files, and is there any ratio I can use to deduce space requirements for now?
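(For reference: there is no universal ratio - text compresses very well, while music, video, and photos are already compressed and barely shrink further. One way to get a feel for your own data is to compress a representative sample and compare sizes; a sketch with a placeholder path:)

```shell
#!/bin/sh
# Measure how well a sample of your data actually compresses.
# /srv/data/sample is a placeholder - point it at a representative subset.
DIR=/srv/data/sample
du -sh "$DIR"                          # raw size on disk
tar czf /tmp/sample.tar.gz "$DIR"      # gzip-compressed archive
du -sh /tmp/sample.tar.gz              # compressed size - compare the two
```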

I understand that I can rotate backups in and out, but in the initial setup I would like to ensure that I do not have to move & reroute the home backups quickly due to lack of space.

Thanks,
Sheila

I paid for a Pro license - I think it’s a lifetime thing - it’s not FOSS, but they release binaries (RPM and DEB and others - e.g. portable binaries like “rslsync” for AMD64 on FreeBSD - and all the ARM architectures you can imagine: armel, armv7l, arm64/aarch64, sunxi). Note - Resilio Sync is completely peer-to-peer and uses the BitTorrent algorithm to sync between devices - “conceptually” I have one source that is the “server”, but it’s effectively “serverless”… if I lost the “server”, I’d still have everything…

But - the way I’m using it - I don’t actually need that pro license.

These are the shares I share across my devices :
scripts
binaries and documents
Music
password databases (encrypted)

And I use it on these (the smaller systems only sync the scripts share):

  • orangePi (sunxi “ARM”)
  • RPi3 (Raspbian Stretch)
  • iPad Pro 12.9 (2018)
  • iPad Mini (5th gen 2019/2020)
  • MacBook Pro M1 (x2)
  • Pop!_OS x86_64 (x2)
  • Red Hat 8 “server” x86_64
  • RPi4 running Kali
  • RPi4 running Pop!_OS 22.04
  • Samsung Galaxy S9+ Android 10
  • RPi Zero 2W (x2 one is on bullseye, one is on bookworm)
  • NTC CHIP on a PocketCHIP chassis running Jessie armhf
  • and - I was sharing/syncing it on a VDI (virtual desktop infrastructure) system running Windows 10 - but - this was for work and the guys that ran it asked me not to use this “thing” as a VDI and only use the “published apps”…

It just works - and when it breaks, it’s nearly always my fault… one of these days I’m going to make it into a single share and turn on Selective Sync (only available in Pro) - but that’s too much admin overhead for my taste…

BTW - I used to use Dropbox in a similar way - it supports fewer platforms (e.g. no client binaries for ARM systems).

I don’t concern myself with backups - 'cause my “volatile” data is sync’d (yeah - self hosted cloud is still NOT a backup solution!).

Resilio Sync (if you let it) keeps 30 days of file changes - it’s turned ON by default, but I’ve hit walls where I’ve run out of disk space (like on my personal MacBook Pro M1) and disabled it:


(this is the sync copy running in a FreeBSD jail on my NAS - I ALWAYS leave this on - as it’s the “master” [note : if I lost it - I’d still have all the other copies on all the other machines that sync “Music”]).

Note (to self) : RAID is not a backup solution.

I never bother with system backups… I probably should, but if I have issues with one of my systems (yeah - I have too many!) I just re-install the O/S - 'cause all my volatile data I “need” is sync’d across multiple machines…

And that is what I figured out. I guess (like you) I have all my music on so many different machines - and though it would be a PITA to redo, even my movies are all on “disc” - so the only other things to worry about would be business files and pics. After reading more on the P2P method of Resilio and seeing “no selective sync” except in Pro, I am inclined to just do as you have: consider the server as having all the files, while they are also located on separate drives & other PCs, and use that self-hosted cloud rather than continue paying for a cloud service.

I thought about what we all did before “the cloud” and I never lost any files the “old” way. Just have your important items on more than one drive in case of drive failure.

Since I don’t have ARM, and without using RAID, I can still use an app to make an image of the initial server state with ALL files. After that, I will still have the individual machine backups in two places so not gonna spend any more time worrying about that.

I guess if I do the image and get everything set up and find the 2 TB internal SSD is not enough, I will cross that bridge later.

Thanks so much,
Sheila