Proxmox series on linuxhandbook.com

Hey guys, if anyone is interested in experimenting with a cool hypervisor that runs on just about anything and is free (unless you want to pay for support), check out my Proxmox series on http://www.linuxhandbook.com

Part 3 was published today, and there's still a bunch more to come.

It literally runs on just about anything (I've never had it refuse to install, unlike certain other hypervisors… I'm looking at you, ESXi) lol

and if you need support just ask me, don’t worry about paying for it. :smiley:

I want to see @daniel.m.tripp install it on one of his Raspberry Pis. lol

Seems it's possible:

Both my Pi4Bs with 8 GB of RAM are idle right now… Hmmm…

Might have to head to OfficeWorks and grab some more SD-Cards :smile:

Hmmm - or maybe not even? I could run it the way I ran Proxmox previously - booting off CF (Compact Flash) cards (or any SD-Card - I've got a heap of 32 GB cards I'll probably never use)… Or even boot off USB 3 (much better I/O on the Pi4 over USB 3) and run my VMs on NFS. Booting off a USB device is a bit of a "hack" on the Pi4 - not standard - and I don't know if Proxmox would support it; last time I tried Kali, it didn't support USB 3 boot.
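If memory serves, pointing Proxmox at NFS storage is a one-liner anyway (the storage ID, server address, and export path here are made up):

pvesm add nfs pi-vms --server 192.168.1.5 --export /srv/vms --content images   # "pi-vms" is just an example storage ID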

Speaking of ARM - I recently had an "expert" on AWS architecture deploy an ARM64 EC2 instance and then expect me to install x86_64 RPMs on it… He's not that young either - but I suspect he has NEVER worked with non-Intel/AMD CPU instruction sets - it probably never entered his head when he selected Amazon Linux (aarch64)…

I read your linked doc.
Key point of difference: it can make either a VM or a container.
So why would I use Proxmox rather than Docker to make a container?
I found using Docker to be both a huge learning curve and an awkward, tedious process in practice. It is a real programmer's world. Is Proxmox somehow different?

Docker tends to be geared more towards a single software package. Someone writes code built around a certain config and wants to distribute it knowing that whatever funny system someone has, it'll just work, because it's encapsulated in that Docker container.
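For instance, getting a packaged web server running is typically a one-liner (the image name here is just an example):

docker run -d --name web -p 8080:80 nginx   # pulls the image and runs it the same way on any host distro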

Proxmox is more along the lines of Hyper-V and ESXi, where you can virtualize entire operating systems as virtual private servers. The main advantage is cutting back on footprint: a business, organization, or homelab whose setup would usually require 10 physical servers can instead buy one powerful server and split it up into 10 virtual machines. Saves money that way. Containers are the same idea, except they use far fewer resources and can be spun up in less time.
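To give a rough idea, creating and starting a VM from a Proxmox host's shell looks something like this (a minimal sketch - the VM ID, name, storage, and ISO path are just examples):

# create the VM: 2 GB RAM, 2 cores, bridged NIC
qm create 101 --name web01 --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
# give it a 32 GB disk and attach an installer ISO
qm set 101 --scsi0 local-lvm:32 --ide2 local:iso/debian-12.iso,media=cdrom --boot order=ide2
qm start 101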

Why not have one powerful server that does the work of 10 smaller machines?
What does splitting it up into 10 virtual machines achieve?
Pardon my ignorance

You can't always run multiple pieces of software on one server. Virtual machines are useful for things that wouldn't benefit from being encapsulated in their own Docker container, or aren't going to be distributed, but still need their own environment. Some applications don't like co-existing with other things on the same server. For example, cPanel used to run only on CentOS (it now runs on Ubuntu as well), and it required a clean server to even install. So if all I had was one big server, I wouldn't be able to do anything but host websites if I couldn't use something like Proxmox to make virtual machines.

I had to look it up.
My book (yes, I still read paper) says 'containers' refers to multiple instances of the SAME operating system, in contrast to VMs, which can be different OSes.
Is that the sense you are using?

Not the same as ‘containers’ meaning anything in any type of sandbox.

Language is the source of all manner of misunderstandings.

Thanks, Doc, for the explanation. I have never been in the commercial world.

It should say same TYPE of operating system. Containers share the host's kernel and operate at the operating-system level, so you can't run Windows in a container on a Linux host; you'd need a full virtual machine for that. But you can run 4 containers, each with a different flavor of Linux.
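Easy to see for yourself if you have Docker handy (image names are just examples) - every container reports the host's kernel:

uname -r                          # kernel on the host
docker run --rm alpine uname -r   # Alpine container - same kernel
docker run --rm ubuntu uname -r   # Ubuntu container - same kernel again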

There's also a security element at play here. If I have my accounting system on the same hardware as my sales software and a threat actor compromises that server, he or she would have access to both accounting and sales. If those systems were on their own virtual machines, the threat actor who broke into sales wouldn't necessarily be able to access accounting. There have been vulnerabilities where people "broke out" of their containers, but that's a lot harder to do than just having access to one big system with everything laid out for you.

Yes, that is one good reason. Even something like Firejail, with no OS at all, is a security aid.

It would seem containers lack some of the virtual hardware that a VM can provide.

Exactly. Containers operate at the operating system level, while full VMs are at the hardware level, giving VMs more virtual hardware access than a container can provide.

The last advantage of having virtual machines/containers that I'll mention is… let's say you do happen to have 2 physical servers with virtual machines/containers on them. With Proxmox (and with most other Type 1 hypervisors) you can set up clusters, so that when you have to do maintenance on one of the physical servers, you can simply move all your VMs over to the other host without downtime. There's also HA capability, so that if a host goes down, its VMs automatically restart on a host that's still up (you generally need 3 hosts for this, as they use quorum to decide where the VMs go)… I have a tutorial coming out soon on how to cluster Proxmox nodes.
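The tutorial will go into detail, but the broad strokes from the shell look something like this (cluster name, node name, and IP are just examples):

pvecm create mycluster         # on the first node: create the cluster
pvecm add 192.168.1.10         # on each additional node: join via an existing member's IP
qm migrate 101 pve2 --online   # live-migrate VM 101 to node "pve2" with no downtime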

That sort of redundancy is a considerable advantage, but it requires 2 physical machines.

This 'levels' jargon has always bothered me… it bothers me with networking too.
It is correct, I believe, to say that a VM emulates hardware.
Is it correct to say that a container emulates an OS?
Docker does not emulate anything… it simply puts a cut-down OS there so some commands can run.

I shall watch for that. May even find out what a Node is.
It's a foreign world for us home users.

Not so much emulates; rather, it sets aside a set of resources from the operating system (in the container's case) or from the hardware (in the virtual machine's case).

Check this out for more info:
LXC Containers
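And if you want to poke at one by hand outside of Proxmox, plain LXC goes roughly like this (distro, release, and container name are just examples):

lxc-create -n mytest -t download -- -d ubuntu -r jammy -a amd64   # fetch a root filesystem
lxc-start -n mytest                                               # start the container
lxc-attach -n mytest                                              # get a shell inside it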

A node is simply another name for a server computer that Proxmox would be running on. The word “node” is commonly used when referring to one unit in a cluster or network of units.

Well said. It's the jargon that obfuscates things. Plain words help.

So my understanding now is… Docker or LXC are reasonable examples of what you mean by a Container in the Proxmox context. They simply provide OS resources… by whatever means.

Thanks. I am not feeling quite as out of touch now.

Precisely.

Firejail is a "container" - FreeBSD (and maybe NetBSD and OpenBSD) have had "jails" for decades, IBM has had containers even longer, and Solaris has had them since Solaris 10 (circa 2005)… Pretty sure a "chroot" is a type of container too…

My understanding, having come primarily from Solaris Containers (also called Zones): in many cases they use the same kernel as the "global zone" that "booted" the container - unless you're using Branded Zones, e.g. Solaris 8 and 9 zones on Solaris 10, or Solaris 10 containers on Solaris 11. The Sun / Oracle hypervisor (hardware-based virtualization, like ESX, Proxmox, KVM or Xen) runs (or can run) on ALL "T" series Oracle / Sun servers (and blades) - it gets confusing because you can't always tell whether a host is a Solaris zone, a "global" zone, a dedicated LDOM (hardware virtualized) or running on bare metal…

I think Docker emerged out of LXC / LXD - and in many cases the containers were using the same kernel as the host running the container daemon… Docker actually still follows that model closely - you can e.g. run a Red Hat Linux 8 Docker container on Ubuntu 20.04, but it's running on Ubuntu's kernel underneath, just with the EL userland inside - not an "EL" kernel like on CentOS, Red Hat and Oracle Linux…
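Easy enough to check on an Ubuntu box (AlmaLinux standing in for RHEL 8 here, since its image is freely pullable):

docker run --rm almalinux:8 cat /etc/redhat-release   # an EL 8 userland...
docker run --rm almalinux:8 uname -r                  # ...running on the Ubuntu host's kernel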

In my day-to-day job, I constantly get people expecting me to be a guru on Docker, but I'm not - the best I can do is maybe prune some orphaned containers… Most of the time, what's inside the Docker containers (in many cases they seem to spawn multiple instances, and I don't know why) is a mysterious black box I don't have the means to poke inside. It's all DevOps-y stuff that "developers" have deployed - but, as is often the case, the developers move on (or the customer stops paying for ongoing support), provide no documentation, and BAU / Infrastructure get saddled with day-to-day support.
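For what it's worth, my whole Docker "toolbox" is about four commands (all bog-standard Docker CLI - "some_container" is a placeholder):

docker ps -a                        # list containers, running or not
docker logs some_container          # see what a container has been saying
docker exec -it some_container sh   # get a shell inside it, if it has one
docker system prune                 # clean out stopped containers, dangling images, etc.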

Connecting from the host or global container on FreeBSD and Solaris is relatively straightforward.

FreeBSD (and FreeNAS / TrueNAS):

x@baphomet ~ % jls
   JID  IP Address      Hostname                      Path
     1                  rslsync                       /mnt/BARGEARSE/iocage/jails/rslsync/root
x@baphomet ~ % sudo jexec 1 /bin/tcsh
root@rslsync:/ #

On Solaris - maybe something like “zoneadm list -cv”, to get a list of running containers, then (as root) “zlogin $CONTAINER”.
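From memory, that session looks roughly like this (the "webzone" zone is made up):

# zoneadm list -cv
  ID NAME     STATUS     PATH               BRAND    IP
   0 global   running    /                  solaris  shared
   1 webzone  running    /zones/webzone     solaris  excl
# zlogin webzone
[Connected to zone 'webzone' pts/2]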

On my last "adventure" with Proxmox (well over 5 years ago - maybe even 10?), I tried out their containers - seemed like a nice idea, a Web UI to manage containers…

If that is the situation, it is being rather poorly managed. If businesses are accepting software in containers without documentation, they are setting themselves up for failure.

Today, no one would accept an OS without documentation, nor would anyone accept a standalone piece of software undocumented, so why are containers getting away with this?
I think it is because there is a mythology that containers are something new and unbreakable… but, as you say, they are neither new nor without problems.

Just saw that VMware's new owners (Broadcom - makers of some of the shittiest network equipment ever) are now going to "enshittify" the product…

Maybe time to start pushing Proxmox as an enterprise solution again :smiley: (I beat my head against the wall last time I tried!)

Because they're twonks who did some devops shit at Uni and now think they're full bottle on infrastructure… and they don't care… the shonky solutions they deployed somewhere else are no longer their concern… ephemeral infrastructure…

It goes with ephemeral employment… might even be a direct consequence.

Any idea why, in your opinion, Proxmox hasn't been more widely adopted in the enterprise until now?

I never heard of it until a couple years ago.

Here is another possibility you may not have heard of. It’s pretty recent though.