Do we live in a Post-Open world? If so, where do we go from here?

A computing professional is someone who builds or modifies tools… like a toolmaker in a metal workshop.

Those you mention are artistic professionals, but not computer professionals.
From a computing point of view they are users.

There is no stigma about being a computer user. All sorts of people use computing tools, and most appreciate them being “user friendly”.
I disagree that Linux tools are less user friendly… they are different, and people wrongly equate “different” with “less user friendly”.
The main difference is that tools like Gimp are more general, more configurable, and therefore require the user to make more choices. Things like Photoshop are very focussed on one task, and make that one task easy, at the expense of not being suitable for other tasks.
Using Linux requires you to be more adaptable… to be able to adjust a tool to suit your purpose.

I hope you don't feel that I am being difficult. I want you to understand that while using Linux tools can be a bit more tricky, it is definitely not impossible.

2 Likes

Absolutely. Actually, you know, Windows made me lazy. It changed my mindset from a ‘computer professional’ to a ‘user’. I still give support for PCs, I still build custom PCs, but these things have become so easy that I have forgotten how I tackled issues 15 years back.

At that time it was pretty hard, and I should confess that I started to fear Linux when one of my builds nearly went bad. A client asked me whether I would be able to install Linux on the PC I was assembling for him. The hardware was his; I was just assembling it. After I finished assembling, I started installing Ubuntu without knowing anything about how it worked back then. I thought it was like installing Windows, and I was on the verge of wrecking the hard disk with a failed partitioning attempt in Ubuntu. Luckily I always kept a DOS 6.2 floppy disk, one of the most essential things in my PC-building toolbox. You all know DOS had FDISK built in, and that tool saved my life that night. I remember I was sweating heavily, because not only were hard disks very expensive at the time, I also had to deliver the machine the next morning, and the client was going out of station that very day after collecting the PC from me. So there would have been no time to replace the hard disk if it had crashed. I was sweating and cursing myself for agreeing to install Linux.

That fear has stayed embedded in my mind to this day, and I am sorry to say I have spread it to many people who merely enquired about Linux. I was taking the easy way, and it started consuming me… 30 years is a very long time. But better late than never. From today, I will stop complaining and start doing whatever is needed. I will make my machine work the way I want it to. That’s my promise. It may take time, but I will make it work. Thanks @nevj, and thanks to all of you for giving me such moral support. :heart: you all.

1 Like

And don't forget, there is always help available here and at other sites.

If you have not mastered partitioning a disk for Linux, that would be a good place to start. Most experienced Linux people avoid doing partitioning during an install… they use a GParted USB drive or DVD and set up the partitions first, before attempting the install.
You should learn to use GParted.
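To give you an idea of what that preparation involves, here is a minimal sketch using the parted command line tool (GParted does the same operations through its GUI). The device name /dev/sdb is just an example… always check with lsblk before touching anything.

```
# Minimal sketch only, assuming /dev/sdb is an empty disk you are allowed to wipe.
# GParted gives you the same steps interactively; double-check the device name first!
sudo parted --script /dev/sdb mklabel gpt
sudo parted --script /dev/sdb mkpart ESP fat32 1MiB 513MiB
sudo parted --script /dev/sdb set 1 esp on
sudo parted --script /dev/sdb mkpart root ext4 513MiB 100%
sudo mkfs.vfat -F32 /dev/sdb1      # EFI system partition
sudo mkfs.ext4 /dev/sdb2           # root filesystem
```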

2 Likes

I use SystemRescue for most partitioning efforts. It includes many resources for fixing systems, including GParted and TestDisk.

GParted is the GNOME Partition Editor, and it's very capable. TestDisk is a file/directory/partition recovery utility (it's a CLI utility, so not as user-friendly as many recovery utils for Windows, but far more capable/powerful).

There is documentation for using SystemRescue on its website. If you're new to such work, I strongly suggest reading the documentation before using any similar set of utilities.
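If it helps, TestDisk is menu-driven once you start it… the only command-line part is pointing it at the right disk. A sketch, assuming the damaged disk shows up as /dev/sdb (check with lsblk first):

```
# /log writes a testdisk.log of everything you do, handy if you need to ask for help later.
sudo testdisk /log /dev/sdb
```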

Ernie

3 Likes

Nope, not me… Call me experienced or not… I dunno… UNIX user since 1992ish, Linux user since 1995ish…

I NEVER dual boot… I never run a separate partitioner before the installer.
I prefer to have EVERYTHING on a single partition - I hate all this allocating so much for "/", so much for "/var", so much for "/home" and so much for a swap partition, blah blah blah…
Gimme :

/boot
/boot/efi
"/"

and that’s all I want - I prefer to have a swapfile over a swap partition…
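In case anyone wants to copy the swapfile approach, a rough sketch (the 4G size and the ext4 assumption are just examples; on btrfs a swapfile needs extra handling):

```
# Rough sketch of a swapfile setup - adjust the size to suit your RAM and workload.
sudo fallocate -l 4G /swapfile       # where fallocate is unsuitable for the filesystem, use dd instead
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it permanent:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```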

This is not the same as installing a server O/S…

But in nearly all cases (desktop or server) I let the installer do the partitioning. In the case of a server OS, "hardening profiles" usually dictate a bunch of limitations, e.g. the CIS1 and CIS2 Red Hat hardening profiles insist on separate partitions (nearly ALL mounted with "noexec"):

/boot
/boot/efi
/home/
/var/
/var/audit
/dev/shm
/var/log
"/"
SWAP

I detest the above layout - it's a recipe for constant future capacity alerts… But I have to implement it as a requirement to satisfy "industry standard" hardening profiles…
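For anyone who hasn't seen it, the mount options those profiles push end up looking roughly like this in /etc/fstab. This is only a sketch from memory, not an excerpt from any actual CIS benchmark, and the device names are placeholders:

```
# Sketch of CIS-style mount options - device names are hypothetical LVM volumes.
/dev/mapper/vg-home     /home      xfs    defaults,nodev,nosuid          0 2
/dev/mapper/vg-var      /var       xfs    defaults,nodev,nosuid          0 2
/dev/mapper/vg-varlog   /var/log   xfs    defaults,nodev,nosuid,noexec   0 2
tmpfs                   /dev/shm   tmpfs  defaults,nodev,nosuid,noexec   0 0
```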

3 Likes

I find that installer scripts are often hard to follow at the partitioning step. Installers vary in quality and in the assumptions they make on your behalf.
It is especially important when multi-booting to have control over the partitioning.

I do agree that one can get a satisfactory result letting the installer do everything, and people who have no experience with partitioning tend to do that, but then they never get to improve their knowledge.

I can see that hardening nonsense in OpenBSD. They have it by default. It is like what we used to do when disks were never big enough… What I can't see is how it is supposed to help security. All those filesystems are mounted anyway… they are accessible to any break-in, regardless of what partition they reside on.
I can see one may want /boot separate so it is not encrypted, but the rest makes no sense to me.

3 Likes

I only use GNU/Linux on the desktop, and I’m an experimenter, so I may be a bit different, but I DO dual-boot (I like experimenting with Windows too), and I use root (/) and home (/home) partitions to separate my user files from the system in GNU/Linux. This way, if I muck things up enough that I have to re-install the system, my user data/files can remain. As an experimenter, I have a greater than average chance of needing to re-install my system from time to time.
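For a newbie reading along, the payoff comes at re-install time: you tell the installer to use the existing home partition as /home but NOT to format it. If an installer won't do that for you, it can be wired up afterwards by hand… a sketch, assuming the old home partition is /dev/sda3 (a placeholder; check with lsblk -f first):

```
# Sketch only - /dev/sda3 and the UUID are placeholders for your actual home partition.
sudo blkid /dev/sda3                 # note the UUID it prints
echo 'UUID=<uuid-from-blkid> /home ext4 defaults 0 2' | sudo tee -a /etc/fstab
sudo mount /home                     # do NOT run mkfs on it - that would wipe your data
```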

I agree that partitioning a drive without a reason is a needless waste of time, and if a newbie wants to dual-boot with Windows (at least for a while), the best way is to choose the ‘Install alongside’ option and let the installer handle the operation. Most distributions today use the Calamares installer, and it does a great job of handling re-partitioning.

My2Cents,

Ernie

4 Likes

Me too. There might be the odd special case.

For our production servers we have a separate disk for /var, so if logs fill it up, the rest of the system isn't affected. Same for /srv, where we put our applications and home directories.

2 Likes

Is this a "hangover" from SUSE?

I remember being perplexed trying to get Apache running on SUSE (Enterprise) - all the howtos referenced /var/www (probably written around Debian or Red Hat based systems).

SUSE (EL), however, defaults Apache to /srv/www… Which had me scratching my head for a bit… Got there in the end - set up a "Biker" forum using Dynamic DNS - can't remember the forum software I used - but it worked… Ran it for about 18 months for local (i.e. my town) bikers to use (by "biker" I mean mostly Harley and Triumph riders - none of this Jap crap [but I did have both a Harley and a Suzuki {GSXR1100} at the time]).
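If anyone else gets bitten by this, a quick way to see where a given box's Apache actually points, whatever the distro (a sketch; the paths cover the Debian/Ubuntu, RHEL and SUSE layouts):

```
# Find the configured DocumentRoot regardless of distro layout.
grep -Rns "DocumentRoot" /etc/apache2 /etc/httpd 2>/dev/null
# Typical defaults: /var/www/html (Debian/Ubuntu, RHEL), /srv/www/htdocs (SUSE).
```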

3 Likes

It could be. We had a few SUSE servers. In fact, we just decommissioned the last one this year. It was probably at least 8 or 9 years old at the time. We used CentOS for everything we could. RHEL when required by software vendors. Now we’ve “standardized” on Alma. As I’ve said before, I still favor Ubuntu and sneak a few into the mix due to vendor support (and because I’ve been here for 16 years).

2 Likes

Same here… And the Ubuntu server install is easier than anything in the Red Hat family - that text/TTY-based "Subiquity" installer for Ubuntu is way better than the GUI installer you have to use (breaking out to the TTY "version" is awful) for RPM-based distros like RHEL, OEL or CentOS (haven't tried Alma).
Most recently I had to deploy a monitoring solution (server) on RHEL9 - ALL the vendor's doco assumed Ubuntu / Debian, even down to where Apache should be and what and where the config files are, e.g.:
Ubuntu / Debian: "/etc/apache2"
RHEL: "/etc/httpd"
And the vendor doco for setting up SSL was 100% for Ubuntu - and ZERO info about applying that in RHEL or CentOS…
We’ve been trying to push that customer to a SOE/MOE for Linux servers - and RHEL9 - but I kinda wish we’d pushed for Ubuntu…
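For what it's worth, translating that sort of Ubuntu-centric doco to the RHEL family mostly comes down to a handful of substitutions. A rough cheat-sheet from memory, not vendor doco, so treat it as a sketch:

```
# Package and service names
sudo apt install apache2                 # Debian / Ubuntu
sudo dnf install httpd mod_ssl           # RHEL / Alma / CentOS (mod_ssl stands in for "a2enmod ssl")

# Config layout
#   Debian / Ubuntu: /etc/apache2/apache2.conf, sites-available/ enabled with a2ensite
#   RHEL family:     /etc/httpd/conf/httpd.conf, drop extra .conf files into /etc/httpd/conf.d/

# Enabling SSL
sudo a2enmod ssl && sudo a2ensite default-ssl && sudo systemctl reload apache2   # Debian / Ubuntu
sudo systemctl restart httpd             # RHEL: mod_ssl's config is picked up from conf.d automatically
```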

3 Likes