Server questions

I wonder whether some of our friends who work with servers could fill us mere mortals in
on what the server world is like today?

When I last had something to do with a server, it was just a SunOS box that our site ran in order to organize site email and allow FTP access to the internet. That was 25 years ago.
Today, I am vaguely aware that servers are sometimes located off-site and that they do more sophisticated things like host web pages and run databases. I am also vaguely aware that there are specialized servers that just route internet packets and do DNS.

Can someone please give us an up-to-date picture?

2 Likes

It’s hugely varied… in my experience, some 90% or more is virtualized…
So you have e.g. VMware ESXi (using the Photon Linux O/S) hypervisors, which also host VMware management VMs to do more sophisticated VMware stuff, like vSphere and/or vCenter (e.g. you can vMotion a virtual server from one ESXi host to another - very useful when e.g. replacing an ESXi blade in a blade chassis).
Those ESXi hosts run virtual servers - x86 and x86_64, with Windows and Linux servers, occasionally BSD and Solaris x86.
Some of those VMs might be RDBMS servers running MS SQL Server, or Oracle RDBMS (usually on Linux, but it is possible to run Oracle DB on Windows servers too).
There are still Solaris boxes out there hosting legacy stuff. I haven’t seen Solaris 8 or 9 recently, but I have one customer still running Solaris 10 and 11 on Oracle-branded SPARC T-series systems. To further confuse matters, some of those are running on “metal” and hosting hypervisors (Sun / Oracle T-series have built-in hypervisors), which then host virtual Solaris servers or “LDOMs” (Logical Domains) - and it’s further abstracted when some of these LDOMs then host multiple containers, or Solaris Zones (i.e. kinda sorta like Docker).

Then there’s the networking gear - routers, switches, and firewalls… Cisco is still the biggest player in that space; AFAIK their first devices actually ran on Sun boxes running UNIX… They’re generally not considered “servers” as such, and in many cases these are also run as “appliances” on a virtualization platform (typically VMware, but also Hyper-V, KVM, Oracle VM).

Note - there’s also “serverless” computing… e.g. you can deploy a virtual server in Amazon EC2, but have it use a database hosted in Amazon RDS, where you NEVER get access to the server hosting the database - it’s “serverless” (you can do PostgreSQL, MS SQL, Oracle DB, MariaDB, MySQL etc.)… There are many other things that Amazon provides in the serverless space, like messaging, and things like Kafka…
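
To make the RDS bit concrete: with infrastructure-as-code tools you just declare the database and never see a host. A rough Terraform sketch - every name and value below is an illustrative placeholder, not something from this thread:

```hcl
# Illustrative only - resource name, engine version, and sizes are placeholders.
resource "aws_db_instance" "app_db" {
  engine            = "postgres"     # could equally be mysql, mariadb, etc.
  engine_version    = "16.3"
  instance_class    = "db.t3.micro"
  allocated_storage = 20             # GiB
  db_name           = "appdb"
  username          = "appuser"

  # AWS generates and stores the password; you never log in to the host itself.
  manage_master_user_password = true
}
```

You get back an endpoint hostname and port to connect to - the underlying server stays Amazon’s problem.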


Many organisations are “downsizing” their managed infrastructure - i.e. migrating their stuff to the cloud (mostly Amazon AWS and Microsoft Azure).
But there are still many organizations and companies that have their equipment housed in Data Centres - many will have, or use, two data centres. These are massive buildings with all the stuff needed to host servers.
Most customers will rent a certain amount of floorspace in one of these DCs (Data Centres) - these spaces are often called Data Halls, or “cages” - however some customers with smaller requirements may have to share their “data hall” with the racks used by others…
In nearly all cases, working in these DCs is A HUGE PITA! I can’t really spend more than a couple of hours at any one time - because the aircon sucks ALL the moisture from your body, so you have to constantly re-hydrate (but you can’t take ANY liquid containers into the data halls) - so you’re constantly having to locate one of the “Break Out” rooms, which usually have free WiFi, food vending machines, and coffee-making facilities…

4 Likes

Big noisy rack-mounted computers that I enjoy playing with. But other than that, what he said :slight_smile:

Also, hey @daniel.m.tripp - does anyone use Proxmox in the enterprise? Just wondering.

1 Like

I’ve never seen it used in anger, or production - never mind dev / test / sandpit…

A shame really… quite a nice product…

I used to sometimes enjoy break-fix tasks on server hardware, but not since everyone stopped having their gear “on-prem” and moved it all to dedicated Data Centres, or cloud hosting… These days, even DC-hosted stuff is called “on-prem”, to differentiate it from cloud hosting…

I think my last “on-prem” job was for a government department. I was a contractor, but was there for nearly 4 years, and it was nearly ALL Solaris (Solaris hosted a bunch of infrastructure things, like all the backups for Wintel and UNIX, proxy servers, reverse proxy servers etc.)… Both their data centres were just computer rooms in buildings they owned, i.e. multifunction buildings in the CBD or nearby, and I had easy, 24x7 access to them with my regular building access pass… Most of the big players (e.g. Fujitsu comes to mind) have their big DCs out in the sticks (what we Aussies call “bumf–k”) or ugly industrial areas (also inconveniently located in bumf__k outer suburbs)… and you have to pre-arrange your visit and hand over your photo ID (which they keep hold of till you leave) - it’s all so BOOOOORING!

1 Like

I don’t have years/decades of experience with servers to witness the paradigm shift.

However, ever since I started hosting websites and other properties (like this forum), I have been using ‘cloud servers’.

A ‘cloud server’ is just a glorified term for a virtual private server, which is basically a VM running on a server owned by a platform provider (like GCP, AWS, Linode, or DigitalOcean).

Many of these platforms also provide a click-to-deploy feature, where you deploy a new server preconfigured to run a specific piece of software.

For example, I use DigitalOcean’s one-click deployment for this Discourse forum and Ghost CMS for the main website.

This makes things easier, but you can only deploy one service per server. Using Docker Compose and a reverse proxy enables you to run more services on a single server, but then you have to do a lot of things on your own. You learn from it, of course, but you’ll also have to do a lot of maintenance and spend time on it.
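
The Compose-plus-reverse-proxy setup can be sketched roughly like this - images, hostnames, and service names are all illustrative placeholders, not the actual setup behind this forum:

```yaml
# docker-compose.yml sketch: one VPS, one proxy, two apps routed by hostname.
services:
  proxy:
    image: traefik:v2.11
    command: --providers.docker      # discover containers via their labels
    ports: ["80:80"]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  forum:
    image: my-discourse-image        # hypothetical image name
    labels:
      - traefik.http.routers.forum.rule=Host(`forum.example.com`)
  blog:
    image: ghost:5
    labels:
      - traefik.http.routers.blog.rule=Host(`blog.example.com`)
```

Both apps share one box and one public port; the proxy fans requests out by Host header. The trade-off is exactly as described - upgrades, backups, and TLS are now your problem.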

5 Likes

I am overwhelmed by the ‘dog’s breakfast’ approach. It’s like: let’s try every possible combination and let them fight it out.
Why the focus on virtualization? Is it just economics? Is it just new technology for its own sake?
There was a stage when the internet had some elements of planning and design.
It’s difficult to see what the future holds in that scene.

1 Like

Because people want less “server sprawl” in their data centers, so they focus on one gigundo machine with virtual servers to take the place of what would have been a whole row of ’em.
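
As a toy illustration of the consolidation argument - every number here is invented, purely to show the shape of the maths:

```python
# Back-of-the-envelope consolidation maths (all figures made up):
# how many standalone boxes one big virtualization host can stand in for.
host_cores, host_ram_gb = 128, 2048   # one "gigundo" virtualization host
vm_cores, vm_ram_gb = 4, 32           # a typical small server workload
cpu_overcommit = 3.0                  # hypervisors routinely overcommit CPU

max_by_cpu = int(host_cores * cpu_overcommit / vm_cores)   # 96 VMs by CPU
max_by_ram = host_ram_gb // vm_ram_gb                      # 64 VMs - RAM is the limit
vms_per_host = min(max_by_cpu, max_by_ram)

print(f"One host can stand in for roughly {vms_per_host} physical boxes")
```

RAM is usually the binding constraint, since it is overcommitted far less aggressively than CPU.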

3 Likes

My experience has been similar to Dan’s. When I first started, we ran a mainframe. Well, two mainframes. The first network the company used was token ring and had a PC that was dedicated to a file serving task, so it was called a server. (4 years working here)

The next company I worked for used Netware and a dedicated server we upgraded a few times. (7 or 8 years working here).

Then I worked for a company that served as the tech support staff for smaller companies that couldn’t afford their own staff. They all seemed to use one or more servers running Netware and later Windows. (10 years working here)

Now I work for a larger (but not super large) company. Actually, I started 15 years ago today! They had two datacenters with mainly HP equipment for servers, but not so many rack mount servers. They were rack mounted enclosures with 16 blade slots. The network vendor is Cisco for switches, routers, and firewalls. Recently we changed to Fortinet firewalls. We have Pulse VPN concentrators. EMC SAN for storage.

Those two datacenters became one. Then we sold our datacenter and leased space in our former datacenter instead. Accountants, go figure.

Initially 90% of our servers ran on hardware. Eventually that flipped and 90% ran on virtualization platforms. For the most part it was/is ESXi, but also quite a bit of Hyper-V and some OpenStack. A few years ago a hardware failure killed off our OpenStack environment and we decided to simplify our life and get everything we could running on just ESXi. Also, when I first started we were 90% Windows and have flipped that to 90% Linux. We’re a “Red Hat” shop, but 90% of those are really CentOS. Now we’re planning to go to AlmaLinux, unfortunately. I do have a few Ubuntu servers running. Hopefully we can expand that. We’ll see.

After killing off one of our datacenters 5 or 6 years ago we moved 90% of our servers to AWS. There are a few things running on Azure and we also use a service running on OCI.

2 Likes

Thanks to all for contributing your experiences.
It seems centralization is winning over distributed computing, at least for server functions.

No - I’d say the opposite is true…
I’d call software as a service massively decentralized, and distributed…
Most properly architected solutions, e.g. hosted in AWS (I have little experience with others), will be split across availability zones, may even span regions, and may be fault tolerant and load balanced…
They will also probably use the Amazon RDS service, i.e. database as a service…
Many will use things like SNS (I think it’s kinda like MQ - i.e. inter-application messaging) - I’ve lost count of all the products that AWS offers - there’s also Kafka as a service, and things like Lambda (I really don’t know what it does, but it’s widely used).
And Content Delivery Networks (CDN) - that’s MASSIVELY distributed computing!
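
A toy sketch of the fault-tolerance idea - requests spread across availability zones keep flowing when a zone dies. This is a simulation, not real AWS API calls, and the AZ names are just examples:

```python
import itertools

# Toy model: a load balancer round-robins requests over healthy instances
# spread across availability zones.
instances = {"ap-southeast-2a": True, "ap-southeast-2b": True, "ap-southeast-2c": True}

def route(n_requests):
    """Return the AZ chosen for each of n_requests, round-robin over healthy AZs."""
    healthy = [az for az, up in instances.items() if up]
    if not healthy:
        raise RuntimeError("total outage")
    rr = itertools.cycle(healthy)
    return [next(rr) for _ in range(n_requests)]

print(route(3))                       # traffic spread over all three AZs
instances["ap-southeast-2a"] = False  # simulate losing an entire AZ
print(route(3))                       # service continues on the survivors
```

The same shape scales up: real load balancers add health checks and weighting, but losing one zone only shrinks capacity, it doesn’t stop service.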

2 Likes

So Telstra, this week, lost its nameserver and took out a large part of NSW. They could do with a bit more redundancy.

Yeah - there’s all that legacy stuff from the last millennium still hanging around and stinking to high heaven :smiley: - and of course Telstra is a vast, gigantic legacy monolith that still echoes its past as part of the Postmaster-General’s Department! :smiley:

I have one customer that still runs vital infrastructure (DNS) on Solaris… and it’s flaky… and it’s all hideous BIND database files… I vastly prefer the way Microsoft does DNS, and also Amazon Route 53…

I cringe whenever I look at some BIND zone files…
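
For anyone who hasn’t had the pleasure, a minimal BIND zone file looks something like this (example.com and the addresses are placeholders) - the trailing dots and the serial-bumping ritual are where much of the cringing comes from:

```
; Minimal illustrative zone file for example.com.
; Forgetting a trailing dot on a name is a classic way to break a zone.
$TTL 86400
@   IN  SOA ns1.example.com. admin.example.com. (
        2024010101 ; serial - must be bumped on every change
        3600       ; refresh
        900        ; retry
        604800     ; expire
        86400 )    ; negative-cache TTL
@   IN  NS   ns1.example.com.
ns1 IN  A    192.0.2.53
www IN  A    192.0.2.80
```

A name without the trailing dot gets the zone origin appended, so a typo silently produces `ns1.example.com.example.com` - which is why hosted services hide all this behind an API.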

Oh - yeah - Amazon’s Route53 - yet another “serverless” thing - I’m guessing it probably runs bind on a farm of servers behind it - but the customer / user NEVER sees that deep…

2 Likes

I worked with one guy who did some of the basic DNS work… yes, on Solaris and SunOS and BSD.
Never touched it myself; I was lucky enough to have an offsider who grappled with the low-level stuff.

I had to look at my email signature but my title is Enterprise Infrastructure Architect.

Most days I end up looking at our EKS clusters. They run on AWS spot instances across at least three availability zones, but all in the same region. Normally an application will have at least two or three pods running on different nodes in the cluster. EKS detects when pods aren’t healthy and launches new ones. The Helm charts used to deploy the applications specify the minimum and maximum number of pods; the number varies based on load. We do have some more “sensitive” workloads that run on On-Demand instances so they won’t be interrupted by spot node replacements.
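
The min/max pod counts a Helm chart specifies typically materialize as a Kubernetes HorizontalPodAutoscaler, along these lines (the app name and thresholds are generic placeholders, not our actual charts):

```yaml
# HPA sketch: keep between 2 and 10 replicas, scaling on CPU load.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app          # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 2             # never fewer - survives a node/spot interruption
  maxReplicas: 10            # cap so a traffic spike can't eat the cluster
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Keeping the minimum at two or more, spread across nodes, is what makes spot interruptions a non-event.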

I do like my job almost all the time. First, I get to work from home 100% of the time. We have an “in the office” day every two weeks or so, but my manager doesn’t care if I go or not. Usually I’m in the office more often than that, just to look at something. There is a wide variety of things I get hit up with each day. That’s good and bad: I don’t get bored, but I sometimes get more tasks than I can handle in a timely fashion. We also have applications running on standalone EC2 instances using RDS databases, API Gateways, and Lambdas. One of the newer apps started using Kafka, but it isn’t something I work on much.

The last couple of weeks a big project has been underway to decommission a bunch of VMs running on ESXi in our datacenter. I’m archiving a bunch of data to S3 Glacier IR storage. There is also the usual break-fix stuff and SSL renewals. Yada, yada.

All Linux under the hood. We do still have quite a few Windows servers too, but they are dropping like flies in the latest rounds of decommissions.

2 Likes

I could not do that. I like to dig into one thing and understand it.