This topic collects examples of how Linux does not actually give you the power to do something.
The reason I am creating this topic is that Linux-based operating systems are built on the idea that the user is in charge. People often say things like “you can do anything on Linux”, which is in some ways true, mostly for bad stuff nobody should do, like never updating or making the system less secure, but not so much for things that are necessary or good, like following best practices or staying up to date.
1. Cannot even kill -9 a process
Today, I encountered an issue with Linux, where it does not give me the power to kill a process stuck in a certain state (uninterruptible sleep, the “D” state).
No matter what I do, I simply cannot kill it. There is no way to kill it, as explained in the above links.
The only way to fix it is to reboot the system, which is a terrible practice for servers that are meant to run 24/7. In my case, I have to reboot a server now, to “fix” this issue…
Thanks Linux, for giving me the power!!! (not)
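For anyone who wants to check whether they are hitting this: a minimal shell sketch (the PID and the 30-second sleep are purely illustrative) showing how to read a process's state with ps. A genuinely stuck process shows state “D” (uninterruptible sleep), and SIGKILL is simply not delivered until the kernel call it is blocked in returns.

```shell
# Spawn a normal sleeping process just for illustration.
sleep 30 &
pid=$!
sleep 1                                # give it a moment to settle

# STAT "S" = interruptible sleep (killable); "D" = uninterruptible
# sleep, where kill -9 is queued but not delivered until the kernel
# call returns. WCHAN hints at what the process is blocked on.
state=$(ps -o stat= -p "$pid")
ps -o pid=,stat=,wchan= -p "$pid"

kill -9 "$pid"                         # works for "S"; a "D" process
                                       # would simply stay put
```

For a real stuck process, checking the WCHAN column at least tells you which kernel path (often NFS or a dead block device) it is waiting on.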
kill -1 1
Oh, sorry, it is systemd, not Linux… you probably can't do that.
yeah whoever invented systemd should be bitten by a snake
FYI: The problem described in the original post has zero to do with systemd.
hmm… if not an init system issue, then is it a package manager issue?
It’s a Linux issue. I posted two links above, which explain everything already. Just read the answers proposed in those links. They explain how it is a failure in the Linux kernel.
Right, but the solution is simple without systemd
Not sure what communication problem I am fighting here, though, as this problem is not at all related to systemd; it is a problem on any system. The problem is that you cannot kill a program that is stuck. So, again, nothing to do with systemd.
Why do you even get the idea of it being related to systemd at all?
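The distinction can be demonstrated with a small sketch (the sleep lengths are arbitrary): a process can ignore SIGTERM in userspace, but SIGKILL cannot be caught, blocked, or ignored; the only thing that ever delays SIGKILL is the kernel itself, which is exactly the stuck-process case.

```shell
# Child ignores SIGTERM; the "ignore" disposition survives exec.
sh -c 'trap "" TERM; exec sleep 10' &
pid=$!
sleep 1

kill -TERM "$pid"                      # ignored in userspace
sleep 1
if kill -0 "$pid" 2>/dev/null; then term_survived=yes; else term_survived=no; fi

kill -KILL "$pid"                      # cannot be ignored
wait "$pid" 2>/dev/null
if kill -0 "$pid" 2>/dev/null; then kill_survived=yes; else kill_survived=no; fi

echo "survived SIGTERM: $term_survived; survived SIGKILL: $kill_survived"
# -> survived SIGTERM: yes; survived SIGKILL: no
```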
If you kill the init process, all its children die, only the kernel continues… In my experience even sleeping processes die if you kill their parent.
I am not aware of how to do kill -1 1 in a system with systemd. Maybe you know, but it's not important… let's drop it, it's off topic.
Didn’t think I needed to mention that killing everything is not the solution.
Besides that, this topic was mentioned in one of the links I provided above. Why does nobody read them?
It’s not possible to do it, either way, without killing everything.
True, but it is quicker than a reboot because you don't kill the kernel.
Doesn’t everyone say that Linux only takes like 10 to 20 seconds to boot? Or what am I getting wrong here?
Getting out of the misery of having killed everything probably takes much more time than just kicking off the reboot and forgetting about it.
Depends on how many things it has to start up.
and on how clever the boot process is… Void uses dash instead of bash while booting and that speeds it up considerably.
Display managers take more time than they are worth
SSD is faster than HDD.
I can remember when SunOS took half an hour to boot on Vax.
Getting out of the misery of having killed everything
It all starts again automatically… If you kill init, the kernel restarts it, and then init restarts all the other daemons.
Precisely. So, everything is killed and every process doing something is killed and restarted.
So, for example, if you run any services through Docker, then all those services are temporarily “under maintenance” when killed, as described above. In the worst case, they will be permanently “under maintenance”, if things are restarted in the wrong order.
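If it helps anyone: restart policies and explicit start ordering mitigate exactly that failure mode. A minimal Compose sketch (all service and image names here are hypothetical placeholders) in which everything comes back on its own after a mass kill, and in the right order:

```yaml
# Hypothetical example; service and image names are placeholders.
services:
  db:
    image: postgres:16
    restart: unless-stopped        # come back automatically after being killed
  web:
    image: example/web-app         # hypothetical image
    restart: unless-stopped
    depends_on:
      db:
        condition: service_started # never start web before db
```

This does not make the outage disappear, but it turns “permanently under maintenance” into “briefly under maintenance”.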
As I have already tried to tell you many times in other situations: on minimal distributions, doing stuff like this is not as easy as your experience so far suggests.
There are so many catches and gotchas you have never experienced, because your usage of your minimal distributions is truly minimal.
If you tried to do seriously advanced stuff on those distributions, you would very quickly run into the hard and annoying limitations that minimalist software imposes on the user.
That is both deliberate and my good fortune.
You can have this business of managing complicated servers all to yourself.
There is some law that suggests that complexity is proportional to the square of the number of interacting entities. I would think it is probably worse than that.
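For what it's worth, the usual back-of-the-envelope version of that law: n entities can form n(n−1)/2 distinct pairs, so pairwise interactions alone already grow as O(n²). A throwaway check:

```shell
# Number of possible pairwise interactions between n entities.
n=10
pairs=$(( n * (n - 1) / 2 ))
echo "$n entities -> $pairs possible pairwise interactions"   # -> 45
```

And since entities can also interact in groups of three or more, the real combinatorics is indeed worse than quadratic.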
Yes. That’s great. But then you cannot say that people can just all start using minimal distributions to avoid problems, because it simply wouldn’t work.
I think it sounds about right.
First time I’ve heard of that… I know they ported SunOS to run on i386 and 68000, i.e. Sun’s first workstations were Motorola 68000 and one range were i386, but thereafter they were all Sparc running SunOS 1, until around 1993/4 when they ported SunOS 2 (Solaris) to Power and i586 as well as their native Sparc implementations… AFAIK Vax hardware only ran VMS or OSF/1 (Digital UNIX). I remember it took like an hour to power down a DG AViiON (which we had to get someone offsite to do remotely - but my boss showed me that holding down CTRL, then pressing SHIFT three times, would panic it - on the Wyse ASCII terminal - thus cutting power-off time by 75%).
Sorry, it was not SunOS, it was Berkeley BSD on the Vax. The SunOS was on a Sparc.
We also had one of those 68000s with BSD. It was faster than the Vax.
Hopefully, harmless snake, not king cobra.
I use Linux exclusively (desktop and laptop; no servers), and my experience is that about once every two months I have to hold down the power button and reboot, because something hangs and I can’t find a graceful way to deal with it. It would be nice if this could be improved, but I understand that many people working on Linux are unpaid, and… it’s free, and you don’t have to have a Microsoft account. It feels like a digital version of “living off the grid”, or “wabi-sabi computing”. What about Red Hat? They sell Linux to businesses with servers – have they fixed this so servers don’t ever have to reboot?