How to add a kernel module

Wrong in so many ways.

For example, terminal access is usually not related to “code”, as in the operating system’s core code. It’s usually about configuration tools and automation.

Additionally, I already explained that people have a wide range of access to the Windows API’s functionality. It’s just a myth Linux people spread about Windows. You can program really low-level stuff on Windows. You don’t need the kernel’s code to do something like that.

As I see it, you twist my words to suit your wishes, and we would have to agree on the meaning of every single word in advance…

I quit. Bye.


That is part of the code and an important part of open source. Code doesn’t work in isolation; its environment matters. It is just as important to have an open environment as to have open code.

Okay, let’s see it from another perspective, then. Are you saying that people writing Bash scripts make use of the actual Linux source? Do you think they know parts of the Linux source code when writing scripts?

For a majority of users, it makes no difference whether they know the kernel’s source code or parts of the operating system’s source code, because either way they won’t look at it, let alone understand it; it’s simply not relevant when writing scripts or even when using the C API. With the Linux C API, you look at the documentation to find out how to do something. With the Windows C API, you do precisely the same. No difference.

No, but when I edit files in /etc to fix some config, it helps to know what the components of Linux are, how they interact, and what their configuration settings mean.
That is all part of, and comes with, open source. You get to understand the whole.
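For instance, something like this (just a sketch; sshd_config stands in for whatever file you are fixing, so the file name, test flag, and service name here are assumptions):

    # Read the documentation for the file's format before touching it
    man 5 sshd_config

    # Keep a backup, then edit
    sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
    sudoedit /etc/ssh/sshd_config

    # Validate the syntax before reloading the service
    # (the service is called sshd rather than ssh on some distros)
    sudo sshd -t && sudo systemctl reload ssh

Knowing which man page documents a file, and that the daemon can check its own config, is exactly the kind of “how the components interact” knowledge open documentation gives you.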

There is always a chance one can fix issues in Linux, because one has access to the full information. When you ask for help, people explain how it works and why certain changes are needed, as well as how to make the changes. With MS, all you get is “use update package no. x”.

That is where MS and all proprietary systems fall down. Look at how difficult it is to drive nvidia cards… it is because the community does not have access to the full information.

It would help if you could give a specific example. In Windows, you can usually acquire this knowledge, too. No need to look into the actual kernel source code.

I think you are overestimating the understanding powers of the average Linux user. I have experience with the Linux API, and obviously tons of experience with writing Bash scripts, yet I can assure you that I would barely understand kernel code. It’s way too complicated and way too much!

With normal Linux stuff, but I doubt this works for kernel source code.

Not convinced, as explained above. You can get all the necessary information from Windows documentation or fora. There is no need for the actual source code, unless you want to work on the actual source code.

It’s harder than manipulating AMD’s open source components; however, it’s not impossible. There is just a lack of general motivation, because of ideological reasons and arguments like “just use AMD instead”.

If a huge number of developers responsible for drivers suddenly started working on NVIDIA drivers, things would start working within a year.

There is more to GNU/Linux than kernel source code.
What is important about FOSS is open access to everything, including an enormous community understanding of how various bits interact.

I don’t see that with MS. It is a different sort of community… most of them are consumers, not participants.

Can you give specific examples? My arguments earlier apply to non-kernel source code as well, in most cases.

It’s the same community, more or less, except you have to know less when using Windows, because shit just works. You may know more, but you don’t have to. When using Linux, you have to know more, because there is constantly something breaking.

Additionally, most Linux users are only consumers as well. It’s a very romantic story to imagine that Linux users are participating and improving things all the time.

For an in-place upgrade, I think one has to be conservative and make an image of the OS on either another partition or a backup medium.

For a normal clean install upgrade I usually keep the old version and do the clean install on a separate partition.

For a rolling release distro, I think one needs some sort of snapshot before every update.
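Something along these lines, for example (a sketch only; the device name, mount point, and snapshot path are placeholders for your own layout):

    # Image the root partition to a backup medium before an in-place upgrade.
    # Run this from live media so the filesystem is not mounted in use.
    sudo dd if=/dev/sda2 of=/mnt/backup/root-before-upgrade.img bs=4M status=progress

    # On a Btrfs setup, a read-only snapshot before every rolling update is cheap
    sudo btrfs subvolume snapshot -r / /.snapshots/pre-update-$(date +%F)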

I know your reply is about diagnosing the bug, and I think your tracing of it is really very thorough, but I just could not resist the opportunity to prompt people doing upgrades to take precautions.

I have used in-place upgrades with Debian. The only problems were very minor, like losing scanner drivers which had been installed outside the package system. They were easily reinstalled.

Regards
Neville


Even those who only do their work with the software and report issues are participating.
Free software is one of the greatest cooperative activities on the planet.
There are two ways to get really good software:

  • one brilliant individual effort
  • a zillion people all reviewing one another openly and with a common goal

MS is neither… it is in between.

True.

Now, we still have to remember how the usual report goes when an average user reports an issue.

“I don’t know how this happened, but this is my experience.”

Response from the maintainer:

“This is a duplicate. Closing this issue.”

Great participation, communication and community! :smiley:

I wouldn’t say that’s entirely correct.

Most projects which are successful have a handful of astonishing developers, while the vast majority of users contribute only bits and pieces here and there, if at all. I have seen tons of projects with people complaining, but rarely actively contributing, not to mention creating pull requests.

Though, even then, the effort and results are very limited. The developers are often not paid, and if they are paid, it’s not like they can work full-time on it. Sometimes they even do work full-time on a particular open source project; however, these projects are usually so huge (often the case with programming languages) that a handful of full-time astonishing developers is still not enough to make it happen, at least not quickly enough.

When using the Windows platform, you get tons of great software. It’s just not Microsoft creating it, but third-party developers. How is that software not good, then?
A majority of Linux software is not made by Canonical or whichever Linux-based company, but by third-party developers. The same developers might just as well create the same crap for Windows.
There’s literally no difference between a Linux and a Windows developer in that scenario. Except Windows provides a big variety of GUI tools, while Linux just laughs in CLI “wHaT iis gUii ii dOnT KnOw wHaT Guii iiS”. :laughing:

Agreed. I guess by “image” you mean a cloned partition or some such?
That’s overkill and space-consuming.
I myself use systemback.sh, which creates (incremental) snapshots of the system’s files. Its size equals the sum of the sizes of the installed system files, logs, settings, etc.
Once I’ve got my system into a good working state, I take a snapshot. Even major upgrades can be reverted, if needed.
The only catch here is that after a kernel update, I need to refresh GRUB following a systemback restore.
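The refresh itself is quick. A minimal sketch for Debian-family systems (if the restored system will not boot at all, the same commands can be run from a chroot off live media; /dev/sda is a placeholder):

    # Rebuild the GRUB menu so it points at the restored kernels
    sudo update-grub

    # Only needed if the boot sector itself is stale
    sudo grub-install /dev/sda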

Exactly. I don’t upgrade too frequently, but when I do, I install the new system on a separate partition. Then I start to migrate the necessary settings from my old system, so I move to the new instance relatively slowly.

There was one exception:

On my VPS, the hosting provider did not publish an image for Bullseye. So, having a working Buster instance, I took a snapshot and did an in-place upgrade.
It went smoothly; there were zero problems, and it rebooted into Bullseye. But I was ready to revert the upgrade in case something failed, as this VPS is mission-critical for me: it must be up and running, and I can’t afford it being down for many hours.
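For anyone curious, the in-place path boils down to roughly the documented Debian procedure, sketched here (not a transcript of what I actually typed; any extra repos in /etc/apt/sources.list.d/ need the same edit):

    # Start from a fully up-to-date Buster
    sudo apt update && sudo apt full-upgrade

    # Point APT at the new release; Bullseye also renamed the security suite
    sudo sed -i 's/buster/bullseye/g' /etc/apt/sources.list
    sudo sed -i 's|bullseye/updates|bullseye-security|g' /etc/apt/sources.list

    # Minimal upgrade first, then the full one, then reboot
    sudo apt update
    sudo apt upgrade --without-new-pkgs
    sudo apt full-upgrade
    sudo reboot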

https://keepass.info/help/kb/trust.html

Is this famous and open enough? It’s built using an official Microsoft product.

Your idea of success is different from mine.

A product is successful if it works well for humans. Everything else is less successful.
There is a problem. The program is the solution. The program has to be usable by humans. That’s it. If it works in theory but is unusable by humans, then the program is literally useless. If you sit 10 years on a program and make the logic work perfectly, it still isn’t worth a single penny if a human cannot operate it.

Disk space is cheap. I go for something ultra simple so less can go wrong.

Kernel updates in a rolling release lead to the GRUB refresh problem too, especially if the rolling-release OS is the one that controls GRUB.
Snapshots are good for minor upgrades.


Akito, you are sooo full of shit!! If you love Windows so much, enjoy it! This forum is for people who are seeking help for Linux.
If that’s all you can do here, then fuck your “experience” and shut up!


Eh, no. It’s for everyone who needs help. Additionally, this ain’t “It’sLinux”, but “It’sFOSS”. :wink:

You can wish me all the fucks & shut-ups that you want, though just as you are permitted to do that, so am I permitted to say what I want to say.

If you had spent more than your mental age in seconds reading what I said, you would’ve noticed the numerous comments where I explicitly mentioned that I run several Linux servers. :smiley:

As the naive user who initiated this vast debate, allow me to offer insights that might explain the issues in a unifying way.
@Akito says it is GUI per se. @nevj and @kovacslt dig down into each failure and find particularizing details.
Both are OBVIOUSLY correct. The particular details would never occur if there were integrated methods of testing and refining the components. Windows works because an entire building of engineers are there to work on the narrowest of issues.
My attempt to upgrade Mint last year failed because Python does not have a phone number, an address, or a team leader. The skills are distributed. A strong plus for the ideologically inclined, and an intrinsic weakness for people advocating for ordinary users.
This kernel module issue I faced arose, as @kovacslt has shown, from a reasonable script detail (blacklisting) applied over-broadly. Had there been one building of programmers testing the code, it would have been caught. Distribute the skills and tasks around the world and a strength becomes a fatal flaw.
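Since the thread title may bring readers here looking for the mechanics, here is a minimal sketch of finding an over-broad blacklist and adding a module by hand (nouveau is only an example module; substitute your own):

    # See what the various config snippets blacklist, and in which file
    grep -r blacklist /etc/modprobe.d/

    # Load the module for the current session
    sudo modprobe nouveau

    # Have it loaded at every boot on a systemd distro
    echo nouveau | sudo tee /etc/modules-load.d/nouveau.conf

    # After removing a blacklist entry, rebuild the initramfs so the change
    # survives into early boot (Debian/Ubuntu-family command shown)
    sudo update-initramfs -u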


@cliffsloane
I love your summary,
but I feel distributed software development will win in the end
because
it is like evolution,
a large number of niches are tried, and a few succeed
It may not be fun if you are using one of the failed niche efforts
but the sum of all niche efforts will always beat a building full of experts

By the way, you might have achieved the same result with a simple fresh install. Never mind, we learnt how to tune nouveau drivers.
The building full of experts at nvidia did not do much to help you; they failed to keep their drivers compatible with Linux.

Regards
Neville