Listing manually post-installed packages in Debian

Hi all, :wave:

it's not a real problem I'm dealing with here. It's rather out of curiosity that I ask … :blush:

After installing Ubuntu or one of its derivatives (Lubuntu in my case) I naturally do quite a bit of post-installation work, i.e. I install some of my favourite packages.

To be able to refer to them when another new installation is needed one has to keep track of everything post-installed, e.g. by writing those newly-installed packages into a list.

However: if one neglects this task, there's a fine command listing all manually post-installed packages (those installed with apt) without the respective dependencies (which seems a crucial point to me):

```
comm -23 <(apt-mark showmanual | sort -u) <(gzip -dc /var/log/installer/initial-status.gz | sed -n 's/^Package: //p' | sort -u)
```
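The comparison logic of that command can be illustrated on plain files (the package names and file names below are made up for the demonstration):

```shell
# Simulate the two sorted lists that the real command builds on the fly:
printf 'bash\ncoreutils\ngimp\nvim\n' > current.txt   # stand-in for: apt-mark showmanual | sort -u
printf 'bash\ncoreutils\n' > baseline.txt             # stand-in for the initial-status package list
# comm needs sorted input; -2 hides lines unique to baseline.txt,
# -3 hides lines common to both files, leaving only the packages
# that were added after the initial install:
comm -23 current.txt baseline.txt
# -> gimp
# -> vim
```

On Ubuntu the baseline comes from `/var/log/installer/initial-status.gz`; one workaround on Debian (my assumption, not an official mechanism) would be to save `apt-mark showmanual | sort -u` to a file right after installation and use that file as the baseline later.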

This has worked great in the past and I've already put it to good use. :+1:

Alas, this command doesn't seem to work (at least not the way it should) on Debian (I tried it on Debian 10).
Here it says:

```
gzip: /var/log/installer/initial-status.gz: No such file or directory
```

which I think is the crucial part. The command lists more than it's supposed to. :thinking:

So, since Debian doesn't have the initial-status file available, there's no way to use it for comparative purposes.

Does anyone of you know of an alternative command for Debian systems :question:
My guess is that the only alternative is to keep track manually of everything post-installed.

Many greetings
Rosika :slightly_smiling_face:

No solution is perfect, because they are all picking up a rock with chopsticks, i.e. approaching it the wrong way around: Linux distributions didn't see it as necessary to include a functionality that automatically tracks user-installed packages. And not all "manually" installed packages were actually installed by a human user.

For example, if I run any of the examples on the Debian in my WSL2 instance, it always lists apt as manually installed. Obviously, I did not manually install apt.

Therefore, one method of getting at least a list with some truth to it is the following.

However, I doubt it will be as accurate as we would like it to be. It is not, for me.

The only way to get an absolutely reliable list would be for Linux distributions to add that as a feature, as part of the operating system.

1 Like

Hi Rosika,
Not exactly what you want, but

```
dpkg -l | more
```

in Debian will list ALL installed packages, including the ones post-installed.

So if you did dpkg -l before installing anything, then again after the post-installs, then did a diff of the two, you should get a list of post-installed packages.
Dependencies may interfere with this method.
Regards
Neville
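Neville's before/after idea can be sketched with simulated snapshots (the file and package names here are made up; on a real system each list would come from `dpkg -l` or `dpkg-query -W`):

```shell
# Snapshot taken right after installation (stand-in for a dpkg package list):
printf 'apt\nbash\ncoreutils\n' > before.txt
# Snapshot taken after the post-install work:
printf 'apt\nbash\ncoreutils\nhtop\nneofetch\n' > after.txt
# diff prefixes lines that only appear in the second file with '> ';
# sed keeps just those lines and strips the prefix:
diff before.txt after.txt | sed -n 's/^> //p'
# -> htop
# -> neofetch
```

As noted above, dependencies pulled in automatically would show up in this list too.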

Also

1 Like

Hi all, :wave:

thanks to all of you for your suggestions. :heart:

@Akito:

Thanks for the link. I looked at the page and followed the link to https://gist.githubusercontent.com/UniIsland/8878469/raw/334303bb2c5b94cc5a81cdde438bcd8df278e4ef/list-manually-installed-packages.sh .
I copied the contents into a .sh file and made it executable.

And indeed it worked to a certain extent. :smile:
As a result I got:

```
./list-manually-installed-packages.sh
ananicy
apt-transport-https
bat-musl
build-essential
cgroup-tools
codium
command-not-found
containerd.io
cpulimit
dkms
dnsutils
docker-ce
docker-ce-cli
dosbox
evince
exa
gawk
git
hardinfo
i3lock
i3status
i3-wm
inxi
jq
lnav
mathomatic-primes
needrestart
neofetch
python3-pip
python-pip
schedtool
smem
snapd
software-properties-common
suckless-tools
w3m
```

Comparing that to my list of post-installed packages (I kept track of them in a text file) I found out that there are still some packages missing, like


```
fish
firejail
debian-goodies
gnupg2
[...]
```

to name but a few. :thinking:
So you're perfectly right in your assessment:

(reference to List all manually installed packages on a debian/ubuntu system · GitHub ).

At least this method works to a limited extent, which is more than I was hoping for, given that Debian has no initial-status file available by default. :wink:

Right.
… or keeping track manually of what is post-installed. :slightly_smiling_face:

Thanks so much for your help.

@nevj:

Hi Neville,

thanks so much for your suggestion as well.

Yes, but I think this list would show all dependencies (pulled in by the installation of packages) as well - which is what the command (for Ubuntu and derivatives, see above) clearly avoids. :wink:

Of course you're right with all of your statements.
Thanks also for the link you provided. I'll look into it.

Many greetings to all of you from Rosika :slightly_smiling_face:

1 Like

Yes, unfortunately.

In Void Linux it is easy.
The command xbps-query has an option:

`--list-manual-pkgs`
Lists registered packages in the package database (pkgdb) that were installed manually by the user (i.e. not as a dependency of any package).

That is what you need
Not the first time Void has something better than Debian
Cheers
Neville

2 Likes

Hi Neville, :wave:

thanks for the additional information. :+1:

Interesting indeed. I didn't know there was a distro that sports a built-in functionality like that.

O.K.
I looked it up on DistroWatch.com (Void), and it seems to be a good choice for many people, I guess.
The download size of 400 - 1000 MB seems reasonable enough.
Alas, it's the XBPS package management (I hadn't heard of it until now) and the rolling-release model which keep me from digging deeper. :blush:

Thanks anyway for bringing it up.

Many greetings.
Rosika :slightly_smiling_face:

This would unavoidably introduce human error into the situation. So, it’s just a workaround, at best.

1 Like

What makes you think computers are devoid of human error?
They are very good at copying things and crunching numbers, but it is still possible to introduce errors. It is just more subtle.

You just have to fix it once. That's it. You can't fix a human, though. We are utterly flawed.

If you fix your program, then it can calculate the same thing trillions upon trillions of times, and it will always be correct.

Imagine a human would calculate that much and how much error that would cause.

Well yes and no.
Someone will change something else in the system that it depends on, and it will either stop working or produce a flaw.

What you say only applies if my program runs without an operating system and controls the whole computer… like happened 60 years ago with some of the first computers.

We fixed that already. As I said, we just have to fix once and then it works. In this case, we have Docker. Normal non-privileged, self-contained Docker containers work everywhere precisely the same. Write once, run anywhere, except you don’t need the whole Java bloat on top of it. :wink:

It does not have to be without an operating system. A strictly stripped-down, bare-bones, barely functioning operating system like e.g. Alpine is so minimal that you can exclude most of the crap you get on your average Linux OS. Now count in the aforementioned Docker, which provides declarative reproducibility. Et voilà…
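As a sketch of that idea (the base image tag and package are illustrative examples, not recommendations), a minimal Alpine-based Dockerfile might look like this:

```dockerfile
# Small, reproducible environment: tiny base image, one declared layer.
FROM alpine:3.19
# --no-cache avoids storing the apk index, keeping the image small.
RUN apk add --no-cache bash
CMD ["bash", "--version"]
```

Because every build starts from the same declared base, the resulting environment is the same everywhere the image runs.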

1 Like

It's a nice dream.
The moment operating systems were invented, computers were re-humanised, and that let third-party errors sneak in.

Docker is a nice try. I am waiting for someone to fiddle with it and let some instability back in. It is not immune to kernel problems either.

A lot of the posts you see on itsFOSS are about this sort of cross-program interference problem. We can't run everything in Docker… we could, maybe, with full VMs.

1 Like

Hi all, :wave:

thanks so much for your latest comments. :heart:

I hear you. One has to be very disciplined to (manually) keep track of everything post-installed without fail, that's for sure. :wink:

Thanks also for your additional views on computers and human error.

I'll look into Alpine, as I'm interested in what I might be able to do with it…

Many greetings
Rosika :slightly_smiling_face:

Yes, if someone wants to break something on purpose, then it still can be broken. Nothing to be done about that.

I was talking about real-life scenarios. For example, kernel problems affecting a running container nowadays have almost zero chance of appearing. So that example is extremely far-fetched, too.

In real life, Docker is the solution and not just a “nice dream”.

It’s because the design of Linux is shit in this regard. Actually, the design of all *NIX operating systems is. Actually, all major operating systems are shitty like that. They are all more or less stuck in the 80’s.

Though, it’s also the sort of anti-pragmatic elitists with blinkers, who are the problem in the *NIX world. At least in the Apple and Microsoft world, people finally understand that programs are made for humans, not the other way around.
In the *NIX world, too many elitists still think, we are in 1976 or something.

For example, they just test their own software and say that's enough, because that's what they develop and nothing else. However, if a third-party program is constantly used together with that piece of software by the vast majority of users, then they should test compatibility with it too, or maybe even fuse the features into their own software. That does not happen often, though, because "it's not my problem", they say.

Another example is making programs for computers instead of for humans. They make everything weird and anti-human and justify it by calling it POSIX compliant or “we have always done it like this” or “everyone does it like this” or “that’s correct according to Computer Science”, instead of forgetting about that crap and letting it be more human.

The ironic thing about POSIX is, that the basic idea of it is great. Create great compatibility, even across operating systems. However, if the rules for creating that compatibility are stuck in the 80’s, then it’s still a shitty specification. POSIX is utter garbage. I am so proud of Linux shitting on POSIX and making Bash better than POSIX shells and allowing to force POSIX mode on Bash, only if needed. POSIX shells are utter crap. Bash is slightly less crappy, because it’s not forced to be POSIX compliant.

Additionally, the problem I already mentioned somewhere else is another cause: any arbitrary set of characters can be the input of your program, when e.g. used in a pipeline on the command line. This is terrible. Absolutely atrocious. It’s a debugging nightmare and you never can prepare for it, because literally any set of characters can mean anything. Great job, UNIX morons!

Why would you say that?

More or less the same result can be achieved with Docker. The difference is that Docker is generally tons more lightweight and easy to handle, while VMs are comparatively extremely heavy.

2 Likes

Well I have just spent a week learning Docker. You will see the result shortly.
At the moment I would not say “easy to handle”, but that is mainly due to the crappy documentation rather than the software design. I may get past that stage.

It seems to be a rule that people who design and write really good software are incapable of documenting it in a way that newcomers can understand. Julia is another good example.
The Docker documentation is comprehensive but incomprehensible. The third-party stuff is more helpful but limited. The only way for me to get over hurdles is to guess and try it. I have had some resounding failures, plus a small success.

> we can't run everything in Docker

I was thinking of the DTE.
I don't see how Docker could get out of its window and run the window system. Maybe you can?

2 Likes

That’s one very simple example of doing it. It’s surely not the only way.

For example, you could probably also connect using SSH and use X forwarding.

However, even if that wouldn’t work, you could just run a server in the container and let it have a Web UI. Et voilà.

2 Likes

@Akito ,
That's one window, not the whole DTE,
but it's great all the same…
I could not see how to do that, and I still don't understand it, but there is progress.

Thanks
Neville

Okay, you are talking about Desktop Environments, i.e. DE.

The initial problem was that too many programs exist and interfere with each other. So, it’s not about having to run every single thing in an isolated environment, but it’s about minimising the variety of program interference, that usually happens without isolation. So, if you have the kernel and DE running without isolation, but all optional high level programs are isolated from each other, that is already a huge boost and you also don’t have to look for so long to find the issue, because there are only a few programs left that could interfere with each other, at all.

That makes a lot of sense.
Also, the kernel has a level of isolation anyway…
The DE is a huge part of user space, and the source of lots of issues.

Given what you have just shown me, I might have a try at putting waterfox in a container.

2 Likes

POSIX is just one of many antiquated standards.
It will be used until some better standard is written.
Most languages have a whole series of standards, so they keep up… POSIX seems to be stuck in a dead end.
The open-source organizations need to take some action here.