@Rosika
Callable
Sorry for the jargon
C and most other languages have libraries of routines for things that are commonly done, e.g. calculate a logarithm, print a line, get a character from input, … These routines are referred to as functions. When a function is used by a program, the statement which uses the function is referred to as a call. If a routine can be used in this way, it is referred to as callable.
Regards
Neville
Hi Neville,
Thank you so much for the explanation. It´s highly appreciated.
No need to apologize at all.
In fact I like to learn something new, therefore my “question”.
I see. I think I understand now.
I´m vaguely familiar with the concept of functions, albeit within the framework of bash scripting.
Thanks again for your help.
Many greetings
Rosika
P.S.:
In post #3 I pointed out that I wanted to bring down CPU usage when helping a friend with anydesk.
In fact we had a remote session the other day and I was trying out the cpulimit command.
Yet I have to say that it didn´t have the desired effect. To be more precise:
I indeed could bring down CPU usage in the Debian VM (this is where anydesk is running) by using the command
cpulimit -p [process ID from anydesk] -l 55
which rendered anydesk unusable. I guess the value was much too low. So I tried
cpulimit -p [process ID from anydesk] -l 80
but anydesk lost its responsiveness there as well.
So it seems limiting CPU usage for anydesk is not the way to go.
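As an aside: the PID lookup can be scripted rather than copied by hand. A hypothetical sketch (assuming the process name is exactly "anydesk" and cpulimit is installed; nothing happens when the process isn´t running):

```shell
# Hypothetical sketch: find anydesk's PID automatically and only call
# cpulimit when the process is actually running. -l is a percentage
# of one core's time.
pid=$(pgrep -x anydesk | head -n 1)
if [ -n "$pid" ]; then
    cpulimit -p "$pid" -l 80 &
fi
```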
Taking a look at the conky resource monitor on my host I noticed the CPU usage went up to about 36 % when actively running an anydesk session (i.e. when helping my friend and her desktop is being transmitted).
Simply starting anydesk without a connection has almost no impact CPU-wise at all.
I almost fear there´s not much (if anything) I can do about that.
Curiously enough, I have the impression that anydesk running in my Bodhi VM doesn´t produce such a high CPU load. I think (not entirely sure though) it´s about 25 % (conky readout).
Anyway, thanks so much to you, Neville, and indeed all of you.
Many greetings
Rosika
I think that your high cpu usage is caused by running anydesk inside virtualbox. anydesk would be transferring a lot of data when it is displaying a remote screen, and all of that data has to be shipped from the host operating system into virtualbox, and that work is all done by the cpu.
So I am saying, it is not anydesk that is hogging your cpu, it is virtualbox talking to the host OS.
So, would it be possible for you to install anydesk in the host OS? That would avoid the cpu work done communicating with virtualbox.
---------------------------------------------------------
I have some success to report. I have installed the i3 tiling window manager in Void Linux with xfce, on my old 32-bit laptop. It works fine, and the learning curve is only about half an hour. I had a session using R and it is OK for that. The only issue is, when I draw a graph with R, it opens a new window for the graph, and it does not come out square. I may be able to fix that with some config. Thanks for introducing me to i3.
Regards
Neville
Hi again Neville,
thanks a lot for your latest reply.
I see. Well, I have to admit it didn´t occur to me that using a VM might be the “culprit”.
In actual fact I´m using KVM/qemu/virt-manager for Debian but the principle you referred to is certainly the same.
The background of it is that I (pretty long ago though) used to help a friend with teamviewer. Until now that´s the only programme which couldn´t be sandboxed by firejail.
This phenomenon was already discussed here: Profile requests · Issue #825 · netblue30/firejail · GitHub
So as a workaround I installed teamviewer in a VM to get another form of “sandbox”.
Later we changed to anydesk and I installed it in a VM as well for “historical” reasons…
In actual fact, yes, that would be possible. Especially in view of the fact that anydesk, in contrast to teamviewer, even has a profile of its own for the use with firejail:
ll /etc/firejail | grep anydesk
-rw-r--r-- 1 root root 655 Jul 11 2021 anydesk.profile
I would have preferred a VM though. But your proposition is certainly worth contemplating. Tnx again.
Seems nice. Glad you can report a success.
No idea if it helps but did you try:
$mod+Shift+num
?
like
$mod+Shift+2 # for using workspace 2, i.e. a new workspace
This should shift the new window (for the graph) to a newly created workspace.
Perhaps it´ll come out more favourably there
Just a guess, don´t know if it will have the desired effect …
reference:
2.7. Moving windows to workspaces
To move a window to another workspace, simply press $mod+Shift+num where num is (like when switching workspaces) the number of the target workspace. Similarly to switching workspaces, the target workspace will be created if it does not yet exist.
from i3: i3 User’s Guide
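Another thought, in case the graph window should always land on its own workspace: i3 can also do the move automatically with a for_window rule. A hypothetical config line (the window class “R_x11” is an assumption on my part; the real class can be checked with xprop):

```
# in ~/.config/i3/config - hypothetical; verify the class with xprop first
for_window [class="R_x11"] move to workspace 2
```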
Many greetings
Rosika
Hi all once again,
in the wake of finding solutions to my “problem” - which in the meantime may be considered solved - I found another very interesting way of limiting cpu usage of a certain process, namely cgroups.
Accordingly I tried to read up on the topic (at least a bit) and came across some interesting solutions:
- I installed cgroup-tools on my Debian VM
- lscgroup gave me some insights as to which cgroups are available on my system (just to look around a bit)
I was following two sources:
- (1) Restricting process CPU usage using nice, cpulimit, and cgroups | Scout APM Blog
- (2) How to Limit CPU Usage of a Process on Linux
Both were consistent in the first step:
sudo cgcreate -g cpu:/cpulimit
Here we create a cgroup; cpulimit is the name of the group that controls the cpu usage.
That´s pretty clear so far.
After that we have to set a property (or properties) on the cpulimit group.
Yet here sources (1) and (2) seem to follow different paths:
- (1):
The cpu controller has a property known as cpu.shares. It is used by the kernel to determine the share of CPU resources available to each process across the cgroups. The default value is 1024.
To set the cpu.shares to 512 in the cpulimited group, type:
sudo cgset -r cpu.shares=512 cpulimited
Accordingly we could start the matho-primes
process thus:
sudo cgexec -g cpu:cpulimited /usr/local/bin/matho-primes 0 9999999999 > /dev/null &
O.K.; I think that´s clear so far.
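If I understand cpu.shares correctly, under contention the CPU time divides in proportion to the shares. A quick sketch of the arithmetic (the 512 and 1024 values are simply the ones from the quote above):

```shell
# Sketch: relative shares under contention. A group with 512 shares
# competing against one with the default 1024 gets about one third
# of the CPU, the other about two thirds.
a=512
b=1024
share_a=$(( a * 100 / (a + b) ))   # about 33
share_b=$(( b * 100 / (a + b) ))   # about 66
echo "group A: ${share_a}%, group B: ${share_b}%"
```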
But the second source takes a different approach:
- (2):
After
sudo cgcreate -g cpu:/cpulimit # that´s the same, but then:
they choose different properties: not cpu.shares but rather cpu.cfs_period_us and cpu.cfs_quota_us.
sudo cgset -r cpu.cfs_period_us=1000000 cpulimit
sudo cgset -r cpu.cfs_quota_us=100000 cpulimit
So after that the command to run would be
sudo cgexec -g cpu:cpulimit YOUR_COMMAND
In theory that should be clear as well, BUT:
Which of the two ways is to be preferred (and how can I decide)?
And is there any way of getting a list of the properties of controllers which can theoretically be set at all?
I gather for a layman (or laywoman) it´s not easy to decide how to go about it.
But it´s an interesting topic - therefore my asking …
Many thanks in advance for your help …
… and many greetings.
Rosika
UPDATE:
Hi again,
after spending the better part of today´s Sunday afternoon looking around for further information I eventually found this (hopefully) good source:
3.2. cpu Red Hat Enterprise Linux 6 | Red Hat Customer Portal.
As for the difference between cpu.shares and cpu.cfs_period_us / cpu.cfs_quota_us as parameters for the cpu subsystem:
The cpu subsystem schedules CPU access to cgroups. Access to CPU resources can be scheduled using two schedulers: Completely Fair Scheduler (CFS) […] and Real-Time scheduler (RT)
For CFS Tunable Parameters they point out:
The following options can be used to configure ceiling enforcement or relative sharing of CPU:
cpu.cfs_period_us: specifies a period of time in microseconds (µs, represented here as “us”) for how regularly a cgroup’s access to CPU resources should be reallocated […]
cpu.cfs_quota_us: specifies the total amount of time in microseconds (µs, represented here as “us”) for which all tasks in a cgroup can run during one period (as defined by cpu.cfs_period_us) […]
Those two are applied as “Ceiling Enforcement Tunable Parameters” if I understand it correctly …
… whereas cpu.shares belongs to the so-called “Relative Shares Tunable Parameters”:
cpu.shares contains an integer value that specifies a relative share of CPU time available to the tasks in a cgroup […]
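This might also answer my earlier question about listing the settable properties: on cgroup v1, every tunable of a controller appears as a file in the group´s directory, so listing that directory shows what can be set. A small sketch (the path assumes the cpulimit group from post #25 exists; on systems without it, the sketch just says so):

```shell
# Hypothetical sketch (cgroup v1 layout assumed): each tunable of the
# cpu controller is a file under the group's directory.
cgdir=/sys/fs/cgroup/cpu/cpulimit
if [ -d "$cgdir" ]; then
    ls "$cgdir"
else
    echo "no cgroup at $cgdir on this system"
fi
```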
So I´m guessing it´s up to the user/administrator to decide which path to go…
It looks like both ways should work, as the two tutorials on the matter (see post #25) seem to indicate…
I hope I´m not completely mistaken there …
Have a nice Sunday and many greetings from Rosika
Hi all,
just want to let you know how things went after trying both versions of implementing cgroup rules.
The first one (using shares):
sudo cgset -r cpu.shares=512 cpulimited
actually behaves similarly to nice.
When starting a process
sudo cgexec -g cpu:cpulimited dosbox
it uses all the cpu computing time it needs - as long as no other resource-hungry process is competing for it.
The author of source_1 points it out as well:
If you run top you will see that the process is taking all of the available CPU time. This is because when a single process is running, it uses as much CPU as necessary, regardless of which cgroup it is placed in.
The CPU limitation only comes into effect when two or more processes compete for CPU resources.
As a next step I turned to implementing cgroups with the cpu.cfs_period_us and cpu.cfs_quota_us options:
sudo cgset -r cpu.cfs_period_us=1000000 cpulimit
sudo cgset -r cpu.cfs_quota_us=100000 cpulimit
As the author of source_2 points out:
Now whatever process you add to cpulimit CGROUP will use 1/10th (100000/1000000 = 1/10 = 0.1 = 10%) of the total CPU cycles.
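The 1/10th figure can be double-checked with a line of shell arithmetic (values taken from the cgset commands above):

```shell
# Sketch: the CPU cap implied by quota/period, using source_2's values.
period_us=1000000
quota_us=100000
cap=$(( quota_us * 100 / period_us ))
echo "processes in the cgroup get at most ${cap}% of the CPU per period"
```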
And indeed. A process otherwise using up 100% of CPU resources now uses 10%.
So this one is similar to cpulimit.
Yet:
The advantage of control groups over nice or cpulimit is that the limits are applied to a set of processes, rather than to just one
(source_1)
which adds an additional benefit over cpulimit and nice.
Many greetings from Rosika