Need more memory ... swapping

Have you seen this?

Memory 100% (62.5GiB/62.8GiB)
Swap 19% (57.3GiB/293.0GiB)

That’s right, I was running an R process multiplying several large matrices. It filled RAM and started swapping. Everything slowed.
I had to kill it.
The task manager readout above was taken just after I killed it.
The memory was full and the swap usage was slowly growing.

So my 64GB of physical RAM was not nearly enough.
Where can I find a computer with maybe 10x or even 100x that amount of RAM?

I know, you are going to tell me to make my matrix algebra more efficient. Tried that… I am not clever enough, or it may be impossible.
I am a strong believer in doing numerical calculations by the simplest direct method, but I got caught out this time.

7 Likes

Can you guess how much memory (RAM+swap) would be needed to finish your calculations?
Maybe add even more swap, and let it run overnight.

3 Likes

Yes, I can calculate an estimate from the matrix sizes. I was being lazy.
I can’t calculate how long it will run; it is iterative.
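
For anyone estimating along: R stores numeric matrices as 8-byte doubles, so an n x m matrix needs 8 × n × m bytes, and a product like A %*% B needs both operands and the result in memory at once. A rough sketch in R (the 50000 x 50000 size is purely illustrative, not my actual job):

mem_gib <- function(n, m) n * m * 8 / 1024^3   # GiB for an n x m double matrix
mem_gib(50000, 50000)                          # one such matrix: ~18.6 GiB
3 * mem_gib(50000, 50000)                      # A, B and A %*% B together: ~56 GiB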

3 Likes

@nevj
Been fighting a lack of RAM and # of CPU cores with LFS!!!
So, in theory, 64G of RAM should support 32 cores. Do an lscpu and see how many cores the CPU has. I know, in Gentoo you can set the # of cores and also the # of threads, but if you exceed the RAM with the # of cores, the CPU will really work hard!!! Maybe you could buy some time on one of NASA’s supercomputers!!!

4 Likes

It is an i7 6x… so 6 cores, 12 threads.
It uses all of them. R supports multithreading for matrix operations provided you install the necessary libraries.
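
If anyone wants to check their own setup, this is roughly how to see what R is working with. A minimal sketch, assuming a reasonably recent R, the RhpcBLASctl package installed from CRAN, and R linked against a multithreaded BLAS such as OpenBLAS:

parallel::detectCores()     # logical cores; 12 on this machine
extSoftVersion()["BLAS"]    # which BLAS library R is linked against

library(RhpcBLASctl)        # assumed installed
blas_get_num_procs()        # threads the BLAS will currently use
blas_set_num_threads(12)    # e.g. let matrix operations use all 12 threads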

I think my best bet is to stop being lazy and look at my matrix algebra and see if I can make it more efficient.

4 Likes

Wish I had that!!!

You got me on the “algebra thing”… I must have been chasing the hot girls in high school and skipped math classes!!!

7 Likes

In Python I might try using Polars for the calculations. It does “lazy evaluation”, which can speed things up and reduce memory requirements. Your data may not be helped by that, though.

You could use a big AWS EC2 instance. :slight_smile:

4 Likes

R does that.
I think sparse matrix storage may help, but it would increase compute time.
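
As a sketch of that trade-off, the Matrix package that ships with R stores sparse matrices in compressed form; the sizes below are only illustrative:

library(Matrix)
n <- 5000
dense  <- diag(n)           # ordinary dense identity: n^2 doubles, ~200 MB
sparse <- Diagonal(n)       # sparse identity from the Matrix package, a few KB
print(object.size(dense), units = "MB")
print(object.size(sparse), units = "KB")
y <- sparse %*% rnorm(n)    # arithmetic works transparently on sparse objects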

3 Likes

Sounds like a better idea and use of time… but at our age they are just memories.

Neville, if it helps I have a couple of 64 MB memory cards you can have … :thinking:

3 Likes

Hi Neville,
I apologize in advance for the question, because I’m a total noob on this subject:
What is the value of the swappiness you’re using?

Jorge

2 Likes

I will have to look it up. Where do I look?
It may not matter, because the process used all of my 64GB of RAM… it had to swap.

3 Likes

Hi Neville,
you can use the following command:

cat /proc/sys/vm/swappiness

The default value is 60; swappiness controls how readily the kernel moves memory pages out to swap (lower values avoid swapping until necessary).
I lowered it because I use an SSD, and later I deactivated swap altogether.

Jorge

4 Likes

Here is mine in MX

$ cat /proc/sys/vm/swappiness
15

That seems way different from the default. It is set here in MX:

nevj@trinity:/etc/sysctl.d
$ cat 99-swappiness_mx.conf
vm.swappiness = 15

In Void it is 60.

When I installed my new SSD, I left the swap spaces on two old HDDs. My SSD only has Linux filesystems and the ESP. I leave my data on the HDDs.

$ swapon
NAME      TYPE      SIZE USED PRIO
/dev/sda3 partition 293G   0B   -2
/dev/sdg3 partition 293G   0B   -3

Yes, I have a huge swap space.

4 Likes

Hi Neville,
I think the fact that you have swappiness at 15 is the reason why you don’t use swap much.
I kept the swap on an HDD and increased the swappiness to 60. I don’t recommend this value for an SSD, because it writes too much to swap and the SSD has write limitations.

Jorge

4 Likes

Hi Jorge,
I increased it to 50 in MX.
We will see how it behaves when I next try a large job.
Thank you,
Neville

2 Likes

I reran the large job with 1 trait instead of 2… that should use 1/4 the memory, because the matrix dimensions in the variance estimation scale with the number of traits, and memory with its square.
It used 42GB… and did not swap with swappiness=50.
With 64GB of RAM, it did not need to.

In R, floating point objects use 8 bytes (64 bits), so 42GB holds 42/8 ≈ 5.25 billion floating point values. Lots of arithmetic.

I estimate the 2 trait job would have taken 42 x 4 = 168GB… that would have forced a swap or two. It is feasible to do as @kovacslt suggested and run it overnight. I think I will see if the 1 trait iteration converges first.
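
The arithmetic behind that estimate, as a quick sketch (42GB is the measured 1 trait run; dimensions grow linearly with trait count, so memory grows with its square):

traits <- c(1, 2)
42 * traits^2     # projected GB: 42 for 1 trait, 168 for 2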

5 Likes